25 results for Batch Proof, Verification of Re-encryption, Verification of Decryption, Mix Network

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

We introduce labelled sequent calculi for indexed modal logics. We prove that the structural rules of weakening and contraction are height-preserving admissible, that all rules are invertible, and that cut is admissible. Then we prove that each calculus introduced is sound and complete with respect to the appropriate class of transition frames.
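For readers unfamiliar with the labelled format, the block below sketches the typical shape of the labelled rules for the modal operator □ in a standard (non-indexed) Negri-style labelled calculus; the indexed rules introduced in the thesis refine this pattern, so the sketch is only indicative.

```latex
% General shape of the labelled rules for \Box in a standard labelled calculus
% (indicative only; the indexed-modal rules of the thesis refine these).
\[
\frac{xRy,\ \Gamma \Rightarrow \Delta,\ y{:}A}
     {\Gamma \Rightarrow \Delta,\ x{:}\Box A}\; R\Box
\quad (y\ \text{fresh})
\qquad\qquad
\frac{y{:}A,\ xRy,\ x{:}\Box A,\ \Gamma \Rightarrow \Delta}
     {xRy,\ x{:}\Box A,\ \Gamma \Rightarrow \Delta}\; L\Box
\]
```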

Relevance:

100.00%

Publisher:

Abstract:

Bacterial small regulatory RNAs (sRNAs) are posttranscriptional regulators involved in stress responses. These short non-coding transcripts are synthesised in response to a signal, and control gene expression of their regulons by modulating the translation or stability of the target mRNAs, often in concert with the RNA chaperone Hfq. Characterization of an Hfq knockout mutant in Neisseria meningitidis revealed that it has a pleiotropic phenotype, suggesting a major role for Hfq in adaptation to stresses and virulence, and the presence of Hfq-dependent sRNA activity. Global gene expression analysis of regulated transcripts in the Hfq mutant revealed the presence of a regulated sRNA, incorrectly annotated as an open reading frame, which we renamed AniS. The synthesis of this novel sRNA is anaerobically induced through activation of its promoter by the FNR global regulator, and through global gene expression analyses we identified at least two predicted mRNA targets of AniS. We also performed a detailed molecular analysis of the action of the sRNA NrrF. We demonstrated that NrrF regulates succinate dehydrogenase by forming a duplex with a region of complementarity within the sdhDA region of the succinate dehydrogenase transcript, and that Hfq enhances the binding of this sRNA to the identified target in the sdhCDAB mRNA; this is likely to result in rapid turnover of the transcript in vivo. In addition, in order to globally investigate other possible sRNAs of N. meningitidis, we deep-sequenced the transcriptome of this bacterium under both standard in vitro and iron-depleted conditions. This analysis revealed the genes that were actively transcribed under the two conditions. We focused our attention on the transcribed non-coding regions of the genome and, along with 5’ and 3’ untranslated regions, 19 novel candidate sRNAs were identified. Further studies will focus on the identification of the regulatory networks of these sRNAs and of their targets.

Relevance:

100.00%

Publisher:

Abstract:

The evaluation of the structural performance of existing concrete buildings, built according to standards and materials quite different from those available today, requires procedures and methods able to compensate for the lack of data about mechanical material properties and reinforcement detailing. To this end, detailed inspections and tests on materials are required, typically including tests on drilled cores; on the other hand, non-destructive testing (NDT) cannot be used as the only means of obtaining structural information, but can be used in conjunction with destructive testing (DT) through a representative correlation between DT and NDT. The aim of this study is to verify the accuracy of some correlation formulas available in the literature between the measured parameters, i.e. rebound index, ultrasonic pulse velocity and compressive strength (SonReb method). To this end, a large number of DT and NDT tests was performed on many school buildings located in Cesena (Italy). The above relationships were assessed on site by correlating NDT results with the strength of cores drilled in adjacent locations. Nevertheless, concrete compressive strength assessed by means of NDT methods and evaluated with correlation formulas has the advantage of being much simpler to implement and use in future applications than other methods, even if its accuracy is strictly limited to the analysis of concretes having the same characteristics as those used for the calibration. This limitation warranted the search for a different evaluation method for the non-destructive parameters obtained on site. To this aim, a methodology of neural identification of compressive strength is presented. Artificial Neural Networks (ANNs) suitable for the specific analysis were chosen, taking into account the developments presented in the literature in this field. The networks were trained and tested in order to obtain a more reliable strength identification methodology.
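As an illustration of the neural-identification idea, the following is a minimal sketch in which a small multilayer perceptron maps the two non-destructive parameters (rebound index and ultrasonic pulse velocity) to compressive strength. The scikit-learn model, the SonReb-type power law used to generate synthetic data, and all numerical values are assumptions for this example; they are not the networks, formulas or data used in the study.

```python
# Minimal sketch of "neural identification": map non-destructive parameters
# (rebound index, ultrasonic pulse velocity) to compressive strength.
# Synthetic data and a scikit-learn MLP are assumptions for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
rebound = rng.uniform(20, 55, n)          # rebound index
upv = rng.uniform(3000, 4800, n)          # ultrasonic pulse velocity [m/s]
# SonReb-type power law (coefficients indicative only) + noise, used to create data.
fc = 7.7e-11 * rebound**1.4 * upv**2.6 + rng.normal(0, 2, n)

X = np.column_stack([rebound, upv])
X_train, X_test, y_train, y_test = train_test_split(X, fc, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out samples:", model.score(X_test, y_test))
```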

Relevance:

100.00%

Publisher:

Abstract:

Jean Monnet, possibly the most important actor during the first post-war decades of European integration, is constantly described in the literature as part of a network that included several influential individuals in Europe and in the United States who, at different moments, held key positions. An important aspect in this regard is that some of Monnet’s transatlantic friends promoted European integration and contributed to a cross-fertilization process across the Atlantic. Considering that most authors either list a number of people as being part of this network or focus on particular individuals’ relationships with Monnet, it is fair to ask to what extent his network helped him in pursuing his goals, whether (and why) Monnet was simply accepted into already existing networks, whether his can be considered a transatlantic working group, and whether we can retrace in this story elements of continuity and longue durée that can contribute to the historiography of early European integration. In light of new trends and interpretations that highlight the role played by networks, examination of Monnet’s techniques and of his reliance on his transatlantic connections reveals important findings about his relationship with policymakers, also shedding light on important features of twentieth-century diplomatic and transatlantic history. This dissertation’s attempt, therefore, is to define these as elements of continuity throughout the formative years of one of the founding fathers of the integration process.

Relevance:

100.00%

Publisher:

Abstract:

The concepts of circular economy and sustainability are the basis of the present experimental research, which seeks to reduce the environmental impact of traditional road construction materials. This study mainly focuses on the development and the chemo-mechanical characterization of bitumen extenders containing rubber (R) from end-of-life tyres (ELTs) and re-refined engine oil bottoms (REOBs) for the production of innovative and eco-friendly extended bitumens (i.e. bituminous binders containing 25%wt. of recycled products) and asphalt mixtures. In order to create more sustainable asphalt mixes, recycled aggregates are also used to partially replace virgin natural aggregates in the aggregate skeleton. The experimental program encompassed five successive steps: (i) the evaluation of the physicochemical properties of R and REOB, (ii) the definition of the optimal extenders through the development of a new protocol, and their characterization, (iii) the production of the extended bitumens and the investigation of their chemo-rheological responses under different boundary conditions, (iv) the assessment of the effectiveness of analytical methods for predicting the rheological parameters of extended bitumens and, finally, (v) the analysis of the mechanical performance of the corresponding asphalt mixtures. A standard 50/70 penetration grade bitumen was chosen as the reference material and as the main constituent of the innovative bituminous products. The results of this study underline the importance of material characterization. The incorporation of R-REOB extenders strongly affects the chemo-rheological responses of the resulting extended bitumens and asphalt mixtures across all the boundary conditions investigated. While the presence of R, and the consequent formation of a polymer network, improves the elasticity of the final products, especially at high test temperatures, the addition of REOB softens the bituminous binders and asphalt mixes, improving their response at low test temperatures. Nonetheless, the use of recycled products increased the susceptibility of the bituminous materials to damage, which will require further investigation.
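As a purely illustrative sketch of what an analytical prediction of a blend property can look like, the snippet below applies a log-linear (Arrhenius-type) mixing rule to estimate the viscosity of an extended binder from its components. The rule, the component viscosities and the blend proportions are assumptions for this example and are not the analytical method assessed in the thesis.

```python
# Log-linear (Arrhenius-type) mixing rule for a binary blend:
# log(eta_blend) = sum_i w_i * log(eta_i), with w_i the mass fractions.
# Illustrative values only; not the analytical method assessed in the thesis.
import numpy as np

def log_linear_blend(viscosities, mass_fractions):
    """Estimate blend viscosity from component viscosities and mass fractions."""
    eta = np.asarray(viscosities, dtype=float)
    w = np.asarray(mass_fractions, dtype=float)
    assert np.isclose(w.sum(), 1.0), "mass fractions must sum to 1"
    return float(np.exp(np.sum(w * np.log(eta))))

# Hypothetical 75/25 blend of a 50/70 base bitumen with an R-REOB extender.
eta_blend = log_linear_blend([400.0, 50.0], [0.75, 0.25])
print(f"estimated blend viscosity: {eta_blend:.0f} Pa*s")
```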

Relevance:

100.00%

Publisher:

Abstract:

The research activity carried out during the PhD course was focused on the development of mathematical models of some cognitive processes and their validation by means of data present in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes from different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans), and ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity has been focused on two different projects: 1) the first one concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the second one concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to external world stimuli. This activity has been realized in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, such as perception and recognition, involves distributed processes in different cortical areas. One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, so as to group together the characteristics of the same object (binding problem) and to keep segregated the properties belonging to different objects simultaneously present (segmentation problem). Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential theories is the so-called “assembly coding” postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) the recognition of the object is realized by means of the simultaneous activation of the cortical areas representing its different features; and 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and in Chapter 1.2 we present two neural network models for object recognition, based on the “assembly coding” hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level “Gestalt rules” (the similarity and previous-knowledge rules), to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); and ii) the synchronization of the neural oscillatory activity in the γ-band (30-100 Hz), to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words.
To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule, during a training period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories, and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to the perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than the response to the individual component stimuli (enhancement). This enhancement is proportionately greater when the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from the cortex, primarily the anterior ectosylvian sulcus (AES) but also the rostral lateral suprasylvian sulcus (rLS). If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of its individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models. The use of mathematical models and neural networks can place the mass of data that has been accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; this model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model was improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B.
Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas that are devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons in conditions of cortical activation and deactivation. The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional and with the cortex deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and it provides a biologically plausible hypothesis about the underlying circuitry.
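As an illustration of the building block used in PART 1, the following is a minimal sketch of a single Wilson-Cowan excitatory-inhibitory unit. The equations are the standard Wilson-Cowan form; the parameter values, external drives and Euler integration are illustrative assumptions, not the networks or the Gestalt-rule couplings developed in the thesis.

```python
# Minimal sketch of a single Wilson-Cowan excitatory (E) / inhibitory (I) unit,
# the kind of oscillator used as a node of the networks described above.
# Standard Wilson-Cowan equations; all parameter values are illustrative only.
import numpy as np

def sigmoid(x, slope=1.3, threshold=4.0):
    """Saturating activation function."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

def simulate(P=1.25, Q=0.0, duration=200.0, dt=0.05):
    """Euler integration of one E/I pair driven by external inputs P and Q."""
    c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0   # E->E, I->E, E->I, I->I couplings
    tau = 1.0
    E, I, trace = 0.1, 0.1, []
    for _ in range(int(duration / dt)):
        dE = (-E + sigmoid(c1 * E - c2 * I + P)) / tau
        dI = (-I + sigmoid(c3 * E - c4 * I + Q)) / tau
        E, I = E + dt * dE, I + dt * dI
        trace.append(E)
    return np.array(trace)

activity = simulate()
print("E activity: min %.3f, max %.3f" % (activity.min(), activity.max()))
```

Depending on the couplings and the external drive, such a unit settles to a fixed point or oscillates; in the thesis models, many units of this kind are coupled and synchronized in the γ-band to address the binding and segmentation problems.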

Relevance:

100.00%

Publisher:

Abstract:

Selective oxidation is one of the simplest functionalization methods, and essentially all monomers used in manufacturing artificial fibers and plastics are obtained by catalytic oxidation processes. Formally, oxidation is considered an increase in the oxidation number of the carbon atoms, so reactions such as dehydrogenation, ammoxidation, cyclization or chlorination are all oxidation reactions. In this field, most processes for the synthesis of important chemicals use vanadium oxide-based catalysts. These catalytic systems are used either in the form of multicomponent mixed oxides and oxysalts, e.g., in the oxidation of n-butane (V/P/O) and of benzene (supported V/Mo/O) to maleic anhydride, or in the form of supported metal oxides, e.g., in the manufacture of phthalic anhydride by o-xylene oxidation, of sulphuric acid by oxidation of SO2, in the reduction of NOx with ammonia and in the ammoxidation of alkyl aromatics. In addition, supported vanadia catalysts have also been investigated for the oxidative dehydrogenation of alkanes to olefins, the oxidation of pentane to maleic anhydride and the selective oxidation of methanol to formaldehyde or methyl formate [1]. During my PhD I focused my work on two gas-phase selective oxidation reactions. The work was done at the Department of Industrial Chemistry and Materials (University of Bologna) in collaboration with Polynt SpA. Polynt is a leading company in the development, production and marketing of catalysts for gas-phase oxidation. In particular, I studied the catalytic system for n-butane oxidation to maleic anhydride (fluid-bed technology) and that for o-xylene oxidation to phthalic anhydride. Both reactions are catalyzed by systems based on vanadium, but the catalysts are completely different. Part A is dedicated to the study of the V/P/O catalyst for n-butane selective oxidation, while Part B presents the results of an investigation of TiO2-supported V2O5, the catalyst for o-xylene oxidation. In Part A, a general introduction about the importance of maleic anhydride, its uses, the industrial processes and the catalytic system is reported. The reaction is the only industrial direct oxidation of a paraffin to a chemical intermediate. Maleic anhydride is produced by n-butane oxidation using either fixed-bed or fluid-bed technology; in both cases the catalyst is vanadyl pyrophosphate (VPP). Notwithstanding the good performance, the yield does not exceed 60%, and the system is continuously studied to improve activity and selectivity. The main open problem is the understanding of the real active phase working under reaction conditions. Several articles deal with the role of different crystalline and/or amorphous vanadium/phosphorus (VPO) compounds. In all cases, bulk VPP is assumed to constitute the core of the active phase, while two different hypotheses have been formulated concerning the catalytic surface. In one case, the development of surface amorphous layers that play a direct role in the reaction is described; in the other, specific planes of crystalline VPP are assumed to contribute to the reaction pattern, with the redox process occurring reversibly between VPP and VOPO4. Both hypotheses are also supported by in-situ characterization techniques, but the experiments were performed with different catalysts and probably under slightly different working conditions. Due to the complexity of the system, these differences could be the cause of the contradictions present in the literature.
Supposing that a key role could be played by the P/V ratio, I prepared, characterized and tested two samples with different P/V ratios. The transformations occurring on the catalytic surface under different conditions of temperature and gas-phase composition were studied by means of in-situ Raman spectroscopy, in order to investigate the changes that VPP undergoes during reaction. The goal is to understand which kind of compound constituting the catalyst surface is the most active and selective for the butane oxidation reaction, and also which features the catalyst should possess to ensure the development of this surface (e.g. catalyst composition). On the basis of the results of this study, it could be possible to design a new catalyst that is more active and selective than the present ones. The second topic investigated is in fact the possibility of reproducing the surface active layer of VPP on a support. In general, supporting the active phase is a way to improve the mechanical features of catalysts and to overcome problems such as the possible development of local hot-spot temperatures, which could cause a decrease in selectivity at high conversion, and the high cost of the catalyst. In the literature it is possible to find several works dealing with the development of supported catalysts, but in general the intrinsic characteristics of VPP are worsened by the chemical interaction between the active phase and the support. Moreover, all these works deal with supporting VPP itself; my work, by contrast, is an attempt to build up a V/P/O active layer on the surface of a zirconia support by thermal treatment of a precursor obtained by impregnation with a V5+ salt and H3PO4. In-situ Raman analysis during the thermal treatment, as well as reactivity tests, was used to investigate the parameters that may influence the generation of the active phase. Part B is devoted to the study of o-xylene oxidation to phthalic anhydride; industrially, the reaction is carried out in the gas phase using as catalyst a supported system formed by V2O5 on TiO2. The V/Ti/O system is quite complex; different vanadium species can be present on the titania surface, as a function of the vanadium content and of the titania surface area: (i) a V species chemically bound to the support via oxo bridges (isolated V in octahedral or tetrahedral coordination, depending on the degree of hydration), (ii) a polymeric species spread over the titania, and (iii) bulk vanadium oxide, either amorphous or crystalline. The different species could have different catalytic properties; therefore, changing the relative amount of each V species can be a way to optimize the catalytic performance of the system. For this reason, samples containing increasing amounts of vanadium were prepared and tested in the oxidation of o-xylene, with the aim of finding a correlation between the V/Ti/O catalytic activity and the amount of the different vanadium species. The second part deals with the role of a gas-phase promoter. The catalytic surface can change under working conditions; the high temperature and a different gas-phase composition could also affect the formation of the different V species. Furthermore, in industrial practice, vanadium oxide-based catalysts need the addition of gas-phase promoters to the feed stream which, although they have no direct role in the reaction stoichiometry, lead to a considerable improvement of the catalytic performance.
The starting point of my investigation is the possibility that steam, a component always present in the oxidation reaction environment, could cause changes in the nature of the catalytic surface under reaction conditions. For this reason, the dynamic phenomena occurring at the surface of a 7 wt% V2O5-on-TiO2 catalyst in the presence of steam were investigated by means of Raman spectroscopy. Moreover, a correlation between the amount of the different vanadium species and the catalytic performance was sought. Finally, the role of dopants was studied. The industrial V/Ti/O system contains several dopants; the nature and the relative amount of the promoters may vary depending on the catalyst supplier and on the technology employed for the process, either a single-bed or a multi-layer catalytic fixed bed. Promoters have a quite remarkable effect on both activity and selectivity to phthalic anhydride. Their role is crucial, and the proper control of the relative amount of each component is fundamental for the process performance. Furthermore, it cannot be excluded that the same promoter may play different roles depending on the reaction conditions (temperature, gas-phase composition, etc.). The reaction network of phthalic anhydride formation is very complex and includes several parallel and consecutive reactions; for this reason a proper understanding of the role of each dopant cannot be separated from the analysis of the reaction scheme. One of the most important promoters at the industrial level, which is always present in the catalytic formulations, is Cs. It is known that Cs plays an important role in the selectivity to phthalic anhydride, but the reasons for this phenomenon are not really clear. Therefore, the effect of Cs on the reaction scheme was investigated at two different temperatures, with the aim of identifying in which step of the reaction network this promoter plays its role.
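For reference, the overall stoichiometries of the two target oxidations discussed above are standard textbook chemistry (not taken from the thesis):

```latex
% Overall stoichiometry of the two selective oxidations discussed above.
\[
\mathrm{C_4H_{10} + \tfrac{7}{2}\,O_2 \;\xrightarrow{\;V/P/O\;}\; C_4H_2O_3 + 4\,H_2O}
\qquad\text{(n-butane to maleic anhydride)}
\]
\[
\mathrm{C_8H_{10} + 3\,O_2 \;\xrightarrow{\;V_2O_5/TiO_2\;}\; C_8H_4O_3 + 3\,H_2O}
\qquad\text{(o-xylene to phthalic anhydride)}
\]
```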

Relevance:

100.00%

Publisher:

Abstract:

The goal of this thesis work is to develop a computational method based on machine learning techniques for predicting the disulfide-bonding states of cysteine residues in proteins, a sub-problem of the bigger and still unsolved problem of protein structure prediction. Improvement in the prediction of the disulfide-bonding states of cysteine residues will help to constrain the three-dimensional (3D) structure of the respective protein, and thus will eventually help in the prediction of the 3D structure of proteins. The results of this work have direct implications for site-directed mutational studies of proteins, protein engineering and the problem of protein folding. We have used a combination of an Artificial Neural Network (ANN) and a Hidden Markov Model (HMM), the so-called Hidden Neural Network (HNN), as the machine learning technique to develop our prediction method. By using different global and local features of proteins (specifically profiles, parity of cysteine residues, average cysteine conservation, correlated mutation, sub-cellular localization and signal peptide) as inputs, and considering eukaryotes and prokaryotes separately, we reached a remarkable accuracy of 94% on a per-cysteine basis for both the eukaryotic and the prokaryotic datasets, and accuracies of 90% and 93% on a per-protein basis for the eukaryotic and the prokaryotic datasets respectively. These are the best accuracies reached so far by any existing prediction method; thus our prediction method outperforms all previously developed approaches and is therefore more reliable. The most interesting part of this thesis work is the difference in prediction performance between eukaryotes and prokaryotes at the basic level of input coding, when ‘profile’ information was given as input to our prediction method. One of the reasons we discovered for this is the difference in the amino acid composition of the local environment of bonded and free cysteine residues in eukaryotes and prokaryotes: eukaryotic bonded cysteine examples have a ‘symmetric-cysteine-rich’ environment, whereas prokaryotic bonded examples lack it.
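The sketch below illustrates only the input-encoding idea: a profile window centred on each cysteine is flattened into a feature vector and fed to a classifier that predicts the bonded/free state. A plain scikit-learn MLP stands in for the Hidden Neural Network actually used in the thesis, and the window size, profile values and labels are placeholders.

```python
# Sketch of the input encoding: a sequence-profile window centred on each cysteine,
# fed to a classifier predicting bonded (1) vs free (0). The MLP stands in for the
# HNN used in the thesis; profile values and labels below are random placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

WIN = 13          # residues in the window centred on the cysteine (assumption)
N_AA = 20         # 20-dimensional profile column per residue

def window_features(profile, cys_positions, win=WIN):
    """Extract a flattened (win x 20) profile window around each cysteine."""
    half, feats = win // 2, []
    padded = np.vstack([np.zeros((half, N_AA)), profile, np.zeros((half, N_AA))])
    for pos in cys_positions:
        feats.append(padded[pos:pos + win].ravel())
    return np.array(feats)

rng = np.random.default_rng(0)
profile = rng.random((300, N_AA))            # placeholder sequence profile
cys_pos = [12, 57, 120, 200]                 # placeholder cysteine positions
X = window_features(profile, cys_pos)
y = np.array([1, 1, 0, 0])                   # placeholder bonding states

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict(X))
```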

Relevance:

100.00%

Publisher:

Abstract:

The "sustainability" concept relates to the prolonging of human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship and practices that conserve resources in a manner that allow growth and development to be sustained for the long-term without degrading the environment are indispensable in a developed society. Past, current and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe as well as in the U.S. are large. The asphalt industry is still developing technological improvements that will reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures compared to hot mix asphalt (HMA), while aiming to maintain the desired post construction properties of traditional HMA. Lowering the production temperature reduce the fuel usage and the production of emissions therefore and that improve conditions for workers and supports the sustainable development. Even the crumb-rubber modifier (CRM), with shredded automobile tires and used in the United States since the mid 1980s, has proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is not only relevant in an environmental aspect but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project is aimed to demonstrate the dual value of these Asphalt Mixes in regards to the environmental and mechanical performance and to suggest a low environmental impact design procedure. In fact, the use of eco-friendly materials is the first phase towards an eco-compatible design but it cannot be the only step. The eco-compatible approach should be extended also to the design method and material characterization because only with these phases is it possible to exploit the maximum potential properties of the used materials. Appropriate asphalt concrete characterization is essential and vital for realistic performance prediction of asphalt concrete pavements. Volumetric (Mix design) and mechanical (Permanent deformation and Fatigue performance) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to correctly use the material. A design method such as a Mechanistic-Empirical approach, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under the different traffic and environmental conditions, was the application of choice. In particular this study focus on the CalME and its Incremental-Recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain related to the surface cracking and to the rutting respectively. It works in increments of time and, using the output from one increment, recursively, as input to the next increment, predicts the pavement conditions in terms of layer moduli, fatigue cracking, rutting and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between surface layer and pavement structure in terms of fatigue and permanent deformation with defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as surface layer of 60 mm thickness. 
The performance of the pavement was compared to the performance of the same pavement structure with different kinds of asphalt concrete used as the surface layer. In comparison to a conventional asphalt concrete, three eco-friendly materials, two warm mix asphalts and a rubberized asphalt concrete, were analyzed. The first two chapters summarize the steps necessary for the sustainable pavement design procedure. In Chapter I the problem of eco-compatible asphalt pavement design is introduced. The low-environmental-impact materials, warm mix asphalt and rubberized asphalt concrete, are described in detail. In addition, the value of a rational asphalt pavement design method is discussed. Chapter II underlines the importance of a thorough laboratory characterization based on appropriate material selection and performance evaluation. In Chapter III, CalME is introduced through an explanation of the different design approaches it provides, with a specific focus on the I-R procedure. In Chapter IV, the experimental program is presented, with an explanation of the laboratory test devices adopted. The fatigue and rutting performance of the study mixes is presented in Chapters V and VI respectively. From these laboratory test data, the CalME I-R model parameters for the master curve, fatigue damage and permanent shear strain were evaluated. Lastly, in Chapter VII, the results of the simulations of asphalt pavement structures with different surface layers are reported. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage and the rutting depth in each bound layer were analyzed.
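The following is a schematic sketch of the incremental-recursive idea only: the damaged modulus computed in one increment is fed back as input to the next. The damage law, the rate constant and all numbers are generic placeholders and are not the CalME models or parameters.

```python
# Schematic incremental-recursive (I-R) loop: the damaged modulus from one time
# increment is used, recursively, as input to the next increment.
# Damage law and numbers are generic placeholders, not the CalME models.
E0 = 6000.0               # intact asphalt layer modulus [MPa] (placeholder)
k = 2.5e-7                # fictitious damage rate per load repetition
increments = 20           # analysis increments (e.g. months)
loads_per_increment = 50000

damage, E = 0.0, E0
for step in range(1, increments + 1):
    strain_factor = E0 / E                  # response grows as the layer softens
    damage = min(damage + k * loads_per_increment * strain_factor, 1.0)
    E = E0 * (1.0 - 0.8 * damage)           # damaged modulus used in the next increment
    print(f"increment {step:2d}: damage = {damage:.3f}, modulus = {E:7.1f} MPa")
```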

Relevance:

100.00%

Publisher:

Abstract:

In this study new tomographic models of Colombia were calculated. I used the seismicity recorded by the Colombian seismic network during the period 2006-2009. In this time period, the improvement of the seismic network yields more stable hypocentral results with respect to older datasets and allows new 3D Vp and Vp/Vs models to be computed. The final dataset consists of 10813 P- and 8614 S-arrival times associated with 1405 earthquakes. Tests with synthetic data and resolution analysis indicate that the velocity models are well constrained in central, western and southwestern Colombia to a depth of 160 km; the resolution is poor in northern Colombia and close to Venezuela, due to a lack of seismic stations and seismicity. The tomographic models and the relocated seismicity indicate the existence of E-SE subducting Nazca lithosphere beneath central and southern Colombia. The north-south changes in the Wadati-Benioff zone, in the Vp and Vp/Vs patterns and in the volcanism show that the downgoing plate is segmented by E-W-directed slab tears, suggesting the presence of three sectors. Earthquakes in the northernmost sector represent most of the Colombian seismicity and are concentrated in the 100-170 km depth interval, beneath the Eastern Cordillera. Here a massive dehydration is inferred, resulting from a delay in the eclogitization of a thickened oceanic crust in a flat-subduction geometry. In this sector a cluster of intermediate-depth seismicity (the Bucaramanga Nest) is present beneath the elbow of the Eastern Cordillera, interpreted as the result of a massive and highly localized dehydration phenomenon caused by a hyper-hydrous oceanic crust. The central and southern sectors, although different in Vp pattern, conversely show a continuous, steep and more homogeneous Wadati-Benioff zone with overlying volcanic areas. Here an oceanic crust of normal thickness is inferred, allowing gradual and continuous metamorphic reactions to take place with depth and enabling fluid migration towards the mantle wedge.
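As a side note, a routine back-of-the-envelope companion to such a dataset is the Wadati diagram, in which tS − tP is linear in tP with slope Vp/Vs − 1, so a line fit over paired arrivals gives an average Vp/Vs. The sketch below uses synthetic times and is not the 3D tomographic inversion performed in the study.

```python
# Back-of-the-envelope Wadati diagram: for one event, (tS - tP) grows linearly
# with tP, with slope (Vp/Vs - 1). Synthetic arrival times are used here;
# this is NOT the 3D tomographic inversion described above.
import numpy as np

rng = np.random.default_rng(1)
t0 = 10.0                                   # event origin time [s]
tp = t0 + rng.uniform(5, 40, 30)            # synthetic P arrivals at 30 stations
true_vpvs = 1.78
ts = t0 + true_vpvs * (tp - t0) + rng.normal(0, 0.1, tp.size)  # S arrivals + noise

slope, intercept = np.polyfit(tp, ts - tp, 1)
print(f"estimated average Vp/Vs = {1.0 + slope:.2f}")
```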

Relevance:

100.00%

Publisher:

Abstract:

In order to handle natural disasters, emergency areas are often identified across the territory, close to populated centres. In these areas, rescue services are located, which respond with resources and materials for the relief of the population. A method for the automatic positioning of these centres in case of a flood or an earthquake is presented. The positioning procedure consists of two distinct parts, developed by the research group of Prof Michael G. H. Bell of Imperial College, London, and refined and applied to real cases at the University of Bologna under the coordination of Prof Ezio Todini. There are certain requirements that need to be observed, such as the maximum number of rescue points and the number of people involved. Initially, the candidate points are chosen from among the ones proposed by the local civil protection services. We then calculate all possible routes from each candidate rescue point to all other points, generally using the concept of the "hyperpath", namely a set of paths, each of which may be optimal. The attributes of the road network are of fundamental importance, both for the calculation of the ideal distance and for possible delays due to the event, measured in travel-time units. In a second phase, the distances are used to decide the optimum rescue point positions using heuristics. This second part works by "elimination": in the beginning, all candidate points are considered rescue centres; during every iteration we delete one point and calculate the impact its removal creates. In each case, we delete the point that creates the least impact, until we reach the number of rescue centres we wish to keep.
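A minimal sketch of the elimination heuristic described above is given below; the travel times are placeholders standing in for the hyperpath-based distances, and the helper names are hypothetical.

```python
# Elimination heuristic: start with every candidate as a rescue centre, repeatedly
# drop the candidate whose removal increases total travel time the least, until
# the desired number of centres remains. Travel times below are placeholders.
import itertools

def total_cost(centres, demand_points, travel_time):
    """Sum over demand points of the travel time to the nearest open centre."""
    return sum(min(travel_time[(c, d)] for c in centres) for d in demand_points)

def eliminate(candidates, demand_points, travel_time, n_keep):
    open_centres = set(candidates)
    while len(open_centres) > n_keep:
        # remove the centre whose deletion causes the smallest cost increase
        least_impact = min(open_centres,
                           key=lambda c: total_cost(open_centres - {c},
                                                    demand_points, travel_time))
        open_centres.remove(least_impact)
    return open_centres

# Tiny hypothetical example: 4 candidate centres, 3 demand points.
candidates, demand = ["A", "B", "C", "D"], ["p1", "p2", "p3"]
travel_time = {pair: t for pair, t in zip(
    itertools.product(candidates, demand),
    [5, 9, 12, 4, 7, 11, 10, 3, 6, 8, 8, 2])}
print(eliminate(candidates, demand, travel_time, n_keep=2))
```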

Relevance:

100.00%

Publisher:

Abstract:

The instability of river banks can result in considerable human and land losses. The Po river, the most important in Italy, is characterized by main banks of significant and constantly increasing height. This study presents multilayer perceptron artificial neural network (ANN) models for the stability analysis of river banks along the Po river, under various river and groundwater boundary conditions. To this aim, a number of networks of threshold logic units are tested using different combinations of the input parameters. The factor of safety (FS), as an index of slope stability, is formulated in terms of several influencing geometrical and geotechnical parameters. In order to obtain a comprehensive geotechnical database, several cone penetration tests from the study site have been interpreted. The proposed models are developed from stability analyses performed with a finite element code on different representative sections of the river embankments. For validation, the ANN models are employed to predict the FS values for a part of the database beyond the calibration data domain. The results indicate that the proposed ANN models are effective tools for evaluating slope stability, and that they notably outperform the derived multiple linear regression models.
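A minimal sketch of the reported comparison, an MLP versus a multiple linear regression predicting the factor of safety from a few bank and groundwater inputs, is shown below. The scikit-learn models, the synthetic FS relation and the chosen input parameters are assumptions for illustration only, not the study's networks or database.

```python
# Sketch of the MLP-vs-multiple-linear-regression comparison for predicting the
# factor of safety (FS). Synthetic data and feature choices are assumptions only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
height = rng.uniform(5, 20, n)        # bank height [m]
slope = rng.uniform(20, 45, n)        # bank slope [deg]
phi = rng.uniform(25, 38, n)          # friction angle [deg]
river_level = rng.uniform(0, 10, n)   # river stage [m]

# Fictitious non-linear FS relation, used only to generate training data.
fs = (1.0 + 0.04 * phi - 0.02 * slope + 0.03 * river_level - 0.01 * height
      + 0.002 * phi * river_level / (1.0 + height / 10.0)
      + rng.normal(0, 0.05, n))

X = np.column_stack([height, slope, phi, river_level])
X_tr, X_te, y_tr, y_te = train_test_split(X, fs, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(15, 15), max_iter=5000,
                                 random_state=0)).fit(X_tr, y_tr)
print("MLR R^2:", round(mlr.score(X_te, y_te), 3),
      " ANN R^2:", round(ann.score(X_te, y_te), 3))
```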

Relevance:

100.00%

Publisher:

Abstract:

The last half-century has seen continuing population and consumption growth, increasing the competition for land, water and energy. The solution can be found in the new sustainability theories, such as industrial symbiosis and the zero-waste objective. Reducing, reusing and recycling are challenges that the whole world has to consider. This is especially important for organic waste, whose reuse gives interesting results in terms of energy release. Before reuse, organic waste needs a deeper characterization. The non-destructive and non-invasive features of both Nuclear Magnetic Resonance (NMR) relaxometry and imaging (MRI) make them optimal candidates for such a characterization. In this research, NMR techniques proved to be innovative technologies for this purpose, but substantial work on the hardware and software of the NMR LAGIRN laboratory was done first, creating new experimental procedures to analyse organic waste samples. The first results concern soil-organic matter interactions. The properties of remediated soils were described as a function of the organic carbon content, showing the importance of limiting the addition of further organic matter so as not to inhibit soil processes such as nutrient transport. Moreover, the NMR relaxation times and signal amplitude of a compost sample, followed over time, showed that the degradation of organic matter in compost is a complex process involving a number of degradation kinetics, depending on the mix of waste. Local degradation processes were studied with an enhanced quantitative relaxation technique that combines NMR and MRI. The development of this research finally led to the study of waste before it becomes waste: since a lot of food is lost when it is still edible, new NMR experiments studied the efficiency of conservation and valorisation processes such as apple dehydration, meat preservation and bio-oil production. All these results prove the readiness of NMR for quality control on a wide range of organic residues and waste.
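As an example of a routine relaxometry step of the kind underlying these analyses, the sketch below fits a CPMG-like transverse relaxation decay with two exponential components; tracking the resulting T2 values and weights over time is one way to follow degradation. The synthetic decay, amplitudes and T2 values are placeholders, not the LAGIRN data or protocols.

```python
# Fit a CPMG-like transverse relaxation decay with two exponential components.
# Synthetic decay below; amplitudes and T2 values are placeholders only.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, t2_1, a2, t2_2):
    return a1 * np.exp(-t / t2_1) + a2 * np.exp(-t / t2_2)

t = np.linspace(0.001, 0.5, 200)                        # echo times [s]
rng = np.random.default_rng(0)
signal = biexp(t, 0.7, 0.015, 0.3, 0.120) + rng.normal(0, 0.005, t.size)

popt, _ = curve_fit(biexp, t, signal, p0=[0.5, 0.01, 0.5, 0.1])
a1, t2_1, a2, t2_2 = popt
print(f"fast pool: A={a1:.2f}, T2={1e3*t2_1:.1f} ms;  "
      f"slow pool: A={a2:.2f}, T2={1e3*t2_2:.1f} ms")
```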

Relevance:

100.00%

Publisher:

Abstract:

This thesis is focused on Smart Grid applications in medium voltage distribution networks. For the development of new applications, the availability of simulation tools able to model the dynamic behavior of both the power system and the communication network appears useful. Such a co-simulation environment would allow the assessment of the feasibility of using a given network technology to support communication-based Smart Grid control schemes on an existing segment of the electrical grid, and the determination of the range of control schemes that different communication technologies can support. For this reason, a co-simulation platform is presented that has been built by linking the Electromagnetic Transients Program simulator (EMTP v3.0) with a telecommunication network simulator (OPNET-Riverbed v18.0). The simulator is used to design and analyze the coordinated use of Distributed Energy Resources (DERs) for voltage/var control (VVC) in distribution networks. The thesis focuses on a control structure based on the use of phasor measurement units (PMUs). In order to limit the reinforcements required for the communication infrastructures currently adopted by Distribution Network Operators (DNOs), the study focuses on leader-less multi-agent system (MAS) schemes that do not assign special coordinating rules to specific agents. Leader-less MAS are expected to produce more uniform communication traffic than centralized approaches that include a moderator agent. Moreover, leader-less MAS are expected to be less affected by the limitations and constraints of some communication links. The developed co-simulator has allowed the definition of specific countermeasures against the limitations of the communication network, with particular reference to latency and loss of information, for both wired and wireless communication networks. Moreover, the co-simulation platform has also been coupled with a mobility simulator in order to study specific countermeasures against the negative effects caused on the medium voltage distribution network by the concurrent connection of electric vehicles.
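As an illustration of the leader-less idea, the sketch below runs a simple average-consensus update in which each DER agent repeatedly mixes a local quantity with its communication neighbours, so that no moderator agent is required. The topology, values and step size are illustrative assumptions, not the MAS scheme implemented in the thesis.

```python
# Leader-less average consensus among DER agents: each agent repeatedly averages a
# shared quantity (e.g. a reactive-power utilisation ratio) with its neighbours,
# so no moderator agent is needed. Topology, values and step size are illustrative.
import numpy as np

# undirected communication graph over 5 DER agents (adjacency list)
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
x = np.array([0.9, 0.2, 0.5, 0.7, 0.1])   # initial local utilisation ratios
eps = 0.3                                  # consensus step size (< 1/max degree)

for _ in range(50):
    x_new = x.copy()
    for i, neigh in neighbours.items():
        x_new[i] = x[i] + eps * sum(x[j] - x[i] for j in neigh)
    x = x_new

print("agents converge to a common value:", np.round(x, 3))
```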

Relevance:

100.00%

Publisher:

Abstract:

This thesis regards the study and development of new cognitive assessment and rehabilitation techniques for subjects with traumatic brain injury (TBI). In particular, this thesis i) provides an overview of the state of the art of these new assessment and rehabilitation technologies, ii) suggests new methods for assessment and rehabilitation, and iii) contributes to the explanation of the neurophysiological mechanisms involved in a rehabilitation treatment. Some chapters provide useful information to contextualize TBI and its outcome, and describe the methods used for its assessment and rehabilitation. The other chapters illustrate a series of experimental studies conducted in healthy subjects and TBI patients that suggest new approaches to assessment and rehabilitation. The newly proposed approaches have in common the use of electroencephalography (EEG). EEG was used in all the experimental studies, each time with a different purpose: as a diagnostic tool, as a signal to command a BCI system, as an outcome measure to evaluate the effects of a treatment, and so on. The main results achieved concern: i) the study and development of a system for communication with patients with disorders of consciousness, for which it was possible to identify a paradigm of reliable activation during two imagery tasks using the EEG signal alone or the EEG and NIRS signals together; ii) the study of the effects of a neuromodulation technique (tDCS) on the EEG pattern, a topic of great importance and interest. The findings that emerged showed that tDCS can manipulate cortical network activity and that, through the search for optimal stimulation parameters, it is possible to move the working point of a neural network and bring it into a condition of maximum learning. In this way it could be possible to improve the performance of a BCI system or the efficacy of a rehabilitation treatment such as neurofeedback.
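As an illustration of the kind of EEG processing involved in commanding a BCI, the sketch below band-pass filters synthetic single-channel trials in the mu/beta range, extracts log band power and classifies rest versus imagery with LDA. The pipeline, filter band and synthetic signals are assumptions for this example, not the paradigms developed in the thesis.

```python
# Basic EEG motor-imagery pipeline: band-pass filter one channel in the mu/beta
# range, take log band power per trial, classify with LDA.
# Synthetic single-channel trials stand in for real EEG recordings.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 250                                                   # sampling rate [Hz]
b, a = butter(4, [8 / (fs / 2), 30 / (fs / 2)], btype="band")   # 8-30 Hz band

def band_power(trial):
    filtered = filtfilt(b, a, trial)
    return np.log(np.var(filtered))

rng = np.random.default_rng(0)
t = np.arange(fs * 2) / fs                                 # 2-second trials

def make_trial(mu_amp):
    # stronger mu rhythm = rest; weaker = imagery (event-related desynchronization)
    return mu_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

X = np.array([[band_power(make_trial(amp))] for amp in [2.0] * 20 + [0.5] * 20])
y = np.array([0] * 20 + [1] * 20)                          # 0 = rest, 1 = imagery

clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```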