21 results for Potentialities
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The physically based, distributed, event-scale rainfall-runoff and erosion model Kineros2 was applied to two mountain catchments in the province of Bologna (Italy) in order to test and evaluate its performance in the Apennine environment. After the parameterization of the two catchments, Kineros2 was calibrated and validated using experimental discharge and suspended-sediment concentration data, collected at the catchment outlets thanks to two hydro-turbidimetric monitoring stations. The modelling made it possible to assess the model's ability to correctly reproduce the observed hydrological dynamics, as well as to draw conclusions on its potentialities and limitations.
Abstract:
Technology scaling increasingly emphasizes the complexity and non-ideality of the electrical behavior of semiconductor devices and boosts interest in alternatives to the conventional planar MOSFET architecture. TCAD simulation tools are fundamental to the analysis and development of new technology generations. However, the increasing device complexity is reflected in an increased dimensionality of the problems to be solved. The trade-off between accuracy and computational cost of the simulation is especially influenced by domain discretization: mesh generation is therefore one of the most critical steps, and automatic approaches are sought. Moreover, the problem size is further increased by process variations, calling for a statistical representation of the single device through an ensemble of microscopically different instances. The aim of this thesis is to present multi-disciplinary approaches to handle this increasing problem dimensionality from a numerical simulation perspective. The topic of mesh generation is tackled by presenting a new Wavelet-based Adaptive Method (WAM) for the automatic refinement of 2D and 3D domain discretizations. Multiresolution techniques and efficient signal processing algorithms are exploited to increase grid resolution in the domain regions where relevant physical phenomena take place. Moreover, the grid is dynamically adapted to follow solution changes produced by bias variations, and quality criteria are imposed on the produced meshes. The further dimensionality increase due to variability in extremely scaled devices is considered with reference to two increasingly critical phenomena, namely line-edge roughness (LER) and random dopant fluctuations (RD).
The impact of such phenomena on FinFET devices, which represent a promising alternative to planar CMOS technology, is estimated through 2D and 3D TCAD simulations and statistical tools, taking into account the matching performance of single devices as well as of basic circuit blocks such as SRAMs. Several process options are compared, including resist- and spacer-defined fin patterning as well as different doping profile definitions. Combining statistical simulations with experimental data, the potentialities and shortcomings of the FinFET architecture are analyzed and useful design guidelines are provided, boosting the feasibility of this technology for mainstream applications in sub-45 nm generation integrated circuits.
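The wavelet-driven refinement idea mentioned above can be illustrated with a minimal sketch: a single-level Haar transform flags the grid cells whose detail coefficients exceed a threshold, i.e. the regions where the solution varies fastest. This is a toy 1D illustration of the general principle, not the thesis' WAM implementation; the profile, threshold, and function names are invented for illustration.

```python
# Toy sketch of wavelet-based refinement: flag cells with large
# Haar detail coefficients. Names and data are illustrative.
import math

def haar_details(samples):
    """One level of the Haar wavelet transform: detail coefficients."""
    return [(samples[2 * i] - samples[2 * i + 1]) / 2.0
            for i in range(len(samples) // 2)]

def cells_to_refine(samples, threshold):
    """Indices of coarse cells whose detail magnitude exceeds threshold."""
    return [i for i, d in enumerate(haar_details(samples))
            if abs(d) > threshold]

# Illustrative potential profile: smooth on the left and right,
# with a steep junction-like transition around x = 0.5.
n = 32
profile = [math.tanh((i / (n - 1) - 0.5) * 40.0) for i in range(n)]
marked = cells_to_refine(profile, threshold=0.05)
# Refinement clusters around the steep transition only.
```

In a real 2D/3D TCAD mesh the same criterion would be applied per dimension and per resolution level, with the flagged regions subdivided and the rest left coarse.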
Abstract:
Aging is a physiological process characterized by a progressive decline of the “cellular homeostatic reserve”, referred to as the capability to respond suitably to exogenous and endogenous stressful stimuli. Due to their high energetic demands and post-mitotic nature, neurons are peculiarly susceptible to this phenomenon. However, the aged brain maintains a certain level of adaptive capacity and, if properly stimulated, may support a considerable functional recovery. The aim of the present research was to verify the plastic potentialities of the aging brain of rats subjected to two kinds of exogenous stimuli: A) the replacement of the standard diet with a ketogenic regimen (the change forces the brain to use ketone bodies (KB) as an alternative to glucose to satisfy its energetic needs) and B) a behavioural task able to induce the formation of inhibitory avoidance memory. A) Fifteen male Wistar rats of 19 months of age were divided into three groups (average body weight pair-matched) and fed for 8 weeks with different dietary regimens: i) a diet containing 10% medium-chain triglycerides (MCT); ii) a diet containing 20% MCT; iii) standard commercial chow. Five young (5 months of age) and five old (26-27 months of age) animals fed the standard diet were used as further controls. The following morphological parameters reflecting synaptic plasticity were evaluated in the stratum moleculare of the hippocampal CA1 region (SM CA1), in the outer molecular layer of the hippocampal dentate gyrus (OML DG), and in the granule cell layer of the cerebellar cortex (GCL-CCx): average area (S), numeric density (Nvs), and surface density (Sv) of synapses, and average volume (V), numeric density (Nvm), and volume density (Vv) of synaptic mitochondria. Moreover, succinic dehydrogenase (SDH) activity was cytochemically determined in Purkinje cells (PC), and V, Nvm, Vv, and the cytochemical precipitate area/mitochondrial area ratio (R) of SDH-positive mitochondria were evaluated.
In SM CA1, MCT-KDs induced the early appearance of the morphological patterns typical of old animals: higher S and V, and lower Nvs and Nvm. On the contrary, in OML DG, Sv and Vv of MCT-KD-fed rats were higher (as a result of higher Nvs and Nvm) vs. controls; these modifications are known to improve synaptic function and metabolic supply. The opposite effects of the MCT-KDs might reflect the different susceptibility of these brain regions to the aging processes: OML DG is less vulnerable than SM CA1, and the reactivation of ketone body uptake and catabolism might occur more efficiently in this region, allowing the exploitation of their peculiar metabolic properties. In GCL-CCx, the results described a new scenario in comparison to that found in the hippocampal formation: the 10% MCT-KD induced the early appearance of senescent patterns (decreased Nvs and Nvm; increased V), whereas the 20% MCT-KD caused no changes. Since GCL-CCx is more vulnerable to age than DG, and less so than CA1, these data further support the hypothesis that MCT-KD effects in the aging brain critically depend on neuronal vulnerability to age, besides the MCT percentage. Regarding PC, it was decided to evaluate only the metabolic effect of the dietary regimen (20% MCT-KD) characterized by fewer side effects. The KD counteracted the age-related decrease in the numeric density of SDH-positive mitochondria, and enhanced their energetic efficiency (R was significantly higher in MCT-KD-fed rats vs. all the controls). Since it is well known that Purkinje and dentate gyrus cells are less vulnerable to aging than CA1 neurons, these results corroborate our previous hypothesis. In conclusion, experimental line A) provides the first evidence that morphological and functional parameters reflecting synaptic plasticity and mitochondrial metabolic competence may be modulated by MCT-KDs in the pre-senescent central nervous system, and that the effects may be heterogeneous in different brain regions.
MCT-KDs seem to supply high-energy metabolic intermediates and to be beneficial (“anti-aging”) for those neurons that maintain the capability to exploit them. This implies risks but also promising potentialities for the therapeutic use of these diets during aging. B) Morphological parameters of synapses and synaptic mitochondria in SM CA1 were investigated in old (26-27 month-old) female Wistar rats following a single-trial inhibitory avoidance task. In this memory protocol animals learn to avoid a dark compartment in which they received a mild, inescapable foot-shock. Rats were tested 3 and 6 or 9 hours after the training, divided into good and bad responders according to their performance (retention times above or below 100 s, respectively) and immediately sacrificed. Nvs, S, Sv, Nvm, V, and Vv were evaluated. In the good responder group, the numeric density of synapses and mitochondria was significantly higher and the average mitochondrial volume was significantly smaller 9 hours vs. 6 hours after the training. No significant differences were observed among bad responders. Thus, better performance in the passive avoidance memory task is correlated with more efficient plastic remodeling of synaptic contacts and mitochondria in hippocampal CA1. These findings indicate that the maintenance of synaptic plastic reactivity during aging is a critical requirement for preserving long-term memory consolidation.
Abstract:
The qualification of dimension stone through the use of non-destructive tests (NDT) is a relevant research topic for the industrial characterisation of finished products, because the competition of low-cost products can be countered by an offer of highly qualified, fully guaranteed products. The synthesis of the potentialities offered by NDT is a qualification and guarantee similar to the well-known agro-industrial PDO, Protected Denomination of Origin. In fact it is possible to guarantee both the origin and the quality of each stone product element, even through an on-line Factory Production Control. A specific technical specification is needed. A research project developed at DICMA-Univ. Bologna in the frame of the “OSMATER” INTERREG project made it possible to identify good correlations between destructive and non-destructive tests for some types of materials from the Verbano-Cusio-Ossola region. For example, non-conventional ultrasonic tests, image analysis parameters, water absorption and other measurements proved to be well correlated with the bending resistance, through relationships varying for each product. In conclusion, it has been demonstrated that a non-destructive approach makes it possible to reach several goals, among the most important: 1) the identification of materials; 2) the selection of products; 3) the substitution of DT by NDT. It is now necessary to move from the research phase to industrial implementation, as well as to develop new NDT technologies focused on specific aims.
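The kind of DT/NDT correlation described above (one calibration relationship per product) can be sketched as an ordinary least-squares fit of destructive bending strength against a non-destructive reading such as ultrasonic pulse velocity. The numbers below are invented for illustration, not data from the OSMATER project.

```python
# Illustrative only: fit a per-product calibration line relating a
# non-destructive measurement (ultrasonic pulse velocity, km/s) to
# destructive bending strength (MPa). All values are invented.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

velocity = [4.0, 4.5, 5.0, 5.5, 6.0]       # km/s (invented)
strength = [10.0, 12.1, 13.9, 16.0, 18.1]  # MPa  (invented)
a, b = fit_line(velocity, strength)
predicted = a * 5.2 + b  # estimate strength from a new NDT reading
```

Once such a relationship is validated for a given product, the non-destructive reading can replace the destructive test in routine production control.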
Abstract:
The ongoing innovation of the microwave transistor technologies used in the implementation of microwave circuits has to be supported by the study and development of proper design methodologies which, depending on the application, fully exploit the technology's potentialities. After the choice of the technology to be used for the particular application, the circuit designer has few degrees of freedom when carrying out his design; in most cases, due to technological constraints, all the foundries develop and provide customized processes optimized for a specific performance such as power, low noise, linearity, broadband operation, etc. For these reasons circuit design is always a “compromise”, a search for the best solution to reach a trade-off between the desired performances. This approach becomes crucial in the design of microwave systems for satellite applications; the tight space constraints impose reaching the best performance under properly de-rated electrical and thermal conditions, with respect to the maximum ratings of the adopted technology, in order to ensure adequate levels of reliability. In particular, this work is about one of the most critical components in the front-end of a satellite antenna, the High Power Amplifier (HPA). The HPA is the main source of power dissipation and thus the element which weighs most heavily on the space, weight and cost of telecommunication apparatus; it is clear from the above that design strategies addressing the optimization of power density, efficiency and reliability are of major concern. Many journal papers and publications present different methods for the design of power amplifiers, highlighting the possibility of obtaining very good levels of output power, efficiency and gain.
Starting from existing knowledge, the target of the research activities summarized in this dissertation was to develop a design methodology capable of optimizing power amplifier performance while complying with all the constraints imposed by space applications, taking the thermal behaviour into account in the same manner as power and efficiency. After a review of the existing theories of power amplifier design, the first section of this work describes the effectiveness of a methodology based on the accurate control and shaping of the dynamic load line, explaining all the steps in the design of two different kinds of high power amplifiers. Considering the trade-off between the main performances and reliability issues as the target of the design activity, we demonstrate that the expected results can be obtained by working on the characteristics of the load line at the intrinsic terminals of the selected active device. The methodology proposed in this first part is based on the assumption that the designer has an accurate electrical model of the device available; the variety of publications on this subject shows how difficult it is to build a CAD model capable of taking into account all the non-ideal phenomena which occur when the amplifier operates at such high frequency and power levels. For this reason, especially for the emerging Gallium Nitride (GaN) technology, the second section describes a new approach to power amplifier design based on the experimental characterization of the intrinsic load line by means of a low-frequency, high-power measurement bench. Thanks to the possibility of developing my Ph.D.
in an academic spin-off, MEC – Microwave Electronics for Communications, the results of this activity have been applied to important research programmes requested by space agencies, with the aim of supporting technological transfer from universities to the industrial world and of promoting science-based entrepreneurship. For these reasons the proposed design methodology is explained on the basis of many experimental results.
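As background to the load-line reasoning above, the textbook class-A estimate shows how the load line at the device terminals fixes both the optimum load resistance and the attainable output power, and how a reliability derating directly reduces the power budget. This is the standard ideal class-A relation, not the thesis' intrinsic load-line methodology, and the device ratings and derating factor below are illustrative assumptions.

```python
# Textbook class-A load-line estimate (background, not the thesis
# procedure). Device ratings and derating factor are illustrative.

def class_a_load_line(v_dd, v_knee, i_max, derating=1.0):
    """Optimum load resistance and ideal class-A output power for a
    drain voltage swing between v_knee and 2*v_dd - v_knee, with the
    current swing (and hence power) derated for reliability."""
    i_swing = i_max * derating
    v_swing = 2.0 * (v_dd - v_knee)
    r_opt = v_swing / i_swing           # slope of the RF load line
    p_out = v_swing * i_swing / 8.0     # ideal class-A RF output power
    return r_opt, p_out

r_opt, p_out = class_a_load_line(v_dd=20.0, v_knee=3.0, i_max=1.0,
                                 derating=0.8)
# r_opt = 42.5 ohm, p_out = 3.4 W for these illustrative numbers
```

Note how the 20% current derating raises the optimum load impedance and lowers the deliverable power, which is exactly the performance/reliability trade-off the abstract describes.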
Abstract:
The elusive fiction of J. M. Coetzee is not a body of work in which you can read fixed ethical stances. I suggest testing the potentialities of a logic based on frames and double binds in Coetzee's novels. A double bind is a dilemma in communication which consists of two conflicting messages, with the result that you can't successfully respond to either. Jacques Derrida highlighted the strategic value of a way of thinking based on the double bind (but on frames as well), which makes it possible to escape binary thinking and so opens an ethical space, where you can make a choice outside a set of fixed rules and take responsibility for it. In Coetzee's fiction the author himself can be considered to be in a double bind, seeing that he is a white South African writer who feels that his “task” can't be as simple as choosing to represent faithfully the violence and the racism of apartheid, or choosing to give a voice to the oppressed. Good intentions alone do not ensure protection against entering unwittingly into complicity with the dominant discourse, and this is why it is important to make the frame in which one is always situated clearly visible and explicit. The logic of the double bind becomes the way in which moral problems are staged in Coetzee's fiction as well: the opportunity to give a voice to the oppressed through the same language which was co-opted to serve the cause of oppression, a relation with otherness never completed, or the representability of evil in literature, of the secret, and of the paradoxical implications of confession and forgiveness.
Abstract:
Dielectric Elastomers (DE) are incompressible dielectrics which can undergo deviatoric (isochoric) finite deformations in response to applied large electric fields. Thanks to the strong electro-mechanical coupling, DE intrinsically offer great potentialities for conceiving novel solid-state mechatronic devices, in particular linear actuators, which are more integrated, lightweight, economical, silent, resilient and disposable than equivalent devices based on traditional technologies. Such systems may have a huge impact in applications where traditional technology cannot cope with limits on weight or bulk, or with problems involving interaction with humans or unknown environments. Fields such as medicine, domotics, entertainment, aerospace and transportation may profit. For actuation usage, DE are typically shaped into thin films coated with compliant electrodes on both sides and piled one on the other to form a multilayered DE. DE-based Linear Actuators (DELA) are entirely constituted by polymeric materials and their overall performance is highly influenced by several interacting factors: firstly the electromechanical properties of the film, secondly the mechanical properties and geometry of the polymeric frame designed to support the film, and finally the driving circuits and activation strategies. In the last decade, much effort has been devoted to the development of analytical and numerical models that could explain and predict the hyperelastic behavior of different types of DE materials. Nevertheless, at present, the use of DELA is limited. The main reasons are 1) the lack of quantitative and qualitative models of the actuator as a whole system and 2) the lack of a simple and reliable design methodology. In this thesis, a new point of view in the study of DELA is presented which takes into account the interaction between the DE film and the film-supporting frame.
Hyperelastic models of the DE film are reported which are capable of modeling both the DE and the compliant electrodes. The supporting frames are analyzed and designed as compliant mechanisms using pseudo-rigid-body models and subsequent finite element analysis. A new design methodology is reported which optimizes actuator performance, allowing its inherent stiffness to be chosen at will. As a particular case, the methodology focuses on the design of constant-force actuators; this class of actuators is an example of how force control can be highly simplified. Three new DE actuator concepts are proposed which demonstrate the effectiveness of the proposed method.
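The electro-mechanical coupling that drives a DE film is commonly summarized by the equivalent electrostatic (Maxwell) pressure on an incompressible film, p = ε0·εr·(V/t)², Pelrine's well-known expression. The sketch below evaluates it for illustrative film parameters (the voltage, thickness, and relative permittivity are assumptions, not values from the thesis).

```python
# Minimal sketch of the DE actuation pressure: the equivalent
# electrostatic (Maxwell) pressure p = eps0 * eps_r * (V/t)^2
# squeezing an incompressible film. Parameters are illustrative.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_pressure(voltage, thickness, eps_r):
    """Equivalent actuation pressure on a DE film, in Pa."""
    e_field = voltage / thickness  # electric field, V/m
    return EPS0 * eps_r * e_field ** 2

p = maxwell_pressure(voltage=3000.0, thickness=50e-6, eps_r=3.0)
# roughly 96 kPa for these illustrative values
```

The quadratic dependence on the field explains why films are made thin and stacked: halving the thickness at fixed voltage quadruples the actuation pressure.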
Abstract:
Tissue engineering is a discipline that aims at regenerating damaged biological tissues by using a cell construct engineered in vitro, made of cells grown in a porous 3D scaffold. The role of the scaffold is to guide cell growth and differentiation by acting as a bioresorbable temporary substrate that will eventually be replaced by new tissue produced by the cells. As a matter of fact, obtaining a successful engineered tissue requires a multidisciplinary approach that must integrate the basic principles of biology, engineering and materials science. The present Ph.D. thesis aimed at developing and characterizing innovative polymeric bioresorbable scaffolds made of hydrolysable polyesters. The potentialities of both commercial polyesters (i.e. poly-ε-caprolactone, polylactide and some lactide copolymers) and non-commercial polyesters (i.e. poly-ω-pentadecalactone and some of its copolymers) were explored and discussed. Two techniques were employed to fabricate scaffolds: supercritical carbon dioxide (scCO2) foaming and electrospinning (ES). The former is a powerful technology that enables the production of 3D microporous foams while avoiding the use of solvents that can be toxic to mammalian cells. The scCO2 process, which is commonly applied to amorphous polymers, was successfully modified to foam a highly crystalline poly(ω-pentadecalactone-co-ε-caprolactone) copolymer, and the effect of process parameters on scaffold morphology and thermo-mechanical properties was investigated. In the course of the present research activity, sub-micrometric fibrous non-woven meshes were produced using ES technology. Electrospun materials are considered highly promising scaffolds because they resemble the 3D organization of the native extracellular matrix. Careful control of process parameters allowed the fabrication of defect-free fibres with diameters ranging from hundreds of nanometers to several microns, having either smooth or porous surfaces.
Moreover, the versatility of ES technology enabled the production of electrospun scaffolds from different polyesters, as well as of “composite” non-woven meshes obtained by concomitantly electrospinning fibres differing in both morphology and polymer material. The 3D architecture of the electrospun scaffolds fabricated in this research was controlled in terms of mutual fibre orientation by properly modifying the instrumental apparatus. This aspect is particularly interesting, since the micro/nano-architecture of the scaffold is known to affect cell behaviour. Since last-generation scaffolds are expected to induce specific cell responses, the present research activity also explored the possibility of producing electrospun scaffolds that are bioactive towards cells. Bio-functionalized substrates were obtained by loading polymer fibres with growth factors (i.e. biomolecules that elicit specific cell behaviour), and it was demonstrated that, despite the high voltages applied during electrospinning, the growth factor retains its biological activity once released from the fibres upon contact with the cell culture medium. A second functionalization approach, ultimately aimed at controlling cell adhesion on electrospun scaffolds, consisted in covering the fibre surface with highly hydrophilic polymer brushes of glycerol monomethacrylate synthesized by Atom Transfer Radical Polymerization. Future investigations will exploit the hydroxyl groups of the polymer brushes to functionalize the fibre surface with desired biomolecules. Electrospun scaffolds were employed in cell culture experiments, performed in collaboration with biochemical laboratories, aimed at evaluating the biocompatibility of the new electrospun polymers and at investigating the effect of fibre orientation on cell behaviour.
Moreover, at a preliminary stage, electrospun scaffolds were also cultured with mammalian tumour cells to develop in vitro tumour models aimed at better understanding the role of the natural ECM in tumour malignancy in vivo.
Abstract:
The main topics of this research are the new forms of territorial planning used by the main European cities - with particular reference to strategic planning applied to territorial governance - and an in-depth analysis of Bologna's policies and instruments for urban and territorial planning, from the Piano Regolatore Generale of 1985-'89 to the new Piano Strutturale Comunale of 2008. More precisely, the characteristics, potentialities and critical issues of the new planning instrument of the Emilia-Romagna capital are examined not only in relation to the typical characteristics of European strategic plans, but also in relation to the traditional instruments of Italian urban planning (the Piani Regolatori Generali), whose limits the new structural plan is expected to overcome, both in terms of operational effectiveness and in terms of its ability to build agreement and consensus among the different urban actors around the idea of the city it embodies.
Abstract:
Nano(bio)science and nano(bio)technology attract tremendous and growing interest in both academia and industry. They are undergoing rapid developments on many fronts such as genomics, proteomics, systems biology, and medical applications. However, the lack of characterization tools for nano(bio)systems is currently considered a major limiting factor to the final establishment of nano(bio)technologies. Flow Field-Flow Fractionation (FlFFF) is a separation technique that is definitely emerging in the bioanalytical field, and the number of applications to nano(bio)analytes such as high molar-mass proteins and protein complexes, sub-cellular units, viruses, and functionalized nanoparticles is constantly increasing. This can be ascribed to the intrinsic advantages of FlFFF for the separation of nano(bio)analytes. FlFFF is ideally suited to separate particles over a broad size range (1 nm-1 μm) according to their hydrodynamic radius (rh). The fractionation is carried out in an empty channel by a flow stream of a mobile phase of any composition. For these reasons, fractionation takes place without surface interaction of the analyte with packing or gel media, and there is no stationary phase able to induce mechanical or shear stress on nanosized analytes, which are therefore kept in their native state. Characterization of nano(bio)analytes is made possible after fractionation by interfacing the FlFFF system with detection techniques for morphological, optical or mass characterization. For instance, FlFFF coupling with multi-angle light scattering (MALS) detection allows for absolute molecular weight and size determination, and mass spectrometry has made FlFFF enter the field of proteomics. The potentialities of FlFFF couplings with multi-detection systems are discussed in the first section of this dissertation.
The second and third sections are dedicated to new methods that have been developed for the analysis and characterization of different samples of interest in the fields of diagnostics, pharmaceutics, and nanomedicine. The second section focuses on biological samples such as protein complexes and protein aggregates. In particular, it focuses on FlFFF methods developed to give new insights into: a) the chemical composition and morphological features of blood serum lipoprotein classes, b) the time-dependent aggregation pattern of the amyloid protein Aβ1-42, and c) the aggregation state of antibody therapeutics in their formulation buffers. The third section is dedicated to the analysis and characterization of structured nanoparticles designed for nanomedicine applications. The discussed results indicate that FlFFF with on-line MALS and fluorescence detection (FD) may become an unparalleled methodology for the analysis and characterization of new, structured, fluorescent nanomaterials.
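Since FlFFF separates analytes by hydrodynamic radius, a standard way to connect a measured diffusion coefficient to rh is the Stokes-Einstein relation, rh = kB·T / (6π·η·D). The sketch below evaluates it for illustrative conditions (the diffusion coefficient, temperature, and viscosity values are assumptions chosen to represent a small protein in water, not data from the dissertation).

```python
# Stokes-Einstein hydrodynamic radius from a diffusion coefficient,
# the size quantity by which FlFFF separates. Values illustrative.
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius(diff_coeff, temperature=298.0, viscosity=8.9e-4):
    """Hydrodynamic radius in metres, given D in m^2/s, T in K,
    and dynamic viscosity in Pa*s (defaults: water at ~25 degC)."""
    return KB * temperature / (6.0 * math.pi * viscosity * diff_coeff)

r_h = hydrodynamic_radius(diff_coeff=1.0e-10)
# about 2.5 nm, a size typical of a small globular protein
```

This is the quantity that, combined with MALS-derived molar mass, lets FlFFF distinguish compact particles from loose aggregates of the same mass.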
Abstract:
This work seeks to trace the theoretical and practical boundaries of the institution of authentic interpretation, in the clear awareness that behind this issue lies the more complex problem of a correct demarcation between the activity of legis-latio and the activity of legis-executio. The phenomenon of interpretative laws is in fact a crucial node and point of intersection of three distinct fields: the theory of interpretation, the theory of the sources of law, and the liberal doctrine of the separation of powers. Within our legal system, in recent times, there has been an exponential increase in interpretative legislative interventions which, at present, are used mostly as instruments of ordinary legislation. From this point of view, the increasingly frequent recourse to the interpretative source can be framed within the more complex phenomenon of the “crisis of the law”, whose traditional requirements of generality, abstractness and non-retroactivity have been progressively abandoned by the legislator in parallel with the rise of the constitutional State. The abuse of the interpretative instrument by the legislator, gravely detrimental to subjective legal positions, has not so far been effectively countered within the legal system, despite the Constitutional Court's elaboration of a series of limits and requirements for the legitimacy of legislative exegesis. In this perspective, the search for and examination of strategies and remedies, both jurisdictional and institutional, capable of curbing the “omnipotence” of the interpreting legislator become of fundamental importance. Following the analysis carried out, an awareness has matured of the potentialities inherent in making the most of the case law of the European Court of Human Rights, which is more inclined to sanction the abuse of interpretative laws.
Abstract:
Metal dress accessories are one of the main material sources for approaching the social, cultural and economic reality of the population of the late antique Mediterranean. In the case of the fifth- and sixth-century finds from the Iberian Peninsula and south-western France, numerous documentation problems have prevented their full potential from being extracted and developed, both as regards the typological and chronological framing of these objects and in the subsequent interpretative phase. A new monographic study updating the state of research was therefore needed. This work catalogues, dates and typologically classifies more than four thousand brooches and belt fittings recovered from almost five hundred sites located in present-day Portugal, Spain, Andorra and France. The result makes it possible to approach the production areas and the modes of circulation and use of each of the individualized types. Some twenty different dress styles, defined by combinations of different types of accessories in funerary contexts, have been identified. Part of these constitutes the main basis of a chronological system organized into six distinct phases covering a chronology situated approximately between the last decades of the fourth century and the last decades of the sixth century. The research also undertakes the analysis of the distribution of the accessories, and of the dress styles related to them, in the late antique landscape of Hispania and Gaul. The result makes it possible to reconstruct regional sequences of dress evolution and to establish relationships between various typologies of funerary and settlement contexts and the previously defined dress types. The results allow a fresh look at this class of objects and the place they occupied in the daily life of many of the inhabitants of the early Visigothic regnum.
Abstract:
The national context has recently changed with the introduction of the new geodetic system, coincident with the European one (ETRS89, frame ETRF00) and realised by the stations of the Rete Dinamica Nazionale. This geodetic system, associated with the UTM_ETRF00 map projection, has become mandatory by decree for public administrations. The change has made it possible to survey cartographic data in absolute ETRF00 coordinates with much higher accuracy. However, when data surveyed in this way are used for map updates, they lose their original coordinates and are adapted to the surrounding map features. To design a modernisation of cadastral and technical maps that allows updates to be introduced without altering their original absolute coordinates, the study began by evaluating how to exploit developments in the structuring of topographic data in the Database Geotopografico, 3D building modelling in the INSPIRE cadastral experiences, and MUDE integrations between building projects and their as-built realisations. The study then evaluated the NRTK real-time positioning services available in Italy, and experiments were carried out to verify, also at the local level, the precision and reliability of the available positioning services. The critical weakness of the cadastral cartography derives essentially from two facts: it was originally framed in 850 local systems and subsequently transformed into Roma40 with a very sparse density of re-measured points; and until 1988 it was updated with non-rigorous, low-quality procedures. To resolve these weaknesses, it was therefore proposed to exploit NRTK survey procedures to locally increase the density of re-measured points and re-frame the cadastral maps.
The test, carried out in Bologna, involved a preliminary analysis to identify which Punti Fiduciali (cadastral control points) could be considered consistent with the map specifications, and then to use them to locally increase the density of re-measured points. The experiment enabled the project to be realised, so that future updates can be inserted without altering the ETRF00 coordinates obtained from the positioning service.
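The re-framing step described above, fitting a local cadastral map onto NRTK-derived ETRF00 coordinates through re-measured control points, is classically done with a 2D Helmert (4-parameter similarity) transformation estimated by least squares. The sketch below is purely illustrative and not the thesis' software; all coordinate values are synthetic, generated from a known transformation so that the fit is exact.

```python
# Illustrative sketch (not the thesis' actual procedure): least-squares fit of
# a 2D Helmert transformation  x' = a*x - b*y + tx,  y' = b*x + a*y + ty
# from re-surveyed control points, then applied to re-frame map detail points.
import numpy as np

def fit_helmert_2d(src, dst):
    """Solve for (a, b, tx, ty) from matched point pairs by least squares."""
    rows, rhs = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        rows.append([x, -y, 1.0, 0.0]); rhs.append(xp)
        rows.append([y,  x, 0.0, 1.0]); rhs.append(yp)
    params, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return params  # a, b, tx, ty

def apply_helmert_2d(params, pt):
    a, b, tx, ty = params
    x, y = pt
    return a * x - b * y + tx, b * x + a * y + ty

# Local map coordinates of four control points and their (synthetic)
# target coordinates, generated with a = 0.9999, b = 0.0120, t = (5000, 3000).
src = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
dst = [(5000.0, 3000.0), (5099.99, 3001.20),
       (4998.80, 3099.99), (5098.79, 3101.19)]

params = fit_helmert_2d(src, dst)
print(apply_helmert_2d(params, (50.0, 50.0)))
```

With more control points than unknowns, the residuals of the fit also give a direct check of which control points are consistent with the map specifications, as in the preliminary analysis mentioned above.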
Abstract:
The thesis analyses the hydrodynamics induced by an array of Wave Energy Converters (WECs) from both an experimental and a numerical point of view. WECs can be considered an innovative solution able to contribute to the green energy supply and, at the same time, to protect the rear coastal area under marine spatial planning considerations. This research activity essentially arises from this combined concept. The WEC under examination is a floating device belonging to the Wave Activated Bodies (WAB) class. Experiments were performed at Aalborg University at different scales and layouts, and the performance of the models was analysed under a variety of irregular wave attacks. The numerical simulations were performed with the codes MIKE 21 BW and ANSYS-AQWA. Experimental results were also used to calibrate the numerical parameters and/or were directly compared with numerical results in order to extend the experimental database. The results of the research activity are summarized in terms of device performance and guidelines for a future wave farm installation. The device length should be "tuned" to the local wave climate. The wave transmission behind the devices is rather high, suggesting that the tested layout should be considered as a module of a wave farm installation. Indications on the minimum inter-distance among the devices are provided. Furthermore, a CALM mooring system leads to lower wave transmission and larger power production than a spread mooring. The two numerical codes have different potentialities: the hydrodynamics around single and multiple devices is obtained with MIKE 21 BW, while wave loads and motions for a single moored device are derived from ANSYS-AQWA. Combining the experimental and numerical results, it is suggested, for both coastal protection and energy production, to adopt a staggered layout, which maximises the device density and minimises the marine space required for the installation.
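The sheltering effect mentioned above is conventionally quantified by the wave transmission coefficient, the ratio of the significant wave height behind the array to that in front of it, with the significant wave height estimated from the zeroth spectral moment as Hm0 = 4*sqrt(m0). The snippet below illustrates this standard definition; the numbers are made up and are not the thesis' data.

```python
# Illustrative sketch (not thesis data): wave transmission coefficient
# Kt = Hs_transmitted / Hs_incident, with Hs taken as the spectral
# estimate Hm0 = 4*sqrt(m0). Kt = 1 means no sheltering, Kt = 0 full shelter.
import math

def significant_wave_height(m0):
    """Spectral estimate Hm0 = 4*sqrt(m0) of the significant wave height (m)."""
    return 4.0 * math.sqrt(m0)

def transmission_coefficient(m0_incident, m0_transmitted):
    """Ratio of significant wave heights behind and in front of the array."""
    return (significant_wave_height(m0_transmitted)
            / significant_wave_height(m0_incident))

# Made-up spectral moments: incident Hs = 1.0 m, transmitted Hs = 0.8 m.
kt = transmission_coefficient(m0_incident=0.0625, m0_transmitted=0.04)
print(f"Kt = {kt:.2f}")  # -> Kt = 0.80
```

A Kt close to 1, as reported for the tested layouts, means a single row of devices provides little coastal protection on its own, which is why the abstract treats it as one module of a larger farm.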
Abstract:
Nowadays the rise of non-recurring engineering (NRE) costs associated with complexity is becoming a major factor in SoC design, limiting both scaling opportunities and the flexibility advantages offered by the integration of complex computational units. The introduction of embedded programmable elements can represent an appealing solution, able both to guarantee the desired flexibility and upgradability and to widen the SoC market. In particular, embedded FPGA (eFPGA) cores can provide bit-level optimization for those applications which benefit from synthesis, paying on the other hand in terms of performance penalties and area overhead with respect to standard-cell ASIC implementations. In this scenario, this thesis proposes a design methodology for a synthesizable programmable device designed to be embedded in a SoC. A soft-core embedded FPGA (eFPGA) is hence presented and analyzed in terms of the opportunities offered by a fully synthesizable approach, following an implementation flow based on a standard-cell methodology. A key point of the proposed eFPGA template is that it adopts a Multi-Stage Switching Network (MSSN) as the foundation of the programmable interconnect, since it can be efficiently synthesized and optimized through a standard-cell-based implementation flow while ensuring an intrinsically congestion-free network topology. The flexibility potentialities of the eFPGA have been evaluated using different technology libraries (STMicroelectronics CMOS 65nm and BCD9s 0.11μm) through a design space exploration in terms of area-speed-leakage trade-offs, enabled by the full synthesizability of the template. Since the most relevant disadvantage of the adopted soft approach, compared to a hard core, is its performance overhead, the eFPGA analysis targets small area budgets.
The configuration bitstream is generated by a custom CAD flow environment, which has enabled functional verification and performance evaluation through an application-aware analysis.
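The congestion-free property of multi-stage switching networks mentioned in the abstract can be illustrated with the classic three-stage Clos network C(m, n, r): r ingress switches of n inputs each, connected through m middle-stage switches. Clos' classical results state that the network is strictly non-blocking when m >= 2n - 1 and rearrangeably non-blocking (any input-output permutation routable) when m >= n. The abstract does not specify the exact MSSN topology used in the thesis, so the sketch below only encodes these textbook conditions.

```python
# Illustrative sketch (not the thesis' interconnect): non-blocking conditions
# for a three-stage Clos network C(m, n, r), the canonical multi-stage
# switching network. m = middle-stage switches, n = inputs per ingress
# switch, r = number of ingress/egress switches.
def clos_properties(m, n, r):
    return {
        "total_inputs": n * r,
        # Clos' theorem: any new connection routable without rearranging.
        "strictly_non_blocking": m >= 2 * n - 1,
        # Slepian-Duguid: any full permutation routable (rearrangeable).
        "rearrangeably_non_blocking": m >= n,
    }

print(clos_properties(m=7, n=4, r=4))  # 7 >= 2*4-1: strictly non-blocking
print(clos_properties(m=4, n=4, r=4))  # only rearrangeably non-blocking
```

For an FPGA interconnect, the rearrangeable condition m >= n is usually the relevant one, since the routing is computed offline when the bitstream is generated, which is consistent with the "intrinsically congestion-free" claim above.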