23 results for Text-to-speech systems

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

The aim of this PhD thesis is the investigation of the photophysical properties of materials that can be exploited in solar energy conversion. In this context, my research was mainly focused on carbon nanotube-based materials and ruthenium complexes. The first part of the thesis is devoted to carbon nanotubes (CNT), which have unique physical and chemical properties whose rational control is of substantial interest to widen their application perspectives in many fields. Our goals were (i) to develop novel procedures for supramolecular dispersion, using amphiphilic block copolymers, and (ii) to investigate the photophysics of CNT-based multicomponent hybrids and understand the nature of photoinduced interactions between CNT and selected molecular systems such as porphyrins, fullerenes and oligo(p-phenylenevinylene)s. We established a new protocol for the dispersion of SWCNTs in aqueous media via non-covalent interactions and demonstrated that some CNT-based hybrids are suitable for testing in photovoltaic (PV) devices. The second part of the work is focused on the study of homoleptic and heteroleptic Ru(II) complexes with bipyridine and extended phenanthroline ligands. Our studies demonstrated that these compounds are potentially useful as light-harvesting systems for solar energy conversion. Both CNT materials and Ru(II) complexes have turned out to be remarkable examples of photoactive systems. The morphological and photophysical characterization of CNT-based multicomponent systems allowed a satisfactory rationalization of the photoinduced interactions between the individual units, despite several hurdles related to the intrinsic properties of CNTs that prevent, for instance, the utilization of laser spectroscopic techniques. Overall, this work may prompt the design and development of new functional materials for photovoltaic devices.

Abstract:

Starch is the main form in which plants store carbohydrate reserves, both in terms of amounts and of distribution among different plant species. Carbohydrates are direct products of photosynthetic activity, and it is well known that yield efficiency and production are directly correlated with the amount of carbohydrates synthesized and with how these are distributed among vegetative and reproductive organs. Nowadays, owing to the modernization of pear orchards through the introduction of new rootstocks and the development of new training systems, new approaches to understanding the distribution and storage of carbohydrates are required. The objective of this research work was to study the behavior of carbohydrate reserves, mainly starch, in different pear tree organs and tissues, i.e. fruits, leaves, woody organs, roots and flower buds, at different physiological stages during the season. Starch in fruit accumulates at early stages and reaches a maximum concentration during the middle phase of fruit development; after that, its degradation begins, with a rise in soluble carbohydrates. Moreover, relationships between fruit starch degradation and different fruit traits, soluble sugars and organic acids were established. In woody organs and roots, an interconversion between starch and soluble carbohydrates was observed during the dormancy period, confirming their main function of supporting the growth and development of new tissues during the following spring. Factors such as training systems, rootstocks, types of bearing wood, and their position in the canopy influenced the concentrations of starch and soluble carbohydrates at different sampling dates. Environmental conditions and cultural practices must also be considered to better explain these results.
Thus, a deeper understanding of the dynamics of carbohydrate reserves within the plant could provide relevant information to improve several management practices and increase crop yield efficiency.

Abstract:

The present work is structured in four parts, analysing and comparing the publications of the relevant Italian, English-language and German scientific fields. The first chapter of the thesis offers a reflection on the words that revolve around the themes of specific learning disorders (DSA) and disability. The second chapter presents, starting from the relevant scientific literature, the risk indicators that signal possible specific learning disorders and the learning characteristics of students with DSA, highlighting the potential and talents that are often intrinsic to them. The third chapter examines the relevant legislation, in particular the recent Law 170/2010 and its accompanying Guidelines. The fourth chapter, starting from the spread of information and communication technologies (ICT) in schools, extensively discusses the main compensatory tools (text-to-speech synthesis, digital books, concept maps, the interactive whiteboard) and the dispensatory measures that can be adopted. The fifth chapter analyses the Personalized Learning Plan (Piano Didattico Personalizzato, PDP) in all its parts and proposes a possible PDP model published on the website of the Territorial School Office of Bologna. The sixth chapter presents the ProDSA Regional Project. The Project, addressed to students with a DSA diagnosis attending lower secondary schools and the first two years of upper secondary schools in Emilia-Romagna, provided, thanks to funding from the Region, compensatory technologies on free loan to the students who joined it. The empirical section of the present work investigates the actual use made of the tools provided on loan and the reasons behind the choice not to use them in class.
The seventh chapter proposes tools designed to respond concretely to the critical issues that emerged from the data analysis and to raise awareness in schools of the characteristics of DSA.

Abstract:

Cost, performance and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Companies that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of the migration to cache-equipped processors. Caches are perceived as an additional source of complexity, which has the potential to shatter the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches. Conversely, the application of advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitter, as well as at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; (iii) implementation of a layout optimisation method to remove cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry.
The integration of those constituents in a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards, as opposed to becoming so only when the system is final, and is more easily amenable to advanced timing analysis by construction, regardless of the system scale and complexity.
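To make the notion of a static WCET bound concrete, here is a toy illustration (not the method of this thesis): for a loop-free control-flow graph, a coarse worst-case bound is the cost of the longest path, weighting each basic block with its worst-case cycle count. The block names and costs below are invented for the example; real industrial WCET analysis must additionally model caches, pipelines and loop bounds.

```python
from functools import lru_cache

# Toy control-flow graph: basic block -> successor blocks (invented example)
edges = {"entry": ["cond"], "cond": ["then", "else"],
         "then": ["exit"], "else": ["exit"], "exit": []}
# Assumed worst-case cost of each basic block, in cycles (invented numbers)
cost = {"entry": 5, "cond": 3, "then": 20, "else": 8, "exit": 2}

@lru_cache(maxsize=None)
def wcet(block: str) -> int:
    """Longest-path cost from `block` to program exit."""
    succs = edges[block]
    return cost[block] + (max(wcet(s) for s in succs) if succs else 0)

print(wcet("entry"))  # 30, via the path entry -> cond -> then -> exit
```

The cache-induced variability discussed above enters this picture through the per-block costs: without layout control, the worst-case cost of a block depends on which other blocks evicted its lines, which is exactly the jitter the proposed layout optimisation aims to remove.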

Abstract:

The Curry-Howard isomorphism is the idea that proofs in natural deduction can be put in correspondence with lambda terms in such a way that this correspondence is preserved by normalization. The concept can be extended from Intuitionistic Logic to other systems, such as Linear Logic. One of the nice consequences of this isomorphism is that we can reason about functional programs with formal tools which are typical of proof systems: such analysis can also cover quantitative properties of programs, such as the number of steps a program takes to terminate. Another is the possibility of describing the execution of these programs in terms of abstract machines. In 1990 Griffin proved that the correspondence can be extended to Classical Logic and control operators; that is, Classical Logic adds the possibility of manipulating continuations. In this thesis we examine how the ideas described above carry over to this larger setting.
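A minimal sketch of the proofs-as-programs reading, using Python type hints (the names are illustrative, and Python's checker only approximates the typed lambda calculus): a total function of type A -> B is a proof of the implication A => B, and function composition is a proof of the transitivity of implication.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def identity(a: A) -> A:
    """The identity function: a proof of A => A."""
    return a

def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    """From proofs of A => B and B => C, build a proof of A => C."""
    return lambda a: g(f(a))

# Running the program mirrors normalizing the proof: the redex
# compose(identity, identity)(42) reduces to the value 42.
print(compose(identity, identity)(42))  # 42
```

Evaluating a term here corresponds to normalizing the associated derivation, which is the sense in which the correspondence is "preserved by normalization".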

Abstract:

Molecular recognition and self-assembly represent fundamental issues for the construction of supramolecular systems, structures in which the components are held together through non-covalent interactions. The study of host-guest complexes and mechanically interlocked molecules, important examples in this field, is necessary in order to characterize self-assembly processes, achieve more control over molecular organization and develop sophisticated structures using properly designed building blocks. The introduction of paramagnetic species, or spin labelling, represents an attractive opportunity that allows their detection and characterization by Electron Spin Resonance (ESR) spectroscopy, a valuable technique that provides information complementary to that obtained by traditional methods. In this Thesis, recent progress in the design and synthesis of new paramagnetic host-guest complexes and rotaxanes characterized by the presence of nitroxide radicals, and their investigation by ESR spectroscopy, is reported. In Chapter 1 a brief overview of the principal concepts of supramolecular chemistry, the spin labelling approach and the development of ESR methods applied to paramagnetic systems is given. Chapters 2 and 3 focus on the introduction of radicals into macrocycles such as cucurbiturils and pillar[n]arenes, chosen for their interesting binding properties and potential employment in rotaxanes, in order to investigate their structures and recognition properties. Chapter 4 deals with one of the most studied mechanically interlocked molecules, the bistable [2]rotaxane reported by Stoddart and Heath based on cyclobis(paraquat-p-phenylene) (CBPQT4+), a well-known example of a molecular switch driven by external stimuli. The spin labelling of analogous architectures allows the switching mechanism involving the ring component to be monitored by ESR spectroscopy, by tuning the spin exchange interaction.
Finally, Chapter 5 contains the experimental procedures used for the synthesis of some of the compounds described in Chapters 2-4.

Abstract:

Service Oriented Computing is a new programming paradigm for addressing distributed system design issues. Services are autonomous computational entities which can be dynamically discovered and composed in order to form more complex systems able to achieve different kinds of tasks. E-government, e-business and e-science are some examples of the IT areas where Service Oriented Computing will be exploited in the coming years. At present, the most credited Service Oriented Computing technology is that of Web Services, whose specifications are enriched day by day by industrial consortia without following a precise and rigorous approach. This PhD thesis aims, on the one hand, at modelling Service Oriented Computing in a formal way in order to precisely define the main concepts it is based upon and, on the other hand, at defining a new approach, called the bipolar approach, for addressing system design issues by synergically exploiting choreography and orchestration languages related by means of a mathematical relation called conformance. Choreography allows us to describe systems of services from a global viewpoint, whereas orchestration supplies a means for addressing the same issue from a local perspective. In this work we present SOCK, a process-algebra-based language inspired by the Web Service orchestration language WS-BPEL which captures the essentials of Service Oriented Computing. From the definition of SOCK we are able to define a general model for Service Oriented Computing where services and systems of services are related to the design of finite state automata and of process algebra concurrent systems, respectively. Furthermore, we introduce a formal language for dealing with choreography. Such a language is equipped with a formal semantics and forms, together with a subset of the SOCK calculus, the bipolar framework.
Finally, we present JOLIE, a Java implementation of a subset of the SOCK calculus, which is part of the bipolar framework we intend to promote.
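The choreography/orchestration distinction can be sketched in a few lines (this is a toy illustration, not SOCK or JOLIE syntax; all service and message names are invented): the choreography is the global, prescribed sequence of message exchanges, the orchestrator is a local program driving the services, and conformance, in this drastically simplified form, means the trace the orchestration produces matches the global prescription.

```python
# Global view: the choreography prescribes (sender, receiver, message) in order.
choreography = [("client", "auth", "login"),
                ("auth", "client", "token"),
                ("client", "data", "query"),
                ("data", "client", "result")]

# Local services: each reacts to an incoming message with a reply.
def auth_service(msg):
    return ("auth", "client", "token") if msg == "login" else None

def data_service(msg):
    return ("data", "client", "result") if msg == "query" else None

def orchestrate():
    """Local view: the orchestrator invokes services and records the trace."""
    trace = [("client", "auth", "login")]
    trace.append(auth_service("login"))
    trace.append(("client", "data", "query"))
    trace.append(data_service("query"))
    return trace

def conforms(trace, chor):
    """Toy conformance check: the local trace realizes the global order."""
    return trace == chor

print(conforms(orchestrate(), choreography))  # True
```

In the thesis's framework the relation is of course far richer (a mathematical relation between languages with formal semantics), but the intuition is the same: the global and local descriptions must agree.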

Abstract:

Chemists have long sought to extrapolate the power of biological catalysis and recognition to synthetic systems. These efforts have focused largely on low molecular weight catalysts and receptors; however, biological systems themselves rely almost exclusively on polymers, proteins and RNA, to perform complex chemical functions. Proteins and RNA are unique in their ability to adopt compact, well-ordered conformations, and specific folding provides precise spatial orientation of the functional groups that comprise the "active site". These features suggest that identification of new polymer backbones with discrete and predictable folding propensities ("foldamers") will provide a basis for the design of molecular machines with unique capabilities. The foldamer approach complements current efforts to design unnatural properties into polypeptides and polynucleotides. The aim of this thesis is the synthesis and conformational study of new classes of foldamers, using a peptidomimetic approach. Moreover, their suitability for use as ionophores, catalysts and nanobiomaterials was analyzed in solution and in the solid state. This thesis is divided into thematic chapters, outlined below. It begins with a very general introduction (page 4) which is useful, but not strictly necessary, to the expert reader. It is worth mentioning that paragraph I.3 (page 22) is the starting point of this work and paragraph I.5 (page 32) is required to better understand the results of chapters 4 and 5. Chapter 1 (page 39) reports the synthesis and conformational analysis of a novel class of foldamers containing (S)-β3-homophenylglycine [(S)-β3-hPhg] and D-4-carboxy-oxazolidin-2-one (D-Oxd) residues in alternate order. The experimental conformational analysis performed in solution by IR, 1H NMR and CD spectroscopy unambiguously proved that these oligomers fold into ordered structures with increasing sequence length.
Theoretical calculations employing ab initio MO theory suggest a helix with 11-membered hydrogen-bonded rings as the preferred secondary structure type. The novel structures enrich the field of peptidic foldamers and might be useful in the mimicry of native peptides. In chapter 2, cyclo-(L-Ala-D-Oxd)3 and cyclo-(L-Ala-D-Oxd)4 were prepared in the liquid phase with good overall yields and were utilized for divalent ion chelation (Ca2+, Mg2+, Cu2+, Zn2+ and Hg2+); their chelation ability was analyzed with ESI-MS, CD and 1H NMR techniques, and the best results were obtained with cyclo-(L-Ala-D-Oxd)3 and Mg2+ or Ca2+. Chapter 3 describes an application of oligopeptides as catalysts for aldol reactions. Paragraph 3.1 concerns the use of prolinamides as catalysts of the cross-aldol addition of hydroxyacetone to aromatic aldehydes, whereas paragraphs 3.2 and 3.3 are about the catalyzed aldol addition of acetone to isatins. By means of DFT and AIM calculations, the steric and stereoelectronic effects that control the enantioselectivity in the cross-aldol addition of acetone to isatin catalysed by L-proline have been studied, also in the presence of small quantities of water. Chapter 4 reports the synthesis and analysis of a new fiber-like material, obtained from the self-aggregation of the dipeptide Boc-L-Phe-D-Oxd-OBn, which spontaneously forms uniform fibers consisting of parallel infinite linear chains arising from single intermolecular N-H···O=C hydrogen bonds. This is the absolute borderline case of a parallel β-sheet structure. Longer oligomers of the same series, with general formula Boc-(L-Phe-D-Oxd)n-OBn (where n = 2-5), are described in chapter 5. Their properties in solution and in the solid state were analyzed, in correlation with their tendency to form intramolecular hydrogen bonds.
Chapter 6 reports the synthesis of imidazolidin-2-one-4-carboxylate and (tetrahydro)pyrimidin-2-one-5-carboxylate via an efficient modification of the Hofmann rearrangement. The reaction affords the desired compounds from protected asparagine or glutamine in good to high yield, using PhI(OAc)2 as the source of iodine(III).

Abstract:

For cast-in-place reinforced concrete constructions, the most commonly used structural systems are moment-resisting frames, load-bearing walls, or a combination of the two. Since the 1960s, numerous studies have addressed the seismic behaviour of reinforced concrete frame structures, and the same can be said for buildings combining walls and frames. In particular, the seismic design of these building types has mainly concerned tall buildings, in which walls were evidently employed to limit the high deformability. The seismic behaviour of structures made entirely of reinforced concrete load-bearing walls has been studied less over the years, even though buildings built with such structural systems have generally shown remarkable resistance against earthquakes, including those of high intensity. In the last ten years, earthquake engineering has focused on deepening the understanding of construction types that have long been widely used (typically in continental Europe, Latin America, the USA and also in Italy), but for which adequate scientific knowledge of their behaviour in seismic zones was lacking. These types essentially concern structural systems made entirely of reinforced concrete load-bearing walls for low-rise buildings, usually employed in low-cost construction (residential and/or office buildings). The general objective of the research presented here is the study of the seismic behaviour of low-rise structures made entirely of reinforced concrete load-bearing walls (construction characterized by low building costs).
In particular, the walls studied here are characterized by low geometric reinforcement ratios and are built with the stay-in-place formwork technique. To the author's knowledge, no experimental or analytical studies have been carried out to date to determine the seismic behaviour of these structural systems, whereas their static behaviour is well known. In detail, this research has the twofold aim of: • obtaining a structural system characterized by high seismic performance; • developing practical design tools (consistent and compatible with current codes, and thus immediately usable by practitioners) for the seismic design of the reinforced concrete load-bearing panels studied here. In order to study the seismic behaviour and identify practical design tools, the research was organized as follows: • identification of the characteristics of the structures studied, through the development/specialization of appropriate analytical formulations; • design, supervision and interpretation of an extensive experimental campaign carried out on full-scale reinforced concrete load-bearing walls, in order to verify their effective behaviour under cyclic loading; • development of simple design indications (rules) for the wall structures studied, in order to obtain the desired performance characteristics. The experimental results were in agreement with the analytical predictions, confirming the validity of the tools for predicting the behaviour of these panels. The very high performance observed, in terms of both strength and ductility, showed that the structures studied, as developed here, exhibit a more than satisfactory seismic behaviour.

Abstract:

The term Ambient Intelligence (AmI) refers to a vision of the future of the information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors is deployed into the environment, the Wireless Sensor Network (WSN) is one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy scavenger modules). WSNs promise to revolutionize the interactions between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed.
Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce the interference on the physical phenomena sensed and allows easy and low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, that have the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must, however, be carefully minimized. Imaging plays an important role in sensing devices for ambient intelligence. Computer vision can, for instance, be used for recognising persons and objects and for recognising behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. Real-time implementations of these approaches are a recently opened field of research.
In this thesis we pay attention to the realities of hardware/software technologies and the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of a sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them: • small form factor, to reduce node intrusiveness; • low power consumption, to reduce battery size and extend node lifetime; • low cost, for widespread diffusion. These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, with which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. In comparison to scalar sensors, such as temperature, pressure, humidity, velocity and acceleration sensors, vision sensors generate much higher bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes.
We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first is based on a low-cost, low-power FPGA+microcontroller system-on-chip; the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate continuously with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented. Featuring such intelligence, these nodes are able to cope with tasks such as the recognition of unattended bags in airports or of persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal surveillance: in several setups the use of wired video cameras may not be possible. For this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community.
Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and possibly result in continuous operation of the node. Being low-cost, passive (thus low-power) and of limited form factor, PIR sensors are well suited for WSN applications. Moreover, aggressive power management policies are essential for achieving the long-term operation of standalone distributed cameras. We have used an adaptive controller, Model Predictive Control (MPC), to improve the system's performance, outperforming naive power management policies.
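The PIR-triggered, energy-aware wake-up policy described above can be sketched as follows. This is a minimal illustration with invented names and thresholds, not the node's actual firmware, and it shows only the simplest rule (the MPC controller mentioned above would replace the fixed battery threshold with a predictive decision).

```python
def camera_action(pir_triggered: bool, battery_level: float,
                  threshold: float = 0.2) -> str:
    """Decide the node's next action from the PIR and battery state.

    battery_level and threshold are fractions of full charge (invented
    values for illustration).
    """
    if not pir_triggered:
        return "sleep"    # nothing moving: camera and radio stay off
    if battery_level < threshold:
        return "sleep"    # energy too low: skip this event to survive
    return "capture"      # wake camera and radio, process the frame

print(camera_action(True, 0.8))   # capture
print(camera_action(True, 0.1))   # sleep
```

The point of the policy is that the expensive components (imager, radio) are powered only on events the cheap, passive PIR sensor deems worth the energy, which is what extends node lifetime toward continuous operation.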

Abstract:

Regenerative medicine calls for a better understanding of the cause-effect relation between cell behaviour and environmental signals. The latter encompass topographical, chemical and mechanical stimuli, electromagnetic fields, gradients of chemo-attractants and haptotaxis. In this perspective, spatial control of the structures composing the environment is required. In this thesis I describe a novel approach for the multiscale patterning of biocompatible functional materials in order to provide systems able to accurately control cell adhesion and proliferation. The behaviour of different neural cell lines in response to several stimuli, specifically chemical, topographical and electrical gradients, is presented. For each of the three kinds of signals, I chose properly tailored materials and fabrication and characterization techniques. After a brief introduction to the state of the art of nanotechnology, nanofabrication techniques and regenerative medicine in Chapter 1, and a detailed description of the main fabrication and characterization techniques employed in this work in Chapter 2, Chapter 3 describes an easy route to obtain control of cell proliferation with accuracy close to 100% (chemical control). Chapter 4 (topographical control) shows how the multiscale patterning of a well-established biocompatible material such as titanium dioxide provides a versatile and robust method to study the effect of local topography on cell adhesion and growth. The third signal, the electric field, is investigated in Chapter 5 (electrical control), where the very early stages of neural cell adhesion are studied in the presence of modest steady electric fields. In Chapter 6 (appendix) a new patterning technique, called Lithographically Controlled Etching (LCE), is proposed.
It is shown how LCE can provide at the same time the micro/nanostructuring and functionalization of a surface with nanosized objects, thus being suitable for applications both in regenerative medicine and in biosensing.

Since the development of quantum mechanics it has been natural to analyze the connection between the classical and quantum mechanical descriptions of physical systems. In particular, one expects that, when quantum mechanical effects become negligible, the system behaves as dictated by classical mechanics. One famous relation between classical and quantum theory is due to Ehrenfest. This result was later developed and put on firm mathematical foundations by Hepp. He proved that matrix elements of bounded functions of quantum observables between suitable coherent states (which depend on Planck's constant h) converge to classical values evolving according to the expected classical equations as h goes to zero. His results were later generalized by Ginibre and Velo to bosonic systems with infinitely many degrees of freedom and to scattering theory. In this thesis we study the classical limit of the Nelson model, which describes non-relativistic particles, whose evolution is dictated by the Schrödinger equation, interacting through a Yukawa-type potential with a scalar relativistic field, whose evolution is dictated by the Klein-Gordon equation. The classical limit is a mean-field and weak-coupling limit. We prove that the transition amplitude of a creation or annihilation operator between suitable coherent states converges, in the classical limit, to the solution of the system of differential equations that describes the classical evolution of the theory. The quantum evolution operator converges to the evolution operator of the fluctuations around the classical solution. Transition amplitudes of normal-ordered products of creation and annihilation operators between coherent states converge to the corresponding products of the classical solutions; between fixed-particle states, they converge to an average of products of classical solutions corresponding to different initial conditions.
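A schematic statement of the Hepp-type result mentioned above can be written as follows; the notation is illustrative and not taken from the thesis:

```latex
% Illustrative Hepp-type classical limit: matrix elements of bounded functions
% of Heisenberg-evolved observables in coherent states concentrate, as
% hbar -> 0, on the classical trajectory.
\lim_{\hbar \to 0}
  \big\langle \psi_{\hbar}(q,p) \big|\,
    F\big(\hat{x}(t), \hat{p}(t)\big)\,
  \big| \psi_{\hbar}(q,p) \big\rangle
  = F\big(q(t), p(t)\big)
```

Here $\psi_{\hbar}(q,p)$ is a coherent state centered at the phase-space point $(q,p)$, $\hat{x}(t)$ and $\hat{p}(t)$ are the Heisenberg-evolved observables, and $(q(t),p(t))$ solves Hamilton's equations with initial data $(q,p)$.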

This work is concerned with the increasing relationship between two distinct multidisciplinary research fields, Semantic Web technologies and scholarly publishing, which in this context converge into one precise research topic: Semantic Publishing. In the spirit of the original aim of Semantic Publishing, i.e. the improvement of scientific communication by means of semantic technologies, this thesis proposes theories, formalisms and applications for opening up semantic publishing to an effective interaction between scholarly documents (e.g., journal articles) and their related semantic and formal descriptions. The main aim of this work is to increase the users' comprehension of documents and to allow document enrichment, discovery and linkage to document-related resources and contexts, such as other articles and raw scientific data. To achieve these goals, this thesis investigates and proposes solutions for three of the main issues that semantic publishing promises to address, namely: the need for tools that link document text to a formal representation of its meaning, the lack of complete metadata schemas for describing documents according to the publishing vocabulary, and the absence of effective user interfaces for easily acting on semantic publishing models and theories.

The aim of this thesis is the study of techniques for efficient management and use of the spectrum based on cognitive radio technology. The ability of cognitive radio technologies to adapt to the real-time conditions of their operating environment offers the potential for more flexible use of the available spectrum. In this context, international interest is particularly focused on the "white spaces" in the UHF band of digital terrestrial television. Spectrum sensing and geo-location databases have been considered as means of obtaining information on the electromagnetic environment. Different methodologies have been considered in order to investigate the spectral resources potentially available to white space devices in the TV band. The adopted methodologies are based on the geo-location database approach, used either autonomously or in combination with sensing techniques. A novel and computationally efficient methodology for the calculation of the maximum permitted white space device EIRP is then proposed; the methodology is suitable for implementation in TV white space databases. Different Italian scenarios are analyzed in order to identify both the available spectrum and the white space device emission limits. Finally, two applications of cognitive radio technology are considered. The first is emergency management, where attention is focused on both cognitive and autonomic networking approaches in the deployment of an emergency management system. Cognitive technology is then considered in applications related to satellite systems: in particular, a hybrid cognitive satellite-terrestrial system is introduced and an analysis of the coexistence between terrestrial and satellite networks under a cognitive approach is performed.
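To give a feel for the kind of calculation a TV white space database performs when setting a device's emission limit, here is a minimal link-budget sketch: the permitted EIRP is the interference level a protected DTT receiver can tolerate, plus the path loss back to the device, minus a safety margin. The free-space model, figures and margin are illustrative textbook assumptions, not the computationally efficient methodology developed in the thesis.

```python
# Hedged sketch of a white-space EIRP limit via a simple link budget.
# Propagation model and margin are illustrative assumptions.
import math

def free_space_path_loss_db(distance_m, freq_hz):
    """Free-space path loss: FSPL = 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

def max_wsd_eirp_dbm(i_max_dbm, distance_m, freq_hz, safety_margin_db=10.0):
    """Permitted EIRP so that interference received at the protected point
    stays below i_max_dbm: EIRP = I_max + path_loss - margin (dB units)."""
    return i_max_dbm + free_space_path_loss_db(distance_m, freq_hz) - safety_margin_db
```

The limit grows with distance from the protected DTT coverage area and shrinks with the required protection level, which is why the permitted EIRP is strongly location-dependent and must be computed per-pixel by the database.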

The assessment of the risks associated with the operation of gas storage systems, such as induced seismicity and subsidence, is a basic requirement for their correct management and design, and rests on characterizing how pore-pressure changes in the subsurface affect the stress state. The main goal of this project is the development of a methodology able to quantify reservoir deformations as a function of pore pressure, and to calibrate the models on case studies with real monitoring data, so as to allow a comparison with the model predictions. In this thesis, the theory of inhomogeneities is used, through a semi-analytical approach, to determine the changes in the elastic fields produced by the withdrawal and injection of fluids in geological reservoirs. The extent, shape and magnitude of the induced stress changes are evaluated through the concept of Coulomb failure stress change, by means of a finite element numerical analysis. The methodology has been applied to, and calibrated on, two depleted reservoirs converted to storage systems, which differ in dataset, geology, petrophysics and operating conditions. The changes in the elastic fields and the subsidence were computed, and the Coulomb failure stress change was mapped for both cases. The results show good agreement with the monitoring observations, supporting the soundness of the methodology and indicating a low probability of induced seismicity.
This project has produced a fast and effective methodological platform for estimating the influence of gas storage systems on the stress state of the Earth's crust: during storage operations it allows the induced deformations and stresses to be monitored, while at the design stage it allows operational strategies for monitoring and mitigating the geological risks associated with these systems to be evaluated.
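The Coulomb failure stress change used to screen for induced seismicity can be sketched in a few lines. The sign convention (compression positive for the normal stress, so a pore-pressure increase unclamps the fault) and the friction coefficient below are standard illustrative choices, not the specific setup calibrated in the thesis.

```python
# Hedged sketch of the Coulomb failure stress change on a fault plane.
# Convention: compression positive for the normal stress; friction value
# is an illustrative assumption.

def coulomb_stress_change(d_shear_mpa, d_normal_mpa, d_pore_mpa, friction=0.6):
    """dCFS = d_tau - mu * (d_sigma_n - d_p).
    Positive dCFS brings the fault closer to failure; negative stabilizes it."""
    return d_shear_mpa - friction * (d_normal_mpa - d_pore_mpa)
```

Mapping this quantity over the reservoir and its bounding faults, for both withdrawal (pressure drop) and injection (pressure rise) phases, is what allows the stress perturbation of a storage cycle to be compared against the observed seismicity.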