58 results for 982[Guido]

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

10.00% 10.00%

Publisher:

Abstract:

Ion channels are protein molecules embedded in the lipid bilayer of cell membranes. They act as powerful sensing elements, switching chemical-physical stimuli into ion fluxes. At a glance, ion channels are water-filled pores which can open and close in response to different stimuli (gating) and, once open, select the permeating ion species (selectivity). They play a crucial role in several physiological functions, like nerve transmission, muscular contraction, and secretion. Besides, ion channels can be used in technological applications for different purposes (sensing of organic molecules, DNA sequencing). As a result, there is remarkable interest in understanding the molecular determinants of channel functioning. Nowadays, both the functional and the structural characteristics of ion channels can be experimentally solved. The purpose of this thesis was to investigate the structure-function relation in ion channels by computational techniques. Most of the analyses focused on the mechanisms of ion conduction and on numerical methodologies to compute the channel conductance. The standard techniques for atomistic simulation of complex molecular systems (Molecular Dynamics) cannot be routinely used to calculate ion fluxes in membrane channels because of the high computational resources needed. The main step forward of the PhD research activity was the development of a computational algorithm for the calculation of ion fluxes in protein channels. The algorithm, based on the electrodiffusion theory, is computationally inexpensive, and was used for an extensive analysis of the molecular determinants of channel conductance. The first recording of ion fluxes through a single protein channel dates back to 1976, and since then measuring the single-channel conductance has become a standard experimental procedure. Chapter 1 introduces ion channels and the experimental techniques used to measure channel currents.
The abundance of functional data (channel currents) is not matched by an equal abundance of structural data. The bacterial potassium channel KcsA was the first selective ion channel to be experimentally solved (1998), and after KcsA the structures of four other potassium channels were revealed. These experimental data inspired a new era in ion channel modeling. Once the atomic structures of channels are known, it is possible to define mathematical models based on physical descriptions of the molecular systems. These physically based models can provide an atomic description of ion channel functioning and predict the effect of structural changes. Chapter 2 introduces the computational methods used throughout the thesis to model ion channel functioning at the atomic level. In Chapters 3 and 4 ion conduction through potassium channels is analyzed by an approach based on the Poisson-Nernst-Planck electrodiffusion theory. In the electrodiffusion theory ion conduction is modeled by the drift-diffusion equations, thus describing the ion distributions by continuum functions. The numerical solver of the Poisson-Nernst-Planck equations was tested on the KcsA potassium channel (Chapter 3), and then used to analyze how the atomic structure of the intracellular vestibule of potassium channels affects the conductance (Chapter 4). As a major result, a correlation emerged between the channel conductance and the potassium concentration in the intracellular vestibule. The atomic structure of the channel modulates the potassium concentration in the vestibule, and thus its conductance. This mechanism explains the phenotype of the BK potassium channels, a sub-family of potassium channels with high single-channel conductance.
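The electrodiffusion picture described above can be made concrete through its simplest closed-form special case: the Goldman-Hodgkin-Katz (GHK) current equation, obtained from the Nernst-Planck flux under a constant-field assumption. The sketch below is purely illustrative (the single-channel permeability, concentrations and temperature are made-up values; this is not the thesis's Poisson-Nernst-Planck solver):

```python
import math

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)
T = 298.0     # temperature, K (illustrative)

def ghk_current(V, P, z, c_in, c_out):
    """Goldman-Hodgkin-Katz current through a single channel.

    V     : membrane voltage in volts (inside minus outside)
    P     : single-channel permeability, m^3/s (hypothetical value)
    z     : ion valence
    c_in  : intracellular concentration, mol/m^3
    c_out : extracellular concentration, mol/m^3
    """
    if abs(V) < 1e-12:
        # V -> 0 limit of the GHK expression
        return P * z * F * (c_in - c_out)
    u = z * F * V / (R * T)   # reduced (dimensionless) voltage
    return (P * z**2 * F**2 * V / (R * T)
            * (c_in - c_out * math.exp(-u)) / (1.0 - math.exp(-u)))
```

A quick sanity check for any such implementation: with symmetric concentrations the current is ohmic, and it vanishes at the Nernst reversal potential V = (RT/zF) ln(c_out/c_in).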
The functional role of the intracellular vestibule is also the subject of Chapter 5, where the affinities of the potassium channels hEag1 (involved in tumour-cell proliferation) and hErg (important in the cardiac cycle) for several pharmaceutical drugs are compared. Both experimental measurements and molecular modeling were used to identify differences in the blocking mechanisms of the two channels, which could be exploited in the synthesis of selective blockers. The experimental data pointed out the different role of residue mutations in the blockage of hEag1 and hErg, and the molecular modeling provided a possible explanation based on different binding sites in the intracellular vestibule. Modeling ion channels at the molecular level relates the functioning of a channel to its atomic structure (Chapters 3-5), and can also be used to predict the structure of ion channels (Chapters 6-7). In Chapter 6 the structure of the KcsA potassium channel depleted of potassium ions is analyzed by molecular dynamics simulations. Recently, a surprisingly high osmotic permeability of the KcsA channel was experimentally measured. All the available crystallographic structures of KcsA refer to a channel occupied by potassium ions; to conduct water molecules, potassium ions must be expelled from KcsA. The structure of the potassium-depleted KcsA channel and the mechanism of water permeation are still unknown, and were investigated here by numerical simulations. Molecular dynamics of KcsA identified a possible atomic structure of the potassium-depleted channel and a mechanism for water permeation. Depletion of potassium ions is an extreme situation for potassium channels, unlikely in physiological conditions. However, the simulation of such an extreme condition can help to identify the structural conformations, and thus the functional states, accessible to potassium channels.
The last chapter of the thesis deals with the atomic structure of the α-Hemolysin channel. α-Hemolysin is the major determinant of Staphylococcus aureus toxicity, and is also the prototype channel for possible technological applications. The atomic structure of α-Hemolysin was revealed by X-ray crystallography, but several lines of experimental evidence suggest the presence of an alternative atomic structure. This alternative structure was predicted by combining experimental measurements of single-channel currents with numerical simulations. The thesis is organized in two parts: the first part provides an overview of ion channels and of the numerical methods adopted throughout the thesis, while the second part describes the research projects tackled in the course of the PhD programme. The aim of the research activity was to relate the functional characteristics of ion channels to their atomic structure. In presenting the different research projects, the role of numerical simulations in analyzing the structure-function relation in ion channels is highlighted.

Abstract:

This study deals with the protection of social rights in Europe and aims to outline the position currently held by these rights in EU law. The first two chapters provide an overview of the regulatory framework in which social rights lie, through a reorganisation of the international sources. In particular, the international instruments for the protection of social rights are taken into account, both at the universal level, stemming from the activity of the United Nations and of its specialized agency, the International Labour Organization, and at the regional level, related to the activity of the Council of Europe. The analysis of sources concludes with a reconstruction of the stages of the recognition of social rights in the EU. The second chapter describes the path followed by social rights in the EU: it examines the founding Treaties and their subsequent amendments, the Charter of Fundamental Social Rights of Workers of 1989 and, in particular, the Charter of Fundamental Rights of the European Union, whose legal status was recently raised to that of primary law by the Treaty of Lisbon, signed in December 2007. The third chapter then focuses on the substantive aspects of the recognition of rights by the EU: it provides a framework of the content and scope of the rights accepted into Community law by the Charter of Fundamental Rights, which is an important contribution to the placement of social rights among the fundamental and indivisible rights of the person. In the last section of the work, attention is focused on the two profiles of effectiveness and justiciability of social rights, in order to understand the practical implications of the gradual creation of a system of protection of these rights at the Community level.
Under the first profile, the discussion focuses on effectiveness in the general context of the mechanisms for implementing “second generation” rights, with particular attention to the new instruments and actors of social Europe and to the effects of soft-law procedures. The second part of chapter four, finally, deals with the judicial protection of the rights in question. The limits of the jurisprudence of the Court of Justice of the European Union are most obvious precisely in the field of social rights, due to the gap between social rights and other fundamental rights. While the Community Court ensures the maximum level of protection for human rights and fundamental freedoms, social rights are often degraded into mere aspirations of the EU institutions and the Member States. That is, the sources in the social field (the European Social Charter and the Community Charter) represent only a basis for the interpretation and application of the social provisions of secondary legislation, unlike the ECHR, which is considered by the Court to be part of Community law. Moreover, the Court of Justice stands in the middle of the difficult comparison between social values and market rules, which it considers necessary to balance: despite its hesitancy to recognise the juridical character of social rights, the need to protect social interests has indeed justified certain restrictions on the free movement of goods, the freedom to provide services and Community competition law. The road towards the recognition and full protection of social rights in European Union law appears, however, still long and hard, as shown by the recent Laval and Viking judgments, in which the Community court, while enhancing the Nice Charter, did not give priority to fundamental social rights, assigning them instead the role of (proportionate and justified) limits on economic freedoms.

Abstract:

Productivity and efficiency are terms commonly used to characterize a firm's ability to use its resources, in both the private and the public sector. Both concepts rest on a theory of production, which is essential for establishing the baseline criteria against which the outputs of productive activity are compared with the factors used to obtain them. Firms, for their part, choose what to produce and how much to invest on the basis of their market prospects and factor costs. The latter can be influenced by government policies that provide incentives and subsidies in order to alter firms' decisions about location and growth. In that case, firms may prefer not to settle at the optimal production equilibrium, maximizing productivity and efficiency, in order to take advantage of those incentives; the incentives may therefore distort the resource allocation of the subsidised firms. The aim of this work is to assess, through parametric and non-parametric methodologies, whether the incentives granted under Law 488/92, the main regional policy in Italy for the southern regions of the country over the period 1995-2004, affected the total factor productivity (TFP) of the subsidised firms. A survey was conducted of the main studies in the literature on TFP, on support to firms through capital incentives and, in part, on efficiency. Estimating total factor productivity requires specifying a production function; attention focused on parametric models, which require the specification of a given functional form relating output to the production factors.
From this specification, the Total Factor Productivity used in the empirical analysis was derived; it is the measure by which the productive efficiency of firms is assessed. The sample of firms comes from merging the Law 488/92 data with the balance-sheet data of the AIDA database. The model was estimated and several alternative models for TFP estimation were explored; finally, non-parametric methods (matching techniques based on the propensity score) and parametric methods (difference-in-differences) for evaluating the impact of capital subsidies are described. The empirical analysis is then presented: the first part illustrates the crucial steps and the results obtained in building the dataset, while the second describes the estimation of the TFP model and compares parametric and non-parametric methodologies to assess whether the policy affected the TFP level of the subsidised firms.
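The two evaluation tools named above (propensity-score matching and difference-in-differences) can be sketched in a few lines. This is a minimal illustration with hypothetical data, not the estimation pipeline actually applied to the Law 488/AIDA sample:

```python
def diff_in_diffs(pre_treated, post_treated, pre_control, post_control):
    """Difference-in-differences estimate of the average treatment effect.

    Each argument is a list of outcomes (e.g. log TFP) for subsidised
    (treated) or non-subsidised (control) firms, before or after the
    policy. All numbers used with this sketch are made up.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return ((mean(post_treated) - mean(pre_treated))
            - (mean(post_control) - mean(pre_control)))

def nearest_neighbour_match(treated_scores, control_scores):
    """1-nearest-neighbour propensity-score matching: for each treated
    firm's estimated score, return the index of the closest control firm."""
    return [min(range(len(control_scores)),
                key=lambda j: abs(s - control_scores[j]))
            for s in treated_scores]
```

In practice the matched control group returned by the second function would feed the first: the difference-in-differences contrast is computed between treated firms and their matched controls, before and after the subsidy.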

Abstract:

The majority of carbonate reservoirs are oil-wet, which is an unfavorable condition for oil production. Generally, the total oil recovery after both primary and secondary recovery in an oil-wet reservoir is low, so the amount of oil producible by enhanced oil recovery techniques is still large. Alkali substances have been proven able to reverse rock wettability from oil-wet to water-wet, which is a favorable condition for oil production. However, the wettability reversal mechanism would require an uneconomical aging period to reach the maximum reversal condition. Intermittent flow with an optimum pausing period is therefore combined with alkali flooding (the combination technique) to enhance the wettability reversal mechanism and, as a consequence, improve oil recovery. The aims of this study are to evaluate the efficiency of the combination technique and to study the parameters that affect this method. In order to implement alkali flooding, reservoir rock and fluid properties were gathered, e.g. interfacial tension of fluids, rock wettability, etc. The flooding efficiency curves obtained from core flooding are used as the main criterion for evaluating the performance of the technique. The combination technique improves oil recovery when the alkali concentration is lower than 1 wt% (where the wettability reversal mechanism is dominant). The soap plug that appears when a high alkali concentration is used is absent in this combination, as shown by the lack of a drop in production rate. Moreover, the use of a low alkali concentration limits alkali loss. This combination probably also improves oil recovery in fractured carbonate reservoirs, in which oil is uneconomically produced. The results of the current study indicate that the combination technique is an option that can improve the production of carbonate reservoirs while consuming a smaller quantity of alkali in the process.

Abstract:

Background: in April 2006 the American Heart Association approved a new definition and classification of cardiomyopathies (B. J. Maron et al., 2006), recognising them as a heterogeneous group of diseases associated with mechanical and/or electrical dysfunction attributable to a wide variety of causes. The distinction between the various forms is no longer based on the underlying etiopathogenetic processes, but on the clinical presentation of the disease. Primary forms, with predominant or exclusive cardiac involvement, are thus distinguished from secondary forms, in which the cardiomyopathy is part of a systemic disorder with extracardiac manifestations as well. This study focuses on the analysis of cardiomyopathies diagnosed in the first years of life, in which the incidence of secondary forms is higher than in adults, with particular attention to forms associated with metabolic disorders. Specifically, our aim is to highlight the influence of an early diagnosis on the course of the disease. Materials and methods: we carried out a descriptive study based on a retrospective analysis of all patients seen at the Centre of Paediatric and Developmental-Age Cardiology and Cardiac Surgery of the S. Orsola-Malpighi Hospital in Bologna, from 1990 to 2006, with a diagnosis of cardiomyopathy made in the first two years of life. Overall, 40 patients were studied: 20 with hypertrophic, 18 with dilated and 2 with restrictive cardiomyopathy, with a mean age at diagnosis of 4.5 months (range: 0-24 months).
For the 23 patients described from 2002 onwards, the following metabolic investigations were performed: blood gas analysis, carnitine assay, free fatty acid metabolism (pre- and post-prandial), quantitative plasma amino acids (pre- and post-prandial), organic acids, urinary mucopolysaccharides and oligosaccharides, and acylcarnitines. The same patients also underwent skeletal muscle biopsy for ultrastructural analysis and for assay of the enzymatic activity of the mitochondrial respiratory chain; in the same session a skin biopsy was taken for the assessment of possible enzyme deficiencies in fibroblasts. Results: the mean age at diagnosis was 132 days (range: 0-540 days) for hypertrophic and 90 days (range: 0-210 days) for dilated cardiomyopathies, while the 2 girls with restrictive cardiomyopathy were 18 and 24 months old at diagnosis. The metabolic investigations performed on the 23 patients identified 5 children with metabolic disease (2 severe mitochondrial respiratory chain deficiencies, 1 β-oxidation insufficiency with altered acylcarnitines, 1 Barth syndrome and 1 Pompe disease) and one case of dilated cardiomyopathy associated with nutritional rickets. Of these, 4 died and one was lost to follow-up, while the form associated with rickets showed a marked improvement in cardiac function after appropriate therapy with vitamin D and calcium. In all of them the disease had been diagnosed within the first year of life. This agrees with the studies documented in the literature, which associate metabolic diseases with early onset and poor prognosis.
From a morphological point of view, a severe course was associated with the dilated form, and in particular with left-ventricular non-compaction, compared with the hypertrophic form and, among the hypertrophic forms, with those with ventricular outflow obstruction. Conclusions: in agreement with the literature, we found that cardiomyopathies associated with secondary forms, and in particular with metabolic disorders, are more frequent in early infancy than at later ages; for this reason, a very early onset of cardiomyopathy should always raise the suspicion of a systemic disease. We also observed a close correlation between the child's age at diagnosis and the course of the cardiomyopathy, with a worse prognosis the earlier the clinical presentation. In particular, prenatal diagnosis was associated in most cases with a severe course, behaving as a variable independent of other prognostic factors. We therefore consider it appropriate to submit all children diagnosed with cardiomyopathy in the first years of life to a complete metabolic screening, aimed at identifying those forms for which a specific therapy can be started or, conversely, at excluding disorders that might contraindicate heart transplantation, should it become clinically necessary.

Abstract:

Precipitation retrieval over high latitudes, particularly snowfall retrieval over ice and snow using satellite-based passive microwave radiometers, is currently an unsolved problem. The challenge results from the large variability of the microwave emissivity spectra of snow and ice surfaces, which can mimic, to some degree, the spectral characteristics of snowfall. This work focuses on the investigation of a new snowfall detection algorithm specific to high-latitude regions, based on a combination of active and passive sensors able to discriminate between snowing and non-snowing areas. The space-borne Cloud Profiling Radar (on CloudSat), the Advanced Microwave Sounding Units A and B (on NOAA-16) and the infrared spectrometer MODIS (on AQUA) were co-located for 365 days, from October 1st, 2006 to September 30th, 2007. CloudSat products were used as truth to calibrate and validate all the proposed algorithms. The methodological approach followed can be summarised in two steps. In the first step, an empirical search for a threshold aimed at discriminating the no-snow case was performed, following Kongoli et al. [2003]. Since this single-channel approach did not produce adequate results, a more statistically sound approach was attempted. Two different techniques, which allow computing the probability above and below a Brightness Temperature (BT) threshold, were applied to the available data. The first technique is based on a logistic distribution to represent the probability of snow given the predictors. The second technique, called the Bayesian Multivariate Binary Predictor (BMBP), is a fully Bayesian technique that requires no hypothesis on the shape of the probabilistic model (such as, for instance, the logistic) and only requires the estimation of the BT thresholds.
The results obtained show that both proposed methods are able to discriminate between snowing and non-snowing conditions over the polar regions with a probability of correct detection larger than 0.5, highlighting the importance of a multispectral approach.
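As an illustration of the first technique, a logistic model of the snow probability can be fitted by gradient descent. The sketch below uses synthetic, standardised predictors (raw brightness temperatures in kelvin would need scaling first) and is not the operational algorithm calibrated against CloudSat:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit P(snow | predictors) = sigmoid(w . x + b) by batch gradient
    descent on the log-likelihood.

    X : list of feature vectors (e.g. standardised BT anomalies)
    y : list of 0/1 labels (non-snowing / snowing, e.g. from CloudSat)
    """
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_feat
        grad_b = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi            # gradient of the negative log-likelihood
            for k in range(n_feat):
                grad_w[k] += err * xi[k]
            grad_b += err
        w = [wj - lr * grad_w[k] / len(X) for k, wj in enumerate(w)]
        b -= lr * grad_b / len(X)
    return w, b

def predict_snow_probability(w, b, x):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
```

The implied BT threshold of the fitted model is the point where the predicted probability crosses 0.5, i.e. where w . x + b = 0.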

Abstract:

Humans process numbers in a similar way to animals; there are countless studies reporting similar performance between animals and humans (adults and/or children). Three models have been developed to explain the cognitive mechanisms underlying number processing. The triple-code model (Dehaene, 1992) posits a mental number line as the preferred way to represent magnitude. The mental number line gives rise to three characteristic effects: the distance, magnitude and SNARC effects. The SNARC effect shows a spatial association between number and space representations: small numbers are associated with left space while large numbers are associated with right space. Recently a vertical SNARC effect has been found (Ito & Hatta, 2004; Schwarz & Keus, 2004), reflecting a space-related bottom-to-top representation of numbers. These horizontal and vertical magnitude representations could influence subject performance in explicit and implicit digit tasks. This research project aimed to investigate the spatial components of number representation using different experimental designs and tasks. Experiment 1 focused on horizontal and vertical number representations in within- and between-subjects designs with parity and magnitude comparison tasks, presenting positive or negative Arabic digits (1-9 without 5). Experiment 1A replicated the SNARC and distance effects in both spatial arrangements. Experiment 1B showed a horizontal reversed SNARC effect in both tasks, while a vertical reversed SNARC effect was found only in the comparison task. In Experiment 1C two groups of subjects performed both tasks under two different instruction-responding hand assignments with positive numbers. The results did not show any significant difference between the two assignments, even if the vertical number line seemed to be more flexible than the horizontal one. On the whole, Experiment 1 seemed to demonstrate a contextual (i.e. 
task set) influence on the nature of the SNARC effect. Experiment 2 focused on the effect of horizontal and vertical number representations on spatial biases in paper-and-pencil bisection tasks. In Experiment 2A participants were requested to bisect physical lines and digit strings (of 2s or 9s) horizontally and vertically. The findings demonstrated that strings of the digit 9 tended to generate a larger rightward bias than strings of the digit 2 in the horizontal condition. In the vertical condition, however, strings of the digit 2 generated a larger upward bias than strings of the digit 9, suggesting a top-to-bottom number line. In Experiment 2B participants were asked to bisect lines flanked by numbers (i.e. 1 or 7) in four spatial arrangements: horizontal, vertical, right-diagonal and left-diagonal. Four number conditions were created according to congruent or incongruent number-line representations: 1-1, 1-7, 7-1 and 7-7. The main results showed a more reliable rightward bias in the horizontal congruent condition (1-7) than in the incongruent condition (7-1). Vertically, the incongruent condition (1-7) produced a significant bias towards the bottom of the line compared with the congruent condition (7-1). Experiment 2 suggested a rather rigid horizontal number line, while in the vertical condition the number representation could be more flexible. In Experiment 3 we adopted the materials of Experiment 2B in order to find a number-line effect on temporal (motor) performance. Participants were presented with horizontal, vertical, right-diagonal and left-diagonal lines flanked by the same digits (i.e. 1-1 or 7-7) or by different digits (i.e. 1-7 or 7-1). The digits were spatially congruent or incongruent with their respective hypothesized mental representations. Participants were instructed to touch the lines either close to the large digit, close to the small digit, or at the bisection point. Number processing influenced movement execution more than movement planning. 
Number congruency influenced spatial biases mostly along the horizontal but also along the vertical dimension. These results support a two-dimensional magnitude representation. Finally, Experiment 4 addressed the visuo-spatial manipulation of number representations for accessing and retrieving arithmetic facts. Participants were requested to perform a number-matching and an addition-verification task. The findings showed an interference effect between sum nodes and neutral nodes only with a horizontal presentation of the digit cues in the number-matching task. In the addition-verification task, performance was similar for horizontal and vertical presentations of the arithmetic problems. In conclusion, the data seem to show an automatic activation of the horizontal number line, which is also used to retrieve arithmetic facts. The horizontal number line seems to be more rigid and the preferred way to order numbers from left to right; a possible explanation is the left-to-right direction of reading and writing. The vertical number line seems to be more flexible and more task-dependent, perhaps reflecting the many examples in the environment that represent numbers either from bottom to top or from top to bottom. However, the bottom-to-top number line seems to be activated only by explicit task demands.
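The SNARC effect reported throughout these experiments is conventionally quantified by regressing the right-minus-left response-time difference on digit magnitude. A minimal sketch of that computation, with made-up reaction times (this is not the thesis's analysis code):

```python
def snarc_slope(digits, rt_left, rt_right):
    """Least-squares slope of (right-hand RT - left-hand RT) against digit
    magnitude. A negative slope indicates the classic left-to-right SNARC
    effect: right-hand responses become relatively faster as digits grow.

    digits   : digit magnitudes presented
    rt_left  : mean left-hand reaction time per digit (ms, hypothetical)
    rt_right : mean right-hand reaction time per digit (ms, hypothetical)
    """
    drt = [r - l for l, r in zip(rt_left, rt_right)]
    n = len(digits)
    mean_x = sum(digits) / n
    mean_y = sum(drt) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(digits, drt))
    var = sum((x - mean_x) ** 2 for x in digits)
    return cov / var
```

A reversed SNARC effect, as in Experiment 1B, would simply show up as a positive slope in this regression.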

Abstract:

Roberta Frigeni's research, conducted over a broad diachronic span, examines a sample of specula principum, both edited and unedited, produced between the twelfth and fifteenth centuries, investigating their language as a privileged point of reference. It identifies terminological persistences and recurring syntagmatic clusters in order to single out concepts useful for outlining a political lexicon specific to this body of texts, in connection with the rise of the European state in the thirteenth century (with particular attention to France, in the reigns of Louis IX and Philip the Fair). Starting from a critical analysis of Quentin Skinner's theses on the 'paradiastolic redescription' of the system of classical virtues in the De principatibus, the study launches a backward (à rebours) line of inquiry which, by probing the language, traces in the fifteenth-century treatises of the institutiones regum (Pontano, Patrizi, Carafa, Platina) and in the medieval specula principum (Helinand of Froidmont, Gilbert of Tournai, Vincent of Beauvais, William Peraldus, Giles of Rome, Guido Vernani) a consonance of motifs, in both syntax and imagery, deployed to illustrate the semantic potential of the name of prudentia, identified as the only virtue to survive Machiavelli's 'redescription' of the ethical code. By investigating the progressive expansions of the semantic field that grew up around the name of the virtue of prudence within the specular literature, the research shows how the dialectical relationship with the lexemes sapientia, astutia, fides and experientia played a decisive role in the emergence of an image of the prince emancipated from the biblical figure of the "rex sapiens", and in the formation of a lexicon hospitable to the concrete manifestations of political and economic life.
The processes of dilation and rarefaction of the semantic field of prudentia serve, in fact, to illustrate how the language of the specular literature registers the acquisition of new theoretical tools thanks to the renewal of the available sources over the thirteenth century. By progressively substituting the more recent Aristotelian corpus for the Old Testament apparatus alone, these sources made it possible to integrate the conception of the virtues in an operative sense, adapting it to the political and economic needs of the new monarchical institutional contexts.

Abstract:

Biological processes are very complex mechanisms, most of them accompanied by, or manifested as, signals that reflect their essential characteristics and qualities. The development of diagnostic techniques based on signal and image acquisition from the human body is commonly regarded as one of the propelling factors of the recent advancements in medicine and the biosciences. The instruments used for biological signal and image recording, like any other acquisition system, are affected by non-idealities which, to different degrees, negatively impact the accuracy of the recording. This work discusses how these effects can be attenuated, and ideally removed, with particular attention to ultrasound imaging and extracellular recordings. Original algorithms developed during the PhD research activity are examined and compared to those in the literature tackling the same problems; results are drawn on the basis of comparative tests on both synthetic and in-vivo acquisitions, evaluating standard metrics in the respective fields of application. All the developed algorithms share an adaptive approach to signal analysis, meaning that their behavior is not dependent only on designer choices, but is driven by input signal characteristics too. Performance comparisons following the state of the art concerning image quality assessment, contrast gain estimation and resolution gain quantification, as well as visual inspection, highlighted very good results for the proposed ultrasound image deconvolution and restoration algorithms: axial resolution up to 5 times better than that of algorithms in the literature is possible. Concerning extracellular recordings, the results of the proposed denoising technique compared to other signal processing algorithms showed an improvement over the state of the art of almost 4 dB.
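The thesis's deconvolution algorithms are not reproduced here, but the underlying idea of frequency-domain restoration can be illustrated with a textbook Wiener deconvolution under a circular blur model (naive DFT, toy signal; all values are made up):

```python
import cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform (enough for a short demo)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part (the signals here are real)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N
            for n in range(N)]

def circular_conv(x, h):
    """Circular convolution: the blur model assumed by this sketch."""
    N = len(x)
    return [sum(h[m] * x[(n - m) % N] for m in range(N)) for n in range(N)]

def wiener_deconvolve(y, h, noise_power=1e-3):
    """Wiener deconvolution of observation y given a known blur kernel h.

    noise_power acts as a regularising noise-to-signal ratio: larger values
    suppress noise amplification at frequencies where h is weak.
    """
    Y, H = dft(y), dft(h)
    X_hat = [Yk * Hk.conjugate() / (abs(Hk) ** 2 + noise_power)
             for Yk, Hk in zip(Y, H)]
    return idft(X_hat)
```

The `noise_power` term is what separates Wiener deconvolution from naive inverse filtering, which would divide by `H` directly and blow up wherever the kernel's spectrum is small.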

Abstract:

Technology scaling increasingly emphasizes the complexity and non-ideality of the electrical behavior of semiconductor devices and boosts interest in alternatives to the conventional planar MOSFET architecture. TCAD simulation tools are fundamental to the analysis and development of new technology generations. However, the increasing device complexity is reflected in an increased dimensionality of the problems to be solved. The trade-off between accuracy and computational cost of the simulation is especially influenced by domain discretization: mesh generation is therefore one of the most critical steps, and automatic approaches are sought. Moreover, the problem size is further increased by process variations, calling for a statistical representation of the single device through an ensemble of microscopically different instances. The aim of this thesis is to present multi-disciplinary approaches to handle this increasing problem dimensionality from a numerical simulation perspective. The topic of mesh generation is tackled by presenting a new Wavelet-based Adaptive Method (WAM) for the automatic refinement of 2D and 3D domain discretizations. Multiresolution techniques and efficient signal processing algorithms are exploited to increase grid resolution in the domain regions where relevant physical phenomena take place. Moreover, the grid is dynamically adapted to follow solution changes produced by bias variations, and quality criteria are imposed on the produced meshes. The further dimensionality increase due to variability in extremely scaled devices is considered with reference to two increasingly critical phenomena, namely line-edge roughness (LER) and random dopant fluctuations (RD).
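A minimal 1D sketch of the idea behind wavelet-driven mesh adaptation, assuming a simple Haar basis (the thesis's WAM is a full 2D/3D method; this only shows how detail coefficients can flag cells for refinement):

```python
def haar_details(samples):
    """First-level Haar wavelet detail coefficients of a 1D solution profile.

    samples must have even length; each coefficient measures how much the
    solution changes across one coarse cell (a pair of fine samples).
    """
    return [(samples[2 * i] - samples[2 * i + 1]) / 2.0
            for i in range(len(samples) // 2)]

def refine_flags(samples, threshold):
    """Flag coarse cells whose Haar detail exceeds the threshold, i.e.
    where the solution varies too rapidly for the current grid spacing."""
    return [abs(d) > threshold for d in haar_details(samples)]
```

In an adaptive loop, flagged cells would be subdivided and the solution recomputed, so that resolution concentrates where the detail coefficients (and hence the local solution gradients, e.g. near a junction) are largest.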
The impact of these phenomena on FinFET devices, which represent a promising alternative to planar CMOS technology, is estimated through 2D and 3D TCAD simulations and statistical tools, taking into account the matching performance of single devices as well as of basic circuit blocks such as SRAMs. Several process options are compared, including resist- and spacer-defined fin patterning as well as different doping profile definitions. Combining statistical simulations with experimental data, the potential and shortcomings of the FinFET architecture are analyzed and useful design guidelines are provided, boosting the feasibility of this technology for mainstream applications in sub-45 nm generation integrated circuits.