946 results for LINEAR-ANALYSIS
Abstract:
This thesis is dedicated to the analysis of non-linear pricing in oligopoly. Non-linear pricing is a predominant practice in most real markets, which are typically characterized by some degree of competition. The sophistication of pricing practices has increased in recent decades thanks to technological advances that have allowed companies to gather ever more data on consumers' preferences. The first essay of the thesis highlights the main characteristics of oligopolistic non-linear pricing. Non-linear pricing is a special case of price discrimination, and the theory of price discrimination has to be modified in the presence of oligopoly: in particular, a crucial role is played by the competitive externality, which implies that product differentiation is closely related to the possibility of discriminating. The essay reviews the theory of competitive non-linear pricing starting from its foundations, mechanism design under common agency. The different approaches to modelling non-linear pricing are then reviewed; in particular, the difference between price and quantity competition is highlighted. Finally, the close link between non-linear pricing and recent developments in the theory of vertical differentiation is explored. The second essay shows how the effects of non-linear pricing are determined by the relationship between demand and the technological structure of the market. The chapter focuses on a model in which firms supply a homogeneous product in two different sizes, information about consumers' reservation prices is incomplete, and the production technology is characterized by size economies. The model provides insights into the sizes of the products found in the market. Four equilibrium regions are identified, depending on the intensity of size economies relative to consumers' valuation of the good: regions in which the product is supplied in a single unit, in several different sizes, or only in a very large one. Both the private and the social desirability of non-linear pricing vary across the equilibrium regions. The third essay considers the broadband internet market. Non-discrimination issues appear to be at the core of the recent debate on whether or not to regulate the internet. One of the main questions posed is whether the telecom companies that own the networks constituting the internet should be allowed to offer quality-contingent contracts to content providers. The aim of this essay is to analyze the issue through a stylized two-sided market model of the web that highlights the effects of such discrimination on quality, prices and the participation of providers and final users in the internet. An overall welfare comparison is proposed, concluding that the final effects of regulation depend crucially on both the technology and the preferences of agents.
Abstract:
Introduction: Nocturnal frontal lobe epilepsy (NFLE) is a distinct syndrome of partial epilepsy whose clinical features comprise a spectrum of paroxysmal motor manifestations of variable duration and complexity, arising from sleep. Cardiovascular changes during NFLE seizures have previously been observed; however, the extent of these modifications and their relationship with seizure onset have not been analyzed in detail. Objective: The aim of the present study is to evaluate NFLE seizure-related changes in heart rate (HR) and in sympathetic/parasympathetic balance through wavelet analysis of HR variability (HRV). Methods: We evaluated the whole-night, digitally recorded video-polysomnography (VPSG) of 9 patients diagnosed with NFLE, with no history of cardiac disorders and normal cardiac examinations. Events with features of NFLE seizures were selected independently by three examiners and included in the study only if a consensus was reached. Heart rate was evaluated by measuring the interval between two consecutive R-waves of the QRS complexes (RRi). RRi series were digitally calculated for a period of 20 minutes including the seizures and resampled at 10 Hz using cubic spline interpolation. A multiresolution analysis was performed (Daubechies-16 form), and the squared level-specific amplitude coefficients were summed across the appropriate decomposition levels in order to compute total band powers in the bands of interest (LF: 0.039062-0.156248 Hz, HF: 0.156248-0.624992 Hz). A general linear model was then applied to estimate changes in RRi and in LF and HF powers during three different periods: the basal period (Basal; 30 sec, at least 30 sec before seizure onset, during which no movements occurred and autonomic conditions were stationary), the pre-seizure period (preSP; the 10 sec preceding seizure onset) and the seizure period (SP), corresponding to the clinical manifestations. For one of the patients (patient 9), three seizures associated with ictal asystole (IA) were recorded, so he was treated separately. Results: Group analysis performed on 8 patients (41 seizures) showed that RRi remained unchanged during the preSP, while a significant tachycardia was observed in the SP. A significant increase in the LF component was instead observed during both the preSP and the SP (p<0.001), while the HF component decreased only in the SP (p<0.001). For patient 9, a significant tachycardia associated with increased sympathetic activity (increased LF absolute values and LF%) was observed during the preSP and the first part of the SP. In the second part of the SP, a progressive decrease in HR, gradually exceeding basal values, occurred before the IA. The bradycardia was associated with an increase in parasympathetic activity (increased HF absolute values and HF%), contrasted by a further increase in LF until the occurrence of the IA. Conclusions: These data suggest that changes in autonomic balance toward a sympathetic prevalence always precede clinical seizure onset in NFLE, even when HR changes are not yet evident, confirming that wavelet analysis is a sensitive technique for detecting sudden variations of autonomic balance during transient phenomena. Finally, we demonstrated that epileptic asystole is associated with a parasympathetic hypertonus counteracted by a marked sympathetic activation.
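For illustration, the band-power step can be sketched in a few lines of Python (a minimal sketch assuming PyWavelets and SciPy; the variable names and the synthetic demo data are ours, not the study's). For a dyadic DWT at sampling rate fs, detail level j covers the band [fs/2^(j+1), fs/2^j] Hz, so at fs = 10 Hz levels 6-7 span the LF band and levels 4-5 the HF band quoted above.

```python
import numpy as np
from scipy.interpolate import CubicSpline
import pywt

def hrv_band_powers(beat_times, rr_intervals, fs=10.0):
    """Resample an RR-interval series at fs Hz, then estimate LF/HF powers
    by summing squared detail coefficients of a Daubechies-16 DWT."""
    t = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    rri = CubicSpline(beat_times, rr_intervals)(t)
    rri -= rri.mean()  # remove the DC component before power estimation

    # wavedec returns [cA7, cD7, cD6, ..., cD1]; detail level j covers
    # [fs/2^(j+1), fs/2^j] Hz, i.e. levels 6-7 ~ LF and levels 4-5 ~ HF
    coeffs = pywt.wavedec(rri, 'db16', level=7)
    power = {j: np.sum(coeffs[8 - j] ** 2) for j in range(1, 8)}
    return power[6] + power[7], power[4] + power[5]  # (LF, HF)

# Smoke test on ~20 min of synthetic beats
rr = np.random.default_rng(0).normal(0.8, 0.05, 1500)
print(hrv_band_powers(np.cumsum(rr), rr))
```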
Abstract:
The aim of this thesis is to apply multilevel regression models in the context of household surveys, where the hierarchical structure of the data is characterized by many small groups. In recent years, comparative and multilevel analyses in the field of perceived health have grown considerably. The purpose of this thesis is to develop a three-level multilevel analysis of the Physical Component Summary outcome in order to: evaluate the magnitude of the within- and between-group variance at each level (individual, household and municipality); explore which covariates affect perceived physical health at each level; compare the model-based and design-based approaches in order to establish the informativeness of the sampling design; and estimate a quantile regression for hierarchical data. The target population is Italian residents aged 18 years and older. Our study shows a high degree of homogeneity among level-1 units belonging to the same group, with an intraclass correlation of 27% in a two-level null model. Almost all of the variance is explained by level-1 covariates; in our model the explanatory variables with the greatest impact on the outcome are disability, inability to work, age and chronic diseases (18 pathologies). An additional analysis is performed using a novel procedure, the Linear Quantile Mixed Model, here termed Multilevel Linear Quantile Regression. This gives us the possibility of describing the conditional distribution of the response more generally through the estimation of its quantiles, while accounting for the dependence among the observations, which represents a clear advantage over classic multilevel regression. The median regression with random effects proves to be more efficient than the mean regression in representing the central tendency of the outcome, and a more detailed analysis of the conditional distribution of the response at other quantiles highlights a differential effect of some covariates along the distribution.
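A minimal sketch of a three-level random-intercept model of this kind, written in Python with statsmodels (all column names, effect sizes and the synthetic data are hypothetical, not the survey's): household random intercepts enter as a variance component nested within the municipality grouping.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the survey: individuals nested in households
# nested in municipalities, with many small groups.
rng = np.random.default_rng(1)
rows = []
for mun in range(30):
    u_mun = rng.normal(0, 2)                      # municipality effect
    for hh in range(20):
        u_hh = rng.normal(0, 3)                   # household effect
        for _ in range(rng.integers(1, 4)):       # 1-3 members per household
            age = rng.integers(18, 90)
            disab = rng.integers(0, 2)
            pcs = 55 - 0.15 * age - 8 * disab + u_mun + u_hh + rng.normal(0, 7)
            rows.append((pcs, age, disab, mun, f"{mun}-{hh}"))
df = pd.DataFrame(rows, columns=["pcs", "age", "disability",
                                 "municipality", "household"])

# Three-level random-intercept model: households as a variance component
# within the municipality grouping.
model = smf.mixedlm("pcs ~ age + disability", df, groups="municipality",
                    vc_formula={"household": "0 + C(household)"})
print(model.fit().summary())
```

From the corresponding null (intercept-only) model, the intraclass correlation is the ratio of the group-level variance to the total variance.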
Abstract:
This work describes structures produced with polymers on surfaces. The applications range from PMMA and PNIPAM polymer brushes, through the restructuring of polystyrene by solvents, to 3D structures consisting of PAH/PSS polyelectrolyte multilayers. In the first part, poly(methyl methacrylate) (PMMA) brushes are prepared in the ionic liquid 1-butyl-3-methylimidazolium hexafluorophosphate ([Bmim][PF6]) by controlled radical polymerization (ATRP). Kinetic investigations showed linear and dense brush growth with a growth rate of 4600 g/mol per nm. The average grafting density was 0.36 µmol/m2. As an application, microdroplets consisting of the ionic liquid, dimethylformamide and the ATRP catalyst were used to deposit polymer brushes on silicon in a defined geometry. In this way a coating of up to 13 nm thickness can be produced. The concept is limited by the evaporation of the monomer methyl methacrylate (MMA): from a 1 µl droplet of ionic liquid and MMA (1:1), the MMA evaporates within 100 s. The monomer was therefore added sequentially. The second part concentrates on the structuring of surfaces by means of a new method: inkjet printing. A piezoelectrically driven drop-on-demand printing system was used to structure polystyrene with 0.4 nl droplets of toluene. The microcraters formed in this way can find application as microlenses, whose focal length can be tuned via the number of droplets used for structuring; theoretically and experimentally, focal lengths in the range from 4.5 mm down to 0.21 mm were determined. The second structuring process uses the polyelectrolytes polyvinylamine hydrochloride (PAH) and polystyrene sulfonate (PSS) to produce 3D structures such as lines, checkerboards, rings and stacks by a layer-by-layer method. The thickness of one bilayer lies in the range of 0.6 to 1.1 nm when NaCl is used as the electrolyte at a concentration of 0.5 mol/l. The width of the structures is 230 µm on average. The process was extended to coat nanomechanical cantilever sensors (NCS): on an array of eight cantilevers, two cantilevers each were coated quickly and reproducibly with five bilayers of PAH/PSS and two with ten bilayers of PAH/PSS. The mass change of the individual cantilevers was 0.55 ng for five bilayers and 1.08 ng for ten bilayers. The resulting sensor was exposed to an environment of defined humidity. The cantilevers bend as the coating expands, since water diffuses into the polymer. A maximum deflection of 442 nm at 80% relative humidity was found for the cantilevers coated with ten bilayers, corresponding to a water uptake of 35%. In addition, the deflection data allowed the conclusion that the elasticity of the polyelectrolyte multilayers increases when the polymer is swollen. The thermal behavior in water was investigated in the next part using nanomechanical cantilever sensors coated with poly(N-isopropylacrylamide) (PNIPAM) brushes and plasma-polymerized N,N-diethylacrylamide.

The deflection of the cantilever showed two regimes: at temperatures below the lower critical solution temperature (LCST) the deflection is dominated by the dehydration of the polymer layer, while at temperatures above the LCST the cantilever sensor responds predominantly to relaxation processes within the collapsed polymer layer. The minimum in the differential deflection was found to coincide with the LCSTs of 32°C and 44°C of the selected polymers. In the last part of the work, µ-reflectivity and µ-GISAXS experiments were introduced as new methods for investigating microstructured samples such as NCS or PEM lines with X-ray scattering. The thickness of each individual PMMA-brush-coated NCS lies in the range of 32.9 to 35.2 nm, as determined by µ-reflectivity measurements. This result was confirmed by imaging ellipsometry as a complementary method, with a maximum deviation of 7%. As a second example, a printed polyelectrolyte multilayer of PAH/PSS was investigated. The preparation procedure was modified so that gold nanoparticles were incorporated into the layer structure. The incorporation of the particles could be identified by evaluating a µ-GISAXS experiment. A fit with a unified fit model showed that the particles are not agglomerated and are surrounded by a polymer matrix.
Abstract:
Procedures for quantitative walking analysis include the assessment of body segment movements within defined gait cycles. Recently, methods to track human body motion using inertial measurement units have been suggested, but it is not known whether these techniques can be readily transferred to clinical measurement situations. This work investigates what is necessary for a single inertial measurement unit mounted on the lower back to track orientation and determine spatio-temporal features of gait outside the confines of a conventional gait laboratory. Apparent limitations of different inertial sensors can be overcome by fusing their data, for example with a Kalman filter; the benefits of optimizing such a filter for the type of motion were, however, unknown. 3D accelerations and 3D angular velocities were collected for 18 healthy subjects during treadmill walking. Optimization of the Kalman filter parameters improved pitch and roll angle estimates when compared to angles derived using stereophotogrammetry. A Weighted Fourier Linear Combiner method for estimating 3D orientation angles, which constructs an analytical representation of the angular velocities and thereby allows drift-free integration, is also presented; when tested, it provided accurate estimates of 3D orientation compared to stereophotogrammetry. Methods to determine spatio-temporal features from lower trunk accelerations generally require knowledge of the sensor alignment. A method was therefore developed to estimate the instants of initial and final ground contact from accelerations measured by a waist-mounted inertial device without rigorous alignment: a continuous wavelet transform is used to filter and differentiate the signal and derive estimates of the initial and final contact times. The technique was tested with data recorded for both healthy and pathologic (hemiplegia and Parkinson's disease) subjects and validated using an instrumented mat. The results show that a single inertial measurement unit can assist whole-body gait assessment; however, further investigation is required to understand altered gait timing in some pathological subjects.
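The contact-detection step can be sketched as follows, in the spirit of Gaussian-CWT event detection from trunk acceleration (a sketch of the general idea, not the thesis' exact algorithm; the 'gaus1' wavelet, the scale and the minimum peak spacing are illustrative assumptions).

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.signal import find_peaks
import pywt

def gait_events(acc_ap, fs, scale=16):
    """Estimate initial/final contact instants from anterior-posterior trunk
    acceleration via integration and Gaussian-derivative CWT smoothing."""
    # Integrate the detrended acceleration to smooth it
    v = cumulative_trapezoid(acc_ap - np.mean(acc_ap), dx=1.0 / fs, initial=0.0)
    # Single-scale Gaussian-derivative CWT ~ smoothed differentiation
    d1 = pywt.cwt(v, [scale], 'gaus1', sampling_period=1.0 / fs)[0][0]
    ic, _ = find_peaks(-d1, distance=int(0.4 * fs))   # minima ~ initial contacts
    # Differentiate once more for the final contacts
    d2 = pywt.cwt(d1, [scale], 'gaus1', sampling_period=1.0 / fs)[0][0]
    fc, _ = find_peaks(d2, distance=int(0.4 * fs))    # maxima ~ final contacts
    return ic / fs, fc / fs

# Smoke test on a stride-like 1 Hz oscillation
fs = 100.0
t = np.arange(0, 10, 1 / fs)
print(gait_events(np.sin(2 * np.pi * t), fs)[0][:3])
```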
Abstract:
A Thermodynamic Bethe Ansatz (TBA) analysis is carried out for the extended-CP^N class of integrable 2-dimensional Non-Linear Sigma Models related to the low-energy limit of the AdS_4xCP^3 type IIA superstring theory. The principal aim of this program is to obtain further non-perturbative consistency checks of the S-matrix proposed to describe the scattering processes between the fundamental excitations of the theory, by analyzing the structure of the Renormalization Group flow. As a noteworthy byproduct we eventually obtain a novel class of TBA models which fits into the known classification but with several important differences. The TBA framework allows the evaluation of some exact quantities related to the conformal UV limit of the model: the effective central charge, the conformal dimension of the perturbing operator and the field content of the underlying CFT. The knowledge of these physical quantities has made it possible to conjecture a perturbed CFT realization of the integrable models in terms of coset Kac-Moody CFTs. The set of numerical tools and programs developed ad hoc to solve the problem at hand is also discussed in some detail, with references to the code.
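The numerical core of any TBA analysis is a fixed-point iteration of coupled non-linear integral equations for the pseudo-energies. The single-node sketch below (NumPy; grid sizes and the pluggable kernel are illustrative, not the thesis' system) shows the structure: with the kernel switched off it reduces to a free Majorana fermion, whose effective central charge tends to 1/2 in the UV limit, a standard sanity check.

```python
import numpy as np

def solve_tba(r, phi=None, rap=20.0, n=2001, tol=1e-12, max_iter=500):
    """Solve a single-node TBA equation
        eps(th) = r*cosh(th) - (1/2pi) (phi * L)(th),  L = log(1 + exp(-eps)),
    by fixed-point iteration on a rapidity grid, returning the effective
    central charge c_eff(r) = (3/pi^2) * int r*cosh(th) L(th) dth."""
    th = np.linspace(-rap, rap, n)
    dth = th[1] - th[0]
    nu = r * np.cosh(th)
    K = phi(th[:, None] - th[None, :]) if phi is not None else None
    eps = nu.copy()
    for _ in range(max_iter):
        L = np.log1p(np.exp(-eps))
        eps_new = nu if K is None else nu - (K @ L) * dth / (2.0 * np.pi)
        if np.max(np.abs(eps_new - eps)) < tol:
            eps = eps_new
            break
        eps = eps_new
    L = np.log1p(np.exp(-eps))
    return 3.0 / np.pi**2 * np.sum(nu * L) * dth

# Kernel switched off (free Majorana fermion): c_eff -> 1/2 as r -> 0
print(solve_tba(1e-4))   # ~0.5
```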
Abstract:
The quench characteristics of second-generation (2G) YBCO Coated Conductor (CC) tapes are of fundamental importance for the design and safe operation of superconducting cables and magnets based on this material. Their ability to transport high current densities at high temperature, up to 77 K, and at very high fields, over 20 T, together with the increasing knowledge of their manufacturing, which is reducing their cost, is pushing the use of this innovative material in numerous applications, from high-field magnets for research to motors, generators and cables. The aim of this Ph.D. thesis is the experimental analysis and numerical simulation of quench in superconducting HTS tapes and coils. A measurement facility for the characterization of superconducting tapes and coils was designed, assembled and tested. The facility consists of a cryostat, a cryocooler, a vacuum system, resistive and superconducting current leads, and signal feedthroughs; the data acquisition system and the software for critical current and quench measurements were also developed. A 2D model was developed using the finite element code COMSOL Multiphysics. The problem of modeling the high aspect ratio of the tape is tackled by multiplying the tape thickness by a constant factor and compensating the heat and electrical balance equations by introducing a material anisotropy. The model was then validated against the results of a 1D quench model based on a non-linear electric circuit coupled to a thermal model of the tape, against literature measurements, and against critical current and quench measurements made in the cryogenic facility. Finally, the model was extended to the study of coils and windings through the definition of homogenized tape and stack properties. The procedure allows the definition of a multi-scale hierarchical model, able to simulate the windings with different degrees of detail.
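One plausible reading of the thickness-scaling trick (an illustration consistent with the balance equations, not necessarily the thesis' exact parameter choice): if the tape thickness t is expanded to f t, the thermal and electrical balances per unit tape length are preserved by rescaling the material properties as

\[
\rho C_p \to \frac{\rho C_p}{f}, \qquad
k_{\parallel} \to \frac{k_{\parallel}}{f}, \qquad
k_{\perp} \to f\,k_{\perp}, \qquad
\sigma_{\parallel} \to \frac{\sigma_{\parallel}}{f}, \qquad
Q \to \frac{Q}{f},
\]

so that the heat capacity and longitudinal conduction per unit length, the through-thickness thermal resistance, and the transport current at a given electric field are all unchanged: the introduced anisotropy absorbs the geometric distortion.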
Abstract:
Soluble epoxide hydrolase (sEH) belongs to the family of epoxide hydrolase enzymes. The classical role of the sEH is detoxification, through the conversion of potentially harmful epoxides into their harmless diol forms. The sEH mainly converts endogenous, arachidonic-acid-related signaling molecules, such as the epoxyeicosatrienoic acids, into the corresponding diols. The sEH could therefore serve as a target enzyme in the therapy of hypertension and inflammation as well as various other diseases. The sEH is a homodimer in which each subunit is built of two domains. The catalytic center of the epoxide hydrolase activity lies in the 35 kD C-terminal domain. This region of the sEH has already been studied in detail, and almost all catalytic properties of the enzyme and their associated functions are known in connection with this domain. By contrast, little is known about the 25 kD N-terminal domain. The N-terminal domain of the sEH is counted among the haloacid dehalogenase (HAD) superfamily of hydrolases, but its function long remained unclear. Our group was the first to show that the mammalian sEH is a bifunctional enzyme which, in addition to the well-known enzymatic activity of the C-terminal region, possesses a further enzymatic function with Mg2+-dependent phosphatase activity in the N-terminal domain. On the basis of the homology of the N-terminal domain with other enzymes of the HAD family, a two-step reaction is assumed for the phosphatase function (dephosphorylation).

To further elucidate the catalytic mechanism of the dephosphorylation, biochemical analyses of the human sEH phosphatase were carried out by generating mutations in the active site by means of site-directed mutagenesis. The goal was to identify the active-site amino acid residues involved in the catalytic activity and to specify their role in the dephosphorylation.

On the basis of the structural and possible functional similarities between the sEH and other members of the HAD superfamily, conserved and partly conserved amino acids in the active site of the sEH phosphatase domain were selected as candidates. Of the amino acids forming the phosphatase domain, eight were chosen (Asp9 (D9), Asp11 (D11), Thr123 (T123), Asn124 (N124), Lys160 (K160), Asp184 (D184), Asp185 (D185), Asn189 (N189)) to be exchanged for non-functional amino acids by site-directed mutagenesis. Each of the selected amino acids was replaced by at least two alternative amino acids: either by alanine or by an amino acid similar to that in the wild-type enzyme. In total, 18 different recombinant clones were generated, each coding for a mutant sEH phosphatase domain in which a single amino acid was exchanged relative to the wild-type enzyme. The 18 mutants and the wild type (the sequence of the N-terminal domain without mutation) were cloned into an expression vector in E. coli, and the nucleotide sequences were confirmed by restriction digestion and sequencing. The N-terminal domain of the sEH generated in this way (the 25 kD subunit) was then successfully purified by metal affinity chromatography and tested for phosphatase activity toward the general substrate 4-nitrophenyl phosphate.

Those mutants that showed phosphatase activity were then subjected to kinetic tests. Based on the results of these investigations, kinetic parameters were calculated using four well-established methods and the results were interpreted with the direct linear plot method.

The results showed that most of the 18 generated mutants were inactive or had lost a large part of their enzyme activity (Vmax) relative to the wild type (WT: Vmax = 77.34 nmol mg^-1 min^-1). This loss of enzyme activity could not be explained by a loss of structural integrity, since the wild-type and mutant proteins behaved identically in chromatography. All exchanges of the amino acids Asp9 (D9), Lys160 (K160), Asp184 (D184) and Asn189 (N189) led to a complete loss of phosphatase activity, pointing to their catalytic function in the N-terminal region of the sEH. Some of the exchanges carried out for Asp11 (D11), Thr123 (T123), Asn124 (N124) and Asp185 (D185) led to a strong reduction of the phosphatase activity compared with the wild type, which was nevertheless measurable to different extents for the individual protein mutants (2-10% and 40% of the WT enzyme activity). Moreover, the mutants of this group showed altered kinetic properties (Vmax alone, or Vmax and Km). The kinetic analysis of the mutant Asp11Asn was of particular interest because of the strong Vmax reduction detectable only in this mutant (8.1 nmol mg^-1 min^-1) together with a significant reduction of the Km (Asp11: Km = 0.54 mM; WT: Km = 1.3 mM), implying a role of Asp11 (D11) in the second step of the hydrolysis of the catalytic cycle.

In summary, the results show that all amino acids investigated in this work are required for the phosphatase activity of the sEH and form the active site of the sEH phosphatase in the N-terminal region of the enzyme. Furthermore, these results help clarify the potential roles of the investigated amino acids and support the hypothesis that the dephosphorylation reaction proceeds in two steps. A combined reaction mechanism, similar to those of other enzymes of the HAD family, is thus conceivable for the dephosphorylation function. This assumption is supported by the 3D structure of the N-terminal domain, the results of this work, and the results of further biochemical analyses. The two-step mechanism of dephosphorylation involves a nucleophilic attack on the substrate phosphorus by the active-site nucleophile Asp9 (D9), forming an acyl-phosphate enzyme intermediate, followed by the release of the dephosphorylated substrate. In the second step, the hydrolysis of the enzyme-phosphate intermediate takes place, assisted by Asp11 (D11), and the phosphate group is released. The other investigated amino acids are involved in the binding of Mg2+ and/or the substrate.

This work has thus further elucidated the catalytic mechanism of the sEH phosphatase, and important open questions, such as the physiological role of the sEH phosphatase, its endogenous physiological substrates and the exact functional mechanism of the bifunctional enzyme (the communication between the two catalytic units of the enzyme), have been identified and discussed.
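For context, kinetic parameters such as Vmax and Km are typically obtained by fitting the Michaelis-Menten law v = Vmax [S] / (Km + [S]) to initial-rate data. A minimal Python sketch with entirely hypothetical data points (not the measured values of this work):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical rate data: substrate concentrations [S] (mM) and initial
# velocities v (nmol mg^-1 min^-1) for one sEH phosphatase variant.
S = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
v = np.array([5.3, 11.8, 20.1, 31.0, 44.2, 55.6, 63.9])

def michaelis_menten(S, vmax, km):
    return vmax * S / (km + S)

(vmax, km), _ = curve_fit(michaelis_menten, S, v, p0=(v.max(), 1.0))
print(f"Vmax = {vmax:.1f} nmol mg^-1 min^-1, Km = {km:.2f} mM")
```

The direct linear plot mentioned above is a complementary, median-based estimator that is more robust to outliers than least-squares fits of this kind.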
Abstract:
The instability of river banks can result in considerable human and land losses. The Po river, the most important in Italy, is characterized by main banks of significant and constantly increasing height. This study presents multilayer perceptron artificial neural network (ANN) models for the stability analysis of river banks along the Po River under various river and groundwater boundary conditions. To this aim, a number of networks of threshold logic units are tested using different combinations of the input parameters. The factor of safety (FS), as an index of slope stability, is formulated in terms of several influencing geometrical and geotechnical parameters. In order to obtain a comprehensive geotechnical database, several cone penetration tests from the study site have been interpreted. The proposed models are developed from stability analyses performed with a finite element code over different representative sections of the river embankments. To verify their validity, the ANN models are employed to predict the FS values of a part of the database beyond the calibration data domain. The results indicate that the proposed ANN models are effective tools for evaluating slope stability, notably outperforming the derived multiple linear regression models.
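A minimal sketch of such an MLP regression in Python with scikit-learn; the input features, the toy FS relation and the network size are illustrative assumptions, not those of the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in for the FE-derived database: six geometrical and
# geotechnical inputs (e.g. bank height, slope angle, cohesion, friction
# angle, river level, groundwater level) and the factor of safety FS.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(400, 6))
y = 1.2 + 0.8 * X[:, 2] + 0.5 * X[:, 3] - 0.6 * X[:, 4] - 0.4 * X[:, 5]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(
    StandardScaler(),  # scale inputs before the MLP
    MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0),
)
model.fit(X_tr, y_tr)
print("R^2 on held-out sections:", model.score(X_te, y_te))
```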
Abstract:
Within the field of nanostructures, a primary role is played by quantum dots. In this work we are interested in the theoretical analysis of the process by which quantum dots are created: it can take place by heteroepitaxy, in particular in the Stranski-Krastanov growth mode. A germanium film is deposited coherently, i.e. without dislocations, on a silicon substrate and, because of the misfit between the lattices of the two materials, elastic energy accumulates in the film. Above a certain critical height this energy can be reduced if the film organizes itself into islands (quantum dots), where the strain can relax laterally. The critical height depends on the elastic moduli (E, ν), on the lattice misfit (m) and on the surface tension (γ). Material transport in the film proceeds by surface diffusion. The focal point in the analysis of the instabilities induced by the lattice misfit is the search for the characteristics that identify the fastest growth mode of the quantum dots. In this work we are interested in a particular case: the growth of quantum dots not on a flat surface but on the surface of a quantum nanowire with cylindrical geometry. The instability analysis is carried out by solving the equilibrium equations: to this end, the strain and stress distributions of a core-shell nanowire with a surface perturbed to first order in the perturbation amplitude were calculated. The analysis was carried out under particular boundary conditions and geometrical hypotheses, and with different choices of the reference state of the displacement field. Once the elastic problem was solved, the dynamical evolution equation describing surface diffusion was studied. The result of the instability analysis is the growth rate as a function of the wavenumber q, for different values of the core radius, shell thickness and normal mode n, with the aim of finding the fastest-growing mode of the perturbation.
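For orientation, the planar-film analogue of this calculation (the Asaro-Tiller-Grinfeld instability) already shows how the growth rate arises from a competition between elastic relaxation and surface tension. Schematically, and up to convention-dependent constants, surface diffusion evolves the profile h according to

\[
\frac{\partial h}{\partial t} = D\,\nabla_s^2 \mu, \qquad \mu = \gamma\,\kappa + w_{\mathrm{el}},
\]

and linearizing about a flat film carrying a misfit stress \(\sigma_m \sim E\,m/(1-\nu)\) yields a growth rate of the form

\[
\sigma(q) \;\propto\; \frac{(1-\nu^2)\,\sigma_m^2}{E}\,q^3 \;-\; \gamma\,q^4 ,
\]

so perturbations grow only below a cutoff wavenumber, with the fastest-growing mode at \(q^\ast \sim \tfrac{3}{4}\,(1-\nu^2)\sigma_m^2/(E\gamma)\) up to the same constants. In the cylindrical core-shell geometry studied here, \(\sigma(q)\) is additionally modified by the core radius, the shell thickness and the azimuthal mode number n.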
Abstract:
Cardiotocography (CTG) is a widespread foetal diagnostic method. However, it lacks objectivity and reproducibility, owing to its dependence on the observer's expertise. To overcome these limitations, more objective methods for CTG interpretation have been proposed; in particular, many of the developed techniques aim to assess foetal heart rate variability (FHRV), and some methodologies from nonlinear systems theory have been applied to its study. All of these techniques have proved helpful in specific cases, yet none of them is more reliable than the others, so an in-depth study is necessary. The aim of this work is to deepen the FHRV analysis through Symbolic Dynamics Analysis (SDA), a nonlinear technique already successfully employed for HRV analysis; thanks to its simplicity of interpretation, it could be a useful tool for clinicians. We performed a literature study involving about 200 references on HRV and FHRV analysis, approximately 100 of which were focused on nonlinear techniques. Then, in order to compare linear and nonlinear methods, we carried out a multiparametric study on 580 antepartum recordings of healthy fetuses. Signals were processed using an updated software package for CTG analysis and newly developed software for generating simulated CTG traces. Finally, statistical tests and regression analyses were carried out to estimate relationships among the extracted indexes and other clinical information. The results confirm that none of the employed techniques is more reliable than the others; moreover, in agreement with the literature, each analysis should take into account two relevant parameters, the foetal status and the week of gestation. Regarding SDA, the results show its promising capabilities in FHRV analysis: it allows recognizing foetal status, gestation week and the global variability of FHR signals, even better than other methods. Nevertheless, further studies, which should also involve pathological cases, are necessary to establish its reliability.
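A minimal sketch of SDA on an RR series, assuming one common variant (uniform quantization into a small alphabet, overlapping length-3 words, then word-distribution statistics); the parameters and the synthetic demo are illustrative, not the thesis' validated settings.

```python
import numpy as np
from collections import Counter

def symbolic_dynamics(rri, n_levels=6, word_len=3):
    """Quantize an RR series into n_levels symbols and compute statistics
    over overlapping words of length word_len."""
    lo, hi = rri.min(), rri.max()
    sym = np.minimum((n_levels * (rri - lo) / (hi - lo)).astype(int),
                     n_levels - 1)
    words = [tuple(sym[i:i + word_len]) for i in range(len(sym) - word_len + 1)]
    counts = Counter(words)
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    shannon = -np.sum(p * np.log2(p))              # word-distribution entropy
    forbidden = n_levels**word_len - len(counts)   # unobserved word types
    return shannon, forbidden

rri = np.random.default_rng(0).normal(0.45, 0.02, 1200)  # synthetic FHR RRs (s)
print(symbolic_dynamics(rri))
```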
Abstract:
In the past two decades, the work of a growing portion of researchers in robotics has focused on a particular group of machines belonging to the family of parallel manipulators: cable robots. Although these robots share several theoretical elements with the better-known parallel robots, they still present completely (or partly) unsolved issues. In particular, the study of their kinematics, already a difficult subject for conventional parallel manipulators, is further complicated by the non-linear nature of cables, which can exert only efforts of pure traction. The work presented in this thesis therefore focuses on the study of the kinematics of these robots and on the development of numerical techniques able to address some of the related problems. Most of the work is devoted to the development of an interval-analysis-based procedure for the solution of the direct geometric problem of a generic cable manipulator. Besides allowing a rapid solution of the problem, this technique also guarantees the results obtained against rounding and elimination errors, and it can take into account any uncertainties in the model of the problem. The developed code has been tested with the help of a small manipulator whose realization is described in this dissertation, together with the auxiliary work done during its design and simulation phases.
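The certification idea behind interval-analysis solvers can be shown on a one-dimensional toy problem (a sketch using mpmath's interval arithmetic; the thesis' procedure works on the multidimensional closure equations of the manipulator): any box whose interval image excludes zero is discarded with mathematical certainty, and the remaining boxes are bisected until they are small enough.

```python
from mpmath import iv

def branch_and_prune(f, lo, hi, tol=1e-8):
    """Certified enclosures of the roots of f on [lo, hi] (1-D toy version)."""
    boxes, roots = [(lo, hi)], []
    while boxes:
        a, b = boxes.pop()
        y = f(iv.mpf([a, b]))        # interval extension of f over the box
        if y.a > 0 or y.b < 0:       # 0 not in f([a,b]): box certified root-free
            continue
        if b - a < tol:
            roots.append((a, b))     # small box that may contain a root
        else:
            m = 0.5 * (a + b)
            boxes += [(a, m), (m, b)]
    return roots

# Example: enclose the root of cos(x) - x on [0, 2]
print(branch_and_prune(lambda x: iv.cos(x) - x, 0.0, 2.0))
```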
Abstract:
Natural systems face pressures exerted by natural physical-chemical forcings and a myriad of co-occurring human stressors that may interact to cause larger-than-expected effects, thereby presenting a challenge to ecosystem management. This thesis aimed to develop new information that can help close the knowledge gaps hampering the holistic management of multiple stressors. I undertook a review of state-of-the-art methods to detect, quantify and predict stressor interactions, identifying techniques that could be applied in this thesis research. Then, I conducted a systematic review of saltmarsh multiple-stressor studies in conjunction with a multiple-stressor mapping exercise for the study system in order to infer potentially important synergistic stressor interactions. This analysis identified key stressors that are affecting the study system, but it also pointed to gaps in driver and pressure data and raised issues regarding potentially overlooked stressors. Using field mesocosms, I explored how a local stressor (nutrient availability) affects the responses of saltmarsh vegetation to a global stressor (increased inundation) in different soil types. The results indicate that saltmarsh vegetation would be more drastically affected by increased inundation in low than in medium organic matter soils, and especially in estuaries already under high nutrient availability. In another field experiment, I examined the challenges of managing co-occurring and potentially interacting local stressors on saltmarsh vegetation: recreational trampling and smothering by deposition of excess macroalgal wrack due to high nutrient loads. Trampling and wrack prevention had interacting effects, causing non-linear responses of the vegetation to the simulated management of these stressors, such that vegetation recovered only in those treatments simulating the combined prevention of both stressors. During this research I also detected, using molecular genetic methods, a widespread presence of S. anglica (and, to a lesser extent, S. townsendii), two previously unrecorded non-native Spartinas in the study areas.
Abstract:
Computing the weighted geometric mean of large sparse matrices is an operation that tends to become rapidly intractable as the size of the matrices involved grows. However, if we are interested not in the matrix function itself but only in its product with a vector, the problem becomes simpler, and there is a chance of solving it even when the matrix mean itself would be impossible to compute. Our interest is motivated by the fact that this calculation has practical applications related to the preconditioning of some operators arising in the domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be performed efficiently. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. We then focus our attention on matrix powers and examine how well-known techniques can be adapted to the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and assess how convergence speed and execution time are influenced by the characteristics of the input matrices. Our results suggest that a few elements have some bearing on performance and that, although there is no best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
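A minimal Python sketch of the Krylov approach for f(A) v with f(A) = A^t and a sparse SPD matrix A (sizes and names are illustrative). The pencil case f(A\B) v follows the same pattern, with solves against A replacing the matrix-vector products.

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import fractional_matrix_power

def krylov_matfunc_vec(A, v, t=0.5, m=30):
    """Arnoldi approximation f(A) v ~ beta * V_m f(H_m) e_1 for f(A) = A^t."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # happy breakdown: exact subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    fH = fractional_matrix_power(H[:m, :m], t)
    return beta * V[:, :m] @ fH[:, 0]

# Example: A^(1/2) v for a sparse SPD 1-D Laplacian
n = 500
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
v = np.random.default_rng(0).standard_normal(n)
print(krylov_matfunc_vec(A, v)[:5])
```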
Abstract:
Liquids and gases form a vital part of nature, and many of them are complex fluids with non-Newtonian behaviour. We introduce a mathematical model describing the unsteady motion of an incompressible polymeric fluid. Each polymer molecule is treated as two beads connected by a spring. For a nonlinear spring force it is not possible to obtain a closed system of equations unless the force law is approximated: the Peterlin approximation replaces the length of the spring by the length of the average spring. In this way, a macroscopic dumbbell-based model for dilute polymer solutions is obtained. The model consists of the conservation of mass and momentum and the time evolution of the symmetric positive definite conformation tensor, where diffusive effects are taken into account. In two space dimensions we prove the global-in-time existence of weak solutions; assuming more regular data, we show higher regularity and consequently uniqueness of the weak solution. For the Oseen-type Peterlin model we propose a linear pressure-stabilized characteristics finite element scheme, derive the corresponding error estimates and prove, for linear finite elements, optimal first-order accuracy. The theoretical error estimates of the pressure-stabilized characteristics finite element scheme are confirmed by a series of numerical experiments.
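Schematically, and with the precise relaxation and production terms depending on the chosen spring law, the diffusive Peterlin system has the structure

\[
\operatorname{div} u = 0, \qquad
\partial_t u + (u\cdot\nabla)u - \nu\,\Delta u + \nabla p
= \operatorname{div}\big(\operatorname{tr}(C)\,C\big),
\]

\[
\partial_t C + (u\cdot\nabla)C - (\nabla u)\,C - C\,(\nabla u)^{T}
+ \gamma(\operatorname{tr} C)\,C = \varepsilon\,\Delta C + \delta(\operatorname{tr} C)\,I,
\]

where the elastic stress comes from the Peterlin closure via tr(C) C, the conformation tensor C remains symmetric positive definite, ε ΔC is the diffusive term mentioned above, and γ and δ are scalar functions fixed by the approximated force law (written here generically as a hedge, since their exact form varies with the model variant).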