913 results for Partial Least Squares
Abstract:
Understanding the complex relationships between the quantities measured by volcanic monitoring networks and shallow magma processes is a crucial step forward in the comprehension of volcanic processes and in a more realistic evaluation of the associated hazard. This question is particularly relevant at Campi Flegrei, a quiescent volcanic caldera immediately north-west of Napoli (Italy). The activity of the system includes intense fumarolic release and periodic slow ground movement (bradyseism) accompanied by high seismicity. This activity, together with the high population density and the presence of military and industrial facilities, makes Campi Flegrei one of the areas with the highest volcanic hazard in the world. In this context, my thesis focuses on the magma dynamics due to the refilling of shallow magma chambers, and on the geophysical signals, detectable by seismic, deformation and gravimetric monitoring networks, that are associated with these phenomena. Indeed, the refilling of magma chambers is a process that frequently occurs just before a volcanic eruption; therefore, the ability to identify these dynamics through the analysis of recorded signals is important for evaluating the short-term volcanic hazard. The space-time evolution of the dynamics due to the injection of new magma into the magma chamber has been studied by performing numerical simulations with, and implementing additional features in, the code GALES (Longo et al., 2006), recently developed and still being upgraded at the Istituto Nazionale di Geofisica e Vulcanologia in Pisa (Italy). GALES is a finite element code based on a two-dimensional, transient physico-mathematical model able to treat fluids as homogeneous multiphase mixtures, from compressible to incompressible. The fundamental equations of mass, momentum and energy balance are discretised in both time and space using the Galerkin Least-Squares and discontinuity-capturing stabilisation techniques. The physical properties of the mixture are computed as a function of the local conditions of magma composition, pressure and temperature. The model features make it possible to study a broad range of phenomena characterizing pre- and syn-eruptive magma dynamics in a wide domain extending from the volcanic crater to the deep magma feeding zones. The study of the displacement field associated with the simulated fluid dynamics has been carried out with a numerical code developed by the Geophysics group at University College Dublin (O'Brien and Bean, 2004b), with whom we started a very fruitful collaboration. In this code, seismic wave propagation in heterogeneous media with a free surface (e.g. the Earth's surface) is simulated using a discrete elastic lattice in which particle interactions are governed by Hooke's law. This method makes it possible to account for medium heterogeneities and complex topography. The initial and boundary conditions for the simulations have been defined within a coordinated project (INGV-DPC 2004-06 V3_2 "Research on active volcanoes, precursors, scenarios, hazard and risk - Campi Flegrei"), to which this thesis contributes and in which many researchers with experience on Campi Flegrei in the volcanological, seismic, petrological and geochemical fields collaborate. Numerical simulations of magma and rock dynamics have been coupled as described in the thesis. The first part of the thesis consists of a parametric study aimed at understanding the effect of the presence of carbon dioxide in the magma on the convection dynamics.
Indeed, the presence of this volatile was relevant in many Campi Flegrei eruptions, including some eruptions commonly considered as a reference for the future activity of this volcano. A set of simulations has been performed considering an elliptical, compositionally uniform magma chamber refilled from below by a magma with a volatile content equal to or different from that of the resident magma. To this end, a multicomponent non-ideal magma saturation model (Papale et al., 2006), which considers the simultaneous presence of CO2 and H2O, has been implemented in GALES. Results show that the presence of CO2 in the incoming magma increases its buoyancy, promoting convection and mixing. The simulated dynamics produce pressure transients with frequency and amplitude within the sensitivity range of modern geophysical monitoring networks such as the one installed at Campi Flegrei. In the second part, simulations more closely related to the Campi Flegrei volcanic system have been performed. The simulated system has been defined on the basis of conditions consistent with the bulk of knowledge of Campi Flegrei, and in particular of the Agnano-Monte Spina eruption (4100 B.P.), commonly considered as a reference for a future high-intensity eruption in this area. The magmatic system has been modelled as a long dyke refilling a small shallow magma chamber; magmas with trachytic and phonolitic compositions and variable H2O and CO2 contents have been considered. The simulations have been carried out changing the conditions of magma injection, the system configuration (magma chamber geometry, dyke size) and the composition and volatile content of the resident and refilling magmas, in order to study the influence of these factors on the simulated dynamics. The simulation results make it possible to follow each step of the ascent of the gas-rich magma through the denser magma, highlighting the details of magma convection and mixing. In particular, the presence of more CO2 in the deep magma results in more efficient and faster dynamics. Through these simulations, the variation of the gravimetric field has been determined. Afterwards, the space-time distribution of stress resulting from the numerical simulations has been used as a boundary condition for the simulations of the displacement field imposed by the magmatic dynamics on the surrounding rocks. The properties of the simulated domain (rock density, P- and S-wave velocities) have been based on literature data from active and passive tomographic experiments, obtained through a collaboration with A. Zollo at the Dept. of Physics of the Federico II University in Napoli. The elasto-dynamic simulations make it possible to determine the space-time distribution of the deformation and the seismic signal associated with the studied magmatic dynamics. In particular, the results show that these dynamics induce deformations similar to those measured at Campi Flegrei and seismic signals with energy concentrated in the frequency bands typically observed in volcanic areas. The present work shows that an approach based on the solution of the equations describing the physics of the processes within a magmatic fluid and the surrounding rock system is able to recognise and describe the relationships between the geophysical signals detectable at the surface and the deep magma dynamics. Therefore, the results suggest that the combined study of geophysical data and information from numerical simulations can allow, in the near future, a more efficient evaluation of the short-term volcanic hazard.
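As a rough, hedged illustration of the fluid-rock coupling described above (both expressions are generic textbook forms, not the specific formulations implemented in GALES or in the UCD lattice code):

$$
\mathbf{f}_{ij} = k_{ij}\left(\lVert\mathbf{r}_i-\mathbf{r}_j\rVert - l^{0}_{ij}\right)\hat{\mathbf{r}}_{ij},
\qquad
\boldsymbol{\sigma}(\mathbf{x},t)\,\mathbf{n} = \boldsymbol{\tau}_{\mathrm{magma}}(\mathbf{x},t)\ \text{on the chamber wall,}
$$

i.e. neighbouring lattice particles i and j interact through a Hookean spring of stiffness k_ij and rest length l^0_ij, while the traction computed from the fluid-dynamic simulation is imposed as a time-dependent boundary condition on the chamber wall of the elastic domain.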
Abstract:
This paper shows a finite element method for pollutant transport with several pollutant sources. An Eulerian convection–diffusion–reaction model is used to simulate the pollutant dispersion. The discretization of the different sources allows the emissions to be imposed as boundary conditions. The Eulerian description can deal with the coupling of several plumes. An adaptive stabilized finite element formulation, specifically Least-Squares, with Crank-Nicolson temporal integration is proposed to solve the problem. A splitting scheme has been used to treat the transport and the reaction separately. A mass-consistent model has been used to compute the wind field of the problem…
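For orientation, a generic form of the Eulerian convection–diffusion–reaction model and of the splitting described above (the notation is illustrative and not necessarily that of the paper):

$$
\frac{\partial c}{\partial t} + \mathbf{u}\cdot\nabla c - \nabla\cdot\left(K\nabla c\right) + R(c) = 0,
$$

where c is the pollutant concentration, u the mass-consistent wind field, K the diffusion tensor and R the reaction term; the emissions enter through boundary conditions at the discretized sources. In the splitting scheme, each time step first advances the transport part (convection and diffusion, here with Crank-Nicolson integration and the stabilized least-squares formulation) and then the reaction part dc/dt = -R(c).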
Abstract:
The recent default of important Italian agri-business companies provides a challenging issue to be investigated through an appropriate scientific approach. The events involving CIRIO, FERRUZZI or PARMALAT raise an important research question: what are the determinants of performance for companies in the Italian agri-food sector? My aim is not to investigate all the factors that are relevant in explaining performance. Performance depends on a wide set of political, social and economic variables that are strongly interconnected and that are often very difficult to express with formal or mathematical tools. Rather, in my thesis I mainly focus on those aspects that are strictly related to the governance and ownership structure of agri-food companies, a strand of research that has been quite neglected by previous scholars. The conceptual framework from which I start to justify the existence of a relationship between the ownership structure of a company, governance and performance is the model set up by Airoldi and Zattoni (2005). In particular, the authors investigate the existence of complex relationships, arising within the company and between the company and the environment, that can lead to different strategies and performances. They do not try to find the "best" ownership structure; rather, they outline which variables are connected and how they could vary endogenously within the whole economic system. Although the Airoldi and Zattoni model highlights the existence of a relationship between ownership and structure that is crucial for the setup of the thesis, the authors do not apply quantitative analyses to verify the magnitude, sign and causal direction of the impact. In order to fill this gap, we start from the literature investigating the determinants of performance. Even in this strand of research, studies analysing the relationship between different forms of ownership and performance are still lacking. In this thesis, after a brief description of the Italian agri-food sector and after an introduction including a short explanation of the definitions of performance and ownership structure, I implement a model in which the performance level (interpreted here as Return on Investments and Return on Sales) is related to variables previously identified by the literature as important, such as financial variables (cash and leverage indices), the firm location (North Italy, Centre Italy, South Italy), the power concentration (lower than 25%, between 25% and 50%, and between 50% and 100% of ownership control) and the specific agri-food sector (agriculture, food and beverage). Moreover, we add a categorical variable representing different forms of ownership structure (public limited company, limited liability company, cooperative), which is the core of our study. All these variables are first examined through a preliminary descriptive analysis. As in many previous contributions, we apply a panel least squares analysis to 199 Italian firms over the period 1998-2007, with data taken from the Bureau van Dijk dataset. We estimate two different models in which the dependent variables are, respectively, the Return on Investments (ROI) and the Return on Sales (ROS) indicators. Not surprisingly, we find that companies located in Northern Italy, the richest area of the country, perform better than those located in Central and Southern Italy.
In contrast with the Modigliani-Miller theorem, financial variables can be significant, and the specific sector within the agri-food market can play a relevant role. As for power concentration, we find that firms with strong ownership control (higher than 50%) or with fragmented concentration (lower than 25%) perform better. This result suggests that "hybrid" forms of concentration could hamper the decision-making process. As for our key variable representing the ownership structure, we find that public limited companies and limited liability companies perform better than cooperatives. This is easily explained by the fact that the law establishes cooperatives as less profit-oriented. Setting cooperatives aside, public limited companies perform better than limited liability companies and show a more stable path over time. Results are quite consistent when we consider both ROI and ROS as dependent variables. These results should not lead us to claim that the public limited company is the "best" among all possible governance structures. First, every governance solution should be considered according to the specific situation. Second, more robustness analyses are needed to confirm our results. At this stage, we deem that these findings, the model setup and our approach represent original contributions that could stimulate fruitful future studies aimed at investigating the intriguing issue of the effect of ownership structure on performance levels.
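A minimal sketch of the kind of specification described above, using statsmodels' formula interface; the column names, the synthetic data and the pooled (rather than full panel) estimator are illustrative assumptions, not the thesis dataset or its exact model:

```python
# Pooled least-squares sketch with ownership-form, location, concentration and
# sector dummies plus financial controls (hypothetical column names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # hypothetical firm-year observations
df = pd.DataFrame({
    "ownership":     rng.choice(["plc", "llc", "coop"], n),
    "location":      rng.choice(["north", "centre", "south"], n),
    "concentration": rng.choice(["<25%", "25-50%", ">50%"], n),
    "sector":        rng.choice(["agriculture", "food", "beverage"], n),
    "leverage":      rng.uniform(0.1, 0.9, n),
    "cash":          rng.uniform(0.0, 0.3, n),
})
# Synthetic ROI with an ownership-form effect, purely for the demonstration.
df["roi"] = (4.0 + 2.0 * (df["ownership"] == "plc") + 1.0 * (df["ownership"] == "llc")
             - 3.0 * df["leverage"] + 5.0 * df["cash"] + rng.normal(0, 1, n))

formula = ("roi ~ C(ownership) + C(location) + C(concentration) "
           "+ C(sector) + leverage + cash")
res = smf.ols(formula, data=df).fit()
print(res.params)   # dummy coefficients measure performance gaps between groups
```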
Abstract:
The 'stuffed high quartz' β-eucryptite (LiAlSiO4) is known for its exceptional anisotropic Li-ion conductivity and its near-zero thermal expansion. The temperature-dependent phase sequence of β-eucryptite was investigated, in particular the modulated phase. Its satellite reflections are considerably broadened compared with the normal reflections and overlap with one another, as well as with the 'a-reflections' lying between them, to form triplets. Existing standard procedures for diffraction data collection were unsuitable for separating the triplet intensities, so a novel procedure, 'axial q-scans', was developed. Intensities were extracted serially and automatically from 2000 profiles with the newly developed least-squares program GKLS and were successfully scaled to standard data. The use of broadened reflection profiles proved admissible. The broadening was attributed to a reduced long-range order of the modulation of 11 to 16 periods (analysis with the lattice function), which corresponds to an unusual dependence of the reflection widths on the diffraction angle and correlates with typical antiphase domain diameters reported by other authors. A reduced Si/Al order is regarded as the cause of the small domain sizes and the reduced long-range order, as well as of properties such as the a/c ratios, expansion coefficients, ionic conductivity, structure type and transformation temperatures. Changes in the SiO2 content, the temperature or the Si/Al order have similar effects on some properties. The averaged structure of the modulated phase was reliably determined for the first time, the role of Li was characterized, doubts about the hexagonal symmetry of β-eucryptite were dispelled, and the determination of the modulated structure was largely prepared.
Abstract:
In the present work, two physical flow experiments on nonwoven fabrics are investigated, which are intended to identify unknown hydraulic parameters of the material, such as the diffusivity or conductivity function, from measured data. The physical and mathematical modelling of these experiments leads to a Cauchy-Dirichlet problem with free boundary for the degenerate parabolic Richards equation in the saturation formulation, the so-called direct problem. From the knowledge of the free boundary of this problem, the nonlinear diffusivity coefficient of the differential equation is to be reconstructed. For this inverse problem we set up an output least-squares functional and, for its minimization, use iterative regularization methods such as the Levenberg-Marquardt method and the IRGN method, based on a parametrization of the coefficient space by quadratic B-splines. For the direct problem we prove, among other things, existence and uniqueness of the solution of the Cauchy-Dirichlet problem as well as the existence of the free boundary. We then formally reduce the derivative of the free boundary with respect to the coefficient, which we need for the numerical reconstruction procedure, to a linear degenerate parabolic boundary value problem. We explain the numerical realization and implementation of our reconstruction procedure and finally present reconstruction results for synthetic data.
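A minimal sketch of the output least-squares idea with a Levenberg-Marquardt solver (scipy). The real forward map is the degenerate parabolic free-boundary Richards solver described in the abstract; here it is replaced by a hypothetical power-law surrogate for the front position, purely for illustration:

```python
# Output least-squares: fit model parameters so that the simulated free-boundary
# position matches the measured one, using Levenberg-Marquardt.
import numpy as np
from scipy.optimize import least_squares

def forward(theta, t):
    # Surrogate forward model x_f(t) = a * t**b standing in for the
    # free-boundary Richards solver (hypothetical, for demonstration only).
    a, b = theta
    return a * t**b

def residuals(theta, t_obs, x_obs):
    # Output least-squares residual: simulated minus measured front position.
    return forward(theta, t_obs) - x_obs

rng = np.random.default_rng(0)
t_obs = np.linspace(0.1, 10.0, 40)
x_obs = forward((0.8, 0.5), t_obs) + 0.01 * rng.normal(size=t_obs.size)

fit = least_squares(residuals, x0=(1.0, 1.0), args=(t_obs, x_obs), method="lm")
print(fit.x)   # recovered (a, b), ideally close to (0.8, 0.5)
```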
Abstract:
A study of maar-diatreme volcanoes has been performed by inversion of gravity and magnetic data. The geophysical inverse problem has been solved by means of the damped nonlinear least-squares method. To ensure stability and convergence of the solution of the inverse problem, a mathematical tool consisting of data weighting and model scaling has been worked out. Theoretical gravity and magnetic modeling of maar-diatreme volcanoes has been conducted in order to obtain information that can be used for a simple, rough qualitative and/or quantitative interpretation. This information also serves as a priori information to design models for the inversion and/or to assist the interpretation of the inversion results. The results of the theoretical modeling have been used to roughly estimate the heights and the dip angles of the walls of eight Eifel maar-diatremes, each taken as a whole. Inverse modeling has been conducted for the Schönfeld Maar (magnetics) and the Hausten-Morswiesen Maar (gravity and magnetics). The geometrical parameters of these maars, as well as the density and magnetic properties of the rocks filling them, have been estimated. For a reliable interpretation of the inversion results, besides the knowledge from theoretical modeling, other tools such as field transformations and spectral analysis were used for complementary information. Geologic models, based on the synthesis of the respective interpretation results, are presented for the two maars mentioned above. The results gave more insight into the genesis, physics and post-eruptive development of maar-diatreme volcanoes. A classification of the maar-diatreme volcanoes into three main types has been elaborated. Relatively high magnetic anomalies are indicative of scoria cones embedded within maar-diatremes, provided they are not caused by a strong remanent component of the magnetization. Smaller (weaker) secondary gravity and magnetic anomalies on the background of the main anomaly of a maar-diatreme, especially in the boundary areas, are indicative of subsidence processes, which probably occurred in the late sedimentation phase of the post-eruptive development. Contrary to postulates referring to kimberlite pipes, there is no generalized systematic relationship between diameter and height, nor between the geophysical anomaly and the dimensions of the maar-diatreme volcanoes. Although both maar-diatreme volcanoes and kimberlite pipes are products of phreatomagmatism, they probably formed in different thermodynamic and hydrogeological environments. In the case of kimberlite pipes, large amounts of magma and groundwater, certainly supplied by deep and large reservoirs, interacted under high pressure and temperature conditions. This led to a prolonged phreatomagmatic process and hence to the formation of large structures. Concerning the maar-diatreme and tuff-ring-diatreme volcanoes, the phreatomagmatic process takes place through an interaction between magma from small and shallow magma chambers (probably segregated magmas) and small amounts of near-surface groundwater under low pressure and temperature conditions. This leads to shorter eruptions and consequently to structures of smaller size in comparison with kimberlite pipes. Nevertheless, the results show that the diameter-to-height ratio for 50% of the studied maar-diatremes is around 1, whereby the dip angle of the diatreme walls is similar to that of kimberlite pipes and lies between 70 and 85°.
Note that these numerical characteristics, especially the dip angle, hold for the maars whose diatremes, as estimated by modeling, have the shape of a truncated cone. This indicates that the diatreme cannot be completely resolved by the inversion.
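For reference, a generic damped nonlinear least-squares (Marquardt-type) iteration with data weighting and model scaling of the kind described above; the exact weighting and scaling scheme worked out in the thesis may differ:

$$
\mathbf{m}_{k+1} = \mathbf{m}_k + \left(\mathbf{J}_k^{\mathsf T}\mathbf{W}_d\,\mathbf{J}_k + \lambda\,\mathbf{W}_m\right)^{-1}\mathbf{J}_k^{\mathsf T}\mathbf{W}_d\left(\mathbf{d}_{\mathrm{obs}} - \mathbf{f}(\mathbf{m}_k)\right),
$$

where f(m) is the forward gravity/magnetic model, J_k its Jacobian at the current model m_k, W_d the data-weighting matrix, W_m the model-scaling matrix and λ the damping parameter controlling stability versus convergence speed.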
Abstract:
This thesis addresses the problem of scaling reinforcement learning to high-dimensional and complex tasks. Reinforcement learning here denotes a class of learning methods based on approximate dynamic programming, applied in particular in artificial intelligence, that can be used for the autonomous control of simulated agents or real hardware robots in dynamic and unpredictable environments. To this end, regression on samples is used to determine a function that solves an "optimality equation" (Bellman) and from which approximately optimal decisions can be derived. A major hurdle is the dimensionality of the state space, which is often high and therefore poorly accessible to traditional grid-based approximation methods. The goal of this work is to make reinforcement learning applicable to, in principle, arbitrarily high-dimensional problems by means of non-parametric function approximation (more precisely, regularization networks). Regularization networks are a generalization of ordinary basis function networks that parametrize the sought solution by the data, so that the explicit choice of nodes/basis functions is avoided and the "curse of dimensionality" can be circumvented for high-dimensional inputs. At the same time, regularization networks are linear approximators, which are technically easy to handle and for which the existing convergence results for reinforcement learning remain valid (unlike, say, feed-forward neural networks). All these theoretical advantages, however, are offset by a very practical problem: the computational cost of using regularization networks inherently scales as O(n**3), where n is the number of data points. This is particularly problematic because in reinforcement learning the learning process takes place online: the samples are generated by an agent/robot while it interacts with the environment. Adjustments to the solution must therefore be made immediately and with little computational effort. The contribution of this work accordingly falls into two parts. In the first part, we formulate an efficient learning algorithm for regularization networks for solving general regression tasks, tailored specifically to the requirements of online learning. Our approach is based on the procedure of recursive least squares, but can insert not only new data but also new basis functions into the existing model in constant time. This is made possible by the "Subset of Regressors" approximation, whereby the kernel is approximated by a strongly reduced selection of training data, and by a greedy selection procedure that picks these basis elements directly from the data stream at run time. In the second part, we transfer this algorithm to approximate policy evaluation via least-squares-based temporal-difference learning and integrate this building block into a complete system for the autonomous learning of optimal behaviour. Overall, we develop a highly data-efficient method that is particularly suited to learning problems from robotics with continuous, high-dimensional state spaces and stochastic state transitions. In doing so, we do not rely on a model of the environment, work largely independently of the dimension of the state space, achieve convergence with relatively few agent-environment interactions, and, thanks to the efficient online algorithm, can also operate in the context of time-critical real-time applications. We demonstrate the capability of our approach on two realistic and complex application examples: the RoboCup-Keepaway problem and the control of a (simulated) octopus tentacle.
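A minimal sketch of the online recursive least-squares update that the thesis builds on (and extends with the Subset of Regressors approximation and least-squares temporal-difference learning); the feature map and the toy data stream below are illustrative placeholders, not the thesis implementation:

```python
# Online recursive least-squares (RLS) with a Sherman-Morrison rank-one update.
import numpy as np

class RecursiveLeastSquares:
    def __init__(self, dim, reg=1.0):
        self.w = np.zeros(dim)            # current weight estimate
        self.P = np.eye(dim) / reg        # inverse of the regularized Gram matrix

    def update(self, phi, y):
        # O(dim^2) per sample instead of refitting from scratch (O(n^3) overall).
        Pphi = self.P @ phi
        k = Pphi / (1.0 + phi @ Pphi)     # gain vector
        self.w += k * (y - phi @ self.w)  # correct the prediction error
        self.P -= np.outer(k, Pphi)       # rank-one downdate of the inverse

# Toy usage: learn y = 2*x1 - x2 from a stream of samples.
rng = np.random.default_rng(0)
rls = RecursiveLeastSquares(dim=2)
for _ in range(200):
    x = rng.normal(size=2)
    y = 2.0 * x[0] - x[1] + 0.01 * rng.normal()
    rls.update(x, y)
print(rls.w)   # approximately [2, -1]
```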
Abstract:
Extended X-ray absorption fine structure (EXAFS) spectroscopy is an important method for the speciation of heavy metals in a wide range of environmentally relevant systems. To determine structural parameters such as coordination number, interatomic distance and Debye-Waller factors for the nearest neighbours of an absorbing atom, it is common practice to perform a least-squares fit of the experimental EXAFS spectra using model structures. Often, different model structures with completely different chemical meaning can describe the experimental EXAFS data equally well. The modified Tikhonov regularization method offers a good alternative to the conventional curve fit. In addition to the standard Tikhonov variational method, the algorithm presented in this work contains two further steps, namely the application of the "method of separating functionals" and an iteration procedure with filtration in real space. To test and validate the modified Tikhonov regularization method, both simulated and experimentally measured EXAFS spectra of a crystalline U(VI) compound with known structure, namely soddyite (UO2)2SiO4 x 2H2O, were examined. The capability of this new method for the evaluation of EXAFS spectra is demonstrated by its application to the analysis of samples with unknown structure, as they occur in the sorption of U(VI) and Pu(III)/Pu(IV) on kaolinite. The aim of the dissertation was to demonstrate the still not fully exploited possibilities of the modified Tikhonov regularization method for the evaluation of EXAFS spectra. The results fall into two categories. The first comprises the development of the Tikhonov regularization method for the analysis of EXAFS spectra of multi-component systems, in particular the choice of certain regularization parameters and the influence of multiple scattering, experimental noise, etc. on the structural parameters. The second part comprises the speciation of U(VI) and Pu(III)/Pu(IV) sorbed on kaolinite, based on experimental EXAFS spectra that were evaluated with the modified Tikhonov regularization method and confirmed by conventional EXAFS analysis via least-squares fit.
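As a point of reference, the standard Tikhonov functional underlying the modified scheme, written here in a generic form (the separating-functionals step and the real-space filtering of the thesis go beyond this):

$$
\min_{g}\ \left\lVert \mathcal{A}g - \chi_{\mathrm{exp}} \right\rVert^2 + \alpha \left\lVert g - g_0 \right\rVert^2,
$$

where χ_exp(k) is the measured EXAFS signal, 𝒜 the EXAFS integral operator mapping the pair-distribution function g(r) to the spectrum, g_0 a reference model and α the regularization parameter balancing data fit against the penalty term.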
Abstract:
A new control scheme is presented in this thesis. Based on the NonLinear Geometric Approach, the proposed Active Control System represents a new way to see reconfigurable controllers for aerospace applications. The presence of the Diagnosis module (providing the estimation of generic signals which, depending on the case, can be faults, disturbances or system parameters), the main feature of the proposed Active Control System, is a characteristic shared by three well-known control systems: Active Fault Tolerant Controls, Indirect Adaptive Controls and Active Disturbance Rejection Controls. The standard NonLinear Geometric Approach (NLGA) has been thoroughly investigated and then improved to extend its applicability to more complex models. The standard NLGA procedure has been modified to take account of feasible and estimable sets of unknown signals. Furthermore, the application of the Singular Perturbations approximation has led to the solution of Detection and Isolation problems in scenarios too complex to be solved by the standard NLGA. The estimation process has also been improved, where multiple redundant measurements are available, by the introduction of a new algorithm, here called "Least Squares - Sliding Mode". It guarantees optimality, in the sense of least squares, and finite estimation time, in the sense of the sliding mode. The Active Control System concept has been formalized in two controllers: a nonlinear backstepping controller and a nonlinear composite controller. Particularly interesting is the integration, in the controller design, of the estimates coming from the Diagnosis module. Stability proofs are provided for both control schemes. Finally, different aerospace applications are provided to show the applicability and the effectiveness of the proposed NLGA-based Active Control System.
Abstract:
In this work we study a model for breast image reconstruction in Digital Tomosynthesis, a non-invasive and non-destructive method for the three-dimensional visualization of the inner structures of an object, in which the data acquisition consists of measuring a limited number of low-dose two-dimensional projections of the object by moving a detector and an X-ray tube around it within a limited angular range. The problem of reconstructing 3D images from the projections provided by Digital Tomosynthesis is an ill-posed inverse problem, which leads to a minimization problem with an objective function containing a data-fitting term and a regularization term. The contribution of this thesis is to use techniques from compressed sensing, in particular replacing the standard least squares data-fitting problem with the minimization of the 1-norm of the residuals, and using the Total Variation (TV) as regularization term. We tested two different algorithms: a new alternating minimization algorithm (ADM) and a version of the more standard scaled projected gradient algorithm (SGP) that involves the 1-norm. We performed several experiments and analysed the performance of the two methods, comparing relative errors, number of iterations, computation times and the quality of the reconstructed images. In conclusion, we found that the 1-norm and the Total Variation are valid tools in the formulation of the minimization problem for image reconstruction in Digital Tomosynthesis, and that the new ADM algorithm reached a relative error comparable to that of a version of the classic SGP algorithm while proving superior in speed and in the early appearance of the structures representing the masses.
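In generic notation (the symbols below are illustrative; the discretization details of the thesis may differ), the reconstruction problem described above reads:

$$
\min_{x}\ \lVert A x - b \rVert_1 + \lambda \, \mathrm{TV}(x),
\qquad
\mathrm{TV}(x) = \sum_i \left\lVert (\nabla x)_i \right\rVert_2,
$$

where x is the reconstructed volume, A the tomosynthesis projection operator, b the measured low-dose projections, and λ the regularization weight balancing the 1-norm data fidelity against the Total Variation penalty.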
Abstract:
In this thesis I analyzed the microwave tomography method for recognizing breast cancer. I study how to identify the dielectric permittivity, the parameter of the Helmholtz equation used to model the real physical problem. Through a nonlinear least squares method I solve a parameter identification problem; I present the theoretical approach and the developments needed to reach the results. I use the Levenberg-Marquardt algorithm, applied within the COMSOL software to multiphysics models; I then carry out numerical tests on simplified test problems, compared with the specific real problem to be solved.
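Written generically (the symbols here are illustrative, not the thesis notation), the Levenberg-Marquardt iteration for such a nonlinear least-squares identification of the permittivity parameters p is:

$$
p_{k+1} = p_k - \left(J_k^{\mathsf T} J_k + \lambda_k I\right)^{-1} J_k^{\mathsf T} r(p_k),
\qquad
r(p) = u_{\mathrm{sim}}(p) - u_{\mathrm{meas}},
$$

where u_sim(p) is the field computed from the Helmholtz model for permittivity parameters p, u_meas the measured data, J_k the Jacobian of the residual r at p_k, and λ_k the damping parameter adapted at each iteration.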
Abstract:
Objective: This article seeks to explain the puzzle of why incumbents spend so much on campaigns despite most research finding that their spending has almost no effect on voters. Methods: The article uses ordinary least squares, instrumental variables, and fixed-effects regression to estimate the impact of incumbent spending on election outcomes. The estimation includes an interaction term between incumbent and challenger spending to allow the effect of incumbent spending to depend on the level of challenger spending. Results: The estimation provides strong evidence that spending by the incumbent has a larger positive impact on votes received the more money the challenger spends. Conclusion: Campaign spending by incumbents is most valuable in the races where the incumbent faces a serious challenge. Raising large sums of money to be used in close races is thus a rational choice by incumbents.
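A minimal sketch of the interaction specification described above, using statsmodels' formula interface; the variable names and toy values (vote_share, inc_spend, chall_spend) are hypothetical stand-ins for the article's data:

```python
# OLS with an incumbent-by-challenger spending interaction term.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "vote_share":  [62.1, 55.4, 51.0, 58.3, 49.7, 64.8],
    "inc_spend":   [1.2, 0.9, 1.5, 0.7, 1.8, 0.6],   # $ millions, hypothetical
    "chall_spend": [0.3, 0.8, 1.4, 0.2, 1.6, 0.1],
})

# inc_spend * chall_spend expands to both main effects plus their interaction,
# so the marginal effect of incumbent spending varies with challenger spending.
res = smf.ols("vote_share ~ inc_spend * chall_spend", data=df).fit()
print(res.params)
```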
Abstract:
Over the recent years chirped-pulse, Fourier-transform microwave (CP-FTMW) spectrometers have changed the scope of rotational spectroscopy. The broad frequency and large dynamic range make possible structural determinations in molecular systems of increasingly larger size from measurements of heavy atom (13C, 15N, 18O) isotopes recorded in natural abundance in the same spectrum as that of the parent isotopic species. The design of a broadband spectrometer operating in the 2–8 GHz frequency range with further improvements in sensitivity is presented. The current CP-FTMW spectrometer performance is benchmarked in the analyses of the rotational spectrum of the water heptamer, (H2O)7, in both 2–8 GHz and 6–18 GHz frequency ranges. Two isomers of the water heptamer have been observed in a pulsed supersonic molecular expansion. High level ab initio structural searches were performed to provide plausible low-energy candidates which were directly compared with accurate structures provided from broadband rotational spectra. The full substitution structure of the most stable species has been obtained through the analysis of all possible singly-substituted isotopologues (H218O and HDO), and a least-squares rm(1) geometry of the oxygen framework determined from 16 different isotopic species compares with the calculated O–O equilibrium distances at the 0.01 Å level.
Abstract:
This thesis examines two panel data sets of 48 states from 1981 to 2009 and utilizes ordinary least squares (OLS) and fixed effects models to explore the relationship between rural Interstate speed limits and fatality rates, and whether rural Interstate speed limits affect non-Interstate safety. The models provide evidence that rural Interstate speed limits higher than 55 MPH lead to higher fatality rates on rural Interstates, though this effect is somewhat tempered by reductions in fatality rates for roads other than rural Interstates. These results provide some, but not unanimous, support for the traffic diversion hypothesis that rural Interstate speed limit increases lead to decreases in fatality rates on other roads. To the author's knowledge, this paper is the first econometric study to differentiate between the effects of 70 MPH speed limits and speed limits above 70 MPH on fatality rates using a multi-state data set. Considering both rural Interstates and other roads, rural Interstate speed limit increases above 55 MPH are responsible for 39,700 net fatalities, 4.1 percent of total fatalities, from 1987, the year limits were first raised, to 2009.
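A generic two-way fixed-effects specification of the kind used in such studies (the regressor names below are illustrative placeholders, not the thesis variables):

$$
y_{it} = \alpha_i + \gamma_t + \beta_1 \,\mathrm{Limit65}_{it} + \beta_2 \,\mathrm{Limit70}_{it} + \beta_3 \,\mathrm{LimitOver70}_{it} + X_{it}\delta + \varepsilon_{it},
$$

where y_it is the fatality rate in state i and year t, α_i and γ_t are state and year fixed effects, the Limit indicators mark speed-limit categories above the 55 MPH baseline, and X_it collects other controls; estimating the same specification with the non-Interstate fatality rate as y_it is what tests the traffic diversion hypothesis.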