891 results for Minimization
Abstract:
The objective of this dissertation is to develop and test a predictive model for the passive kinematics of human joints based on the energy minimization principle. To pursue this goal, the tibio-talar joint is chosen as a reference joint, owing to the reduced number of bones involved and its simplicity compared with other synovial joints such as the knee or the wrist. Starting from knowledge of the articular surface shapes, the spatial trajectory of passive motion is obtained as the envelope of joint configurations that maximize surface congruence. An increase in joint congruence corresponds to an improved capability of distributing an applied load, allowing the joint to attain greater strength with less material. Thus, joint congruence maximization is a simple geometric way to capture the idea of joint energy minimization. The results obtained are validated against in vitro measured trajectories. Preliminary comparisons provide strong support for the predictions of the theoretical model.
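The congruence-maximization idea can be illustrated with a toy one-dimensional model; the profiles, congruence measure, and numbers below are entirely hypothetical and only sketch the principle of selecting the configuration that maximizes surface congruence.

```python
import numpy as np

def congruence(f_top, f_bot, shift, xs):
    """Toy congruence measure: negative mean squared gap between two
    articular profiles when the upper one is displaced by `shift`."""
    gap = (f_top(xs) + shift) - f_bot(xs)
    return -np.mean(gap ** 2)

# Hypothetical 1D articular profiles (parabolic, mutually congruent)
xs = np.linspace(-1.0, 1.0, 101)
f_top = lambda x: 0.5 * x**2          # upper surface
f_bot = lambda x: 0.5 * x**2 - 0.3    # lower surface, offset by 0.3

# Scan candidate configurations and keep the most congruent one
shifts = np.linspace(-1.0, 0.5, 151)
scores = [congruence(f_top, f_bot, s, xs) for s in shifts]
best_shift = shifts[int(np.argmax(scores))]
```

In this toy case the maximizer is the shift that closes the 0.3 gap exactly; a passive-motion trajectory would be the envelope of such maximizers over a range of joint positions.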
Abstract:
Amalgam replacement: new routes to low-shrinkage (meth)acrylate-based dental composites. Owing to aesthetic and health concerns, an alternative to amalgam as a dental filling material has been sought for decades. The greatest drawback of organic monomers is the volume contraction during curing, which adversely affects the material properties. The main goal of this work was therefore to minimize the shrinkage occurring during radical polymerization. To this end, various (meth)acrylates, some of them new, were synthesized and tested for their suitability as components of dental composites. To minimize the volume contraction during polymerization, the mobility of the polymerizable group was restricted. In the first part of the work, liquid-crystalline substances were used for this purpose. By mixing liquid-crystalline diacrylates, a mesophase could be obtained in the desired temperature interval of 25 to 35 °C. The use of these liquid crystals showed a positive influence on the polymerization shrinkage. In addition, new monomers were synthesized whose methacrylate group was attached in direct proximity to the mesogen in order to increase the stability of the resulting polymers. In the second part of the work, the mobility of the polymerizable group was reduced by fixing it to a rigid core. Polyphenols, enzymatically polymerized phenols, and β-cyclodextrin were used as scaffolds. With the modified polyphenols based on gallic acid and 3,5-dihydroxybenzoic acid, a slight reduction of the polymerization shrinkage could be achieved. The phenols polymerized enzymatically under HRP (horseradish peroxidase) catalysis, by contrast, could not be photochemically crosslinked, since these oligomers were colored in solution.
Moreover, the free phenolic hydroxy groups showed very low reactivity. The best results were achieved with modified β-cyclodextrins as a composite component; in one case a slight volume expansion during polymerization was even achieved.
Abstract:
This thesis investigates two physical flow experiments on nonwoven fabrics intended to identify unknown hydraulic parameters of the material, such as the diffusivity or conductivity function, from measured data. The physical and mathematical modeling of these experiments leads to a Cauchy-Dirichlet problem with free boundary for the degenerate parabolic Richards equation in its saturation formulation, the so-called direct problem. From knowledge of the free boundary of this problem, the nonlinear diffusivity coefficient of the differential equation is to be reconstructed. For this inverse problem we set up an output least-squares functional and, to minimize it, employ iterative regularization methods such as the Levenberg-Marquardt method and the IRGN method, based on a parametrization of the coefficient space by quadratic B-splines. For the direct problem we prove, among other things, existence and uniqueness of the solution of the Cauchy-Dirichlet problem as well as existence of the free boundary. We then formally reduce the derivative of the free boundary with respect to the coefficient, which is needed for the numerical reconstruction method, to a linear degenerate parabolic boundary value problem. We describe the numerical realization and implementation of our reconstruction method and finally present reconstruction results for synthetic data.
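The Levenberg-Marquardt iteration used to minimize an output least-squares functional can be sketched on a generic nonlinear least-squares problem. The toy exponential fit below is purely illustrative (it is not the Richards-equation coefficient problem), but the damped normal-equation update is the same idea.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-2, iters=100):
    """Minimize ||residual(x)||^2 with a basic Levenberg-Marquardt loop."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        # Damped normal equations: (J^T J + lam*I) dx = -J^T r
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.sum(residual(x + dx) ** 2) < np.sum(r ** 2):
            x, lam = x + dx, lam * 0.5   # accept the step, relax damping
        else:
            lam *= 2.0                   # reject the step, increase damping
    return x

# Toy problem: recover (a, b) = (2, 1.5) from exact samples of a*exp(b*t)
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
p = levenberg_marquardt(res, jac, [1.0, 1.0])
```

The damping parameter interpolates between Gauss-Newton (small lam) and gradient descent (large lam), which is what gives the method its robustness far from the solution.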
Abstract:
This thesis regards the Wireless Sensor Network (WSN) as one of the most important technologies of the twenty-first century and examines the implementation of different packet-correcting erasure codes to cope with the "bursty" nature of the transmission channel and the possibility of packet losses during transmission. The limited battery capacity of each sensor node makes the minimization of power consumption one of the primary concerns in WSNs. Since communication in each sensor node is considerably more expensive than computation, the core idea is to invest computation within the network whenever possible to save on communication costs. The goal of the research was to evaluate a parameter, for example the Packet Erasure Ratio (PER), that permits verifying the functionality and behavior of the created network, validating the theoretical expectations, and evaluating the benefit of introducing packet-recovery techniques using different types of packet erasure codes in different types of networks. Thus, considering the energy-consumption constraints in WSNs, the topic of this thesis is to minimize energy consumption by introducing encoding/decoding algorithms in the transmission chain in order to prevent the retransmission of erased packets through the Packet Erasure Channel and save the energy used for each retransmitted packet. In this way it is possible to extend the lifetime of the entire network.
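A minimal illustration of the recovery idea, assuming the simplest possible erasure code (a single XOR parity packet over equal-length payloads); the thesis considers more general codes, but the energy trade is the same: one extra transmitted packet avoids a retransmission.

```python
def xor_parity(packets):
    """XOR all equal-length data packets into a single parity packet."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def recover(survivors, parity):
    """Rebuild the single erased packet: XOR the parity with every survivor."""
    missing = parity
    for p in survivors.values():
        missing = bytes(a ^ b for a, b in zip(missing, p))
    return missing

data = [b"node", b"temp", b"22.5", b"ack!"]        # four 4-byte payloads
parity = xor_parity(data)                          # sent alongside the data
survivors = {i: p for i, p in enumerate(data) if i != 2}   # packet 2 erased
rebuilt = recover(survivors, parity)               # == b"22.5"
```

A single parity packet can repair at most one erasure per block; codes tolerating burstier losses (e.g. Reed-Solomon) generalize this at a higher computational cost, which is exactly the computation-for-communication trade discussed above.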
Abstract:
Little is known so far about the secondary structure of LI-cadherin. There are no X-ray analyses and no NMR spectroscopic studies. Only on the basis of sequence homologies to the classical cadherins already studied can one presume that similar conditions prevail in the decisive interaction domain of LI-cadherin. In analogy to E-cadherin, it was assumed that LI-cadherin possesses a "homophilic recognition region" that should adopt a typical beta-turn structure followed by sheet regions. To investigate the influence of various saccharide antigens on turn formation, various saccharide antigen building blocks were synthesized in the first part of this work; in the second part, these were used to assemble the corresponding glycopeptide structures from this region of LI-cadherin by sequential solid-phase synthesis. The synthesis of all antigen building blocks started from D-galactose, which was converted in four steps via the galactal and an azidonitration to the azidobromide. In a Koenigs-Knorr glycosylation this was then transferred to the side chain of a protected serine derivative. Reduction and protecting-group manipulations furnished the TN antigen building block. A TN antigen derivative was the starting point for the syntheses of the further glycosyl-serine building blocks. Thus the T antigen building block could be prepared by means of the Helferich glycosylation, and the STN antigen building block was obtained by a sialylation reaction and further protecting-group manipulations. Since the route via the T antigen derivative constituted the main synthetic pathway to the further, more complex antigens, various protecting-group patterns were tested. Building on this, the complex (2->6)-ST, (2->3)-sialyl-T, and glycophorin antigen building blocks could be synthesized through various glycosylation reactions and protecting-group manipulations.
In the next part of the thesis, the synthesized saccharide antigen-serine conjugates were employed in solid-phase glycopeptide syntheses. First, a tricosapeptide glycosylated with the TN antigen was prepared. A three-dimensional structure could be determined by NMR spectroscopic studies and subsequent energy-minimization calculations. The peptide sequence of the turn-forming region was chosen for the subsequent syntheses. The sequence of the individual reaction steps for the solid-phase syntheses with the various saccharide antigen building blocks was similar. Overall, the solid-phase glycopeptide synthesis depended strongly on the steric demand of the saccharide building blocks. All glycopeptides synthesized in this way were characterized by NMR spectroscopy and their conformations investigated by NOE experiments. From this determination of the spatial proton-proton contacts, a three-dimensional structure for the glycopeptides could be postulated by energy-minimization calculations based on MM2 force fields. All synthesized glycopeptides exhibit a loop-like conformation. The influence of the saccharide antigens varies and can be divided into three groups.
Abstract:
By the HFz Hubbard model we mean the usual Hubbard model in any dimension, restricted to the set of Slater determinants whose spin waves move only parallel to the z-direction. For this model at small filling we have succeeded in proving that ferromagnetism arises for large but finite coupling.
Abstract:
Flory-Huggins interaction parameters and thermal diffusion coefficients were measured for aqueous biopolymer solutions. Dextran (a water-soluble polysaccharide) and bovine serum albumin (BSA, a water-soluble protein) were used for this study; the former polymer is representative of chain macromolecules, the latter of globular macromolecules. The interaction parameters for the systems water/dextran and water/BSA were determined as a function of composition by means of vapor pressure measurements, using a combination of headspace sampling and gas chromatography (HS-GC). A new theoretical approach, accounting for chain connectivity and conformational variability, describes the observed dependencies quantitatively for the system water/dextran and qualitatively for the system water/BSA. The phase diagrams of the ternary systems water/methanol/dextran and water/dextran/BSA were determined via cloud point measurements and modeled by means of direct minimization of the Gibbs energy, using the information on the binary subsystems as input parameters. The thermal diffusion of dextran was studied for aqueous solutions in the temperature range 15 < T < 55 °C; the effects of the addition of urea were also studied. In the absence of urea, the Soret coefficient ST changes its sign as T is varied: it is positive for T > 45.0 °C but negative for T < 45.0 °C. A positive sign of ST means that the dextran molecules migrate toward the cold side of the fluid, behavior typical for polymer solutions; a negative sign indicates that the macromolecules move toward the hot side, behavior that had so far not been observed with any other binary aqueous polymer solution. The addition of urea to the aqueous solution of dextran increases ST and reduces the inversion temperature. For 2 M urea, the change in the sign of ST is observed at T = 29.7 °C; at higher temperatures ST is always positive in the studied range.
To rationalize these observations it is assumed that the addition of urea opens hydrogen bonds, similar to the effect induced by an increase in temperature. For a future extension of the thermodynamic studies to the effects of polydispersity, dextran was fractionated by means of a recently developed technique called Continuous Spin Fractionation (CSF). The solvent/precipitant/polymer system used for the thermodynamic studies served as the basis for the fractionation of dextran. The starting polymer had a weight-average molar mass Mw = 11.1 kg/mol and a molecular non-uniformity U = Mw/Mn - 1 = 1.0. Seventy grams of dextran were fractionated using water as the solvent and methanol as the precipitant. Five fractionation steps yielded four samples with Mw values between 4.36 and 18.2 kg/mol and U values ranging from 0.28 to 0.48.
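As a small numerical illustration of the inversion temperature, the sign change of ST can be located by linear interpolation between measured points. The ST(T) values below are invented for illustration (only the sign change at 45 °C mimics the reported trend).

```python
def inversion_temperature(temps, soret):
    """Linear interpolation of the temperature at which the Soret
    coefficient crosses from negative to positive."""
    points = list(zip(temps, soret))
    for (t1, s1), (t2, s2) in zip(points, points[1:]):
        if s1 < 0 <= s2:
            # Fraction of the interval needed to bring ST up to zero
            return t1 + (t2 - t1) * (-s1) / (s2 - s1)
    return None  # no sign change in the measured range

# Invented ST(T) data with a sign change at 45 degrees C
temps = [15, 25, 35, 45, 55]
soret = [-0.03, -0.02, -0.01, 0.0, 0.01]
t_inv = inversion_temperature(temps, soret)
```

With 2 M urea the same interpolation on the shifted data set would return a value near 29.7 °C, consistent with the reduction of the inversion temperature described above.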
Abstract:
This thesis proposes a solution for board cutting in the wood industry, aiming to minimize raw-material usage while maximizing machine productivity. The problem is treated as a Two-Dimensional Cutting Stock Problem, and specific Combinatorial Optimization methods are used to solve it, taking the features of the real problem into account.
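As a hedged illustration of cutting-stock heuristics, here is a first-fit-decreasing sketch for the simpler one-dimensional analogue (the thesis addresses the two-dimensional problem with dedicated combinatorial optimization methods; board length and piece sizes below are invented).

```python
def first_fit_decreasing(pieces, board_length):
    """Greedy heuristic for the 1D cutting-stock analogue: sort pieces by
    decreasing size and assign each to the first board that still fits it."""
    remaining = []   # remaining length of each opened board
    cuts = []        # pieces assigned to each board
    for piece in sorted(pieces, reverse=True):
        for i, rem in enumerate(remaining):
            if piece <= rem:
                remaining[i] -= piece
                cuts[i].append(piece)
                break
        else:
            remaining.append(board_length - piece)  # open a new board
            cuts.append([piece])
    return cuts

# Demand: five pieces to cut from 100-unit boards
plan = first_fit_decreasing([60, 50, 40, 30, 20], 100)
```

Here the heuristic packs the demand into two fully used boards; exact two-dimensional methods additionally decide the geometric layout of cuts on each board.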
Abstract:
One of the most interesting challenges of the coming years will be the automation of Air Space Systems. This process will involve different aspects such as Air Traffic Management, Aircraft and Airport Operations, and Guidance and Navigation Systems. The use of UAS (Uninhabited Aerial Systems) for civil missions will be one of the most important steps in this automation process. In civil air space, Air Traffic Controllers (ATC) manage the air traffic, ensuring that a minimum separation between the controlled aircraft is always maintained. For this purpose ATCs use several operative avoidance techniques, such as holding patterns or rerouting. The use of UAS in this context will require the definition of strategies for a common management of piloted and unpiloted air traffic that allow the UAS to self-separate. As a first employment in civil air space, we consider a UAS surveillance mission that consists in departing from a ground base, taking pictures over a set of mission targets, and returning to the same ground base. Throughout the mission, a set of piloted aircraft fly in the same airspace, and thus the UAS has to self-separate using the ATC avoidance techniques mentioned above. We consider two objectives: the first is the minimization of the air traffic's impact on the mission; the second is the minimization of the mission's impact on the air traffic. A particular version of the well-known Travelling Salesman Problem (TSP), called the Time-Dependent TSP (TDTSP), has been studied to deal with traffic problems in large urban areas. Its basic idea is that the cost of the route between two clients depends on the period of the day in which it is traversed. Our thesis argues that this idea can be applied to air traffic as well, using a convenient time horizon compatible with aircraft operations.
The cost of a UAS sub-route will depend on the air traffic it encounters when starting that route at a specific moment, and consequently on the avoidance maneuver it will use to resolve that conflict. Conflict avoidance is a topic that has been intensively developed in past years using different approaches. In this thesis we propose a new approach based on the use of ATC operative techniques, which makes it possible both to model the UAS problem within a TDTSP framework and to adopt an Air Traffic Management perspective. Starting from this kind of mission, the problem of UAS insertion in civil air space is formalized as the UAS Routing Problem (URP). To this end, we introduce a new structure called the Conflict Graph, which makes it possible to model the avoidance maneuvers and to define the arc cost as a function of the departure time. Two Integer Linear Programming formulations of the problem are proposed. The first is based on a TDTSP formulation that, unfortunately, is weaker than the TSP formulation. Thus a new formulation is proposed, based on a TSP variation that uses specific penalties to model the holdings. Different algorithms are presented: exact algorithms, simple heuristics used as upper bounds on the number of time steps used, and metaheuristic algorithms such as a Genetic Algorithm and Simulated Annealing. Finally, an air traffic scenario has been simulated using real air traffic data in order to test our algorithms. Graphic tools have been used to represent the Milano Linate air space and its air traffic during different days. These data have been provided by ENAV S.p.A. (the Italian Agency for Air Navigation Services).
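The time-dependent arc-cost idea can be sketched as follows. The instance, cost table, and congestion penalty are invented for illustration, and exhaustive search stands in for the exact and heuristic algorithms developed in the thesis.

```python
import itertools

# Invented symmetric base costs between depot 0 and targets 1-3
BASE = {(0, 1): 2, (1, 0): 2, (0, 2): 4, (2, 0): 4, (1, 2): 1, (2, 1): 1,
        (0, 3): 3, (3, 0): 3, (1, 3): 5, (3, 1): 5, (2, 3): 2, (3, 2): 2}

def cost(a, b, t):
    """Time-dependent arc cost: base cost plus a congestion penalty that
    grows with the departure time step (a stand-in for traffic conflicts)."""
    return BASE[(a, b)] + t // 5

def route_cost(route, cost):
    t = total = 0
    for a, b in zip(route, route[1:]):
        c = cost(a, b, t)   # cost of each leg depends on the current time
        total += c
        t += c
    return total

def best_tour(targets, cost, depot=0):
    """Exhaustive search over visit orders (fine for tiny instances)."""
    return min((route_cost((depot,) + p + (depot,), cost),
                (depot,) + p + (depot,))
               for p in itertools.permutations(targets))

c, tour = best_tour([1, 2, 3], cost)
```

Because leg costs depend on accumulated time, reversing a tour no longer gives the same cost, which is precisely what distinguishes the TDTSP from the plain TSP.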
Abstract:
This work considers quantum hydrodynamic (QHD) models, which are used in particular in the modeling of semiconductor devices. The QHD model consists of the conservation equations for particle density, momentum, and energy density, including the quantum corrections through the Bohm potential. First, an overview is given of known results on QHD models neglecting collision effects, which can be derived from a mixed-state Schrödinger system or from the Wigner equation. After reformulating the one-dimensional QHD equations with linear potential as a stationary Schrödinger equation, semi-analytical versions of the QHD equations for the current-voltage curve are considered. Furthermore, viscous stabilizations of the QHD model are taken into account, and the numerical viscosity proposed by Gardner for the upwind finite-difference scheme is computed. Next, the viscous QHD model is derived from the Wigner equation with a Fokker-Planck collision operator. This model contains the physical viscosity introduced by the collision operator. Existence of solutions (with strictly positive particle density) is shown for the isothermal, stationary, one-dimensional viscous model with general data and inhomogeneous boundary conditions. The required estimates depend on the viscosity and therefore do not allow passage to the inviscid limit. Numerical simulations of a resonant tunneling diode, modeled with the non-isothermal, stationary, one-dimensional viscous QHD model, show the influence of the viscosity on the solution. Using the quantum entropy minimization method developed by Degond and Ringhofer, the general QHD equations are derived from the Wigner-Boltzmann equation with the BGK collision operator.
The derivation is based on a careful expansion of the quantum Maxwellian in powers of the scaled Planck constant. The resulting model also contains vortex terms and dispersive velocity terms. The current-voltage curve for the resonant tunneling diode is then computed numerically using the general QHD model in one dimension. The results show that the dispersive velocity term stabilizes the solution of the system.
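For reference, the isothermal QHD system with Bohm potential is commonly written in scaled form as follows. This is a standard literature presentation, not the thesis's exact notation; the full model discussed above additionally carries the energy equation, the vortex terms, and the dispersive velocity term, and the coefficient of the quantum term depends on the chosen scaling.

```latex
\begin{align*}
  \partial_t n + \operatorname{div}(nu) &= 0,\\
  \partial_t(nu) + \operatorname{div}(nu \otimes u) + \nabla p(n)
    - n\nabla V
    - \frac{\varepsilon^2}{2}\, n \nabla\!\left(
        \frac{\Delta\sqrt{n}}{\sqrt{n}}\right)
  &= -\frac{nu}{\tau},
\end{align*}
```

where n is the particle density, u the velocity, p(n) the pressure, V the electrostatic potential, ε the scaled Planck constant, and τ the relaxation time; the third-order term is the quantum correction generated by the Bohm potential.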
Abstract:
The work of the present thesis focuses on the implementation of microelectronic voltage-sensing devices for transmitting and extracting analog information between devices of different natures at short distances or upon contact. Initially, chip-to-chip communication was studied, and circuitry for 3D capacitive coupling was implemented. Such circuits allow communication between dies fabricated in different technologies. Due to their novelty, they are not standardized and are currently not supported by standard CAD tools. To overcome this burden, a novel approach for the characterization of such communication links has been proposed, resulting in shorter design times and increased accuracy. Communication between an integrated circuit (IC) and a probe card has been extensively studied as well. Today wafer probing is a costly test procedure with many drawbacks, which could be overcome by a different communication approach such as capacitive coupling. For this reason, wireless wafer probing has been investigated as an alternative to standard on-contact wafer probing. Interfaces between integrated circuits and biological systems have also been investigated. Active electrodes for simultaneous electroencephalography (EEG) and electrical impedance tomography (EIT) have been implemented for the first time in a 0.35 um process. The number of wires has been minimized by sharing the analog outputs and the supply on a single wire, resulting in electrodes that require only 4 wires for their operation. Minimizing the number of wires reduces the cable weight and thus limits the patient's discomfort. The physical channel for communication between an IC and a biological medium is represented by the electrode itself.
As this is a crucial point for biopotential acquisition, large efforts have been carried out to investigate the different electrode technologies and geometries, and an electromagnetic model is presented to characterize the properties of the electrode-to-skin interface.
Abstract:
This thesis deals with the study of optimal control problems for the incompressible Magnetohydrodynamics (MHD) equations. Particular attention to these problems arises from several applications in science and engineering, such as fission nuclear reactors with liquid metal coolant and aluminum casting in metallurgy. In such applications it is of great interest to achieve control of the fluid state variables through the action of the magnetic Lorentz force. In this thesis we investigate a class of boundary optimal control problems, in which the flow is controlled through the boundary conditions of the magnetic field. Due to their complexity, these problems present various challenges in the definition of an adequate solution approach, both from a theoretical and from a computational point of view. In this thesis we propose a new boundary control approach, based on lifting functions of the boundary conditions, which yields both theoretical and numerical advantages. With the introduction of lifting functions, boundary control problems can be formulated as extended distributed problems. We consider a systematic mathematical formulation of these problems in terms of the minimization of a cost functional constrained by the MHD equations. The existence of solutions to the flow equations and to the optimal control problem is shown. The Lagrange multiplier technique is used to derive an optimality system from which candidate solutions for the control problem can be obtained. In order to achieve the numerical solution of this system, a finite element approximation is considered for the discretization, together with an appropriate gradient-type algorithm. A finite element object-oriented library has been developed to obtain a parallel and multigrid computational implementation of the optimality system based on a multiphysics approach.
Numerical results of two- and three-dimensional computations show that a possible minimum for the control problem can be computed in a robust and accurate manner.
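The gradient-type loop for a discretized optimality system can be sketched on a generic linear-quadratic model problem. Here the matrix S stands in for the (linearized) control-to-state map, and all numbers are illustrative; the actual MHD problem replaces the matrix-vector products with forward and adjoint PDE solves.

```python
import numpy as np

def gradient_descent_control(S, y_d, alpha, step=0.1, iters=500):
    """Minimize J(u) = 0.5*||S u - y_d||^2 + 0.5*alpha*||u||^2 by steepest
    descent; S^T(S u - y_d) + alpha*u plays the role of the adjoint-based
    gradient delivered by the optimality system."""
    u = np.zeros(S.shape[1])
    for _ in range(iters):
        u = u - step * (S.T @ (S @ u - y_d) + alpha * u)
    return u

S = np.array([[1.0, 0.0],
              [0.0, 2.0]])          # stand-in control-to-state map
y_d = np.array([1.0, 1.0])          # desired (target) state
u = gradient_descent_control(S, y_d, alpha=0.1)
# Closed-form minimizer for comparison: (S^T S + alpha*I)^{-1} S^T y_d
```

The regularization weight alpha balances tracking of the desired state against the cost of the control, exactly as in the cost functional constrained by the MHD equations.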
Abstract:
This doctoral thesis concerns fermentative hydrogen production exploiting the anaerobic metabolism of particular extremophilic bacteria of the genus Thermotoga. In this work, carried out within the Bio-Hydro project using 116 mL batch reactors, the best Thermotoga strain among the four tested was selected: T. neapolitana. Once the best bacterial candidate had been identified, the optimal pH value for hydrogen production was determined (8.5 at room temperature). Intensive work was devoted to the culture medium, minimizing it and thus making it economically sustainable for use in the 19 L reactor; in this case glucose was completely replaced with two previously identified agro-industrial by-products, beet molasses and cheese whey. The costly micronutrients and vitamins were then eliminated. The ability of T. neapolitana to form biofilms was exploited, and four different sintered-glass and ceramic supports were tested; these tests identified Biomax as the best support. Studies on the metabolism of T. neapolitana were carried out to determine the inhibitory concentration of each substrate tested, product (hydrogen) inhibition, and oxygen inhibition. All these experiments provided the basic knowledge for running experiments in the 19 L reactor. The innovative SPCSTR reactor was entirely designed, developed, and built at the DICMA of the University of Bologna. Batch experiments in the SPCSTR made it possible to verify the operation of the new type of plant. At Wageningen UR (NL), the best Caldicellulosiruptor strain among the three tested and the best support for hydrogen production were selected; the innovative CMTB reactor was then built, tested, and operated in continuous mode.
Abstract:
The electric dipole response of neutron-rich nickel isotopes has been investigated using the LAND setup at GSI in Darmstadt (Germany). Relativistic secondary beams of 56-57Ni and 67-72Ni at approximately 500 AMeV were generated by projectile fragmentation of stable ions on a 4 g/cm2 Be target and subsequent separation in the magnetic dipole fields of the FRagment Separator (FRS). After reaching the LAND setup in Cave C, the radioactive ions were excited electromagnetically in the electric field of a Pb target. The decay products were measured in inverse kinematics using various detectors. Neutron-rich 67-69Ni isotopes decay by the emission of neutrons, which are detected in the LAND detector. The present analysis concentrates on the (gamma,n) and (gamma,2n) channels in these nuclei, since the proton and three-neutron thresholds are unlikely to be reached considering the virtual photon spectrum for nickel ions at 500 AMeV. A measurement of the stable 58Ni isotope is used as a benchmark to check the accuracy of the present results against previously published data: the measured (gamma,n) and (gamma,np) channels are compared with an inclusive photoneutron measurement by Fultz and coworkers and are consistent within the respective errors. The measured excitation energy distributions of 67-69Ni contain a large portion of the Giant Dipole Resonance (GDR) strength predicted by the Thomas-Reiche-Kuhn energy-weighted sum rule, as well as a significant amount of low-lying E1 strength that cannot be attributed to the GDR alone. The GDR distribution parameters are calculated using well-established semi-empirical systematic models, providing the peak energies and widths. The GDR strength is extracted from a chi-square minimization of the model GDR to the measured data of the (gamma,2n) channel, thereby excluding any influence of possible low-lying strength.
The subtraction of the obtained GDR distribution from the total measured E1 strength yields the low-lying E1 strength distribution, which is attributed to the Pygmy Dipole Resonance (PDR). The extraction of the peak energy, width, and strength is performed using a Gaussian function. The fit of trial Gaussian distributions to the data does not converge toward a sharp minimum; therefore, the results are presented as a chi-square distribution as a function of all three Gaussian parameters. Various predictions of PDR distributions exist, as well as a recent measurement of the 68Ni PDR obtained by virtual photon scattering, to which the present PDR distribution is also compared.
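The chi-square scan over the three Gaussian parameters can be sketched with a brute-force grid search on synthetic data. All numbers below are invented for illustration; the actual analysis fits measured strength distributions and reports the full chi-square surface rather than a single minimum.

```python
import numpy as np

def gaussian(E, peak, width, strength):
    """Gaussian strength distribution in excitation energy E."""
    return strength * np.exp(-0.5 * ((E - peak) / width) ** 2)

def chi_square_fit(E, y, peaks, widths, strengths):
    """Brute-force chi-square scan over a (peak, width, strength) grid."""
    best_chi2, best_params = np.inf, None
    for p in peaks:
        for w in widths:
            for s in strengths:
                chi2 = float(np.sum((y - gaussian(E, p, w, s)) ** 2))
                if chi2 < best_chi2:
                    best_chi2, best_params = chi2, (p, w, s)
    return best_chi2, best_params

# Synthetic low-lying strength peaked at 10 MeV (illustrative values only)
E = np.linspace(5.0, 15.0, 50)
y = gaussian(E, 10.0, 1.5, 3.0)
chi2, (peak, width, strength) = chi_square_fit(
    E, y,
    peaks=np.arange(8.0, 12.5, 0.5),
    widths=np.arange(0.5, 3.0, 0.5),
    strengths=np.arange(1.0, 5.0, 0.5))
```

Retaining the whole chi-square grid instead of only the minimizer is what allows the results to be presented as a distribution over all three parameters when the minimum is shallow.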
Abstract:
The objective of this work is to implement an operational methodology for designing air-quality monitoring networks and measurement campaigns using mobile laboratories, optimizing the positions of the sampling devices with respect to different objectives and selection criteria. A review and analysis of the approaches and indications provided by the reference legislation and by various scientific studies made it possible to propose a methodological approach consisting of two main operational phases, which was applied to a case study represented by the territory of the province of Ravenna. The implemented methodology integrates numerous tools supporting the assessment of air-quality status and of the effects that atmospheric pollutants can produce on specific sensitive receptors (resident population, vegetation, material assets). In particular, the methodology integrates approaches for disaggregating emission inventories through proxy variables, modeling tools for simulating pollutant dispersion in the atmosphere, and algorithms for allocating the monitoring devices through the maximization (or minimization) of specific objective functions. The allocation procedure was automated through the development of a software tool that, via a graphical query interface, makes it possible to identify optimal areas for carrying out the various monitoring campaigns.
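The allocation of monitoring devices by optimizing an objective function can be sketched with a greedy coverage heuristic. The candidate sites and receptor cells below are hypothetical; in the actual methodology the objective would be built from dispersion-model outputs and receptor maps.

```python
def greedy_placement(candidates, coverage, k):
    """Greedy allocation: repeatedly pick the candidate site whose addition
    maximizes the objective (here, the number of newly covered cells)."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: len(coverage[c] - covered))
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Hypothetical receptor cells covered by each candidate monitoring site
coverage = {"A": {1, 2, 3}, "B": {3, 4}, "C": {5, 6, 7, 8}, "D": {1, 5}}
sites, covered = greedy_placement(list(coverage), coverage, 2)
```

Greedy maximization of a coverage-type objective is a common baseline for such siting problems; other objective functions (population exposure, pollutant concentration) slot into the same loop by changing the key function.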