13 results for finite time blow-up
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Array seismology is a useful tool for detailed investigation of the Earth’s interior. By exploiting the coherence properties of the wavefield, seismic arrays can extract directivity information and increase the amplitude ratio of coherent signal to incoherent noise. The Double Beam Method (DBM), developed by Krüger et al. (1993, 1996), is one possible application of seismic arrays to a refined seismic investigation of the crust and mantle. The DBM combines source and receiver arrays, leading to a further improvement of the signal-to-noise ratio and a reduced error in the location of coherent phases. Previous DBM work has addressed mantle and core/mantle resolution (Krüger et al., 1993; Scherbaum et al., 1997; Krüger et al., 2001). An implementation of the DBM is presented at the 2D large scale (Italian data set for the Mw=9.3 Sumatra earthquake) and at the 3D crustal scale, as proposed by Rietbrock & Scherbaum (1999), applying the revised version of the Source Scanning Algorithm (SSA; Kao & Shan, 2004). In the 2D application, the propagation in time of the rupture front has been computed. In the 3D application, the study area (20×20×33 km³), the data set and the source-receiver configurations are those of the KTB-1994 seismic experiment (Jost et al., 1998). We used 60 short-period seismic stations (200-Hz sampling rate, 1-Hz sensors) arranged in 9 small arrays deployed in 2 concentric rings of about 1 km (A-arrays) and 5 km (B-array) radius. The coherence values of the scattering points have been computed in the crustal volume over a finite time window along all array stations, given the hypothesized origin time and source location. The resulting images can be seen as a (relative) joint log-likelihood that any point in the subsurface has contributed to the full set of observed seismograms.
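The core array-processing operation described here, aligning station traces and measuring the coherence of the stack, can be sketched in a few lines. The traces, lags and semblance measure below are illustrative stand-ins, not data or code from the thesis:

```python
# Minimal delay-and-sum stacking with a semblance (coherence) measure,
# the building block of array beamforming. Synthetic setup, not thesis data.

def shift(trace, lag):
    """Shift a trace by an integer number of samples (zero-padded)."""
    if lag >= 0:
        return [0.0] * lag + trace[:len(trace) - lag]
    return trace[-lag:] + [0.0] * (-lag)

def semblance(traces, lags):
    """Semblance: stacked energy over total energy, in [0, 1]."""
    aligned = [shift(t, l) for t, l in zip(traces, lags)]
    n = len(traces)
    num = sum(sum(col) ** 2 for col in zip(*aligned))
    den = n * sum(sum(x * x for x in col) for col in zip(*aligned))
    return num / den if den > 0 else 0.0

# The same pulse arrives at three stations with different delays;
# the correct lags maximize the semblance.
pulse = [0.0, 1.0, 2.0, 1.0, 0.0]
traces = [pulse + [0.0] * 3,
          [0.0] * 1 + pulse + [0.0] * 2,
          [0.0] * 2 + pulse + [0.0] * 1]
good = semblance(traces, [0, -1, -2])   # aligned stack
bad = semblance(traces, [0, 0, 0])      # mis-aligned stack
```

With the correct lags the aligned traces stack coherently and the semblance reaches 1; mis-aligned stacks score lower, which is what lets a grid search over candidate scattering points image the subsurface.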
Abstract:
Let’s put ourselves in the shoes of an energy company. Our fleet of electricity production plants mainly includes gas, hydroelectric and waste-to-energy plants. We have also sold contracts for the supply of gas and electricity. For each year we have to plan the trading of the volumes needed by the plants and customers: it is better to fix the price of these volumes in advance with so-called forward contracts than to wait for the delivery months, exposing ourselves to price uncertainty. Here’s the challenge: keeping uncertainty under control in a market that has never shown such extreme scenarios as in recent years, when a pandemic, a worsening climate crisis and a war affecting economies around the world have made the energy market more volatile than ever. How can we make decisions in such uncertain contexts? There is an optimization problem: given a year, we need to choose the optimal planning of volume trading times, to meet the needs of our portfolio at the best prices while taking into account the liquidity constraints given by the market and the risk constraints imposed by the company. Algorithms are needed for the generation of market scenarios over a finite time horizon, that is, a probabilistic distribution giving a view of all the dates between now and the end of the year of interest. Algorithms are also needed to solve the optimization problem: we have proposed more than one and compared them, starting from a very simple one that sets part of the complexity aside, moving on to a scenario approach, and finally to a reinforcement learning approach.
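As a rough illustration of the scenario idea (not the thesis’s algorithms, which include a proper scenario approach and reinforcement learning), one can generate simple random-walk price scenarios and plan purchases greedily under a per-date liquidity cap; every name and number below is made up:

```python
# Toy scenario-based procurement: spread a required volume across trading
# dates to minimize expected cost over simulated forward-price scenarios,
# subject to a per-date liquidity cap. Greedy rule for illustration only.

import random

def simulate_scenarios(n_scenarios, n_dates, p0=50.0, vol=2.0, seed=7):
    """Random-walk forward-price scenarios (illustrative price model)."""
    rng = random.Random(seed)
    scenarios = []
    for _ in range(n_scenarios):
        path, p = [], p0
        for _ in range(n_dates):
            p += rng.gauss(0.0, vol)
            path.append(max(p, 0.0))
        scenarios.append(path)
    return scenarios

def plan_purchases(scenarios, volume, cap):
    """Buy up to `cap` at the dates with the lowest expected price."""
    n_dates = len(scenarios[0])
    expected = [sum(s[t] for s in scenarios) / len(scenarios)
                for t in range(n_dates)]
    order = sorted(range(n_dates), key=lambda t: expected[t])
    plan, left = [0.0] * n_dates, volume
    for t in order:
        buy = min(cap, left)
        plan[t] = buy
        left -= buy
        if left <= 0:
            break
    return plan

scenarios = simulate_scenarios(n_scenarios=200, n_dates=12)
plan = plan_purchases(scenarios, volume=100.0, cap=30.0)
```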
Abstract:
Motion control is a sub-field of automation in which the position and/or velocity of machines are controlled using some type of device. In motion control, the position, velocity, force, pressure, etc., profiles are designed in such a way that the different mechanical parts work as a harmonious whole, in which perfect synchronization must be achieved. The real-time exchange of information in the distributed system that an industrial plant is nowadays plays an important role in achieving ever better performance, effectiveness and safety. The network connecting field devices such as sensors and actuators, field controllers such as PLCs, regulators and drive controllers, and man-machine interfaces is commonly called a fieldbus. Since motion transmission is now a task of the communication system, and no longer of kinematic chains as in the past, the communication protocol must ensure that the desired profiles, and their properties, are correctly transmitted to the axes and then reproduced; otherwise the synchronization among the different parts is lost, with all the resulting consequences. In this thesis, the problem of trajectory reconstruction in the case of an event-triggered communication system is addressed. The most important feature that a real-time communication system must have is the preservation of the following temporal and spatial properties: absolute temporal consistency, relative temporal consistency and spatial consistency. Starting from the basic system composed of one master and one slave, and passing through systems made up of many slaves and one master, or many masters and one slave, the problems in profile reconstruction and in the preservation of temporal properties, and subsequently in the synchronization of different profiles in networks adopting an event-triggered communication system, are shown. These networks are characterized by the fact that a common knowledge of the global time is not available; they are therefore non-deterministic networks. Each topology is analyzed, and the proposed solution based on phase-locked loops, adopted for the basic master-slave case, is extended to cope with the other configurations.
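A first-order impression of the phase-locked-loop idea used for the basic master-slave case can be given by a small proportional-integral loop that re-synthesizes the master’s time base from event arrivals; the gains and the nominal period below are illustrative choices, not the thesis’s design:

```python
# Minimal discrete PLL: a slave with no global clock tracks a master's
# event times, adapting both its phase and its rate. Gains are illustrative.

def pll_track(master_times, kp=0.5, ki=0.1):
    """Proportional-integral tracking of master event times.
    Returns the slave's estimate of each event time."""
    estimate, rate, integral = 0.0, 1.0, 0.0
    period = 1.0                       # nominal inter-event period
    estimates = []
    for t in master_times:
        estimate += rate * period      # free-run prediction
        error = t - estimate           # phase error measured on arrival
        integral += error
        estimate += kp * error         # proportional phase correction
        rate = 1.0 + ki * integral     # slow rate (frequency) adaptation
        estimates.append(estimate)
    return estimates

# Master events actually arrive 1.05 time units apart; the PI loop locks
# onto both the phase and the 5% rate offset.
master = [1.05 * (i + 1) for i in range(40)]
est = pll_track(master)
final_error = abs(master[-1] - est[-1])
```

Because the loop is second order (proportional plus integral), a constant rate mismatch between master and slave is tracked with vanishing steady-state error, which is what profile reconstruction without a shared global clock requires.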
Abstract:
This work provides a forward step in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we brought the reader through the fundamental notions of probability and stochastic processes, as well as stochastic integration and stochastic differential equations. In particular, within the study of H-sssi processes, we focused on fractional Brownian motion (fBm) and its discrete-time increment process, fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modeling and estimation of long memory, or long-range dependence (LRD). Time series manifesting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We studied LRD in depth, giving many real-data examples, providing statistical analysis and introducing parametric methods of estimation. Then, we introduced the theory of fractional integrals and derivatives, which indeed turns out to be very appropriate for studying and modeling systems with long-memory properties. After introducing the basic concepts, we provided many examples and applications. For instance, we investigated the relaxation equation with distributed-order time-fractional derivatives, which describes models characterized by a strong memory component and can be used to model relaxation in complex systems deviating from the classical exponential Debye pattern. Then, we focused on the study of generalizations of the standard diffusion equation, passing through the preliminary study of the fractional forward drift equation. Such generalizations have been obtained by using fractional integrals and derivatives of distributed orders.
In order to find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduced and studied the generalized grey Brownian motion (ggBm), which is actually a parametric class of H-sssi processes whose marginal probability density function evolves in time according to a partial integro-differential equation of fractional type. The ggBm is, of course, non-Markovian. Throughout the work, we have remarked many times that, starting from a master equation for a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t). All these processes provide suitable stochastic models for the starting equation. In studying the ggBm, we focused on a subclass made up of processes with stationary increments. The ggBm has been defined canonically in the so-called grey noise space. However, we have been able to provide a characterization that is independent of the underlying probability space. We also pointed out that the generalized grey Brownian motion is a direct generalization of a Gaussian process; in particular, it generalizes both Brownian motion and fractional Brownian motion. Finally, we introduced and analyzed a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We started from the forward drift equation, which has been made non-local in time by the introduction of a suitably chosen memory kernel K(t). The resulting non-Markovian equation has been interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then considered the subordinated process Y(t)=X(l(t)), where X(t) is a Markovian diffusion. The corresponding time evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation involving the same memory kernel K(t).
We developed several applications and derived the exact solutions. Moreover, we considered different stochastic models for the given equations, providing path simulations.
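For the fGn discussed above, the autocovariance has a standard closed form whose slow decay for H > 1/2 is precisely the long-range dependence studied in the thesis; the snippet below simply evaluates that formula numerically:

```python
# Autocovariance of unit-variance fractional Gaussian noise,
#   gamma(k) = 0.5 * (|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H)),
# which decays like k^(2H-2): non-summable for H > 1/2 (long-range
# dependence), identically zero at positive lags for H = 1/2.

def fgn_autocov(k, H):
    """Exact fGn autocovariance at integer lag k for Hurst exponent H."""
    k = abs(k)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H)
                  + abs(k - 1) ** (2 * H))

# For H = 0.5 the increments are uncorrelated (ordinary Brownian noise);
# for H = 0.9 the correlations decay very slowly with lag.
g_short = [fgn_autocov(k, 0.5) for k in range(1, 6)]
g_long = [fgn_autocov(k, 0.9) for k in range(1, 6)]
```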
Abstract:
Computer-aided design of Monolithic Microwave Integrated Circuits (MMICs) depends critically on active device models that are accurate, computationally efficient, and easily extracted from measurements or device simulators. Empirical models of active electron devices, which are based on actual device measurements, do not provide a detailed description of the electron device physics; however, they are numerically efficient and quite accurate. These characteristics make them very suitable for MMIC design in the framework of commercially available CAD tools. In the empirical model formulation it is very important to separate linear memory effects (parasitic effects) from the nonlinear effects (intrinsic effects). Thus, an empirical active device model is generally described by an extrinsic linear part that accounts for the parasitic passive structures connecting the nonlinear intrinsic electron device to the external world. An important task circuit designers deal with is evaluating the ultimate potential of a device for specific applications. In fact, once the technology has been selected, the designer must choose the best device for the particular application and for the different blocks composing the overall MMIC. Thus, in order to reproduce accurately the behaviour of different-in-size devices, the model must have good scalability properties. Another important aspect of empirical modelling of electron devices is the mathematical (or equivalent-circuit) description of the nonlinearities inherently associated with the intrinsic device. Once the model has been defined, the proper measurements for the characterization of the device are performed in order to identify the model. Hence, the correct measurement of the device nonlinear characteristics (in the device characterization phase) and their reconstruction (in the identification or even the simulation phase) are two of the most important aspects of empirical modelling.
This thesis presents an original contribution to nonlinear electron device empirical modelling, treating the issues of model scalability and of the reconstruction of the device nonlinear characteristics. The scalability of an empirical model strictly depends on the scalability of the linear extrinsic parasitic network, which should possibly maintain the link between technological process parameters and the corresponding device electrical response. Since lumped parasitic networks, together with simple linear scaling rules, cannot provide accurate scalable models, the literature offers either complicated technology-dependent scaling rules or computationally inefficient distributed models. This thesis shows how the above-mentioned problems can be avoided through the use of commercially available electromagnetic (EM) simulators. They enable the actual device geometry and material stratification, as well as losses in the dielectrics and electrodes, to be taken into account for any given device structure and size, providing an accurate description of the parasitic effects which occur in the device passive structure. It is shown how the electron device behaviour can be described as an equivalent two-port intrinsic nonlinear block connected to a linear distributed four-port passive parasitic network, identified by means of the EM simulation of the device layout, allowing for better frequency extrapolation and scalability properties than conventional empirical models. Concerning the reconstruction of the nonlinear electron device characteristics, a data approximation algorithm has been developed for use in the framework of empirical table look-up nonlinear models. The approach is based on the strong analogy between time-domain signal reconstruction from a set of samples and the continuous approximation of device nonlinear characteristics on the basis of a finite grid of measurements. According to this criterion, nonlinear empirical device modelling can be carried out by using, in the sampled voltage domain, typical methods of time-domain sampling theory.
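The sampling-theory analogy can be illustrated by treating a characteristic measured on a uniform voltage grid like a sampled signal and reconstructing it off-grid with sinc interpolation; the stand-in nonlinearity below is purely illustrative, not the thesis’s approximation algorithm:

```python
# Shannon-style reconstruction in the voltage domain: a nonlinear
# characteristic sampled on a uniform voltage grid is interpolated
# between grid points with sinc kernels, exactly as a band-limited
# time-domain signal would be. Test characteristic is a made-up stand-in.

import math

def sinc(x):
    """Normalized sinc: sin(pi x) / (pi x), with sinc(0) = 1."""
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(v, v_grid, i_samples):
    """Evaluate the sinc-interpolated characteristic at voltage v."""
    dv = v_grid[1] - v_grid[0]          # uniform grid spacing
    return sum(i * sinc((v - vk) / dv)
               for vk, i in zip(v_grid, i_samples))

# Sample a smooth "device characteristic" on a 0.1 V grid and evaluate
# the reconstruction halfway between two grid points.
v_grid = [k * 0.1 for k in range(-50, 51)]
i_samples = [v * math.exp(-v * v) for v in v_grid]   # stand-in nonlinearity
i_mid = reconstruct(0.05, v_grid, i_samples)
i_true = 0.05 * math.exp(-0.0025)
```

Because the stand-in characteristic is smooth and its samples decay rapidly, the truncated sinc sum reproduces the off-grid value essentially exactly; for measured data, the grid spacing plays the role of the sampling period in the time-domain analogy.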
Abstract:
Introduction. Craniopharyngioma (CF) is a malformation of the hypothalamic-pituitary region and is the most common nonglial cerebral tumor in children, with a high overall survival rate. In some cases, severe endocrinologic and metabolic sequelae may occur during follow-up. 50% of patients (pts), in particular those with radical removal of suprasellar lesions, develop intractable hyperphagia and morbid obesity, with dyslipidemia and high cardiovascular risk. We studied the auxological and metabolic features of a series of 29 patients (18 males) treated at a mean age of 7.6 years, followed up in our Centre from 1973 to 2008 with a mean follow-up of 8.3 years. Patient features at onset. 62% of pts showed visual impairment and neurological disturbances (headache) as first symptoms of disease; 34% growth arrest; 24% signs of raised intracranial pressure; and 7% diabetes insipidus. Diagnosis. Diagnosis of CF was finally reached by CT or MRI scans, which showed an endo-suprasellar lesion in 23 cases and an endosellar tumour in 6 cases. Treatment and outcome. 25/29 pts underwent surgical removal of CF (19 by transcranial approach and 6 by endoscopic surgery); 4 pts underwent stereotactic surgery as first-line therapy. 3 pts underwent local irradiation with yttrium-90, and 5 pts post-surgery radiotherapy. 45% of pts needed more than one treatment procedure. Results. After CF treatment, all patients suffered from 3 or more pituitary hormone deficiencies and diabetes insipidus. They promptly underwent replacement therapy with corticosteroids, l-thyroxine and desmopressin. In 28/29 pts we found growth hormone (GH) deficiency. 20/28 pts started GH replacement therapy, and 15 pts reached a final height (FH) near target height (TH). 8 pts were not treated with GH, either because of good growth velocity even without GH or because of tumour residual; in 2 cases they reached an FH over TH, showing the already known phenomenon of growth without GH.
38% of patients showed BMI >2 SDS at last assessment; in particular, pts not treated with GH (BMI 2.5 SDS) were more obese than GH-treated pts (BMI 1.2 SDS). The lipid panel of 16 examined pts showed significant differences between GH-treated (9 pts) and untreated (7 pts), with a better profile in the GH-treated ones for Total Cholesterol/HDL-C and LDL-C/HDL-C. We examined the intima-media thickness of the common carotid arteries in 11 pts. 3/4 pts not treated with GH showed ultrasonographic abnormalities: calcifications in 2 cases and a plaque in 1 case. Of these, 1 pt was only 12.6 years old and already showed hypothalamic obesity with hyperphagia, a high HOMA index and dyslipidemia. In the GH-treated group (7 pts) we found calcifications in 1 case and a plaque in another. GH therapy was started in the young pt with carotid calcifications, with good improvement within 6 months of treatment. 5/29 pts showed hypothalamic obesity, related to hypothalamic damage (type of surgical treatment, endo-suprasellar primitive lesion, recurrences). 48% of patients recurred during follow-up (mean time from treatment: 3 years) and underwent, in some cases, up to 4 transcranial surgical treatments. GH does not seem to increase the recurrence rate, since 40% of GH-treated pts recurred vs 66.6% of pts not treated with GH. Discussion. Our data show the extreme difficulties that occur during the follow-up of treated craniopharyngioma patients. GH therapy should be offered to all patients after CF treatment, even those with good growth velocity, to avoid dyslipidemia and reduce cardiovascular risk. The optimal therapy is not completely established, and whether gross tumor removal or partial surgery is the best option must be decided on the basis of each patient's tumour features and hypothalamic involvement. In conclusion, the gold standard treatment of CF remains complete tumour removal, when feasible, or partial resection to preserve hypothalamic function in large endo-suprasellar neoplasms.
Abstract:
The topic of my Ph.D. thesis is the finite element modeling of coseismic deformation imaged by DInSAR and GPS data. I developed a method to calculate synthetic Green's functions with finite element models (FEMs) and then use linear inversion methods to determine the slip distribution on the fault plane. The method is applied to the 2009 L’Aquila earthquake (Italy) and to the 2008 Wenchuan earthquake (China). I focus on the influence of the rheological features of the Earth's crust, by implementing seismic tomographic data, and on the influence of topography, by implementing Digital Elevation Model (DEM) layers in the FEMs. Results for the L’Aquila earthquake highlight the non-negligible influence of the medium structure: homogeneous and heterogeneous models show discrepancies of up to 20% in the fault slip distribution values. Furthermore, in the heterogeneous models a new area of slip appears above the hypocenter. Regarding the 2008 Wenchuan earthquake, the very steep topographic relief of the Longmen Shan Range is implemented in my FE model. A large number of DEM layers covering East China is used to achieve complete coverage of the FE model. My objective was to explore the influence of topography on the retrieved coseismic slip distribution. The inversion results reveal significant differences between the flat and topographic models. Thus, the frequently adopted flat models are inappropriate for representing the Earth's surface topography, especially in the case of the 2008 Wenchuan earthquake.
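The linear inversion step, in which Green's functions map slip on fault patches to surface displacements and slip is recovered in a damped least-squares sense, can be sketched on a toy problem; the Green's function values, problem sizes and damping below are made up, not from the thesis:

```python
# Toy damped least-squares slip inversion: solve
#   (G^T G + a^2 I) m = G^T d
# for patch slips m given Green functions G and surface data d.
# Illustrative numbers only; real inversions are far larger.

def transpose(A):
    return [list(r) for r in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

def invert_slip(G, d, damping=0.01):
    """Damped normal equations for the slip vector."""
    Gt = transpose(G)
    GtG = matmul(Gt, G)
    for i in range(len(GtG)):
        GtG[i][i] += damping ** 2
    Gtd = [sum(g * x for g, x in zip(row, d)) for row in Gt]
    return solve(GtG, Gtd)

# Two fault patches, three observation points (made-up Green functions).
G = [[1.0, 0.2],
     [0.5, 0.8],
     [0.1, 1.0]]
true_slip = [2.0, 1.0]
d = [sum(g * s for g, s in zip(row, true_slip)) for row in G]
slip = invert_slip(G, d)
```

The point of the FEM approach described above is precisely that the columns of G come from numerical models that honor crustal heterogeneity and topography, rather than from analytic half-space solutions.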
Abstract:
In the present work we perform an econometric analysis of the Tribal art market. To this aim, we use a unique and original database that includes information on Tribal art auctions worldwide from 1998 to 2011. In the literature, art prices are modelled through the hedonic regression model, a classic fixed-effects model. The main drawback of the hedonic approach is the large number of parameters since, in general, art data include many categorical variables. In this work, we propose a multilevel model for the analysis of Tribal art prices that takes into account the influence of time on artwork prices. In fact, it is natural to assume that time exerts an influence over the price dynamics in various ways. Nevertheless, since the set of objects changes at every auction date, we do not have repeated measurements of the same items over time. Hence, the dataset does not constitute a proper panel; rather, it has a two-level structure in which items, the level-1 units, are grouped in time points, the level-2 units. The main theoretical contribution is the extension of classical multilevel models to cope with the case described above. In particular, we introduce a model with time-dependent random effects at the second level. We propose a novel specification of the model, derive the maximum likelihood estimators and implement them through the E-M algorithm. We test the finite-sample properties of the estimators and the validity of our own R code by means of a simulation study. Finally, we show that the new model considerably improves the fit of the Tribal art data with respect to both the hedonic regression model and the classic multilevel model.
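To fix ideas on the two-level structure and the E-M estimation mentioned above, here is a minimal E-M fit of a plain random-intercept model (items grouped by auction date), a much simpler cousin of the thesis’s time-dependent random-effects model; all data below are simulated and every name is illustrative:

```python
# EM for the random-intercept model y_ij = mu + u_j + e_ij, with
# u_j ~ N(0, tau2) the date (level-2) effect and e_ij ~ N(0, sig2).
# E-step: posterior mean/variance of each u_j. M-step: update mu, tau2, sig2.

import random

def em_random_intercept(groups, iters=200):
    N = sum(len(g) for g in groups)
    mu = sum(sum(g) for g in groups) / N
    tau2, sig2 = 1.0, 1.0
    for _ in range(iters):
        # E-step: posterior moments of each group effect u_j
        b, v = [], []
        for g in groups:
            n, ybar = len(g), sum(g) / len(g)
            post_var = 1.0 / (1.0 / tau2 + n / sig2)
            b.append(post_var * (n / sig2) * (ybar - mu))
            v.append(post_var)
        # M-step: maximize the expected complete-data log-likelihood
        mu = sum(sum(y - bj for y in g) for g, bj in zip(groups, b)) / N
        tau2 = sum(bj * bj + vj for bj, vj in zip(b, v)) / len(groups)
        sig2 = sum(sum((y - mu - bj) ** 2 for y in g) + len(g) * vj
                   for g, bj, vj in zip(groups, b, v)) / N
    return mu, tau2, sig2

# Simulate 40 auction dates (level-2 units) with 25 items each:
# true mu = 10, date-effect sd = 2, item-level sd = 1.
rng = random.Random(0)
groups = []
for _ in range(40):
    u = rng.gauss(0.0, 2.0)
    groups.append([10.0 + u + rng.gauss(0.0, 1.0) for _ in range(25)])

mu, tau2, sig2 = em_random_intercept(groups)
```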
Abstract:
BTES (borehole thermal energy storage) systems exchange thermal energy by conduction with the surrounding ground through borehole materials. The spatial variability of the geological properties and the space-time variability of hydrogeological conditions affect the real power rate of heat exchangers and, consequently, the amount of energy extracted from or injected into the ground. For this reason, it is not an easy task to identify the underground thermal properties to be used at the design stage. At the current state of technology, the Thermal Response Test (TRT) is the in situ test that characterizes ground thermal properties with the highest degree of accuracy, but it does not fully solve the problem of characterizing the thermal properties of a shallow geothermal reservoir, simply because it characterizes only the neighborhood of the heat exchanger at hand, and only for the test duration. Different analytical and numerical models exist for the characterization of shallow geothermal reservoirs, but they are still inadequate and not exhaustive: more sophisticated models must be considered, and a geostatistical approach is needed to tackle natural variability and estimation uncertainty. The approach adopted for reservoir characterization is the “inverse problem”, typical of oil & gas field analysis. Similarly, we create different realizations of the thermal properties by direct sequential simulation and we find the one that best fits the real production data (fluid temperature over time). The software used to develop the heat production simulation is FEFLOW 5.4 (Finite Element subsurface FLOW system). A geostatistical reservoir model has been set up based on thermal property data from the literature and on spatial variability hypotheses, and a real TRT has been tested. We then analyzed and used two other codes (SA-Geotherm and FV-Geotherm), which are two implementations of the same numerical model as FEFLOW (the Al-Khoury model).
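Stripped to its skeleton, the inverse-problem loop amounts to ranking candidate realizations by their misfit to the observed fluid temperatures; the single-parameter forward model below is a made-up stand-in for FEFLOW, and the uniform draws stand in for direct sequential simulation:

```python
# Skeleton of the inverse problem: generate candidate realizations of a
# thermal property, run a forward model, and keep the realization whose
# simulated fluid temperatures best fit the observed series.
# The forward model is a toy single-parameter stand-in, not FEFLOW.

import random

def forward_model(conductivity, times):
    """Toy response: temperature rise saturating faster for higher
    conductivity (illustrative shape only)."""
    return [30.0 * (1.0 - 1.0 / (1.0 + conductivity * t)) for t in times]

def rmse(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

times = [t * 0.5 for t in range(1, 25)]
observed = forward_model(0.8, times)          # synthetic "measured" data

rng = random.Random(1)
realizations = [rng.uniform(0.2, 2.0) for _ in range(50)]
best = min(realizations,
           key=lambda c: rmse(forward_model(c, times), observed))
```

In the real workflow each "realization" is a full geostatistical field rather than a scalar, and the forward run is a finite element simulation, but the selection criterion is the same misfit ranking.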
Abstract:
Dermoscopy, a non-invasive, practical and low-cost technique, has established itself in recent years as a valuable tool for the diagnosis and follow-up of pigmented and non-pigmented skin lesions. The present research focused on the dermoscopic study of acquired and congenital melanocytic nevi in palmo-plantar locations in pediatric age: to this end, we analyzed the images of acral melanocytic nevi in patients examined at the Pediatric Dermatology outpatient clinic of the Policlinico Sant’Orsola-Malpighi from 2004 to 2011, in order to define the main dermoscopic patterns observed and the changes seen during videodermoscopic follow-up. In our series of pediatric dermoscopic images we noted a relevant change (defined as any modification between the dermoscopic pattern observed at baseline and at subsequent follow-ups) in 88.6% of patients; in particular, we observed that in a high percentage of patients (80%) a genuine fading of the melanocytic nevus occurred, and in one patient total regression was documented after a period of 36 months. Interestingly, the fading of the melanocytic lesion occurred mostly at sites subject to chronic mechanical stress, such as the sole of the foot and the digits (of hands and feet), leading us to hypothesize a role of chronic trauma in the changes occurring in children's melanocytic lesions at these sites.
Abstract:
In this thesis, a strategy to model the behavior of fluids and their interaction with deformable bodies is proposed. The fluid domain is modeled using the lattice Boltzmann method, thus analyzing the fluid dynamics from a mesoscopic point of view. It has been proved that the solution provided by this method is equivalent to solving the Navier-Stokes equations for an incompressible flow with second-order accuracy. Slender elastic structures idealized through beam finite elements are used. Large displacements are accounted for by the corotational formulation. Structural dynamics is computed using the Time Discontinuous Galerkin method. Therefore, two different solution procedures are used, one for the fluid domain and one for the structural part. These two solvers need to communicate and to exchange several pieces of information, i.e., stresses, velocities and displacements. In order to guarantee a continuous, effective and mutual exchange of information, a coupling strategy consisting of three different algorithms has been developed and numerically tested. In particular, the effectiveness of the three algorithms is shown in terms of the interface energy artificially produced by the approximate fulfillment of compatibility and equilibrium conditions at the fluid-structure interface. The proposed coupled approach is used to solve different fluid-structure interaction problems, i.e., cantilever beams immersed in a viscous fluid, the impact of a ship's hull on the marine free surface, blood flow in deformable vessels, and even flapping wings simulating the take-off of a butterfly. The good results achieved in each application highlight the effectiveness of the proposed methodology and of the C++ software developed to successfully approach several two-dimensional fluid-structure interaction problems.
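The collision-and-streaming structure of the lattice Boltzmann method can be shown in a minimal D2Q9 BGK update on a periodic grid; the relaxation time, grid size and initial condition below are illustrative only, and no coupling to a structure is attempted:

```python
# Minimal D2Q9 lattice Boltzmann (BGK) step: collide each cell's
# populations toward a local equilibrium, then stream them to neighbors.
# Periodic boundaries; illustrative parameters.

W = [4/9] + [1/9] * 4 + [1/36] * 4                       # lattice weights
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]                 # lattice velocities

def equilibrium(rho, ux, uy):
    """Second-order BGK equilibrium distribution for one cell."""
    usq = ux * ux + uy * uy
    return [w * rho * (1 + 3 * (cx * ux + cy * uy)
                       + 4.5 * (cx * ux + cy * uy) ** 2 - 1.5 * usq)
            for w, (cx, cy) in zip(W, C)]

def step(f, nx, ny, tau=0.8):
    """One collision + streaming step with periodic boundaries."""
    post = [[None] * ny for _ in range(nx)]
    for x in range(nx):
        for y in range(ny):
            fi = f[x][y]
            rho = sum(fi)
            ux = sum(fk * c[0] for fk, c in zip(fi, C)) / rho
            uy = sum(fk * c[1] for fk, c in zip(fi, C)) / rho
            feq = equilibrium(rho, ux, uy)
            post[x][y] = [fk - (fk - fe) / tau for fk, fe in zip(fi, feq)]
    out = [[[0.0] * 9 for _ in range(ny)] for _ in range(nx)]
    for x in range(nx):
        for y in range(ny):
            for k, (cx, cy) in enumerate(C):
                out[(x + cx) % nx][(y + cy) % ny][k] = post[x][y][k]
    return out

# Start from a small density bump at rest and advance a few steps.
nx = ny = 8
f = [[equilibrium(1.0 + (0.1 if (x, y) == (4, 4) else 0.0), 0.0, 0.0)
      for y in range(ny)] for x in range(nx)]
mass0 = sum(sum(sum(f[x][y]) for y in range(ny)) for x in range(nx))
for _ in range(10):
    f = step(f, nx, ny)
mass = sum(sum(sum(f[x][y]) for y in range(ny)) for x in range(nx))
```

Mass is conserved by construction: the BGK collision preserves each cell's density and streaming merely permutes populations between cells.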
Abstract:
Introduction: canine leishmaniasis (CanL) is a vector-borne infectious disease caused by a protozoan, Leishmania infantum. CanL has become increasingly important in both veterinary and human medicine. Leishmaniasis is strongly associated with the development of chronic nephropathy. Study design: retrospective cohort study. Objective: to identify the prevailing clinicopathological alterations at admission and during patient follow-up, in order to single out those with the greatest prognostic value. Materials and methods: 167 dogs, for a total of 187 treated cases, with serological and/or cytological diagnosis of leishmaniasis and complete haematobiochemical data, serum electrophoresis, urinalysis and urinary biochemistry including proteinuria (UPC) and albuminuria (UAC), coagulation profile (ATIII, D-dimers, fibrinogen) and inflammation markers (CRP). The included patients were followed clinically and clinicopathologically for a period of two years. Results: the main clinicopathological alterations were anaemia (41%), hyperproteinaemia (42%), hyperglobulinaemia (75%), hypoalbuminaemia (66%), increased CRP (57%), increased UAC (78%), increased UPC (70%), inadequate urine specific gravity (54%) and reduced ATIII (52%). 37% of patients were not proteinuric, and of these 27% already had pathological albuminuria. 38% of patients had nephrotic-range proteinuria (UPC > 2.5) and 22% were azotaemic. The clinicopathological parameters showed a tendency to return to normal after day 90 of follow-up. On multivariate analysis, serum creatinine proved to be the parameter most strongly correlated with patient outcome.
Conclusion: the results, analysed as a function of patient outcome, showed that the subjects who died during follow-up already had higher and worsening creatinine, UPC and UAC values at admission. Moreover, UAC can be considered an early marker of nephropathy, and the presence of azotaemia at admission has a negative prognostic value in these patients.
Abstract:
The development of High-Integrity Real-Time Systems has a high footprint in terms of human, material and schedule costs. Factoring functional, reusable logic in the application favors incremental development and contains costs. Yet, achieving incrementality in the timing behavior is a much harder problem. Complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behavior highly dependent on execution history, which wrecks time composability, and incrementality with it. Our goal here is to restore time composability to the execution stack, working bottom-up across it. We first characterize time composability without making assumptions on the system architecture or on the software deployed to it. Later, we focus on the role played by the real-time operating system in our pursuit. Initially we consider single-core processors and, becoming less permissive on the admissible hardware features, we devise solutions that restore a convincing degree of time composability. To show what can be done in practice, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work, we added support for limited preemption to ORK+, an absolute premiere in the landscape of real-world kernels. Our implementation allows resource sharing to co-exist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, we shy away from the over-provisioning of those systems and consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is key to high schedulable utilization. To that end we single out RUN, a promising baseline, and extend it to SPRINT, which supports sporadic task sets and hence better matches real-world industrial needs.
To corroborate our results, we present findings from real-world case studies from the avionics industry.
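The partitioned-multicore setting mentioned above can be illustrated in miniature by a first-fit assignment of task utilizations to cores under the EDF utilization bound of 1 per core; the task set and the heuristic are illustrative only, and are not RUN or SPRINT:

```python
# First-fit partitioning of tasks onto cores: a task is accepted on a
# core only if the core's total utilization stays within the EDF bound
# of 1. Task utilizations below are made up.

def first_fit_partition(utils, n_cores, bound=1.0):
    """Assign each utilization to the first core that can still take it.
    Returns the per-core assignment, or None if a task does not fit."""
    cores = [[] for _ in range(n_cores)]
    load = [0.0] * n_cores
    for u in utils:
        for i in range(n_cores):
            if load[i] + u <= bound + 1e-12:
                cores[i].append(u)
                load[i] += u
                break
        else:
            return None            # unschedulable under this heuristic
    return cores

tasks = [0.6, 0.5, 0.4, 0.4, 0.3, 0.3, 0.2]
parts = first_fit_partition(tasks, n_cores=3)
```

Partitioned approaches like this are simple but can leave capacity stranded across cores, which is exactly the over-provisioning that global algorithms in the RUN family aim to avoid.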