11 results for Unified Model Reference

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

We start in Chapter 2 by investigating linear matrix-valued SDEs and the Itô-stochastic Magnus expansion. The Itô-stochastic Magnus expansion provides an efficient numerical scheme for solving matrix-valued SDEs. We show convergence of the expansion up to a stopping time τ and provide an asymptotic estimate of the cumulative distribution function of τ. Moreover, we show how to apply it to solve SPDEs with one and two spatial dimensions, with high accuracy, by combining it with the method of lines. We will see that the Magnus expansion allows us to use GPU techniques, leading to major performance improvements compared to a standard Euler-Maruyama scheme. In Chapter 3, we study a short-rate model in a Cox-Ingersoll-Ross (CIR) framework for negative interest rates. We define the short rate as the difference of two independent CIR processes and add a deterministic shift to guarantee a perfect fit to the market term structure. We show how to use the Gram-Charlier expansion to efficiently calibrate the model to the market swaption surface and to price Bermudan swaptions with good accuracy. We then take two different perspectives on rating transition modelling. In Section 4.4, we study inhomogeneous continuous-time Markov chains (ICTMC) as a candidate for a rating model with deterministic rating transitions. We extend this model by taking a Lie group perspective in Section 4.5 to allow for stochastic rating transitions. In both cases, we compare the most popular choices of change-of-measure technique and show how to efficiently calibrate both models to the available historical rating data and market default probabilities. Finally, we apply the techniques developed in this thesis to minimize the collateral-inclusive Credit/Debit Valuation Adjustment under the constraint of small collateral postings, by using a collateral account dependent on rating triggers.
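A minimal sketch of the kind of scheme the abstract contrasts, assuming the linear matrix SDE dX_t = A X_t dt + B X_t dW_t: a first-order Itô-stochastic Magnus step, X_{t+dt} = exp((A − B²/2) dt + B dW) X_t, compared with an explicit Euler-Maruyama step over the same Brownian path. All matrices, sizes and step counts are illustrative, this is not the thesis' implementation, and higher-order Magnus terms (iterated commutator integrals) are omitted.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d, T, n = 4, 1.0, 1_000                       # dimension, horizon, steps
dt = T / n
A = 0.1 * rng.standard_normal((d, d))         # drift coefficient matrix
B = 0.1 * rng.standard_normal((d, d))         # diffusion coefficient matrix
dW = np.sqrt(dt) * rng.standard_normal(n)     # one shared Brownian path

def magnus_order1(A, B, X0, dt, dW):
    """First-order Magnus: exponentiate the truncated Magnus log each step."""
    X = X0.copy()
    drift = (A - 0.5 * B @ B) * dt            # Ito correction -B^2/2 in the log
    for w in dW:
        X = expm(drift + B * w) @ X
    return X

def euler_maruyama(A, B, X0, dt, dW):
    """Standard explicit Euler-Maruyama reference scheme."""
    X = X0.copy()
    for w in dW:
        X = X + (A @ X) * dt + (B @ X) * w
    return X

X0 = np.eye(d)
diff = np.linalg.norm(magnus_order1(A, B, X0, dt, dW)
                      - euler_maruyama(A, B, X0, dt, dW))
print(f"scheme discrepancy over one path: {diff:.3e}")
```

Because each Magnus step is a matrix exponential of a cheaply assembled log, many paths can be batched and exponentiated in parallel, which is the kind of structure that makes GPU acceleration attractive here.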

Relevance:

80.00%

Publisher:

Abstract:

Seyfert galaxies are the closest active galactic nuclei. As such, we can use them to test the physical properties of the entire class of objects. To investigate their general properties, I took advantage of different methods of data analysis. In particular I used three different samples of objects that, despite frequent overlaps, were chosen to best tackle different topics: the heterogeneous BeppoSAX sample was optimized to test the average hard X-ray (E above 10 keV) properties of nearby Seyfert galaxies; the X-CfA sample was optimized to compare the properties of low-luminosity sources to those of higher luminosity and, thus, was also used to test the emission mechanism models; finally, the XMM–Newton sample was extracted from the X-CfA sample so as to ensure a truly unbiased and well defined sample of objects with which to define the average properties of Seyfert galaxies. Taking advantage of the broad-band coverage of the BeppoSAX MECS and PDS instruments (between ~2-100 keV), I infer the average X-ray spectral properties of nearby Seyfert galaxies, in particular the photon index (~1.8), the high-energy cut-off (~290 keV), and the relative amount of cold reflection (~1.0). Moreover, the unified scheme for active galactic nuclei was positively tested. The distributions of the isotropic indicators used here (photon index, relative amount of reflection, high-energy cut-off and narrow FeK energy centroid) are similar in type I and type II objects, while the absorbing column and the iron line equivalent width significantly differ between the two classes of sources, with type II objects displaying larger absorbing columns. Taking advantage of the XMM–Newton and X-CfA samples, I also deduced from measurements that 30 to 50% of type II Seyfert galaxies are Compton thick. Confirming previous results, the narrow FeK line in Seyfert 2 galaxies is consistent with being produced in the same matter responsible for the observed obscuration. These results support the basic picture of the unified model. Moreover, the presence of an X-ray Baldwin effect in type I sources has been measured using, for the first time, the 20-100 keV luminosity (EW proportional to L(20-100)^(−0.22±0.05)). This finding suggests that the torus covering factor may be a function of source luminosity, thereby suggesting a refinement of the baseline version of the unified model itself. Using the BeppoSAX sample, a possible correlation between the photon index and the amount of cold reflection has also been recorded in both type I and type II sources. At first glance this confirms thermal Comptonization as the most likely origin of the high-energy emission of active galactic nuclei. This relation, in fact, naturally emerges by supposing that the accretion disk penetrates the central corona at different depths depending on the accretion rate (Merloni et al. 2006): higher-accreting systems host disks down to the last stable orbit, while lower-accreting systems host truncated disks. On the contrary, the study of the well defined X-CfA sample of Seyfert galaxies has proved that the intrinsic X-ray luminosity of nearby Seyfert galaxies can span values between 10^38 and 10^43 erg s^−1, i.e. covering a huge range of accretion rates. The less efficient systems have been supposed to host ADAFs without an accretion disk.
However, the study of the X-CfA sample has also proved the existence of correlations between optical emission lines and X-ray luminosity across the entire range of L_X covered by the sample. These relations are similar to the ones obtained when only high-luminosity objects are considered. Thus the emission mechanism must be similar in luminous and weak systems. A possible scenario to reconcile these somewhat opposite indications is to assume that the ADAF and the two-phase mechanism co-exist with different relative importance moving from low- to high-accretion systems (as suggested by the Gamma vs. R relation). The present data require that no abrupt transition between the two regimes is present. As mentioned above, the possible presence of an accretion disk has been tested using samples of nearby Seyfert galaxies. Here, to investigate in depth the flow patterns close to super-massive black holes, three case-study objects for which sufficient count statistics are available have been analysed using deep X-ray observations taken with XMM–Newton. The obtained results have shown that the accretion flow can significantly differ between objects when it is analyzed with the appropriate detail. For instance, the accretion disk is well established down to the last stable orbit in a Kerr system for IRAS 13197-1627, where strong light-bending effects have been measured. The accretion disk seems to form, spiraling, within the inner ~10-30 gravitational radii in NGC 3783, where time-dependent and recurrent modulations have been measured both in the continuum emission and in the broad emission line component. Finally, the accretion disk seems to be only weakly detectable in Mrk 509, with its weak broad emission line component. Moreover, blueshifted resonant absorption lines have been detected in all three objects. This seems to demonstrate that, around super-massive black holes, there is matter which is not confined to the accretion disk and moves along the line of sight with velocities as large as v~0.01-0.4c (where c is the speed of light). Whether this matter forms winds or blobs is still a matter of debate, together with the assessment of the real statistical significance of the measured absorption lines. Nonetheless, if confirmed, these phenomena are of outstanding interest because they offer new potential probes for the dynamics of the innermost regions of accretion flows, to tackle the formation of ejecta/jets and to place constraints on the rate of kinetic energy injected by AGNs into the ISM and IGM. Future high-energy missions (such as the planned Simbol-X and IXO) will likely allow an exciting step forward in our understanding of the flow dynamics around black holes and the formation of the highest velocity outflows.
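For reference, the X-ray Baldwin effect quoted above is a power law that reads as a linear anti-correlation in log space; the intercept k below is an illustrative constant, not a value fitted in the thesis:

```latex
% X-ray Baldwin effect, rewritten in log space; k is illustrative.
\mathrm{EW} \propto L_{20\text{--}100}^{-0.22 \pm 0.05}
\quad\Longleftrightarrow\quad
\log \mathrm{EW} = k - (0.22 \pm 0.05)\,\log L_{20\text{--}100}
```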

Relevance:

80.00%

Publisher:

Abstract:

The research addresses the question of punishment from the perspective of national constitutional law, integrated with that of European human rights law. Part I defends the thesis that the transformation of the penal Constitution initiated under the influence of ECtHR case law represents, on the whole, an advancement in the process of constitutionalization of punitive power. This conclusion is supported by comparing classical constitutional philosophy on punishment with the different interpretative approaches to the penal Constitution developed during the twentieth century (the traditional, constitutionalist and ECHR approaches). Part II, by contrast, defends the thesis that, despite the positive effects of supranational harmonization, the constitutional status of punishment should nevertheless remain formally autonomous from ECHR law. Indeed, not only can no paradigm of inter-systemic relations developed so far justify their total integration, but such integration would also risk diminishing the normativity of the social dimension of the penal Constitution, which is already under-constitutionalized compared to the liberal one. The Conclusion therefore develops the fundamental elements of an alternative interpretative approach to the penal Constitution that responds better than the existing ones to the need both to guarantee the maximum constitutionalization of punishment and to facilitate supranational integration. On the basis of such an approach, constitutionally grounded, substantivist, rights-based and inclusive of all the constituent ideologies, the Constitution could be read as providing a unified regulatory model for all forms of exercise of punitive power (except the disciplinary one, distinguishable on institutional grounds), characterized by: a statutory reservation of variable intensity; strict scrutiny by the Court of the constitutional justifiability of punishment; the extension of the scope of application of the principles of culpability and rehabilitation; and a full development of the collective-guarantee dimensions of the classical constitutional principles on punishment (duties of criminal protection and the guarantee that punishment actually falls on the guilty party), as well as of those derivable from Article 3 of the Constitution (proportionality of punishment to the material conditions of the punished person).

Relevance:

40.00%

Publisher:

Abstract:

The aim of this thesis is to discuss and develop the Unified Patent Court project to account for the role it could play in implementing judicial specialisation in the Intellectual Property field. To provide an original contribution to the existing literature on the topic, this work addresses the issue of how the Unified Patent Court could relate to the other forms of judicial specialisation already operating in the European Union context. This study presents a systematic assessment of the not-yet-operational Unified Patent Court within the EU judicial system, which has recently shown a trend towards development outside the institutional framework of the Court of Justice of the European Union. The objective is to understand to what extent the planned implementation of the Unified Patent Court could succeed in responding to the need for specialisation and in complying with the EU legal and constitutional framework. Using the Unified Patent Court as a case study, it is argued that specialised courts in the field of Intellectual Property have a significant role to play in the European judicial system and offer an adequate response to the growing complexity of business operations and relations. The significance of this study lies in analysing whether the UPC can still be considered an appropriate solution to unify the European patent litigation system. The research also considers the project's significant deficiencies, which risk having a negative effect on European Union institutional procedures. In this perspective, this work aims to contribute to identifying the potential negative consequences of this reform. It also considers different alternatives for a European patent system that could effectively promote innovation in Europe.

Relevance:

30.00%

Publisher:

Abstract:

Background and aims: Sorafenib is the reference therapy for advanced Hepatocellular Carcinoma (HCC). No method exists to predict, in the very early period, the subsequent individual response. Starting from the clinical experience in humans that subcutaneous metastases may rapidly change consistency under sorafenib, and from the fact that elastosonography, a new ultrasound-based technique, allows the assessment of tissue stiffness, we investigated the role of elastosonography in the very early prediction of tumor response to sorafenib in an HCC animal model. Methods: HCC (Huh7 cells) subcutaneous xenografting in mice was utilized. Mice were randomized to vehicle or treatment with sorafenib when tumor size was 5-10 mm. Elastosonography (Mylab 70XVG, Esaote, Genova, Italy) of the whole tumor mass on a sagittal plane with a 10 MHz linear transducer was performed at different time points from treatment start (day 0, +2, +4, +7 and +14) until mice were sacrificed (day +14), with the operator blind to treatment. In order to overcome variability in absolute elasticity measurements when assessing changes over time, values were expressed in arbitrary units as the relative stiffness of the tumor tissue in comparison to the stiffness of a standard reference stand-off pad lying on the skin over the tumor. Results: Sorafenib-treated mice showed a smaller tumor size increase at day +14 in comparison to vehicle-treated mice (tumor volume increase +192.76% vs +747.56%, p=0.06). Among sorafenib-treated tumors, 6 mice showed a better response to treatment than the other 4 (increase in volume +177% vs +553%, p=0.011). At day +2, median tumor elasticity increased in the sorafenib-treated group (+6.69%, range −30.17% to +58.51%), while it decreased in the vehicle group (−3.19%, range −53.32% to +37.94%), leading to a significant difference in absolute values (p=0.034). From this time point onward, elasticity decreased in both groups, with similar speed over time, and was no longer statistically different. Among sorafenib-treated mice, all 6 best responders at day +14 showed an increase in elasticity at day +2 (ranging from +3.30% to +58.51%) in comparison to baseline, whereas 3 of the 4 poorer responders showed a decrease. Interestingly, these 3 tumors showed elasticity values higher than responder tumors at day 0. Conclusions: Elastosonography appears a promising non-invasive new technique for the early prediction of HCC tumor response to sorafenib. Indeed, we proved that responder tumors are characterized by an early increase in elasticity. The possibility of distinguishing a priori between responders and non-responders, based on the higher baseline elasticity of the latter, needs to be validated in ad hoc experiments, and confirmation of our results in humans is warranted.
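A minimal sketch, assuming the normalization described above: tumor stiffness is expressed relative to the reference stand-off pad, and the relative change at day +2 versus baseline serves as the early-response marker. Function names, example readings and the zero-change threshold are our illustration, not the study's analysis code.

```python
def relative_stiffness(tumor_stiffness: float, pad_stiffness: float) -> float:
    """Tumor stiffness in arbitrary units, normalized by the reference pad."""
    return tumor_stiffness / pad_stiffness

def early_change_pct(day0: float, day2: float) -> float:
    """Percent change in relative stiffness between day 0 and day +2."""
    return 100.0 * (day2 - day0) / day0

# Example: per the abstract, an early increase in elasticity flags a
# likely responder (illustrative readings for tumor and pad).
e0 = relative_stiffness(32.0, 21.0)   # day 0
e2 = relative_stiffness(36.0, 21.0)   # day +2
change = early_change_pct(e0, e2)
print(f"{change:+.2f}% -> {'responder' if change > 0 else 'non-responder'}")
```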

Relevance:

30.00%

Publisher:

Abstract:

A recent initiative of the European Space Agency (ESA) aims at the definition and adoption of a software reference architecture for use in the on-board software of future space missions. Our PhD project is placed in the context of that effort. At the outset of our work we gathered the industrial needs relevant to ESA and to the main European space stakeholders, and we consolidated a set of technical high-level requirements for their fulfillment. The conclusion we reached from that phase confirmed that the adoption of a software reference architecture was indeed the best solution for the fulfillment of the high-level requirements. The software reference architecture we set out to build rests on four constituents: (i) a component model, to design the software as a composition of individually verifiable and reusable software units; (ii) a computational model, to ensure that the architectural description of the software is statically analyzable; (iii) a programming model, to ensure that the implementation of the design entities conforms to the semantics, the assumptions and the constraints of the computational model; (iv) a conforming execution platform, to actively preserve at run time the properties asserted by static analysis. The nature, feasibility and fitness of constituents (ii), (iii) and (iv) were already proved by the author in an international project that preceded the commencement of the PhD work. The core of the PhD project was therefore centered on the design and prototype implementation of constituent (i), the component model. Our proposed component model is centered on: (i) rigorous separation of concerns, achieved with the support for design views and by careful allocation of concerns to dedicated software entities; (ii) the support for specification and model-based analysis of extra-functional properties; (iii) the inclusion of space-specific concerns.
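A hypothetical, minimal rendering of the idea behind constituent (i): a component declares its interfaces plus extra-functional properties (here period, WCET and deadline for cyclic operations), so that a separate analysis step can check them against the computational model. All names and the toy utilization check are our assumptions, not the thesis' actual component model.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ExtraFunctional:
    period_ms: int        # activation period of the cyclic operation
    wcet_ms: int          # worst-case execution time bound
    deadline_ms: int      # relative deadline

@dataclass
class Component:
    name: str
    provides: list[str] = field(default_factory=list)   # offered operations
    requires: list[str] = field(default_factory=list)   # needed operations
    properties: dict[str, ExtraFunctional] = field(default_factory=dict)

def total_utilization(components: list[Component]) -> float:
    """Total CPU utilization of all declared cyclic operations: a toy
    stand-in for the static analysis the computational model enables."""
    return sum(p.wcet_ms / p.period_ms
               for c in components for p in c.properties.values())

gnc = Component("GNC", provides=["compute_attitude"],
                properties={"compute_attitude": ExtraFunctional(100, 12, 100)})
tmtc = Component("TMTC", requires=["compute_attitude"],
                 properties={"poll_uplink": ExtraFunctional(250, 10, 250)})
print(f"utilization = {total_utilization([gnc, tmtc]):.2f}")  # prints 0.16
```

The point of the sketch is the separation of concerns: functional interfaces and extra-functional annotations live in distinct declarations, so an analysis tool can consume the latter without touching the former.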

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents a universal model of documents and deltas. This model formalizes what it means to find differences between documents and provides a single shared formalization that any algorithm can use to describe the differences found between any kind of comparable documents. The main scientific contribution of this thesis is a universal delta model that can be used to represent the changes found by an algorithm. The main parts of this model are the formal definitions of changes (the pieces of information that record that something has changed), operations (the definitions of the kinds of change that happened) and deltas (coherent summaries of what has changed between two documents). The fundamental mechanism that makes the universal delta model a very expressive tool is the use of encapsulation relations between changes. In the universal delta model, changes are not only simple records of what has changed; they can also be combined into more complex changes that reflect the detection of more meaningful modifications. In addition to the main entities (i.e., changes, operations and deltas), the model also describes and defines documents and the concept of equivalence between documents. As a corollary to the model, there is also an extensible catalog of possible operations that algorithms can detect, used to create a common library of operations, and a UML serialization of the model, useful as a reference when implementing APIs that deal with deltas. The universal delta model presented in this thesis acts as the formal groundwork upon which algorithms can be based and libraries can be implemented. It removes the need to recreate a new delta model and terminology whenever a new algorithm is devised. It also alleviates the problems that toolmakers face when adapting their software to new diff algorithms.
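A hypothetical sketch of the three core entities and of encapsulation (complex changes wrapping simpler ones); names and fields are our illustration, not the thesis' formal definitions or its UML serialization.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Operation:
    """The kind of change that happened (drawn from the operation catalog)."""
    name: str                      # e.g. "insert", "delete", "move"

@dataclass
class Change:
    """A record that something changed, possibly encapsulating sub-changes."""
    operation: Operation
    target: str                    # locator of the affected document piece
    encapsulates: list["Change"] = field(default_factory=list)

@dataclass
class Delta:
    """A coherent summary of what changed between two documents."""
    source: str
    result: str
    changes: list[Change] = field(default_factory=list)

# Two low-level edits detected by a diff algorithm...
ins = Change(Operation("insert"), "/body/p[3]")
rem = Change(Operation("delete"), "/body/p[1]")
# ...encapsulated into one higher-level, more meaningful change.
move = Change(Operation("move"), "/body/p[1]", encapsulates=[rem, ins])
delta = Delta("doc-v1.xml", "doc-v2.xml", changes=[move])
print(len(delta.changes), "top-level change(s)")
```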

Relevance:

30.00%

Publisher:

Abstract:

The research hypothesis of the thesis is that "open participation in the co-creation of services and environments makes life easier for vulnerable groups", assuming that participatory and emancipatory approaches are processes of possible actions and changes aimed at facilitating people's lives. The adoption of these approaches is put forward as the common denominator of socially innovative practices that, by supporting inclusive processes, allow a shift from a medical model to a civil and human rights approach to disability. The theoretical basis of this assumption finds support in many principles of Inclusive Education, and the main focus of the research hypothesis is on participation and emancipation as approaches aimed at facing emerging and existing problems related to inclusion. The framework of reference for the research is represented by the perspectives adopted by several international documents concerning policies and interventions to promote and support the leadership and participation of vulnerable groups. In the first part, an in-depth analysis of the main academic publications on the central themes of the thesis is carried out. After investigating the framework of reference, the analysis focuses on the main tools of participatory and emancipatory approaches, which are able to connect with the concepts of active citizenship and social innovation. In the second part, two case studies concerning participatory and emancipatory approaches in the areas of concern are presented and analyzed as examples of the improvement of inclusion through the involvement and participation of persons with disability. The research has been developed using a holistic and interdisciplinary approach, aimed at providing a knowledge base that fosters a shift from a situation of passivity and care towards a new scenario based on the person's commitment to the elaboration of his/her own project of life.

Relevance:

30.00%

Publisher:

Abstract:

Spatial prediction of hourly rainfall via radar calibration is addressed. The change of support problem (COSP), arising when the spatial supports of different data sources do not coincide, is faced in a non-Gaussian setting; in fact, hourly rainfall in the Emilia-Romagna region of Italy is characterized by an abundance of zero values and right-skewness of the distribution of positive amounts. Direct rain gauge measurements at sparsely distributed locations and hourly cumulated radar grids are provided by ARPA-SIMC Emilia-Romagna. We propose a three-stage Bayesian hierarchical model for radar calibration, exploiting rain gauges as the reference measure. Rain probability and amounts are modeled via linear relationships with radar on the log scale; spatially correlated Gaussian effects capture the residual information. We employ a probit link for rainfall probability and a Gamma distribution for positive rainfall amounts; the two steps are joined via a two-part semicontinuous model. Three model specifications differently addressing the COSP are presented; in particular, a stochastic weighting of all radar pixels, driven by a latent Gaussian process defined on the grid, is employed. Estimation is performed via MCMC procedures implemented in C, linked to the R software. The communication and evaluation of probabilistic, point and interval predictions are investigated. A non-randomized PIT histogram is proposed for correctly assessing the calibration and coverage of two-part semicontinuous models. Predictions obtained with the different model specifications are evaluated via graphical tools (Reliability Plot, Sharpness Histogram, PIT Histogram, Brier Score Plot and Quantile Decomposition Plot), proper scoring rules (Brier Score, Continuous Rank Probability Score) and consistent scoring functions (Root Mean Square Error and Mean Absolute Error, addressing the predictive mean and median, respectively). Calibration is reached, and the inclusion of neighbouring information slightly improves predictions. All specifications outperform a benchmark model with uncorrelated effects, confirming the relevance of spatial correlation for modeling rainfall probability and accumulation.
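In schematic form (the notation is ours, not the thesis'), the two-part semicontinuous specification couples a probit occurrence model and a Gamma intensity model, each linked linearly to the log-scale radar value R(s) with its own spatially correlated Gaussian effect:

```latex
% Occurrence: probit link to log-radar with spatial effect w_1(s).
\Pr\{Y(s) > 0\} = \Phi\big(\alpha_0 + \alpha_1 \log R(s) + w_1(s)\big)
% Positive amounts: Gamma with a log-linear mean and spatial effect w_2(s).
Y(s) \mid Y(s) > 0 \;\sim\; \mathrm{Gamma}\big(\nu,\; \nu/\mu(s)\big),
\qquad
\log \mu(s) = \beta_0 + \beta_1 \log R(s) + w_2(s)
```

The stochastic-weighting specification would additionally replace R(s) with a weighted combination of neighbouring radar pixels, driven by the latent Gaussian process on the grid.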

Relevance:

30.00%

Publisher:

Abstract:

The research project aims to study and develop control techniques for a generalized three-phase and multi-phase electric drive able to efficiently manage most of the drive types available for traction applications. The generalized approach is extended to both linear and non-linear machines in the magnetic saturation region, starting from an experimental flux characterization and applying the general inductance definition. The algorithm is able to manage fragmented drives powered from different batteries or energy sources and to ensure operability even in the case of faults in parts of the system. The algorithm was tested via model-in-the-loop in a software environment and then applied on experimental test benches in collaboration with an external company.
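Our reading (an assumption on our part) is that the "general inductance definition" refers to the incremental inductance obtained by differentiating the experimentally characterized flux map λ(i), as opposed to the apparent inductance λ/i:

```latex
% Apparent vs. incremental inductance from the experimental flux map
% \lambda(i); our illustration of the "general inductance definition".
L_{\mathrm{app}}(i) = \frac{\lambda(i)}{i},
\qquad
L_{\mathrm{inc}}(i) = \frac{\mathrm{d}\lambda(i)}{\mathrm{d}i}
```

In the saturation region L_inc falls below L_app, so a controller built on a constant-inductance model would mis-tune its gains at high current, which is why the flux characterization matters for the generalized algorithm.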

Relevance:

30.00%

Publisher:

Abstract:

Long air gaps containing a floating conductor are a common insulation type in power grids. During live-line work on transmission lines, a lineman entering the transmission-line air gap creates a live-line work combined air gap, which is a typical long air gap containing a floating conductor. This thesis investigates the discharge characteristics, the discharge mechanism and a discharge simulation model of long air gaps containing a floating conductor, in order to address the engineering issues in live-line work. The innovative achievements of the thesis are as follows: (1) The effects of the gap distance, the floating electrode structure, the switching impulse wavefront time, the altitude, and the deviation of the floating conductor from the axis on the breakdown voltage were determined. (2) The physical process of discharges in long air gaps containing a floating conductor was determined. The reason why the discharge characteristics of long air gaps containing a floating electrode with complex geometries and sharp protrusions are similar to those of long air gaps with a rod-shaped floating electrode was studied. The formation mechanism of the lowest-breakdown-voltage region of a long air gap containing a floating conductor is explained. (3) A discharge simulation model of long air gaps containing a floating conductor was established, which can describe the physical process and predict the breakdown voltage. The model achieves accurate prediction of the breakdown voltage of typical long air gaps containing a floating conductor and of live-line work combined air gaps in transmission lines. The findings of the study can provide a theoretical reference and technical support for improving the safety of live-line work.