917 results for two-factor models
Abstract:
We study automorphisms of irreducible holomorphic symplectic (IHS) manifolds deformation equivalent to O'Grady's sixfold. We classify non-symplectic and symplectic automorphisms using lattice-theoretic criteria related to the lattice structure of the second integral cohomology. Moreover, we introduce the concept of induced automorphisms. There are two birational models for O'Grady's sixfolds: the first, introduced by O'Grady, is the resolution of singularities of the Albanese fiber of a moduli space of sheaves on an abelian surface; the second is the quotient of a Hilbert cube by a symplectic involution. We find criteria to determine when an automorphism is induced with respect to these two models, i.e., when it comes from an automorphism of the abelian surface or of the Hilbert cube.
Abstract:
A large fraction of organ transplant recipients develop donor-specific antibodies (DSA), with accelerated graft loss and increased mortality. We tested the hypothesis that erythropoietin (EPO) reduces DSA formation by inhibiting T follicular helper (TFH) cells. We measured DSA levels, splenic TFH and TFR cells, germinal center (GC) B cells, and class-switched B cells in murine models of allogeneic sensitization, allogeneic transplantation, and parent-to-F1 graft-versus-host disease (GVHD). We quantified the same cell subsets and specific antibodies, upon EPO or vehicle treatment, in wild-type mice and in animals lacking the EPO receptor selectively on T or B cells, immunized with T-independent or T-dependent stimuli. In vitro, we tested the effect of EPO on TFH induction, and we isolated TFH and TFR cells for in vitro assays to clarify their roles. EPO reduced DSA levels, GC and class-switched B cells, and increased the TFR/TFH ratio in heart-transplanted mice and in two GVHD models. EPO also reduced TFH and GC B cells in SRBC-immunized mice, while it had no effect in TNP-AECM-FICOLL-immunized animals, indicating that EPO inhibits GC B cells by targeting TFH cells. EPO effects were absent in T cell-specific EPOR conditional knockout mice, confirming that EPO affects TFH cells in vivo through the EPOR. In vitro, EPO affected TFH induction through an EPO-EPOR-STAT5-dependent pathway. Suppression assays demonstrated that the reduction of IgG antibodies was dependent on TFH cells, supporting the central role of this subset in the EPO-mediated mechanism. In conclusion, EPO prevents DSA formation in mice through direct suppression of TFH cells. Since the development of DSA is associated with a high risk of graft rejection, our data provide a strong rationale for studies testing the hypothesis that EPO administration prevents DSA formation in organ transplant recipients. Our findings provide a foundation for testing EPO as a treatment for antibody-mediated disease processes.
Abstract:
The aim of this thesis project is to automatically localize HCC tumors in the human liver and subsequently predict whether a tumor will undergo microvascular infiltration (MVI), the initial stage of metastasis development. The input data were partially supplied by Sant'Orsola Hospital and partially downloaded from online medical databases. Two U-Net models were implemented for the automatic segmentation of the liver and of the HCC malignancies within it. The segmentation models were evaluated with the Intersection-over-Union and Dice Coefficient metrics. The outcomes obtained for automatic liver segmentation are quite good (IOU = 0.82; DC = 0.35); the outcomes obtained for automatic tumor segmentation (IOU = 0.35; DC = 0.46) are, instead, affected by some limitations: the algorithm is almost always able to detect the location of the tumor, but it tends to underestimate its dimensions. The purpose of this step is to obtain the CT images of the HCC tumors needed for feature extraction. The 14 Haralick features calculated from the 3D-GLCM, the 120 radiomic features, and the patients' clinical information are collected to build a dataset of 153 features. The goal is then to build a model able to discriminate, based on the given features, the tumors that will undergo MVI from those that will not. This task can be seen as a classification problem: each tumor needs to be classified either as "MVI positive" or "MVI negative". Feature selection techniques are applied to identify the most descriptive features for the problem at hand, and a set of classification models is then trained and compared. The best-performing models (around 80-84% ± 8-15%) turn out to be the XGBoost Classifier, the SGD Classifier, and the Logistic Regression models (without penalization and with Lasso, Ridge, or Elastic Net penalization).
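The selection-then-classification pipeline described above can be sketched as follows; this is a minimal illustration on synthetic data, and the dataset, the number of selected features, and the hyperparameters are assumptions, not the thesis' actual setup:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the 153-feature radiomic/clinical dataset.
X, y = make_classification(n_samples=120, n_features=153,
                           n_informative=10, random_state=0)

# Keep the k most discriminative features, then classify MVI+/MVI-.
model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=20),
                      LogisticRegression(penalty="l2", max_iter=1000))
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(scores.mean())
```

The other classifiers compared in the thesis (XGBoost, SGD) would slot into the same pipeline in place of the logistic regression step.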
Abstract:
In recent years great effort has been put into the development of new techniques for automatic object classification, driven by its consequences for many applications such as medical imaging and driverless cars. To this end, several mathematical models have been developed, from logistic regression to neural networks. A crucial aspect of these so-called classification algorithms is the use of algebraic tools to represent and approximate the input data. In this thesis, we examine two different models for image classification based on a particular tensor decomposition, the Tensor-Train (TT) decomposition. Tensor approaches preserve the multidimensional structure of the data and the neighboring relations among pixels. Furthermore, unlike other tensor decompositions, the Tensor-Train does not suffer from the curse of dimensionality, making it an extremely powerful strategy for high-dimensional data. It also allows data compression when combined with truncation strategies that reduce memory requirements without spoiling classification performance. The first model we propose is based on a direct TT decomposition of the database to find basis vectors used to classify a new object. The second is a tensor dictionary learning model, based on the TT decomposition, in which the terms of the decomposition are estimated using a proximal alternating linearized minimization algorithm with a spectral stepsize.
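A minimal NumPy sketch of the TT decomposition via successive truncated SVDs (the standard TT-SVD construction, not the thesis' code; the rank-truncation threshold is an illustrative choice):

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose `tensor` into Tensor-Train cores by successive SVDs,
    dropping singular values below eps * (largest singular value)."""
    n, d = tensor.shape, tensor.ndim
    cores, C, r = [], tensor, 1
    for k in range(d - 1):
        C = C.reshape(r * n[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rk = max(1, int((s > eps * s[0]).sum()))      # truncated TT rank
        cores.append(U[:, :rk].reshape(r, n[k], rk))
        C = s[:rk, None] * Vt[:rk]
        r = rk
    cores.append(C.reshape(r, n[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor."""
    res = cores[0]
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=1)
    return res.squeeze(axis=(0, -1))

A = np.random.default_rng(0).standard_normal((4, 5, 6))
cores = tt_svd(A)
```

Storing only the cores (each of shape r_{k-1} × n_k × r_k) is what gives the compression mentioned above when the truncated ranks are small.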
Abstract:
The first part of the thesis is devoted to transmission planning with high penetration of renewable energy sources. Both stationary and transportable battery energy storage (BES, BEST) systems are considered in the planning model, so as to obtain the optimal set of BES, BEST, and transmission lines that minimizes the total cost in a power network. First, a coordinated expansion planning model with a fixed transportation cost for BEST devices is presented; the model is then extended to a planning formulation with a distance-dependent transportation cost for the BEST units, and its tractability is demonstrated through a case study based on a 190-bus test system. The second part of the thesis is devoted to the analysis of planning and management of renewable energy communities (RECs). Initially, the planning of photovoltaic and BES systems in a REC with an incentive-based remuneration scheme under the Italian regulatory framework is analysed, and two planning models, following a single-stage and a multi-stage approach respectively, are proposed to provide the optimal set of BES and PV systems that achieves the minimum energy procurement cost in a given REC. The second part then studies the day-ahead scheduling of resources in renewable energy communities, considering two types of REC. The first, which we refer to as a "cooperative community", allows direct energy transactions between members of the REC; the second, which we refer to as "incentive-based", does not allow direct transactions between members but includes economic revenues for the community's shared energy, according to the Italian regulatory framework. Moreover, dispatchable renewable energy generation is considered by including producers equipped with biogas power plants in the community.
Abstract:
This research examines the production of the Italian bibliographic periodical press during the seventeenth and eighteenth centuries. On the one hand, it aims to reconstruct its historical trajectory through the collection, selection, and analysis of the principal sources; on the other, to investigate the different forms and physiognomies it assumed over time, as well as the ways in which the notitia librorum was delivered. This first line of inquiry is complemented by a second, carried out through the development of two descriptive models. The first is designed to record the main general characteristics and formal features of a periodical. The second is an attempt at an in-depth indexing and analysis of the contributions offered by two periodicals taken as reference models: La Galleria di Minerva, for the two-year period 1696-1697, and the Giornale della letteratura italiana (Mantua, 1793-1795). The intent is to reconstruct, partly through a process of keyword formulation, the main themes and interests emerging from these experiences, and thus to show the representative and identifying value of the bibliographic periodical within its learned context, as an information source in which the principal scientific and cultural concerns of the period were mirrored.
Abstract:
Marketers continuously attempt to identify important attributes and to innovate in order to understand how attribute performance can lead to customer satisfaction in the short term and in the long term. Understanding the impact of customer satisfaction may offer a competitive edge to companies. Researchers discuss the importance of performance attributes in producing satisfaction; however, there is no clear understanding of whether an attribute that leads to satisfaction at one time (e.g., in the short run) also causes it in the long run, without excluding the possibility that it could lead to dissatisfaction or to no satisfaction. The present research tries to understand anomalies related to asymmetric attribute performance and satisfaction over time with the help of Herzberg's (1967) Two-Factor Theory (TFT) and construal level theory (CLT). More precisely, this dissertation has two main purposes. First, it examines whether positive or negative hygiene attribute performance and motivator attribute factors exert different weights on overall customer satisfaction depending on the time elapsed since the service experience. Second, it tests whether positive or negative hygiene/motivator attribute performance affects revisit intention and word of mouth, considering the mediating role of satisfaction. The results reveal that for a near-past (NP) experience, the positive performance of hygiene concrete attributes has a stronger effect on overall satisfaction than the negative performance of hygiene concrete attributes. The results also confirm the mediating role of satisfaction in the relationship between attribute performance and revisit intention for the near-past condition but not for the distant past. Likewise, a significant mediating role of satisfaction was found in the relationship between attribute performance and word of mouth (WOM) for the near-past condition but not for the distant past.
Resumo:
Cancers of unknown primary site (CUPs) are a rare group of metastatic tumours, with a frequency of 3-5%, with an overall survival of 6-10 month. The identification of tumour primary site is usually reached by a combination of diagnostic investigations and immunohistochemical testing of the tumour tissue. In CUP patients, these investigations are inconclusive. Since international guidelines for treatment are based on primary site indication, CUP treatment requires a blind approach. As a consequence, CUPs are usually empiric treated with poorly effective. In this study, we applied a set of microRNAs using EvaGreen-based Droplet Digital PCR in a retrospective and prospective collection of formalin-fixed paraffin-embedded tissue samples. We assessed miRNA expression of 155 samples including primary tumours (N=94), metastases of known origin (N=10) and metastases of unknown origin (N=50). Then, we applied the shrunken centroids predictive algorithm to obtain the CUP’s site(s)-of-origin. The molecular test was successfully applied to all CUP samples and provided a site-of-origin identification for all samples, potentially within a one-week time frame from sample inclusion. In the second part of the study we derived two CUP cell lines, and corresponding patient-derived xenografts (PDXs). CUP cell lines and PDXs underwent histological, molecular, and genomic characterization confirming the features of the original tumour. Tissues-of-origin prediction was obtained from the tumour microRNA expression profile and confirmed by single cell RNA sequencing. Genomic testing analysis identified FGFR2 amplification in both models. Drug-screening assays were performed to test the activity of FGFR2-targeting drug and the combination treatment with the MEK inhibitor trametinib, which proved to be synergic and exceptionally active, both in vitro and in vivo. In conclusion, our study demonstrated that miRNA expression profiling could be employed as diagnostic test. 
Then we successfully derived two CUP models from patients, used for therapy tests, bringing personalized therapy closer to CUP patients.
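The shrunken-centroids step can be sketched with scikit-learn's NearestCentroid, whose shrink_threshold option implements nearest shrunken centroids. The expression matrix, labels, and threshold below are synthetic illustrative assumptions, not the study's data:

```python
from sklearn.datasets import make_blobs
from sklearn.neighbors import NearestCentroid

# Synthetic stand-in for miRNA expression profiles labelled by tissue of origin.
X, y = make_blobs(n_samples=150, centers=3, n_features=20, random_state=1)

# Nearest shrunken centroids: per-class centroids are shrunk toward the
# overall centroid, suppressing uninformative features before prediction.
clf = NearestCentroid(shrink_threshold=0.5)
clf.fit(X, y)
print(clf.score(X, y))
```

A new metastasis profile would be assigned to the tissue whose shrunken centroid it lies closest to.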
Abstract:
Established isotropic tomographic models show the features of subduction zones in terms of seismic velocity anomalies, but they are generally prone to artifacts due to the lack of anisotropy in the forward modelling. There is evidence for a significant influence of seismic anisotropy in the mid-upper mantle, especially in boundary layers such as subducting slabs. As a consequence, in isotropic models artifacts may be misinterpreted as compositional or thermal heterogeneities. In this thesis project, the application of a trans-dimensional Metropolis-Hastings method is investigated in the context of anisotropic seismic tomography. This choice is a response to the important limitations of traditional inversion methods, which rely on iterative optimization of an objective function. Building on a first implementation of the Bayesian sampling algorithm, the code is tested on some Cartesian two-dimensional models and then extended to polar coordinates and to dimensions typical of subduction zones, the main target proposed for this method. Synthetic experiments of increasing complexity are carried out to test the performance of the method and the precautions needed in different contexts, also taking into account the possibility of applying seismic ray tracing iteratively. The code developed is tested mainly on 2D inversions; future extensions will allow the anisotropic inversion of seismological data to provide more realistic imaging of real subduction zones, less subject to the generation of artifacts.
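The Metropolis-Hastings core of such a sampler can be illustrated in a few lines. This is a fixed-dimension random-walk sketch on a one-dimensional toy posterior; the trans-dimensional birth/death moves and the seismic forward model of the thesis are omitted, and the target and step size are illustrative assumptions:

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings: propose a Gaussian step and
    accept with probability min(1, posterior ratio)."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject in log space
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

# Toy target: standard normal "posterior" over a single model parameter.
chain = metropolis_hastings(lambda x: -0.5 * x**2, 0.0, 20000)
```

In the trans-dimensional setting, the same accept/reject logic is extended with proposals that add or remove model cells, so the parameterization itself is sampled.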
Abstract:
Emissions estimation, both during homologation and in standard driving, is one of the new challenges that the automotive industry has to face. New European and American regulations allow ever lower quantities of carbon monoxide emissions and require all vehicles to be able to monitor their own pollutant production. Since numerical models are too computationally expensive and approximate, new solutions based on Machine Learning are replacing standard techniques. In this project we considered a real V12 Internal Combustion Engine and propose a novel approach that pushes Random Forests to generate meaningful predictions even in extreme cases (extrapolation, very high-frequency peaks, noisy instrumentation, etc.). The present work also proposes a data preprocessing pipeline for strongly unbalanced datasets and a reinterpretation of the regression problem as a classification problem in a logarithmically quantized domain. Results were evaluated for two different models, representing a pure interpolation scenario (more standard) and an extrapolation scenario, to test the out-of-bounds robustness of the model. The metrics employed take into account different aspects that can affect the homologation procedure, so the final analysis focuses on combining all the specific performances to obtain overall conclusions.
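The regression-as-classification reinterpretation can be sketched as follows: quantize the target into logarithmically spaced bins, train a classifier on the bin labels, and map predictions back to representative values. The synthetic data, bin count, and the use of geometric bin centres are illustrative assumptions, not the project's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in: engine operating points -> CO emission rate
# spanning several orders of magnitude (heavy-tailed target).
X = rng.uniform(0, 1, size=(2000, 5))
co = np.exp(3 * X[:, 0] + rng.normal(0, 0.3, 2000))

# Quantize the target into 16 logarithmically spaced classes.
edges = np.logspace(np.log10(co.min()), np.log10(co.max()), 17)
y = np.clip(np.digitize(co, edges) - 1, 0, 15)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
pred_class = clf.predict(X)

# Map predicted classes back to representative values (geometric bin centres).
centres = np.sqrt(edges[:-1] * edges[1:])
pred_co = centres[pred_class]
```

Log-spaced bins give the rare high-emission peaks their own classes instead of letting a squared-error regressor average them away.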
Abstract:
The HR Del nova remnant was observed with the IFU-GMOS at Gemini North. The spatially resolved spectral data cube was used in the kinematic, morphological, and abundance analysis of the ejecta. The line maps show a very clumpy shell with two main symmetric structures. The first one is the outer part of the shell seen in H alpha, which forms two rings projected in the sky plane. These ring structures correspond to a closed hourglass shape, first proposed by Harman & O'Brien. The equatorial emission enhancement is caused by the superimposed hourglass structures in the line of sight. The second structure, seen only in the [O III] and [N II] maps, is located along the polar directions inside the hourglass structure. Abundance gradients between the polar caps and equatorial region were not found. However, the outer part of the shell seems to be less abundant in oxygen and nitrogen than the inner regions. Detailed 2.5-dimensional photoionization modeling of the three-dimensional shell was performed using the mass distribution inferred from the observations and the presence of mass clumps. The resulting model grids are used to constrain the physical properties of the shell as well as the central ionizing source. A sequence of three-dimensional clumpy models including a disk-shaped ionization source is able to reproduce the ionization gradients between polar and equatorial regions of the shell. Differences between shell axial ratios in different lines can also be explained by aspherical illumination. A total shell mass of 9 × 10⁻⁴ M⊙ is derived from these models. We estimate that 50%-70% of the shell mass is contained in neutral clumps with a density contrast of up to a factor of 30.
Abstract:
The concept of rainfall erosivity is extended to the estimation of catchment sediment yield and its variation over time. Five different formulations of rainfall erosivity indices, using annual, monthly and daily rainfall data, are proposed and tested on two catchments in the humid tropics of Australia. Rainfall erosivity indices, using simple power functions of annual and daily rainfall amounts, were found to be adequate in describing the interannual and seasonal variation of catchment sediment yield. The parameter values of these rainfall erosivity indices for catchment sediment yield are broadly similar to those for rainfall erosivity models in relation to the R-factor in the Universal Soil Loss Equation.
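A power-function erosivity index of the kind described above can be sketched directly; the coefficients and the synthetic daily rainfall series below are illustrative placeholders, not the paper's fitted values:

```python
import numpy as np

def daily_erosivity(rain_mm, a=0.04, b=1.5):
    """Rainfall erosivity as a simple power function of daily rainfall,
    E = a * P^b. Coefficients a, b are illustrative, not fitted values."""
    return a * np.asarray(rain_mm) ** b

rng = np.random.default_rng(2)
daily_rain = rng.gamma(shape=0.4, scale=12.0, size=365)  # synthetic wet-tropics year
annual_index = daily_erosivity(daily_rain).sum()
```

Summing the daily index over a year gives the interannual variation term, while monthly sums capture the seasonal variation of sediment yield.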
Abstract:
We study the implications for two-Higgs-doublet models of the recent announcement at the LHC giving a tantalizing hint for a Higgs boson of mass 125 GeV decaying into two photons. We require that the experimental result be within a factor of 2 of the theoretical standard model prediction, and analyze the type I and type II models as well as the lepton-specific and flipped models, subject to this requirement. It is assumed that there is no new physics other than two Higgs doublets. In all of the models, we display the allowed region of parameter space taking the recent LHC announcement at face value, and we analyze the W⁺W⁻, ZZ, bb̄, and τ⁺τ⁻ expectations in these allowed regions. Throughout the entire range of parameter space allowed by the γγ constraint, the numbers of events for Higgs decays into WW, ZZ, and bb̄ are not changed from the standard model by more than a factor of 2. In contrast, in the lepton-specific model, decays to τ⁺τ⁻ are very sensitive across the entire γγ-allowed region.
Abstract:
The idiomatic expression "In Rome be a Roman" can be applied to leadership training and development as well. Leaders who can act as role models inspire other future leaders in their behaviour, attitudes, and ways of thinking. Based on two examples of current leaders in the fields of Politics and Public Administration, I support the idea that exposure to role models during their training was decisive for their career paths and current activities as prominent figures in their professions. Issues such as how students should be prepared for community or national leadership, as well as for cross-cultural engagement, are raised here. The hypothesis of transculturalism and cross-cultural commitment as a factor of leadership is presented. Based on the current literature on leadership as well as the presented case studies, I expect to raise a debate focused on strategies for improving leaders' training in cross-cultural awareness.
Abstract:
This paper provides empirical evidence that continuous-time models with one volatility factor are, under some conditions, able to fit the main characteristics of financial data. It also reports the importance of the feedback factor in capturing the strong volatility clustering of the data, caused by a possible change in the pattern of volatility in the last part of the sample. We use the Efficient Method of Moments (EMM) of Gallant and Tauchen (1996) to estimate logarithmic models with one and two stochastic volatility factors (with and without feedback) and to select among them.
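A two-factor logarithmic stochastic volatility model of the kind estimated here can be simulated in a few lines. The parameters below are illustrative assumptions (a slow persistent factor plus a fast choppy one), not the paper's EMM estimates, and the feedback term is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 5000
# Persistence and vol-of-vol for the two log-volatility factors (illustrative).
phi1, phi2, s1, s2 = 0.99, 0.60, 0.10, 0.30
h1 = np.zeros(T)
h2 = np.zeros(T)
for t in range(1, T):
    h1[t] = phi1 * h1[t - 1] + s1 * rng.standard_normal()  # slow, persistent factor
    h2[t] = phi2 * h2[t - 1] + s2 * rng.standard_normal()  # fast, choppy factor

# Returns: Gaussian shocks scaled by the exponential of the summed log-volatility.
returns = np.exp((h1 + h2) / 2.0) * rng.standard_normal(T)
```

The simulated series exhibits the fat tails and volatility clustering that the single-factor and two-factor specifications are compared on.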