987 results for Standard models
Abstract:
This dissertation proposes statistical methods to formulate, estimate and apply complex transportation models. Two main problems are part of the analyses conducted and presented in this dissertation. The first method solves an econometric problem and is concerned with the joint estimation of models that contain both discrete and continuous decision variables. The use of ordered models along with a regression is proposed and their effectiveness is evaluated with respect to unordered models. Procedures to calculate and optimize the log-likelihood functions of both discrete-continuous approaches are derived, and the difficulties associated with the estimation of unordered models are explained. Numerical approximation methods based on the Genz algorithm are implemented in order to solve the multidimensional integral associated with the unordered modeling structure. The problems deriving from the lack of smoothness of the probit model around the maximum of the log-likelihood function, which makes the optimization and the calculation of standard deviations very difficult, are carefully analyzed. A methodology to perform out-of-sample validation in the context of a joint model is proposed. Comprehensive numerical experiments have been conducted on both simulated and real data. In particular, the discrete-continuous models are estimated and applied to vehicle ownership and use models on data extracted from the 2009 National Household Travel Survey. The second part of this work offers a comprehensive statistical analysis of free-flow speed distribution; the method is applied to data collected on a sample of roads in Italy. A linear mixed model that includes speed quantiles in its predictors is estimated. Results show that there is no road effect in the analysis of free-flow speeds, which is particularly important for model transferability. A very general framework to predict random effects with few observations and incomplete access to model covariates is formulated and applied to predict the distribution of free-flow speed quantiles. The speed distribution of most road sections is successfully predicted; jack-knife estimates are calculated and used to explain why some sections are poorly predicted. Finally, this work contributes to the literature in transportation modeling by proposing econometric model formulations for discrete-continuous variables, more efficient methods for the calculation of multivariate normal probabilities, and random effects models for free-flow speed estimation that take into account the survey design. All methods are rigorously validated on both real and simulated data.
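As an illustration of the computational kernel described above, the following is a minimal Python sketch (with invented, illustrative numbers, not values from the dissertation) of evaluating the kind of multivariate normal rectangle probability that arises in unordered probit models; SciPy's multivariate normal CDF is based on Genz-style numerical integration:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative correlation structure and thresholds (hypothetical values):
# in an unordered probit, choice probabilities reduce to rectangle
# probabilities of a multivariate normal over utility differences.
cov = np.array([[1.0, 0.5, 0.3],
                [0.5, 1.0, 0.4],
                [0.3, 0.4, 1.0]])
upper = np.array([0.2, -0.1, 0.5])   # utility-difference thresholds

# SciPy evaluates this integral numerically (Genz-type algorithms).
p = multivariate_normal(mean=np.zeros(3), cov=cov).cdf(upper)
print(f"P(U1 < 0.2, U2 < -0.1, U3 < 0.5) = {p:.4f}")
```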
Abstract:
Spinal cord injury (SCI) is a devastating condition resulting from trauma to the cord, in which a primary injury response leads to a secondary injury cascade, causing damage to both glial and neuronal cells. Following trauma, the central nervous system (CNS) fails to regenerate due to a plethora of both intrinsic and extrinsic factors. These events lead to loss of both motor and sensory function, and to lifelong disability and care for sufferers of SCI. Tremendous advancements have been made in our understanding of the mechanisms behind axonal regeneration and remyelination of the damaged cord, and these have provided many promising therapeutic targets. However, very few have made it to clinical application, potentially due to inadequate understanding of compound mechanisms of action and reliance on poor SCI models. This thesis describes the use of an established neural cell co-culture model of SCI as a medium-throughput screen for compounds with potential therapeutic properties. A number of compounds were screened, and one family of compounds, modified heparins, was taken forward for more intensive investigation. Modified heparins (mHeps) are made up of the core heparin disaccharide unit with variable sulphation groups on the iduronic acid and glucosamine residues: 2-O-sulphate (C2), 6-O-sulphate (C6) and N-sulphate (N). 2-O-sulphated (mHep6) and N-sulphated (mHep7) heparin isomers were shown to promote both neurite outgrowth and myelination in the SCI model. Both mHeps decreased oligodendrocyte precursor cell (OPC) proliferation and increased oligodendrocyte (OL) number adjacent to the lesion. However, the direct effects on the OL differ between the mHeps: mHep6 increased myelin internode length and mHep7 increased overall cell size. It was further elucidated that these isoforms interact with and mediate both Wnt and FGF signalling. In OPC monoculture experiments, FGF2-treated OPCs displayed increased proliferation, but this effect was abolished by co-treatment with the mHeps, suggesting that the mHeps interact with the ligand and inhibit FGF2 signalling. Additionally, both mHeps could be partially mediating their effects through the Wnt pathway: mHep effects on both myelination and neurite outgrowth were removed when co-treated with a Wnt signalling inhibitor, suggesting cell signalling mediation by ligand immobilisation and signalling activation as a mechanistic action of the mHeps. However, the initial methods employed in this thesis were not sufficient for a more detailed study of the effects the mHeps have on neurite outgrowth. This led to the design and development of a novel microfluidic device (MFD), which provides a platform for the study of axonal injury. The novel device has three chambers, with two chambers converging onto a central open-access chamber. This design allows axons from two points of origin to enter a chamber which can be subjected to injury, thus providing a platform in which targeted axonal injury and studies of a compound's regenerative capacity can be performed. In conclusion, this thesis contributes to and advances the study of SCI in two ways: 1) identification and investigation of a novel set of compounds with therapeutic potential, i.e. desulphated modified heparins. These compounds have multiple therapeutic properties and could not only revolutionise the understanding of the basic pathological mechanisms underlying SCI but also be a powerful therapeutic option. 2) Development of a novel microfluidic device to study axonal biology in greater detail, specifically targeted axonal injury and treatment, providing a more representative model of SCI than standard in vitro models. The MFD could therefore lead to advancements and the identification of factors and compounds relating to axonal regeneration.
Abstract:
The complex singlet extension of the Standard Model (CxSM) is the simplest extension that provides scenarios for Higgs pair production with different masses. The model has two interesting phases: the dark matter phase, with a Standard Model-like Higgs boson, a new scalar and a dark matter candidate; and the broken phase, with all three neutral scalars mixing. In the latter phase, Higgs decays into a pair of two different Higgs bosons are possible. In this study we analyse Higgs-to-Higgs decays in the framework of singlet extensions of the Standard Model (SM), with a focus on the CxSM. After demonstrating that scenarios with large rates for such chain decays are possible, we perform a comparison between the NMSSM and the CxSM. We find that, based on Higgs-to-Higgs decays, the only possibility to distinguish the two models at the LHC run 2 is through final states with two different scalars. This conclusion builds a strong case for searches for final states with two different scalars at the LHC run 2. Finally, we propose a set of benchmark points for the real and complex singlet extensions to be tested at the LHC run 2. They have been chosen such that the discovery prospects of the involved scalars are maximised and the dark matter constraints are fulfilled. Furthermore, for some of the points the theory is stable up to high energy scales. For the computation of the decay widths and branching ratios we developed the Fortran code sHDECAY, which is based on the implementation of the real and complex singlet extensions of the SM in HDECAY.
Abstract:
In our research we investigate the output accuracy of discrete event simulation models and agent-based simulation models when studying human-centric complex systems. In this paper we focus on human reactive behaviour, since it is possible in both modelling approaches to implement human reactive behaviour using standard methods. As a case study we have chosen the retail sector, in particular the operations of the fitting room in the womenswear department of a large UK department store. In our case study we looked at ways of determining the efficiency of implementing new management policies for the fitting room operation by modelling the reactive behaviour of staff and customers of the department. First, we carried out a validation experiment in which we compared the results from our models to the performance of the real system. This experiment also allowed us to establish differences in output accuracy between the two modelling methods. In a second step, a multi-scenario experiment was carried out to study the behaviour of the models when they are used for the purpose of operational improvement. Overall we have found that, for our case study example, both discrete event simulation and agent-based simulation have the same potential to support the investigation into the efficiency of implementing new management policies.
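For readers unfamiliar with the modelling style, below is a minimal, hypothetical discrete event simulation sketch of a fitting-room operation in Python using the simpy library; the arrival rate, service time and number of rooms are invented placeholders, not the calibrated values of the case study:

```python
import random
import simpy

RANDOM_SEED, SIM_TIME = 42, 480          # one 8-hour trading day, in minutes
N_ROOMS, ARRIVAL_MEAN, FIT_MEAN = 4, 3.0, 8.0   # hypothetical parameters

def customer(env, rooms, waits):
    arrive = env.now
    with rooms.request() as req:
        yield req                        # queue for a free fitting room
        waits.append(env.now - arrive)   # record waiting time (output KPI)
        yield env.timeout(random.expovariate(1.0 / FIT_MEAN))

def arrivals(env, rooms, waits):
    while True:
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
        env.process(customer(env, rooms, waits))

random.seed(RANDOM_SEED)
env = simpy.Environment()
rooms = simpy.Resource(env, capacity=N_ROOMS)   # management policy lever
waits = []
env.process(arrivals(env, rooms, waits))
env.run(until=SIM_TIME)
print(f"mean wait: {sum(waits)/len(waits):.1f} min over {len(waits)} customers")
```

Changing `capacity` (or adding staff processes) is how alternative management policies would be compared across scenarios in such a model.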
Abstract:
We estimate a dynamic model of mortgage default for a cohort of Colombian debtors between 1997 and 2004. We use the estimated model to study the effects on default of a class of policies that affected the evolution of mortgage balances in Colombia during the 1990s. We propose a framework for estimating dynamic behavioral models accounting for the presence of unobserved state variables that are correlated across individuals and across time periods. We extend the standard literature on the structural estimation of dynamic models by incorporating an unobserved common correlated shock that affects all individuals' static payoffs and the dynamic continuation payoffs associated with different decisions. Given a standard parametric specification of the dynamic problem, we show that the aggregate shocks are identified from the variation in the observed aggregate behavior. The shocks and their transition are separately identified, provided there is enough cross-sectional variation of the observed states.
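The flavour of such a dynamic discrete-choice problem can be sketched in a few lines. The Python toy below is not the paper's model: all parameters are hypothetical, the balance is held fixed for brevity, and the shock is treated as observed. It solves a Rust-style default/continue problem with logit taste shocks and a persistent aggregate shock shifting the payoff of continuing:

```python
import numpy as np

beta = 0.95                            # discount factor (assumed)
balance = np.linspace(0.1, 1.0, 20)    # discretised mortgage balance states
z_grid = np.array([-0.2, 0.0, 0.2])    # aggregate shock states (assumed)
P_z = np.array([[0.70, 0.20, 0.10],    # assumed shock transition matrix
                [0.15, 0.70, 0.15],
                [0.10, 0.20, 0.70]])
penalty = 0.5                          # lump-sum cost of default (assumed)

def flow_pay(b, z):
    # per-period utility of continuing to pay, shifted by the aggregate shock
    return 0.3 - 0.4 * b + z

V = np.zeros((len(balance), len(z_grid)))
for _ in range(500):                   # value function iteration
    EV = V @ P_z.T                     # E[V(b, z') | z]
    v_pay = flow_pay(balance[:, None], z_grid[None, :]) + beta * EV
    v_def = -penalty                   # default is absorbing
    V_new = np.logaddexp(v_pay, v_def) # inclusive value under logit shocks
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

p_default = np.exp(v_def - V)          # conditional choice probability of default
print(p_default.round(3))              # rows: balance states; columns: shock states
```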
Abstract:
Doctorate in Management
Abstract:
Obnoxious single facility location models aim to find the best location for an undesired facility. "Undesired" is usually expressed in relation to so-called demand points that represent locations hindered by the facility. Because obnoxious facility location models are, as a rule, multimodal, the standard techniques of convex analysis used for locating desirable facilities in the plane may be trapped in local optima instead of the desired global optimum. It is assumed that having more optima coincides with being harder to solve. In this thesis the multimodality of obnoxious single facility location models is investigated in order to know which models are challenging optimization problems and which are suitable for site selection. Selected for this are the obnoxious facility models that appear to be most important in the literature. These are the maximin model, which maximizes the minimum distance from a demand point to the obnoxious facility; the maxisum model, which maximizes the sum of distances from the demand points to the facility; and the minisum model, which minimizes the sum of damage of the facility to the demand points. All models are measured with the Euclidean distance metric, and some models also with the rectilinear metric. Furthermore, a suitable algorithm is selected for testing multimodality; of the algorithms tested in this thesis, Multistart is the most appropriate. A small numerical experiment shows that maximin models have on average the most optima, of which the model locating an obnoxious line segment has the most. Maxisum models have few optima and are thus not very hard to solve. Of the minisum models, the models with the most optima are those that take wind into account. In general, the generic models have fewer optima than the weighted versions. Models measured with the rectilinear norm have more solutions than the same models measured with the Euclidean norm. For the maximin models in the numerical example this can be explained because the shape of the norm coincides with a bound of the feasible area, so not all solutions are different optima. The difference found in the number of optima of the maxisum and minisum models cannot be explained by this phenomenon.
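A minimal sketch of the Multistart idea on the maximin model, with synthetic demand points (not the thesis's test instances), is shown below; counting the distinct local optima returned from many random starts is a simple way to exhibit the multimodality discussed above:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
demand = rng.random((12, 2))             # hypothetical demand points in the unit square

def neg_min_dist(x):
    # maximin model: maximise the minimum Euclidean distance, so we
    # minimise its negative (non-smooth, hence a derivative-free method)
    return -np.min(np.linalg.norm(demand - x, axis=1))

optima = []
for start in rng.random((50, 2)):        # Multistart: many random starting points
    res = minimize(neg_min_dist, start, method="Nelder-Mead",
                   bounds=[(0, 1), (0, 1)])
    if res.success and not any(np.linalg.norm(res.x - o) < 1e-3 for o in optima):
        optima.append(res.x)             # keep only distinct local optima
print(f"{len(optima)} distinct local optima found")
```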
Abstract:
This thesis is concerned with change point analysis for time series, i.e. with the detection of structural breaks in time-ordered, random data. This long-standing research field has regained popularity over the last few years and, like statistical analysis in general, is still undergoing a transformation towards high-dimensional problems. We focus on the fundamental »change in the mean« problem and provide extensions of the classical non-parametric Darling-Erdős-type cumulative sum (CUSUM) testing and estimation theory within high-dimensional Hilbert space settings. In the first part we contribute to (long run) principal component based testing methods for Hilbert space valued time series under a rather broad (abrupt, epidemic, gradual, multiple) change setting and under dependence. For the dependence structure we consider either traditional m-dependence assumptions or more recently developed m-approximability conditions which cover, e.g., MA, AR and ARCH models. We derive Gumbel and Brownian bridge type approximations of the distribution of the test statistic under the null hypothesis of no change, and consistency conditions under the alternative. A new formulation of the test statistic using projections on subspaces allows us to simplify the standard proof techniques and to weaken common assumptions on the covariance structure. Furthermore, we propose to adjust the principal components by an implicit estimation of a (possible) change direction. This approach adds flexibility to projection based methods, weakens typical technical conditions and provides better consistency properties under the alternative. In the second part we contribute to estimation methods for common changes in the means of panels of Hilbert space valued time series. We analyze weighted CUSUM estimates within a recently proposed »high-dimensional low sample size (HDLSS)« framework, where the sample size is fixed but the number of panels increases. We derive sharp conditions on »pointwise asymptotic accuracy« or »uniform asymptotic accuracy« of those estimates in terms of the weighting function. In particular, we prove that a covariance-based correction of Darling-Erdős-type CUSUM estimates is required to guarantee uniform asymptotic accuracy under moderate dependence conditions within panels, and that these conditions are fulfilled, e.g., by any MA(1) time series. As a counterexample we show that for AR(1) time series close to the non-stationary case the dependence is too strong and uniform asymptotic accuracy cannot be ensured. Finally, we conduct simulations to demonstrate that our results are practically applicable and that our methodological suggestions are advantageous.
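The scalar special case of the CUSUM machinery is easy to state concretely. The following Python sketch (simulated data, naive variance estimate, not the thesis's Hilbert space setting) computes the sup-type CUSUM statistic for a single abrupt change in the mean and compares it with a Brownian bridge quantile:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k0 = 400, 250
x = rng.normal(0.0, 1.0, n)
x[k0:] += 0.8                            # abrupt change in the mean at k0

S = np.cumsum(x)
k = np.arange(1, n + 1)
sigma = np.std(x, ddof=1)                # crude scale estimate (illustrative)
cusum = np.abs(S - k / n * S[-1]) / (sigma * np.sqrt(n))

T = cusum.max()                          # sup-type CUSUM statistic
khat = cusum.argmax() + 1                # change point estimate
# Under H0, T is approximately sup|B(t)| for a Brownian bridge B;
# 1.358 is the corresponding 95% (Kolmogorov distribution) quantile.
print(f"T = {T:.2f} (reject H0 at 5% if > 1.358), estimated change at k = {khat}")
```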
Abstract:
Silicon-based discrete high-power devices need to be designed with optimal performance up to several thousand volts and amperes to reach power ratings ranging from a few kW to beyond the 1 GW mark. To this purpose, a key element is the improvement of the junction termination (JT), since it drastically reduces the surface electric field peaks which may lead to early device failure. This thesis is mostly focused on the negative bevel termination, which has for several years constituted a standard processing step in bipolar production lines. A simple methodology to realize its counterpart, a planar JT with variation of the lateral doping concentration (VLD), is also described. On the JT a thin layer of a semi-insulating material is usually deposited, which acts as a passivation layer, reducing the interface defects and contributing to increased device reliability. A thorough understanding of how the passivation layer properties affect the breakdown voltage and the leakage current of a fast-recovery diode is fundamental to preserve the ideal termination effect and provide a stable blocking capability. More recently, amorphous carbon, also called diamond-like carbon (DLC), has been used as a robust surface passivation material. By using a commercial TCAD tool, a detailed physical explanation of the DLC electrostatic and transport properties has been provided. The proposed approach is able to predict the breakdown voltage and the leakage current of a negative-beveled power diode passivated with DLC, as confirmed by successful validation against the available experiments. In addition, the VLD JT proposed to overcome the limitations of the negative bevel architecture has been simulated, showing a breakdown voltage very close to the ideal one with a much smaller area consumption. Finally, the effect of a shallow junction depth on the formation of current filaments has been analyzed by performing reverse-recovery simulations.
Abstract:
The Slot and van Emde Boas invariance thesis states that a time (respectively, space) cost model is reasonable for a computational model C if there are mutual simulations between Turing machines and C such that the overhead is polynomial in time (respectively, linear in space). The rationale is that, under the invariance thesis, complexity classes such as LOGSPACE, P and PSPACE become robust, i.e. machine independent. In this dissertation, we want to find out whether it is possible to define a reasonable space cost model for the lambda-calculus, the paradigmatic model for functional programming languages. We start by considering an unusual evaluation mechanism for the lambda-calculus, based on Girard's Geometry of Interaction, that was conjectured to be the key ingredient to obtain a space reasonable cost model. By a fine complexity analysis of this scheme, based on new variants of non-idempotent intersection types, we disprove this conjecture. Then, we change the target of our analysis. We consider a variant of Krivine's abstract machine, a standard evaluation mechanism for the call-by-name lambda-calculus, optimized for space complexity and implemented without any pointers. A fine analysis of the execution of (a refined version of) the encoding of Turing machines into the lambda-calculus allows us to conclude that the space consumed by this machine is indeed a reasonable space cost model. In particular, for the first time we are able to measure sub-linear space complexities. Moreover, we transfer this result to the call-by-value case. Finally, we also provide an intersection type system that compositionally characterizes this new reasonable space measure. This is done through a minimal, yet non-trivial, modification of the original de Carvalho type system.
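To fix ideas, here is a minimal, illustrative Krivine machine for de Bruijn-indexed terms in Python; it captures only the standard call-by-name mechanism (closures, environment, argument stack), not the space-optimised, pointer-free variant analysed in the dissertation:

```python
# Terms in de Bruijn notation: ('var', n), ('lam', body), ('app', fun, arg).
# A machine state is (term, env, stack); env and stack hold closures (term, env).

def krivine(term, env=(), stack=()):
    while True:
        tag = term[0]
        if tag == 'app':                 # push the argument as an unevaluated closure
            _, fun, arg = term
            stack = ((arg, env),) + stack
            term = fun
        elif tag == 'lam':
            if not stack:                # weak head normal form reached
                return term, env
            clo, stack = stack[0], stack[1:]
            env = (clo,) + env           # bind the argument closure
            term = term[1]
        else:                            # 'var': look up and enter the closure
            term, env = env[term[1]]

identity = ('lam', ('var', 0))           # λx. x
program = ('app', identity, identity)    # (λx. x) (λy. y)
print(krivine(program))                  # -> (('lam', ('var', 0)), ())
```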
Abstract:
The present manuscript focuses on out-of-equilibrium physics in two-dimensional models. Its purpose is to present results obtained on out-of-equilibrium dynamics in its non-perturbative aspects. This can be understood in two different ways: the former is related to integrability, which is non-perturbative by nature; the latter is related to the emergence of phenomena in the out-of-equilibrium dynamics of non-integrable models that are not accessible by standard perturbative techniques. In the study of out-of-equilibrium dynamics, two different protocols are used throughout this work: the bipartitioning protocol, within the Generalised Hydrodynamics (GHD) framework, and the quantum quench protocol. With the GHD machinery we study the Staircase Model, highlighting how the hydrodynamic picture sheds new light on the physics of Integrable Quantum Field Theories; with quench protocols we analyse different setups where a non-perturbative description is needed and various dynamical phenomena emerge, such as the manifestation of a dynamical Gibbs effect, confinement and the emergence of Bloch oscillations preventing thermalisation.
Abstract:
The main contribution of this thesis is the proposal of novel strategies for the selection of parameters arising in variational models employed for the solution of inverse problems with data corrupted by Poisson noise. In light of the importance of using a significantly reduced dose of X-rays in Computed Tomography (CT), and of the resulting need for advanced reconstruction techniques due to the high level of noise in the data, we focus on parameter selection principles especially for low photon counts, i.e. low-dose Computed Tomography. For completeness, since such strategies can be adopted for various scenarios where the noise in the data typically follows a Poisson distribution, we show their performance on other applications such as photographic, astronomical and microscopy imaging. More specifically, in the first part of the thesis we focus on low-dose CT data corrupted only by Poisson noise, extending automatic selection strategies designed for Gaussian noise and improving the few existing ones for Poisson noise. The new approaches are shown to outperform the state-of-the-art competitors, especially in the low-count regime. Moreover, we propose to extend the best performing strategy to the hard task of multi-parameter selection, showing promising results. Finally, in the last part of the thesis, we introduce the problem of material decomposition for hyperspectral CT, whose data encode information on how the different materials in the target attenuate X-rays in different ways according to the specific energy. We conduct a preliminary comparative study to obtain accurate material decomposition starting from few noisy projection data.
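As a concrete toy version of such a selection principle (not the thesis's own strategies), the sketch below uses synthetic 1-D data, with Gaussian smoothing standing in for the variational reconstruction and the smoothing width standing in for the regularisation parameter. It applies a Poisson discrepancy rule: pick the parameter whose Kullback-Leibler data fidelity is closest to N/2, its approximate expected value under the true intensity:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
intensity = 20 + 80 * np.exp(-((t - 0.5) ** 2) / 0.01)   # true signal (assumed)
counts = rng.poisson(intensity).astype(float)             # Poisson-corrupted data

def kl_fidelity(y, x):
    # generalised Kullback-Leibler fidelity for Poisson data
    x = np.maximum(x, 1e-12)
    y_safe = np.maximum(y, 1e-12)
    return np.sum(x - y + y * np.log(y_safe / x))

target = counts.size / 2.0               # approximate E[KL] at the true intensity
best = min(
    (abs(kl_fidelity(counts, gaussian_filter(counts, s)) - target), s)
    for s in np.linspace(0.5, 20, 40)    # candidate smoothing parameters
)
print(f"selected smoothing parameter: sigma = {best[1]:.2f}")
```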
Abstract:
To compare time and risk to biochemical recurrence (BR) after radical prostatectomy in two chronologically different groups of patients graded using the standard Gleason system (SGS) and the modified Gleason system (MGS). Cohort 1 comprised biopsies of 197 patients graded according to the SGS in the period 1997-2004, and cohort 2 comprised 176 biopsies graded according to the modified system in the period 2005-2011. Time to BR was analyzed with the Kaplan-Meier product-limit method, and prediction of shorter time to recurrence with univariate and multivariate Cox proportional hazards models. Patients in cohort 2 reflected time-related changes: a striking increase in clinical stage T1c, systematic use of extended biopsies, and a lower percentage of total length of cancer in millimeters across all cores. The MGS used in cohort 2 yielded fewer biopsies with Gleason score ≤ 6 and more biopsies with the intermediate Gleason score 7. Kaplan-Meier curves for time to BR showed statistical significance for the MGS in cohort 2, but not for the SGS in cohort 1. Only the MGS predicted shorter time to BR on univariate analysis, and it was an independent predictor on multivariate analysis. The results support the view that the 2005 International Society of Urological Pathology modified system is a refinement of Gleason grading and is valuable for contemporary clinical practice.
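The two survival analyses mentioned above have standard implementations. A minimal sketch with synthetic data and hypothetical column names, using the Python lifelines library, looks as follows:

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Synthetic cohort (illustrative only): a binary grade covariate and
# exponential times to biochemical recurrence, with some censoring.
rng = np.random.default_rng(7)
n = 200
grade = rng.integers(0, 2, n)                        # 0: Gleason <= 6, 1: Gleason 7
months = rng.exponential(60 / (1 + 1.5 * grade))     # months to recurrence
event = (rng.random(n) < 0.7).astype(int)            # 1 = recurrence observed
df = pd.DataFrame({"months": months, "recurred": event, "gleason7": grade})

# Kaplan-Meier product-limit estimate of time to recurrence
kmf = KaplanMeierFitter()
kmf.fit(df["months"], event_observed=df["recurred"], label="all patients")

# Cox proportional hazards model with grade as predictor
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="recurred")
print(cph.summary[["coef", "exp(coef)", "p"]])
```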
Abstract:
Prosopis rubriflora and Prosopis ruscifolia are important species in the Chaquenian regions of Brazil. Because of the restriction and frequency of their physiognomy, they are excellent models for conservation genetics studies. The use of microsatellite markers (Simple Sequence Repeats, SSRs) has become increasingly important in recent years and has proven to be a powerful tool for both ecological and molecular studies. In this study, we present the development and characterization of 10 new markers for P. rubriflora and 13 new markers for P. ruscifolia. The genotyping was performed using 40 P. rubriflora samples and 48 P. ruscifolia samples from the Chaquenian remnants in Brazil. The polymorphism information content (PIC) of the P. rubriflora markers ranged from 0.073 to 0.791, and no null alleles or deviation from Hardy-Weinberg equilibrium (HW) were detected. The PIC values for the P. ruscifolia markers ranged from 0.289 to 0.883, but a departure from HW and null alleles were detected for certain loci; however, this departure may have resulted from anthropic activities, such as the presence of livestock, which is very common in the remnant areas. In this study, we describe novel SSR polymorphic markers that may be helpful in future genetic studies of P. rubriflora and P. ruscifolia.
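For reference, the polymorphism information content reported above is computed from allele frequencies with the classical formula of Botstein et al. (1980); a minimal Python sketch with illustrative frequencies is:

```python
import numpy as np

def pic(freqs):
    """Polymorphism information content (Botstein et al. 1980):
    PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2."""
    p = np.asarray(freqs, dtype=float)
    het = 1.0 - np.sum(p ** 2)           # expected heterozygosity term
    # subtract the probability that a random cross is uninformative
    cross = sum(2 * p[i] ** 2 * p[j] ** 2
                for i in range(len(p)) for j in range(i + 1, len(p)))
    return het - cross

print(round(pic([0.5, 0.3, 0.2]), 3))    # hypothetical 3-allele locus
```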