972 results for Gaussian and t-copulas
Abstract:
We analyse the dependence of the luminosity function (LF) of galaxies in groups on group dynamical state. We use the Gaussianity of the velocity distribution of galaxy members as a measure of the dynamical equilibrium of groups identified in the Sloan Digital Sky Survey Data Release 7 by Zandivarez & Martinez. We apply the Anderson-Darling goodness-of-fit test to distinguish between groups according to whether they have Gaussian or non-Gaussian velocity distributions, i.e. whether they are relaxed or not. For these two subsamples, we compute the ^{0.1}r-band LF as a function of group virial mass and group total luminosity. For massive groups, we find statistically significant differences between the LFs of the two subsamples: the LFs of groups that have Gaussian velocity distributions have a brighter characteristic absolute magnitude (by ~0.3 mag) and a steeper faint-end slope (by ~0.25). We detect a similar effect when comparing the LFs of bright [M_{0.1r}(group) − 5 log(h) < −23.5] Gaussian and non-Gaussian groups. Our results indicate that, for massive/luminous groups, the dynamical state of the system is directly related to the luminosity of its galaxy members.
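The Anderson-Darling normality test mentioned here is available in SciPy; a minimal sketch, assuming mock line-of-sight velocities and an illustrative 5% significance cut (the paper's exact threshold and sample are not reproduced):

```python
# Minimal sketch: classify one group as Gaussian or non-Gaussian from its members'
# line-of-sight velocities using the Anderson-Darling normality test.
# The 5% level and the mock velocities are illustrative choices only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
velocities = rng.normal(loc=0.0, scale=500.0, size=40)   # mock member velocities (km/s)

result = stats.anderson(velocities, dist='norm')
crit_5 = result.critical_values[2]                        # levels are 15, 10, 5, 2.5, 1 %
is_gaussian = result.statistic < crit_5
print(f"A^2 = {result.statistic:.3f}, Gaussian at the 5% level: {is_gaussian}")
```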
Abstract:
Computer simulations have become an important tool in physics. Solid-state systems in particular have been investigated extensively with the help of modern computational methods. This thesis focuses on the simulation of hydrogen-bonded systems, using quantum chemical methods combined with molecular dynamics (MD) simulations. MD simulations are carried out to investigate the energetics and structure of a system under conditions that include physical parameters such as temperature and pressure. Ab initio quantum chemical methods have proven capable of predicting spectroscopic quantities. The combination of these two features still represents a methodological challenge. Furthermore, conventional MD simulations treat the nuclei as classical particles. Not only motional effects but also the quantum nature of the nuclei are expected to influence the properties of a molecular system. This work aims at a more realistic description of properties that are accessible via NMR experiments. With the help of the path integral formalism, the quantum nature of the nuclei has been incorporated and its influence on the NMR parameters explored. The effect on both the NMR chemical shift and the nuclear quadrupole coupling constants (NQCC) is presented for intra- and intermolecular hydrogen bonds. The second part of this thesis presents the computation of electric field gradients within the Gaussian and Augmented Plane Waves (GAPW) framework, which allows for all-electron calculations in periodic systems. This recent development improves the accuracy of many calculations compared to the pseudopotential approximation, which treats the core electrons as part of an effective potential. In combination with MD simulations of water, the NMR longitudinal relaxation times for 17O and 2H have been obtained. The results show considerable agreement with experiment. Finally, an implementation of the calculation of the stress tensor in the quantum chemical program suite CP2K is presented. This enables MD simulations under constant-pressure conditions, which is demonstrated with a series of liquid water simulations that shed light on the influence of the exchange-correlation functional on the density of the simulated liquid.
Abstract:
Objective: To quantify time spent caring, burden and health status in carers of stroke patients after discharge from rehabilitation, and to identify the potentially modifiable sociodemographic and clinical characteristics associated with these outcomes. Methods: Patients and carers were prospectively interviewed 6 (n = 71) and 12 (n = 57) months after discharge. Relationships of carer and patient variables with burden, health status and time were analysed by Gaussian and Poisson regression. Results: Carers showed considerable burden at 6 and 12 months. Carers spent 4.6 and 3.6 hours per day assisting patients with daily activities at 6 and 12 months, respectively. Improved patient motor and cognitive function was associated with reductions of up to 20 minutes per day in time spent on daily activities. Better patient mental health and cognitive function were associated with better carer mental health. Conclusions: Potentially modifiable factors such as these may be targeted by caregiver training, support and education programmes and by outpatient therapy for patients.
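As a rough illustration of the two regression families named in the Methods, a minimal sketch with mock data (the variable names, coefficients and sample are hypothetical, not the study's dataset):

```python
# Minimal sketch: Gaussian regression for a continuous outcome (e.g. a carer
# mental-health score) and Poisson regression for daily caring time treated as
# a count of minutes per day. All values below are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60
motor = rng.normal(60, 15, n)           # mock patient motor-function score
cognition = rng.normal(25, 5, n)        # mock patient cognitive-function score
X = sm.add_constant(np.column_stack([motor, cognition]))

carer_mh = 70 + 0.2 * motor + 0.5 * cognition + rng.normal(0, 5, n)
minutes = rng.poisson(np.exp(6.0 - 0.01 * motor - 0.02 * cognition))

gaussian_fit = sm.GLM(carer_mh, X, family=sm.families.Gaussian()).fit()
poisson_fit = sm.GLM(minutes, X, family=sm.families.Poisson()).fit()
print(gaussian_fit.params, poisson_fit.params)
```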
Abstract:
The XPS peaks of Fe 3p for Fe2+ and Fe3+ in FeO and Fe2O3, respectively, have been measured, and the effects of curve-fitting parameters on the interpretation of the data have been analysed. Firstly, the peak-fit parameters, i.e. (1) the number of peaks to be deconvoluted, (2) the range of the peak for background subtraction, (3) the straight-line (Li) or Shirley (Sh) background subtraction method, (4) the GL ratio (the ratio of the Gaussian to the Lorentzian contribution to the peak shape) and (5) the asymmetry factor (AS), are manually selected. Secondly, the standard peak-fit parameters were systematically investigated. The peak shape was fitted to a Voigt function by changing the peak position, the peak height and the full width at half maximum (FWHM) to minimize χ². The recommended peak positions and peak parameters for Fe2+ and Fe3+ in iron oxides have been determined. (c) 2006 Elsevier B.V. All rights reserved.
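A minimal sketch of item (4), the GL ratio, assuming a pseudo-Voigt line shape fitted by least squares to a synthetic spectrum (the Shirley background step and the paper's actual Fe 3p data are omitted):

```python
# Minimal sketch: a pseudo-Voigt peak whose GL ratio mixes Gaussian and Lorentzian
# contributions, fitted by least squares (curve_fit minimises chi^2 over position,
# height, FWHM and GL ratio). The synthetic "spectrum" is illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(x, x0, height, fwhm, gl):
    """gl = 0 -> pure Gaussian, gl = 1 -> pure Lorentzian."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-0.5 * ((x - x0) / sigma) ** 2)
    lorentz = 1.0 / (1.0 + ((x - x0) / (fwhm / 2.0)) ** 2)
    return height * ((1.0 - gl) * gauss + gl * lorentz)

x = np.linspace(50.0, 62.0, 300)                       # binding energy axis (eV)
rng = np.random.default_rng(2)
y = pseudo_voigt(x, 55.6, 1000.0, 2.5, 0.3) + rng.normal(0, 10, x.size)

popt, _ = curve_fit(pseudo_voigt, x, y, p0=[55.0, 900.0, 2.0, 0.5],
                    bounds=([50, 0, 0.1, 0], [62, 5000, 10, 1]))
print("position, height, FWHM, GL ratio:", popt)
```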
Abstract:
We report observations of the diffraction pattern resulting when a nematic liquid crystal is illuminated with two equal power, high intensity beams of light from an Ar+ laser. The time evolution of the pattern is followed from the initial production of higher diffraction orders to a final striking display arising as a result of the self-diffraction of the two incident beams. The experimental results are described with good approximation by a model assuming a phase distribution at the output plane of the liquid crystal in the form of the sum of a gaussian and a sinusoid.
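A minimal numerical sketch of the stated model, assuming illustrative amplitudes for the Gaussian and sinusoidal phase terms and taking the far-field pattern as the squared modulus of the Fourier transform of the transmitted field:

```python
# Minimal sketch: 1-D output-plane phase = Gaussian + sinusoid; the far-field
# diffraction pattern is |FFT of exp(i*phase)|^2. Amplitudes, widths and the
# grating period below are illustrative values, not the experiment's parameters.
import numpy as np

x = np.linspace(-2.0, 2.0, 4096)                        # transverse coordinate (mm)
phase = 6.0 * np.exp(-x**2 / 0.5**2) + 3.0 * np.cos(2 * np.pi * x / 0.1)
field = np.exp(1j * phase)

far_field = np.fft.fftshift(np.fft.fft(field))
pattern = np.abs(far_field) ** 2                        # diffraction orders appear as peaks
print("five strongest samples of the pattern:", np.sort(pattern)[-5:])
```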
Abstract:
We present the first experimental investigation of the fast intensity dynamics of random distributed feedback (DFB) fiber lasers. We find that the laser dynamics are stochastic on short time scales and exhibit pronounced fluctuations, including the generation of extreme events. We also experimentally characterize the statistical properties of the radiation of random DFB fiber lasers and find that these properties deviate from Gaussian and depend on the pump power.
Abstract:
In this paper, we consider the transmission of confidential information over a κ-μ fading channel in the presence of an eavesdropper who also experiences κ-μ fading. In particular, we obtain novel analytical solutions for the probability of strictly positive secrecy capacity (SPSC) and a lower bound on the secure outage probability (SOPL) for independent and non-identically distributed channel coefficients without parameter constraints. We also provide a closed-form expression for the probability of SPSC when the μ parameter is assumed to take positive integer values. Monte Carlo simulations are performed to verify the derived results. The versatility of the κ-μ fading model means that the results presented in this paper can be used to determine the probability of SPSC and the SOPL for a large number of other fading scenarios, such as Rayleigh, Rice (Nakagami-n), Nakagami-m, One-Sided Gaussian, and mixtures of these common fading models. In addition, owing to the duality of the analysis of secrecy capacity and co-channel interference (CCI), the results presented here have immediate applicability to the analysis of outage probability in wireless systems affected by CCI and background noise (BN). To demonstrate the efficacy of the novel formulations proposed here, we use the derived equations to provide useful insight into the probability of SPSC and the SOPL for a range of emerging wireless applications, such as cellular device-to-device, peer-to-peer, vehicle-to-vehicle, and body-centric communications, using data obtained from real channel measurements.
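For intuition only, a Monte Carlo sketch of the probability of SPSC, P(γ_M > γ_E), for the Rayleigh special case of the κ-μ model (κ → 0, μ = 1); the average SNRs are assumed values and the paper's closed-form results are not reproduced:

```python
# Minimal Monte Carlo sketch of P(SPSC) = P(gamma_M > gamma_E) for Rayleigh fading
# on both the main and eavesdropper links (a special case of the kappa-mu model).
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
snr_main, snr_eve = 10.0, 5.0                     # assumed average SNRs (linear scale)

gamma_m = snr_main * rng.exponential(size=n)      # Rayleigh fading -> exponential SNR
gamma_e = snr_eve * rng.exponential(size=n)

p_spsc = np.mean(gamma_m > gamma_e)
print(f"P(SPSC) ~ {p_spsc:.4f}  (analytic for this special case: "
      f"{snr_main / (snr_main + snr_eve):.4f})")
```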
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines for the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of wavelet basis on the quality of the extracted features; four wavelet basis functions were considered in this study. Then, we report the average accuracy (estimated via cross-validation) delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, from which one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing most consistently. Moreover, the choice of kernel function and parameter value, as well as the choice of feature extractor, are critical decisions, although the choice of wavelet family seems less relevant. The statistical values calculated over the Lyapunov exponents were also good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged across all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
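A minimal sketch of one such configuration, assuming mock EEG features: a standard SVM with a Gaussian (RBF) kernel scored by cross-validation while the kernel radius is swept:

```python
# Minimal sketch: standard SVM with a Gaussian (RBF) kernel, cross-validated
# accuracy over a small sweep of the kernel parameter. The features and labels
# below are mock stand-ins for the wavelet/Lyapunov EEG features of the study.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 8))                  # mock feature vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # mock normal/epileptic labels

for gamma in [0.01, 0.1, 1.0, 10.0]:           # illustrative kernel-radius sweep
    scores = cross_val_score(SVC(kernel='rbf', gamma=gamma, C=1.0), X, y, cv=5)
    print(f"gamma={gamma}: accuracy {scores.mean():.3f}")
```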
Abstract:
Here, I investigate the use of Bayesian updating rules applied to modeling how social agents change their minds in continuous opinion models. Given another agent's statement about the continuous value of a variable, we will see that interesting dynamics emerge when an agent assigns to that value a likelihood that is a mixture of a Gaussian and a uniform distribution. This represents the idea that the other agent might know nothing about the quantity under discussion. The effect of updating only the first moment of the distribution is studied first, and we will see that this generates results similar to those of the bounded confidence models. When the second moment is also updated, several different opinions always survive in the long run, as agents become more stubborn with time. However, depending on the probability of error and the initial uncertainty, those opinions might be clustered around a central value.
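A minimal numerical sketch of the update rule described here, assuming a Gaussian prior, a Gaussian-plus-uniform mixture likelihood, and illustrative parameter values; only the first two posterior moments are retained:

```python
# Minimal sketch: the agent's prior over the quantity is Gaussian; the likelihood
# of the other agent's stated value is a mixture of a Gaussian (weight p) and a
# uniform (weight 1-p); the posterior is computed on a grid and summarised by its
# first two moments. All parameter values are illustrative.
import numpy as np

def update(mean, std, statement, p=0.8, noise=0.1, lo=0.0, hi=1.0):
    x = np.linspace(lo, hi, 2001)
    prior = np.exp(-0.5 * ((x - mean) / std) ** 2)
    like = (p * np.exp(-0.5 * ((statement - x) / noise) ** 2) / (noise * np.sqrt(2 * np.pi))
            + (1 - p) / (hi - lo))
    post = prior * like
    post /= np.trapz(post, x)
    new_mean = np.trapz(x * post, x)
    new_std = np.sqrt(np.trapz((x - new_mean) ** 2 * post, x))
    return new_mean, new_std

# A statement far from the prior mean moves the opinion only slightly,
# because the uniform component absorbs most of the surprise.
print(update(mean=0.3, std=0.2, statement=0.9))
```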
Abstract:
A single-beam gradient trap could potentially be used to hold a stylus for scanning force microscopy. With a view to developing this technique, we modeled the optical trap as a harmonic oscillator and therefore characterized it by its force constant. We measured force constants and resonant frequencies for 1-4 μm diameter polystyrene spheres in a single-beam gradient trap using measurements of back-scattered light. Force constants were determined with both Gaussian and doughnut laser modes, with powers of 3 and 1 mW, respectively. Typical spring constants were measured to be between 10⁻⁶ and 4 × 10⁻⁶ N/m. The resonant frequencies of trapped particles were measured to be between 1 and 10 kHz, and the rms amplitudes of oscillations were estimated to be around 40 nm. Our results confirm that the use of the doughnut mode for single-beam trapping is more efficient in the axial direction. (C) 1996 Optical Society of America.
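As a quick consistency check on the quoted numbers, a sketch that applies the harmonic-oscillator relations f = (1/2π)√(k/m) and x_rms = √(k_B·T/k) to a representative 2 μm sphere and k = 2 × 10⁻⁶ N/m (assumed values chosen from within the quoted ranges):

```python
# Minimal sketch: harmonic-oscillator estimates of the resonant frequency and the
# thermal rms amplitude for a trapped polystyrene sphere. The sphere size and force
# constant are representative values taken from the ranges quoted in the abstract.
import numpy as np

kB, T = 1.381e-23, 300.0                 # Boltzmann constant (J/K), room temperature (K)
k = 2e-6                                 # assumed trap force constant (N/m)
radius, density = 1e-6, 1050.0           # 2-um polystyrene sphere (m, kg/m^3)

mass = density * (4.0 / 3.0) * np.pi * radius**3
f_res = np.sqrt(k / mass) / (2.0 * np.pi)   # resonant frequency of the trapped sphere
x_rms = np.sqrt(kB * T / k)                 # rms amplitude from equipartition
print(f"f_res ~ {f_res / 1e3:.1f} kHz, x_rms ~ {x_rms * 1e9:.0f} nm")
```

Both numbers land in the reported ranges (a few kHz and roughly 40 nm), which is the point of the check.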
Abstract:
We present an experimental and numerical study of the influence that particle aspect ratio has on the mechanical and structural properties of granular packings. For grains with maximal symmetry (squares), the stress propagation in the packing localizes, forming chain-like forces analogous to the ones observed for spherical grains. This scenario can be understood in terms of stochastic models of aggregation and random multiplicative processes. As the grains elongate, the stress propagation is strongly affected: the interparticle normal force distribution tends toward a Gaussian and, correspondingly, the force chains spread, leading to a more uniform stress distribution reminiscent of the hydrostatic profiles known for standard liquids.
Abstract:
For my Licentiate thesis, I conducted research on risk measures. Continuing that research, I now focus on capital allocation. In the proportional capital allocation principle, the choice of risk measure plays a very important part. In the chapters Introduction and Basic concepts, we introduce three definitions of economic capital, discuss the purpose of capital allocation, give different viewpoints on capital allocation and present an overview of the relevant literature. Risk measures are defined and the concept of a coherent risk measure is introduced. Examples of important risk measures are given, e.g. Value at Risk (VaR) and Tail Value at Risk (TVaR). We also discuss the implications of dependence and review some important distributions. In the chapter Capital allocation we introduce different principles for allocating capital; we prefer to work with the proportional allocation method. In the following chapter, Capital allocation based on tails, we focus on insurance business lines with heavy-tailed loss distributions. To emphasize capital allocation based on tails, we define the following risk measures: Conditional Expectation, Upper Tail Covariance and Tail Covariance Premium Adjusted (TCPA). In the final chapter, Illustrative case study, we simulate two sets of data with five insurance business lines using Normal copulas and Cauchy copulas. The proportional capital allocation is calculated using TCPA as the risk measure, and is compared with the result when VaR is used as the risk measure and with covariance capital allocation. This thesis emphasizes that no single allocation principle is perfect for all purposes. When focusing on the tail of losses, the allocation based on TCPA is a good choice, since TCPA in a sense includes features of both TVaR and Tail Covariance.
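A minimal sketch of the kind of simulation used in the case study, assuming a Gaussian (Normal) copula, lognormal marginals and three illustrative business lines, with capital allocated proportionally to each line's conditional expectation in the tail of the total loss (not the thesis's exact TCPA construction):

```python
# Minimal sketch: losses drawn through a Gaussian (Normal) copula with heavy-tailed
# lognormal marginals, then capital allocated proportionally to each line's
# conditional expectation given a total-loss tail event. The correlation matrix,
# marginals and confidence level are illustrative choices only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, alpha = 100_000, 0.99
corr = np.array([[1.0, 0.5, 0.3],
                 [0.5, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])

z = rng.multivariate_normal(np.zeros(3), corr, size=n)     # Gaussian copula draws
u = stats.norm.cdf(z)
losses = stats.lognorm(s=1.0, scale=np.exp(2.0)).ppf(u)    # lognormal marginals per line

total = losses.sum(axis=1)
capital = np.quantile(total, alpha)                        # economic capital: VaR of the total
tail = total >= capital
cond_exp = losses[tail].mean(axis=0)                       # per-line conditional expectation in the tail
allocation = capital * cond_exp / cond_exp.sum()           # proportional allocation across lines
print("allocated capital per line:", np.round(allocation, 2))
```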
Abstract:
In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR), with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at both the univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist of combining univariate specification tests. Specifically, we combine tests across equations using the Monte Carlo (MC) test procedure to avoid Bonferroni-type bounds. Since non-Gaussian based tests are not pivotal, we apply the “maximized MC” (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized (with respect to these nuisance parameters) to control the test’s significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
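For intuition, a minimal sketch of a Monte Carlo (MC) test p-value of the kind used here, assuming a simple lag-1 residual autocorrelation statistic and mock residuals (not the paper's actual statistics or data):

```python
# Minimal sketch: an MC test p-value obtained by simulating the test statistic
# under the null and ranking the observed statistic among the simulated ones.
# The statistic (lag-1 autocorrelation of residuals) and the mock residuals are
# illustrative stand-ins for the paper's MLR specification tests.
import numpy as np

def lag1_autocorr(e):
    return np.corrcoef(e[:-1], e[1:])[0, 1]

rng = np.random.default_rng(6)
n_obs, n_mc = 120, 999

residuals = rng.standard_normal(n_obs)          # mock stand-in for MLR residuals
observed = abs(lag1_autocorr(residuals))

null_draws = np.array([abs(lag1_autocorr(rng.standard_normal(n_obs)))
                       for _ in range(n_mc)])
p_mc = (1 + np.sum(null_draws >= observed)) / (n_mc + 1)
print(f"MC p-value for serial correlation: {p_mc:.3f}")
```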
Abstract:
In ecology, for example in studies of the services provided by ecosystems, descriptive, explanatory and predictive modelling each have their own distinct place. Certain well-defined situations call for one or another of these types of modelling; the right choice must be made so that the model can be used in a way consistent with the objectives of the study. In this work, we first explore the explanatory power of the multivariate regression tree (MRT). This modelling method is based on a recursive binary-partitioning algorithm and a resampling method used to prune the final model, which is a tree, in order to obtain the model producing the best predictions. This asymmetric two-table analysis yields homogeneous groups of objects of the response table, with the divisions between groups corresponding to cut points of the variables of the explanatory table that mark the most abrupt changes in the response. We show that, to compute the explanatory power of the MRT, one must define an adjusted coefficient of determination in which the degrees of freedom of the model are estimated with an algorithm. This estimate of the population coefficient of determination is practically unbiased. Since the MRT rests on assumptions of discontinuity whereas canonical redundancy analysis (RDA) models continuous linear gradients, comparing their respective explanatory powers makes it possible, among other things, to distinguish which type of pattern the response follows as a function of the explanatory variables. The comparison of explanatory power between RDA and the MRT was motivated by the extensive use of RDA to study beta diversity. Still with an explanatory aim, we define a new procedure called the cascade multivariate regression tree (CMRT), which builds a model while imposing a hierarchical order on the hypotheses under study. This new procedure makes it possible to study the hierarchical effect of two sets of explanatory variables, a main set and a subordinate set, and then to compute their explanatory power. The final model is interpreted as in a nested MANOVA. The results of this analysis can provide additional information about the relationships between the response and the explanatory variables, for example interactions between the two explanatory sets that were not revealed by the usual MRT analysis. In addition, we study the predictive power of generalized linear models by modelling the biomass of different tropical tree species as a function of some of their allometric measurements. In particular, we examine the ability of Gaussian and gamma error structures to provide the most accurate predictions. We show that, for one particular species, the predictive power of a model using the gamma error structure is superior. This study is set in a practical context and is intended as an example for managers wishing to estimate accurately the carbon captured by plantations of tropical trees. Our conclusions could become an integral part of a programme to reduce carbon emissions through land-use change.
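A minimal sketch of the GLM comparison described at the end, assuming mock allometric data and a log link for both the Gaussian and gamma error structures (the thesis's species, predictors and exact model forms are not reproduced):

```python
# Minimal sketch: GLM for tree biomass as a function of an allometric predictor
# (stem diameter), fitted once with a Gaussian and once with a Gamma error
# structure (both with a log link), then compared on out-of-sample RMSE.
# All data below are mock values generated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 200
diameter = rng.uniform(5, 60, n)                        # stem diameter (cm)
mu = np.exp(-2.0 + 2.4 * np.log(diameter))              # allometric mean biomass (kg)
biomass = rng.gamma(shape=5.0, scale=mu / 5.0)          # skewed, variance grows with mean

X = sm.add_constant(np.log(diameter))
train, test = slice(0, 150), slice(150, None)

for name, family in [("Gaussian", sm.families.Gaussian(sm.families.links.Log())),
                     ("Gamma", sm.families.Gamma(sm.families.links.Log()))]:
    fit = sm.GLM(biomass[train], X[train], family=family).fit()
    pred = fit.predict(X[test])
    rmse = np.sqrt(np.mean((pred - biomass[test]) ** 2))
    print(f"{name} errors: test RMSE = {rmse:.1f} kg")
```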