946 results for: Dwarf Galaxy Fornax Distribution Function Action Based


Relevance: 100.00%

Abstract:

This research has expanded knowledge in bioinformatics and data mining and makes an influential contribution to future research in the area.

Relevance: 100.00%

Abstract:

A recent all-object spectroscopic survey centred on the Fornax cluster of galaxies has discovered a population of subluminous and extremely compact members, called 'ultra-compact dwarf' (UCD) galaxies. In order to clarify the origin of these objects, we have used self-consistent numerical simulations to study the dynamical evolution a nucleated dwarf galaxy would undergo if it orbited the centre of the Fornax cluster and suffered its strong tidal gravitational field. We find that the outer stellar components of a nucleated dwarf are removed by the strong tidal field of the cluster, whereas the nucleus manages to survive as a result of its initially compact nature. The resulting naked nucleus is found to have physical properties (e.g. size and mass) similar to those observed for UCDs. We also find that although this formation process does not depend strongly on the initial total luminosity of the nucleated dwarf, it does depend on the radial density profile of the dark halo, in the sense that UCDs are less likely to be formed from dwarfs embedded in dark matter haloes with central 'cuspy' density profiles. Our simulations also suggest that very massive and compact stellar systems can be rapidly and efficiently formed in the central regions of dwarfs through the merging of smaller star clusters. We provide some theoretical predictions for the total number and radial number density profile of UCDs in a cluster and their dependence on cluster mass.
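The nucleus's survival can be understood with a standard tidal (Jacobi) radius estimate, an order-of-magnitude argument added here for context rather than a result quoted from the paper:

$$ r_{\rm t} \simeq R \left( \frac{m_{\rm dwarf}}{3\,M_{\rm cl}(<R)} \right)^{1/3}, $$

where $R$ is the dwarf's (pericentric) distance from the cluster centre and $M_{\rm cl}(<R)$ the cluster mass enclosed within it. Stars outside $r_{\rm t}$ are stripped, so a diffuse envelope extending well beyond $r_{\rm t}$ is removed while the far more compact nucleus remains bound.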

Relevance: 100.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 100.00%

Abstract:

The way mass is distributed in galaxies plays a major role in shaping their evolution across cosmic time. A galaxy's total mass is usually determined by tracing the motion of stars in its potential, probed observationally by measuring stellar spectra at different distances from the galactic centre; the inferred stellar kinematics are then used to constrain dynamical models. A class of such models, commonly used to determine the distribution of luminous and dark matter in galaxies accurately, is that of equilibrium models. In this Thesis, a novel approach to the design of equilibrium dynamical models is presented, in which the distribution function is an analytic function of the action integrals. Axisymmetric and rotating models are used to explain observations of a sample of nearby early-type galaxies in the Calar Alto Legacy Integral Field Area survey. Photometric and spectroscopic data for round and flattened galaxies are well fitted by the models, which are then used to infer the galaxies' total mass distribution and orbital anisotropy. The time evolution of massive early-type galaxies is also investigated with numerical models. Their structural properties (mass, size, velocity dispersion) are observed to evolve, on average, with redshift; in particular, at fixed stellar mass they appear significantly more compact at higher redshift, so it is interesting to investigate what drives this evolution. This Thesis focuses on the role played by dark matter haloes: their mass-size and mass-velocity dispersion correlations evolve similarly to the analogous correlations of ellipticals; at fixed halo mass, haloes are more compact at higher redshift, much like massive galaxies; and a simple model, in which all of a galaxy's size and velocity-dispersion evolution is due to the cosmological evolution of the underlying halo population, reproduces the observed sizes and velocity dispersions of massive compact early-type galaxies up to a redshift of about 2.
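As a toy illustration of the action-based approach (a sketch under assumed functional forms, not the models actually fitted in the thesis), a distribution function can be written directly as an analytic function of the actions J = (J_r, J_phi, J_z), with net rotation introduced through the part of f that is odd in J_phi:

```python
import numpy as np

# Toy action-based DF: f(J) declining exponentially in each action.
# The scale actions below are hypothetical placeholders (units: kpc km/s);
# computing the actions themselves (e.g. via a Staeckel approximation)
# is assumed to have been done already.
J_R0, J_PHI0, J_Z0 = 50.0, 1500.0, 30.0

def f_even(J_r, J_phi, J_z):
    """Non-rotating (even in J_phi) part of the DF; normalization omitted."""
    return np.exp(-(J_r / J_R0 + np.abs(J_phi) / J_PHI0 + J_z / J_Z0))

def f_rotating(J_r, J_phi, J_z, k=0.8, chi=500.0):
    """Add net rotation by weighting orbits with J_phi > 0 more heavily."""
    return f_even(J_r, J_phi, J_z) * (1.0 + k * np.tanh(J_phi / chi))
```

Because f depends on phase-space coordinates only through integrals of motion, any such f is automatically an equilibrium model by the Jeans theorem.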

Relevance: 100.00%

Abstract:

Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. Linear functions of the resulting "rotated" residuals are used to construct an empirical cumulative distribution function (ECDF), whose stochastic limit is characterized. We describe a resampling technique that serves as a computationally efficient parametric bootstrap for generating representatives of the stochastic limit of the ECDF. Through functionals, such representatives are used to construct global tests for the hypothesis of normal marginal errors. In addition, we demonstrate that the ECDF of the predicted random effects, as described by Lange and Ryan (1989), can be formulated as a special case of our approach. Thus, our method supports both omnibus and directed tests. Our method works well in a variety of circumstances, including models having independent units of sampling (clustered data) and models in which all observations are correlated (e.g., a single time series).
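A minimal sketch of the rotation step, assuming the marginal mean and covariance have already been estimated (the names here are illustrative, not from the paper):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular
from scipy.stats import norm

def rotated_residuals(y, mu_hat, V_hat):
    """Rotate marginal residuals so they are approximately iid N(0, 1)
    when the model is correct: if V_hat = L L', then z = inv(L) (y - mu_hat)
    has (estimated) identity covariance."""
    L = cholesky(V_hat, lower=True)
    return solve_triangular(L, y - mu_hat, lower=True)

def ecdf(z):
    """Empirical CDF of the rotated residuals, to be compared with the
    standard normal CDF (graphically or through functionals)."""
    zs = np.sort(z)
    return zs, np.arange(1, zs.size + 1) / zs.size, norm.cdf(zs)
```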

Relevance: 100.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 100.00%

Abstract:

This paper presents an efficient algorithm for multi-objective distribution feeder reconfiguration based on a Modified Honey Bee Mating Optimization (MHBMO) approach. The main objectives of distribution feeder reconfiguration (DFR) are to minimize the real power loss and the deviation of the nodes' voltages. Because these objectives are different and not commensurable, the problem is difficult to solve with conventional approaches that optimize a single objective, so a metaheuristic algorithm is applied. The paper describes the full algorithm and the objective functions employed. Simulation results on a 32-bus distribution system demonstrate the high accuracy of the proposed algorithm and its effectiveness in minimizing power loss.
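A sketch of the two objectives and the dominance test that a multi-objective search needs; `run_loadflow` is a hypothetical stand-in for a radial load-flow routine, not code from the paper:

```python
import numpy as np

def objectives(config, run_loadflow):
    """Evaluate a candidate switch configuration.

    run_loadflow (hypothetical) returns branch currents I, branch
    resistances R, and per-unit bus voltages V for the configuration."""
    I, R, V = run_loadflow(config)
    p_loss = float(np.sum(R * np.abs(I) ** 2))  # total real power loss
    v_dev = float(np.max(np.abs(1.0 - V)))      # worst voltage deviation [p.u.]
    return p_loss, v_dev

def dominates(a, b):
    """Pareto dominance: because the objectives are not commensurable,
    an MHBMO-style search keeps a set of non-dominated configurations
    rather than collapsing the objectives into one weighted sum."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```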

Relevance: 100.00%

Abstract:

Background: Unlike other indicators of cardiac function, such as ejection fraction and transmitral early diastolic velocity, myocardial strain promises to capture the subtle alterations that result from early diseases of the myocardium. A modified hierarchical transformation model was proposed to extract left ventricular (LV) myocardial strain and strain rate from cardiac cine-MRI. Methods: A hierarchical transformation model comprising global and local LV deformations was employed to analyze the strain and strain rate of the left ventricle through cine-MRI image registration. Endocardial and epicardial contour information was introduced to enhance registration accuracy by combining the original hierarchical algorithm with an Iterative Closest Points using Invariant Features algorithm. The model was validated on a normal volunteer and then applied to two clinical cases (the same volunteer and a diabetic patient) to evaluate their respective cardiac function. Results: Comparing the displacement fields of two selected landmarks in the normal volunteer, the proposed method outperformed the original, unmodified model. The comparison of radial strain between the volunteer and the patient demonstrated their apparent functional difference. Conclusions: The present method can estimate LV myocardial strain and strain rate over a cardiac cycle and thus quantify the analysis of LV motion function.
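For illustration, once registration has produced a dense displacement field, strain follows from its spatial gradients. The sketch below computes the Green-Lagrange strain tensor on a pixel grid; it is a generic post-processing step, not the paper's hierarchical transformation model:

```python
import numpy as np

def green_lagrange_strain(ux, uy):
    """Green-Lagrange strain E = 0.5 (F'F - I) from a 2-D displacement
    field (ux, uy) on a regular grid (unit pixel spacing assumed)."""
    dux_dy, dux_dx = np.gradient(ux)   # gradient along rows (y), then cols (x)
    duy_dy, duy_dx = np.gradient(uy)
    # Deformation gradient F = I + grad(u), stored per pixel: shape (2, 2, H, W)
    F = np.array([[1.0 + dux_dx, dux_dy],
                  [duy_dx, 1.0 + duy_dy]])
    C = np.einsum('ji...,jk...->ik...', F, F)   # right Cauchy-Green tensor F'F
    return 0.5 * (C - np.eye(2)[:, :, None, None])
```

Radial strain is then obtained by projecting E onto the local radial direction relative to the LV centroid.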

Relevance: 100.00%

Abstract:

Downscaling to station-scale hydrologic variables from large-scale atmospheric variables simulated by general circulation models (GCMs) is usually necessary to assess the hydrologic impact of climate change. This work presents CRF-downscaling, a new probabilistic downscaling method that represents the daily precipitation sequence as a conditional random field (CRF). The conditional distribution of the precipitation sequence at a site, given the daily atmospheric (large-scale) variable sequence, is modeled as a linear chain CRF. CRFs do not make assumptions on independence of observations, which gives them flexibility in using high-dimensional feature vectors. Maximum likelihood parameter estimation for the model is performed using limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization. Maximum a posteriori estimation is used to determine the most likely precipitation sequence for a given set of atmospheric input variables using the Viterbi algorithm. Direct classification of dry/wet days as well as precipitation amount is achieved within a single modeling framework. The model is used to project the future cumulative distribution function of precipitation. Uncertainty in precipitation prediction is addressed through a modified Viterbi algorithm that predicts the n most likely sequences. The model is applied for downscaling monsoon (June-September) daily precipitation at eight sites in the Mahanadi basin in Orissa, India, using the MIROC3.2 medium-resolution GCM. The predicted distributions at all sites show an increase in the number of wet days, and also an increase in wet day precipitation amounts. A comparison of current and future predicted probability density functions for daily precipitation shows a change in shape of the density function with decreasing probability of lower precipitation and increasing probability of higher precipitation.
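The MAP step can be sketched with standard Viterbi decoding over a linear chain; the state space and score matrices below are illustrative assumptions (e.g. state 0 = dry, states 1..K = discretized wet amounts), not the paper's exact feature design:

```python
import numpy as np

def viterbi(unary, trans):
    """Most likely state sequence for a linear-chain CRF.

    unary[t, s]: score of state s on day t given the atmospheric features;
    trans[s, s2]: transition score from state s to state s2.
    Both are log-domain scores, so products become sums."""
    T, S = unary.shape
    delta = np.empty((T, S))
    back = np.zeros((T, S), dtype=int)
    delta[0] = unary[0]
    for t in range(1, T):
        cand = delta[t - 1][:, None] + trans + unary[t][None, :]
        back[t] = np.argmax(cand, axis=0)
        delta[t] = np.max(cand, axis=0)
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

The modified n-best variant described above would keep the n highest-scoring candidates per state instead of only the maximum.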

Relevance: 100.00%

Abstract:

This thesis studies quantile residuals and uses different methodologies to develop test statistics applicable to evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance tests and are theoretically sound in that they properly take into account the uncertainty caused by parameter estimation.

In Chapter 2, a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized, and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived from it. Score-test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4 the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered, yielding critical bounds for histogram-type plots as well as Quantile-Quantile and Probability-Probability plots of quantile residuals. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite-sample size and power properties of the derived tests, and show how the tests and related residual-based graphical tools are applied in practice.
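For a concrete illustrative case, take a two-component Gaussian mixture, exactly the kind of model for which Pearson residuals fail; the quantile residual is the normal quantile of the model's conditional CDF evaluated at the observation (the parameterization below is generic, not a model from the thesis):

```python
import numpy as np
from scipy.stats import norm

def quantile_residuals(y, w, mu1, s1, mu2, s2):
    """Quantile residuals u_t = Phi^{-1}( F(y_t; theta_hat) ) for a
    two-component Gaussian mixture with (estimated) weight w, means
    mu1/mu2 and standard deviations s1/s2. If the model is correctly
    specified, the u_t are approximately iid N(0, 1)."""
    F = w * norm.cdf(y, mu1, s1) + (1.0 - w) * norm.cdf(y, mu2, s2)
    return norm.ppf(F)
```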

Relevance: 100.00%

Abstract:

In this paper, we report an analysis of the protein sequence length distribution for 13 bacteria, four archaea and one eukaryote whose genomes have been completely sequenced. The frequency distributions of protein sequence length for all 18 organisms are remarkably similar, independent of genome size, and can be described in terms of a lognormal probability distribution function. A simple stochastic model based on multiplicative processes is proposed to explain the sequence length distribution. The stochastic model supports the random-origin hypothesis of protein sequences in genomes. Distributions of large proteins deviate from the overall lognormal behavior: their cumulative distribution follows a power law analogous to Pareto's law used to describe the income distribution of the wealthy. The protein sequence length distribution in the genomes of organisms has important implications for microbial evolution and applications.
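A minimal sketch of checking the two distributional claims, run here on placeholder data (real input would be the sequence lengths parsed from one proteome):

```python
import numpy as np
from scipy import stats

# Placeholder lengths; substitute the lengths from a real genome annotation.
rng = np.random.default_rng(0)
lengths = rng.lognormal(mean=5.6, sigma=0.5, size=4000)

# Lognormal body: log-lengths should be close to normal.
shape, _, scale = stats.lognorm.fit(lengths, floc=0)
print(stats.kstest(np.log(lengths), 'norm', args=(np.log(scale), shape)))

# Pareto-like tail: the complementary CDF of the largest proteins should be
# roughly a straight line in log-log coordinates, with slope ~ -alpha.
tail = np.sort(lengths)[-400:]
ccdf = np.arange(tail.size, 0, -1) / lengths.size
alpha = -np.polyfit(np.log(tail), np.log(ccdf), 1)[0]
```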

Relevance: 100.00%

Abstract:

Timer-based mechanisms are often used in several wireless systems to help a given (sink) node select the best helper node among many available nodes. Specifically, a node transmits a packet when its timer expires, and the timer value is a function of its local suitability metric. In practice, the best node gets selected successfully only if no other node's timer expires within a `vulnerability' window after its timer expiry. In this paper, we provide a complete closed-form characterization of the optimal metric-to-timer mapping that maximizes the probability of success for any probability distribution function of the metric. The optimal scheme is scalable, distributed, and much better than the popular inverse metric timer mapping. We also develop an asymptotic characterization of the optimal scheme that is elegant and insightful, and accurate even for a small number of nodes.
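A Monte Carlo sketch of the mechanism; the inverse mapping shown is the popular baseline the paper improves upon, while the optimal mapping itself is derived in the paper and not reproduced here:

```python
import numpy as np

def success_prob(n=10, c=0.1, T=10.0, trials=100_000, seed=0):
    """Probability that the best of n nodes is selected when metrics are
    iid U(0, 1), timers use the inverse mapping t = T * (1 - metric), and
    selection fails if any other timer expires within the vulnerability
    window c after the earliest expiry."""
    rng = np.random.default_rng(seed)
    timers = T * (1.0 - rng.random((trials, n)))  # best metric -> earliest timer
    t = np.sort(timers, axis=1)
    return float(np.mean(t[:, 1] - t[:, 0] > c))

print(success_prob())
```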

Relevance: 100.00%

Abstract:

Near-wall structures in turbulent natural convection at Rayleigh numbers of $10^{10}$ to $10^{11}$ and a Schmidt number of 602 are visualized by a new method of driving the convection across a fine membrane using concentration differences of sodium chloride. The visualizations show the near-wall flow to consist of sheet plumes. A wide variety of large-scale flow cells, scaling with the cross-section dimension, are observed. Multiple large-scale flow cells are seen at aspect ratio (AR) = 0.65, while only a single circulation cell is detected at AR = 0.435. The cells (or the mean wind) are driven by plumes coming together to form columns of rising lighter fluid. The wind in turn aligns the sheet plumes along the direction of shear. The mean wind direction is seen to change with time. The near-wall dynamics show plumes initiated at points, which elongate to form sheets and then merge. An increase in Rayleigh number results in a larger number of closely and regularly spaced plumes. The plume spacings show a common lognormal probability distribution function, independent of the Rayleigh number and the aspect ratio. We propose that the near-wall structure is made of laminar natural-convection boundary layers, which become unstable to give rise to sheet plumes, and show that the predictions of a model constructed on this hypothesis match the experiments. Based on these findings, we conclude that in the presence of a mean wind, the local near-wall boundary layers associated with each sheet plume in high-Rayleigh-number turbulent natural convection are likely to be of laminar mixed convection type.

Relevance: 100.00%

Abstract:

In this paper, we consider a distributed function computation setting, where there are m distributed but correlated sources X1, ..., Xm and a receiver interested in computing an s-dimensional subspace generated by [X1, ..., Xm]Γ for some (m × s) matrix Γ of rank s. We construct a scheme based on nested linear codes and characterize the achievable rates obtained using the scheme. The proposed nested-linear-code approach performs at least as well as the Slepian-Wolf scheme in terms of sum-rate performance for all subspaces and source distributions. In addition, for a large class of distributions and subspaces, the scheme improves upon the Slepian-Wolf approach. The nested-linear-code scheme may be viewed as uniting under a common framework both the Körner-Marton approach of using a common linear encoder and the Slepian-Wolf approach of employing different encoders at each source. Along the way, we prove an interesting and fundamental structural result on the nature of subspaces of an m-dimensional vector space V with respect to a normalized measure of entropy. Here, each element in V corresponds to a distinct linear combination of the set {X_i : i = 1, ..., m} of m random variables whose joint probability distribution function is given.
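As a toy instance of the common-encoder idea that the nested-linear-code scheme generalizes, here is a Körner-Marton-style sketch over GF(2), with a hypothetical parity-check matrix and brute-force decoding for illustration:

```python
import numpy as np
from itertools import combinations

# Both sources apply the SAME parity-check matrix H. Syndromes are linear,
# so (H x1) xor (H x2) = H (x1 xor x2): the receiver can decode the xor
# Z = X1 xor X2 whenever Z is sparse enough for the code, without ever
# recovering X1 and X2 individually.
H = np.array([[1, 0, 0, 1, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1]])   # hypothetical code with distinct columns

def syndrome(x):
    return H @ x % 2

def decode(s, max_weight=1):
    """Lowest-weight z with H z = s (mod 2); brute force for illustration."""
    n = H.shape[1]
    for w in range(max_weight + 1):
        for idx in combinations(range(n), w):
            z = np.zeros(n, dtype=int)
            z[list(idx)] = 1
            if np.array_equal(syndrome(z), s):
                return z
    return None

x1 = np.array([1, 0, 1, 1, 0])
x2 = np.array([1, 0, 1, 0, 0])            # correlated: differs in one bit
s = (syndrome(x1) + syndrome(x2)) % 2     # receiver combines the two syndromes
print(decode(s), (x1 + x2) % 2)           # recovered xor matches the truth
```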