912 results for Dwarf Galaxy Fornax Distribution Function Action Based
Abstract:
The well-known Minkowski question-mark function $?(x)$ is presented as the asymptotic distribution function of an enumeration of the rationals in $(0,1]$ based on their continued fraction representation. In addition, the singularity of $?(x)$ is proved in two ways: by exhibiting a set of measure one on which $?'(x) = 0$, and by finding a set of measure one which is mapped onto a set of measure zero and vice versa. These sets are described by means of metrical properties of different systems for real number representation.
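For reference, the standard continued-fraction form of the question-mark function (the usual textbook definition, not a formula quoted from this abstract): for $x = [0; a_1, a_2, a_3, \dots]$,

$$ ?(x) \;=\; 2\sum_{k\ge 1} (-1)^{k+1}\, 2^{-(a_1 + a_2 + \cdots + a_k)}, $$

which is continuous and strictly increasing on $[0,1]$ yet has $?'(x)=0$ almost everywhere, the singularity referred to above.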
Abstract:
We propose a new kernel estimator of the cumulative distribution function based on transformation and bias-reducing techniques. We derive the optimal bandwidth that minimises the asymptotic integrated mean squared error. The simulation results show that the proposed kernel estimator improves on alternative approaches when the variable has an extreme-value distribution with a heavy tail and the sample size is small.
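A minimal sketch of a plain kernel distribution-function estimator (the Gaussian-kernel, fixed-bandwidth baseline; the abstract's transformation and bias-reduction steps are not reproduced here, and the rule-of-thumb bandwidth is only a placeholder):

```python
import numpy as np
from scipy.stats import norm

def kernel_cdf(sample, x, bandwidth=None):
    """Smooth estimate of F(x) = P(X <= x) from an i.i.d. sample.

    Uses the integrated Gaussian kernel: F_hat(x) = mean(Phi((x - X_i)/h)).
    """
    sample = np.asarray(sample, dtype=float)
    n = sample.size
    if bandwidth is None:
        # Silverman-style rule of thumb (placeholder, not the paper's AMISE-optimal h)
        bandwidth = 1.06 * sample.std(ddof=1) * n ** (-1 / 5)
    z = (np.atleast_1d(x)[:, None] - sample[None, :]) / bandwidth
    return norm.cdf(z).mean(axis=1)

# Example: heavy-tailed sample, small n
rng = np.random.default_rng(0)
data = rng.standard_t(df=2, size=50)
print(kernel_cdf(data, [0.0, 2.0, 5.0]))
```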
Abstract:
A mixture of chlorhexidine digluconate (CHG) with the glycerophospholipid 1,2-dimyristoyl-d54-sn-glycero-3-phosphocholine (DMPC-d54) was analysed using ²H nuclear magnetic resonance. To analyse the powder spectra, the de-Pake-ing technique was used. The method is able to extract simultaneously both the orientation distribution function and the anisotropy distribution function. The spectral moments, average order parameter profiles, and longitudinal and transverse relaxation times were used to explore the structural phase behaviour of various DMPC/CHG mixtures in the temperature range 5–60°C.
Abstract:
Reliability analysis is a well-established branch of statistics that deals with the statistical study of different aspects of the lifetimes of a system of components. As pointed out earlier, the major part of the theory and applications of reliability analysis has been developed in terms of measures based on the distribution function. In the opening chapters of the thesis, we describe some attractive features of quantile functions and the relevance of their use in reliability analysis. Motivated by the works of Parzen (1979), Freimer et al. (1988) and Gilchrist (2000), who indicated the scope of quantile functions in reliability analysis, and as a follow-up of the systematic study in this connection by Nair and Sankaran (2009), the present work extends their ideas to develop the necessary theoretical framework for lifetime data analysis. Chapter 1 gives the relevance and scope of the study and a brief outline of the work carried out. Chapter 2 presents various concepts and brief reviews of them, which are useful for the discussions in the subsequent chapters. The introduction of Chapter 4 points out the role of ageing concepts in reliability analysis and in identifying life distributions. In Chapter 6, we study the first two L-moments of residual life and their relevance in various applications of reliability analysis; we show that the first L-moment of residual life is equivalent to the vitality function, which has been widely discussed in the literature. In Chapter 7, we define the percentile residual life in reversed time (RPRL) and derive its relationship with the reversed hazard rate (RHR). We discuss the characterization problem of the RPRL and demonstrate with an example that the RPRL at a given percentile does not determine the distribution uniquely.
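For context, the standard quantile-based quantities used in this line of work (textbook definitions in the style of Nair and Sankaran (2009), not formulas taken from the abstract) are the quantile function, the quantile density and the hazard quantile function:

$$ Q(u) = \inf\{x : F(x) \ge u\}, \qquad q(u) = Q'(u), \qquad H(u) = \frac{f\bigl(Q(u)\bigr)}{1-u} = \frac{1}{(1-u)\,q(u)}, \qquad 0 \le u < 1, $$

where $F$ is the distribution function and $f$ its density; $H(u)$ is simply the hazard rate evaluated at the $u$-th quantile.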
Abstract:
This paper focuses on one of the methods for bandwidth allocation in an ATM network: the convolution approach. The convolution approach permits an accurate study of the system load in statistical terms through accumulated calculations, since probabilistic results of the bandwidth allocation can be obtained. Nevertheless, the convolution approach has a high cost in terms of calculation and storage requirements. This makes real-time calculations difficult, so many authors disregard the approach. With the aim of reducing this cost, we propose using the multinomial distribution function: the enhanced convolution approach (ECA). It permits direct computation of the associated probabilities of the instantaneous bandwidth requirements and makes a simple deconvolution process possible. The ECA is used in connection acceptance control, and some results are presented.
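A minimal sketch of the basic convolution idea (aggregating per-connection bandwidth-demand distributions by repeated convolution; the ECA's multinomial shortcut and deconvolution step are not shown, and the on/off source model is an illustrative assumption):

```python
import numpy as np

def connection_pmf(peak_rate, activity):
    """pmf of one on/off connection's instantaneous demand, indexed in rate units."""
    pmf = np.zeros(peak_rate + 1)
    pmf[0] = 1.0 - activity       # silent
    pmf[peak_rate] = activity     # transmitting at peak rate
    return pmf

def aggregate_pmf(connections):
    """Convolve the per-connection pmfs to get the pmf of the total demand."""
    total = np.array([1.0])
    for peak, act in connections:
        total = np.convolve(total, connection_pmf(peak, act))
    return total

# Example: three connection classes; probability the link load exceeds capacity
conns = [(2, 0.3)] * 10 + [(5, 0.1)] * 4 + [(1, 0.5)] * 20
pmf = aggregate_pmf(conns)
capacity = 30
print("P(demand > capacity) =", pmf[capacity + 1:].sum())
```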
Abstract:
Recently, interest in developing applications with autonomous underwater vehicles (AUVs) has grown considerably. AUVs are attractive because of their size and because they do not need a human operator to pilot them. Even so, it is impossible to compare, in terms of efficiency and flexibility, the ability of a human pilot with the limited operational capabilities offered by current AUVs. Using AUVs to cover large areas involves solving complex problems, especially if the robot is expected to react in real time to sudden changes in working conditions. For these reasons, the development of autonomous control systems aimed at improving these capabilities has become a priority. This thesis addresses the problem of decision making with AUVs. The work presented focuses on the study, design and application of behaviours for AUVs using reinforcement learning (RL) techniques. The main contribution of this thesis is the application of several RL techniques to improve the autonomy of underwater robots, with the final goal of demonstrating the feasibility of these algorithms for learning autonomous underwater tasks in real time. In RL, the robot tries to maximise a scalar reward obtained as a consequence of its interaction with the environment. The goal is to find an optimal policy that maps every possible state to the action to execute in that state so as to maximise the total sum of rewards. Thus, this thesis mainly investigates two families of RL algorithms: value-function (VF) methods and policy-gradient (PG) methods. The final experimental results show the underwater robot Ictineu performing a real autonomous underwater cable-tracking task. To carry it out, an algorithm called the Actor-Critic (AC) method was designed, resulting from the fusion of VF methods with PG techniques.
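A minimal sketch of the actor-critic idea mentioned above (a generic one-step tabular actor-critic on an abstract environment; this is not the thesis's algorithm, and the `env` interface is an assumption):

```python
import numpy as np

def actor_critic(env, n_states, n_actions, episodes=500,
                 alpha_v=0.1, alpha_pi=0.01, gamma=0.99):
    """One-step tabular actor-critic: the critic learns V(s), the actor a softmax policy."""
    V = np.zeros(n_states)                    # critic: state values
    theta = np.zeros((n_states, n_actions))   # actor: action preferences

    def policy(s):
        p = np.exp(theta[s] - theta[s].max())
        return p / p.sum()

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            p = policy(s)
            a = np.random.choice(n_actions, p=p)
            s_next, r, done = env.step(a)            # assumed environment interface
            td_error = r + gamma * V[s_next] * (not done) - V[s]
            V[s] += alpha_v * td_error               # critic update
            grad_log = -p
            grad_log[a] += 1.0                       # d log pi(a|s) / d theta[s]
            theta[s] += alpha_pi * td_error * grad_log  # actor update
            s = s_next
    return V, theta
```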
Abstract:
A stochastic parameterization scheme for deep convection is described, suitable for use in both climate and NWP models. Theoretical arguments and the results of cloud-resolving models are discussed in order to motivate the form of the scheme. In the deterministic limit, it tends to a spectrum of entraining/detraining plumes and is similar to other current parameterizations. The stochastic variability describes the local fluctuations about a large-scale equilibrium state. Plumes are drawn at random from a probability distribution function (pdf) that defines the chance of finding a plume of given cloud-base mass flux within each model grid box. The normalization of the pdf is given by the ensemble-mean mass flux, which is computed with a CAPE closure method. The characteristics of each plume produced are determined using an adaptation of the plume model from the Kain-Fritsch parameterization. Initial tests in the single-column version of the Unified Model verify that the scheme is effective in producing the desired distributions of convective variability without adversely affecting the mean state.
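A toy sketch of the sampling step described above (drawing individual cloud-base mass fluxes from a pdf whose normalization is fixed by the closure; the exponential pdf, Poisson plume count and all numbers below are illustrative assumptions, not the scheme's actual choices):

```python
import numpy as np

def draw_plumes(mean_mass_flux_per_area, grid_box_area, mean_plume_flux, rng=None):
    """Draw a random plume ensemble for one grid box.

    mean_mass_flux_per_area : ensemble-mean cloud-base mass flux from the
        CAPE closure (kg m-2 s-1) -- fixes the pdf normalization.
    mean_plume_flux : assumed mean mass flux of a single plume (kg s-1).
    Returns an array of per-plume cloud-base mass fluxes (kg s-1).
    """
    if rng is None:
        rng = np.random.default_rng()
    total_flux = mean_mass_flux_per_area * grid_box_area   # <M> for the box
    n_expected = total_flux / mean_plume_flux               # expected plume number
    n = rng.poisson(n_expected)                              # plume count fluctuates
    # Illustrative assumption: exponential pdf of individual plume mass flux
    return rng.exponential(mean_plume_flux, size=n)

fluxes = draw_plumes(mean_mass_flux_per_area=0.02, grid_box_area=(50e3) ** 2,
                     mean_plume_flux=1.0e7)
print(len(fluxes), fluxes.sum())
```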
Abstract:
We present an analysis of a cusp ion step, observed by the Defense Meteorological Satellite Program (DMSP) F10 spacecraft, between two poleward-moving events of enhanced ionospheric electron temperature, observed by the European Incoherent Scatter (EISCAT) radar. From the ions detected by the satellite, the variation of the reconnection rate is computed for assumed distances along the open-closed field line separatrix from the satellite to the X line, d_o. Comparison with the onset times of the associated ionospheric events allows this distance to be estimated, but with an uncertainty due to the determination of the low-energy cutoff of the ion velocity distribution function, f(v). Nevertheless, the reconnection site is shown to be on the dayside magnetopause, consistent with the reconnection model of the cusp during southward interplanetary magnetic field (IMF). Analysis of the time series of the distribution function at constant energies, f(t_s), shows that the best estimate of the distance d_o is 14.5 ± 2 R_E. This is consistent with various magnetopause observations of the signatures of reconnection for southward IMF. The ion precipitation is used to reconstruct the field-parallel part of the Cowley D ion distribution function injected into the open low-latitude boundary layer in the vicinity of the X line. From this reconstruction, the field-aligned component of the magnetosheath flow is found to be only −55 ± 65 km s⁻¹ near the X line, which means either that the reconnection X line is near the stagnation region at the nose of the magnetosphere, or that it is closely aligned with the magnetosheath flow streamline which is orthogonal to the magnetosheath field, or both. In addition, the sheath Alfvén speed at the X line is found to be 220 ± 45 km s⁻¹, and the speed with which newly opened field lines are ejected from the X line is 165 ± 30 km s⁻¹. We show that the inferred magnetic field, plasma density, and temperature of the sheath near the X line are consistent with a near-subsolar reconnection site and confirm that the magnetosheath field makes a large angle (>58°) with the X line.
Abstract:
The environment where galaxies are found heavily influences their evolution. Close groupings, like the ones in the cores of galaxy clusters or compact groups, evolve in ways far more dramatic than their isolated counterparts. We have conducted a multi-wavelength study of Hickson Compact Group 7 (HCG 7), consisting of four giant galaxies: three spirals and one lenticular. We use Hubble Space Telescope (HST) imaging to identify and characterize the young and old star cluster populations. We find young massive clusters (YMCs) mostly in the three spirals, while the lenticular features a large, unimodal population of globular clusters (GCs) but no detectable clusters with ages less than a few Gyr. The spatial and approximate age distributions of the ~300 YMCs and ~150 GCs thus hint at a regular star formation history in the group over a Hubble time. While at first glance the HST data show the galaxies as undisturbed, our deep ground-based, wide-field imaging that extends the HST coverage reveals faint signatures of stellar material in the intragroup medium (IGM). We do not, however, detect the IGM in H I or Chandra X-ray observations, signatures that would be expected to arise from major mergers. Despite this fact, we find that the H I gas content of the individual galaxies and the group as a whole is a third of the expected abundance. The appearance of quiescence is challenged by spectroscopy that reveals an intense ionization continuum in one galaxy nucleus, and post-burst characteristics in another. Our spectroscopic survey of dwarf galaxy members yields a single dwarf elliptical galaxy in an apparent stellar tidal feature. Based on all this information, we suggest an evolutionary scenario for HCG 7, whereby the galaxies convert most of their available gas into stars without the influence of major mergers and ultimately end in a dry merger. As the conditions governing compact groups are reminiscent of galaxies at intermediate redshift, we propose that HCGs are appropriate for studying galaxy evolution at z ~ 1–2.
Abstract:
The recent astronomical observations indicate that the universe has null spatial curvature, is accelerating, and that its matter-energy content is composed of roughly 30% matter (baryons + dark matter) and 70% dark energy, a relativistic component with negative pressure. However, in order to build more realistic models it is necessary to consider the evolution of small density perturbations to explain the richness of observed structures on the scale of galaxies and clusters of galaxies. The structure formation process was first described by Press and Schechter (PS) in 1974, by means of the galaxy cluster mass function. The PS formalism assumes a Gaussian distribution for the primordial density perturbation field. Besides a serious normalization problem, such an approach does not explain the recent cluster X-ray data, and it also disagrees with the most up-to-date computational simulations. In this thesis, we discuss several applications of the nonextensive (non-Gaussian) q-statistics, proposed in 1988 by C. Tsallis, with special emphasis on the cosmological process of large-scale structure formation. Initially, we investigate the statistics of the primordial fluctuation field of the density contrast, since the most recent data from the Wilkinson Microwave Anisotropy Probe (WMAP) indicate a deviation from Gaussianity. We assume that such deviations may be described by the nonextensive statistics, because it reduces to the Gaussian distribution in the limit of the free parameter q = 1, thereby allowing a direct comparison with the standard theory. We study its application to a galaxy cluster catalog based on the ROSAT All-Sky Survey (hereafter HIFLUGCS). We conclude that the standard Gaussian model applied to HIFLUGCS does not agree with the most recent data independently obtained by WMAP; using the nonextensive statistics, we obtain values much more in line with the WMAP results. We also demonstrate that the Burr distribution corrects the normalization problem. The cluster mass function formalism is also investigated in the presence of dark energy, and in this case constraints on several cosmological parameters are obtained. The nonextensive statistics is further applied to two distinct problems: (i) the plasma probe and (ii) the description of bremsstrahlung radiation (the primary radiation from X-ray clusters), a problem of considerable interest in astrophysics. In another line of development, using supernova data and the gas mass fraction from galaxy clusters, we discuss a redshift variation of the equation-of-state parameter, considering two distinct expansions. An interesting aspect of this work is that the results do not need a prior on the mass parameter, as usually occurs in analyses involving only supernova data. Finally, we obtain a new estimate of the Hubble parameter through a joint analysis involving the Sunyaev-Zel'dovich effect (SZE), the X-ray data from galaxy clusters and the baryon acoustic oscillations. We show that the degeneracy of the observational data with respect to the mass parameter is broken when the signature of the baryon acoustic oscillations given by the Sloan Digital Sky Survey (SDSS) catalog is considered. Our analysis, based on the SZE/X-ray data for a sample of 25 galaxy clusters with triaxial morphology, yields a Hubble parameter in good agreement with independent studies, provided by the Hubble Space Telescope project and the recent WMAP estimates.
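For reference, the Tsallis q-Gaussian underlying the nonextensive statistics mentioned above (the standard form, not an equation quoted from the thesis) is

$$ p_q(x) \;\propto\; \bigl[\,1 - (1-q)\,\beta x^{2}\,\bigr]^{\frac{1}{1-q}}, $$

which reduces to the ordinary Gaussian $p(x) \propto e^{-\beta x^{2}}$ as $q \to 1$, since $\lim_{q\to 1}\bigl[1-(1-q)u\bigr]^{1/(1-q)} = e^{-u}$; this is the sense in which $q = 1$ recovers the standard theory and allows a direct comparison with it.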
Abstract:
Within a QCD-based eikonal model with a dynamical infrared gluon mass scale we discuss how the small-x behavior of the gluon distribution function at moderate Q² is directly related to the rise of total hadronic cross-sections. In this model the rise of total cross-sections is driven by gluon-gluon semihard scattering processes, where the small-x gluon distribution function exhibits the power-law behavior xg(x, Q²) = h(Q²) x^(−ε). Assuming that the Q² scale is proportional to the dynamical gluon mass scale, we show that the values of h(Q²) obtained in this model are compatible with an earlier result based on a specific nonperturbative Pomeron model. We discuss the implications of this picture for the behavior of input valence-like gluon distributions at low resolution scales.
Abstract:
Parametric VaR (Value-at-Risk) is widely used due to its simplicity and easy calculation. However, the normality assumption, often used in the estimation of the parametric VaR, does not provide satisfactory estimates for risk exposure. Therefore, this study suggests a method for computing the parametric VaR based on goodness-of-fit tests using the empirical distribution function (EDF) for extreme returns, and compares the feasibility of this method for the banking sector in an emerging market and in a developed one. The paper also discusses possible theoretical contributions in related fields like enterprise risk management (ERM). © 2013 Elsevier Ltd.
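A rough sketch of the kind of procedure described (fit candidate distributions, pick the one favoured by an EDF-type goodness-of-fit test, and read the VaR off its quantile; the candidate set, the Kolmogorov-Smirnov test and the confidence level below are illustrative assumptions, not the paper's specific choices):

```python
import numpy as np
from scipy import stats

def edf_based_var(returns, alpha=0.01, candidates=("norm", "t", "genextreme")):
    """Select the best-fitting candidate distribution via a Kolmogorov-Smirnov
    (EDF) statistic and return the parametric VaR at level alpha (a positive loss)."""
    best = None
    for name in candidates:
        dist = getattr(stats, name)
        params = dist.fit(returns)                      # maximum-likelihood fit
        ks_stat, _ = stats.kstest(returns, name, args=params)
        if best is None or ks_stat < best[0]:
            best = (ks_stat, name, dist, params)
    _, name, dist, params = best
    var = -dist.ppf(alpha, *params)                     # alpha-quantile of returns
    return name, var

rng = np.random.default_rng(1)
rets = stats.t.rvs(df=3, scale=0.01, size=1000, random_state=rng)
print(edf_based_var(rets))
```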
Abstract:
This paper proposes a new multi-objective estimation of distribution algorithm (EDA) based on joint modeling of objectives and variables. This EDA uses the multi-dimensional Bayesian network as its probabilistic model. In this way it can capture the dependencies between objectives, between variables and objectives, and between variables, as learnt in other Bayesian network-based EDAs. This model leads to a problem decomposition that helps the proposed algorithm find better trade-off solutions to the multi-objective problem. In addition to Pareto set approximation, the algorithm is also able to estimate the structure of the multi-objective problem. To apply the algorithm to many-objective problems, it includes four different ranking methods proposed in the literature for this purpose. The algorithm is applied to the set of walking fish group (WFG) problems, and its optimization performance is compared with an evolutionary algorithm and another multi-objective EDA. The experimental results show that the proposed algorithm performs significantly better on many of the problems and for different objective space dimensions, and achieves comparable results on some, compared with the other algorithms.
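A much-simplified sketch of the underlying EDA loop (a univariate Gaussian model on a single objective, just to illustrate the estimate-sample-select cycle; the paper's multi-dimensional Bayesian network joint model of objectives and variables is far richer):

```python
import numpy as np

def simple_eda(objective, dim, pop_size=100, elite_frac=0.3, iters=100, seed=0):
    """Minimal estimation-of-distribution algorithm with an independent-Gaussian model."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim) * 5.0
    for _ in range(iters):
        pop = rng.normal(mu, sigma, size=(pop_size, dim))        # sample the model
        scores = np.apply_along_axis(objective, 1, pop)
        elite = pop[np.argsort(scores)[: int(elite_frac * pop_size)]]  # select best
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-9       # re-estimate
    return mu

# Example: minimise the sphere function
print(simple_eda(lambda x: np.sum(x ** 2), dim=5))
```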
Abstract:
Greenhouse gas emission reduction is the pillar of the Kyoto Protocol and one of the main goals of European Union (EU) energy policy. National reduction targets for EU member states and an overall target for the EU-15 (8%) were set by the Kyoto Protocol. This reduction target is based on emissions in the reference year (1990) and must be reached by 2012. EU energy policy does not set any national targets, only an overall reduction target of 20% by 2020. This paper transfers the global greenhouse gas emission reduction targets in both these documents to the transport sector and specifically to CO2 emissions. It proposes a nonlinear distribution method with objective, dynamic targets for reducing CO2 emissions in the transport sector, according to the context and characteristics of each geographical area. First, we analyse CO2 emissions from transport in the reference year (1990) and their evolution from 1990 to 2007. We then propose a nonlinear methodology for distributing dynamic CO2 emission reduction targets. We have applied the proposed distribution function for 2012 and 2020 at two territorial levels (EU member states and Spanish autonomous regions). The weighted distribution is based on per capita CO2 emissions and CO2 emissions per gross domestic product. Finally, we show the weighted targets found for each EU member state and each Spanish autonomous region, compare them with the real achievements to date, and forecast the situation for the years the Kyoto and EU goals are to be met. The results underline the need for "weighted" decentralised decisions to be made at different territorial levels with a view to achieving a common goal, so that relative convergence of all the geographical areas is reached over time. Copyright © 2011 John Wiley & Sons, Ltd.
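As a purely hypothetical illustration of a weighted allocation of this kind (the paper's actual nonlinear distribution function is not given in the abstract, so the weighting form, function names and figures below are assumptions for the sketch only):

```python
import numpy as np

def weighted_reduction_targets(emissions, population, gdp, overall_cut):
    """Split an overall CO2 reduction target across regions.

    Regions with higher per-capita emissions and higher emissions per GDP
    receive a more-than-proportional share of the cut (hypothetical weighting,
    not the scheme proposed in the paper).
    """
    emissions = np.asarray(emissions, float)
    per_capita = emissions / np.asarray(population, float)
    per_gdp = emissions / np.asarray(gdp, float)
    # Composite intensity index, normalised to mean 1
    index = (per_capita / per_capita.mean() + per_gdp / per_gdp.mean()) / 2
    weights = emissions * index
    cuts = overall_cut * emissions.sum() * weights / weights.sum()
    return cuts / emissions          # each region's percentage reduction

print(weighted_reduction_targets(
    emissions=[120, 80, 40], population=[40, 60, 10], gdp=[1.5, 2.0, 0.3],
    overall_cut=0.20))
```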