919 results for Dwarf Galaxy Fornax Distribution Function Action Based
Abstract:
Recently, interest in developing applications for autonomous underwater vehicles (AUVs) has grown considerably. AUVs are attractive because of their size and because they need no human operator to pilot them. Even so, the skill of a human pilot cannot be matched, in terms of efficiency and flexibility, by the limited operational capabilities of current AUVs. Using AUVs to cover large areas means solving complex problems, especially if the robot is expected to react in real time to sudden changes in working conditions. For these reasons, the development of autonomous control systems aimed at improving these capabilities has become a priority. This thesis addresses the problem of decision making with AUVs. The work presented focuses on the study, design, and application of behaviours for AUVs using reinforcement learning (RL) techniques. The main contribution of this thesis is the application of several RL techniques to improve the autonomy of underwater robots, with the ultimate goal of demonstrating the feasibility of these algorithms for learning autonomous underwater tasks in real time. In RL, the robot tries to maximize a scalar reward obtained as a consequence of its interaction with the environment. The goal is to find an optimal policy mapping every possible state to the action that maximizes the total sum of rewards. This thesis therefore investigates two main families of RL algorithms: value function (VF) methods and policy gradient (PG) methods. The final experimental results show the underwater robot Ictineu performing a real autonomous underwater cable tracking task. To carry it out, an Actor-Critic (AC) algorithm was designed, the result of fusing VF methods with PG techniques.
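Since the abstract hinges on fusing value-function and policy-gradient methods into an Actor-Critic scheme, a minimal sketch of a generic one-step actor-critic update with linear features may help orient the reader; the Gaussian policy, feature map, and step sizes below are illustrative assumptions, not the thesis's actual controller.

```python
# Minimal one-step actor-critic update with linear function approximation.
# Illustrative sketch only: the Gaussian policy with unit variance and the
# step sizes are assumptions, not the algorithm used on Ictineu.
import numpy as np

def actor_critic_step(phi, phi_next, action, reward, w, theta,
                      alpha_w=0.1, alpha_theta=0.01, gamma=0.99):
    """Update critic weights w (value function) and actor weights theta (policy)."""
    td_error = reward + gamma * (w @ phi_next) - (w @ phi)  # critic's TD error
    w = w + alpha_w * td_error * phi                        # VF step (the critic)
    mean = theta @ phi                                      # Gaussian policy mean
    grad_log_pi = (action - mean) * phi                     # d/dtheta log N(a; mean, 1)
    theta = theta + alpha_theta * td_error * grad_log_pi    # PG step (the actor)
    return w, theta

# toy usage: 3 features, one continuous action
rng = np.random.default_rng(0)
w, theta = np.zeros(3), np.zeros(3)
phi, phi_next = rng.random(3), rng.random(3)
w, theta = actor_critic_step(phi, phi_next, action=0.2, reward=1.0, w=w, theta=theta)
print(w, theta)
```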
Abstract:
A stochastic parameterization scheme for deep convection is described, suitable for use in both climate and NWP models. Theoretical arguments and the results of cloud-resolving models are discussed in order to motivate the form of the scheme. In the deterministic limit, it tends to a spectrum of entraining/detraining plumes and is similar to other current parameterizations. The stochastic variability describes the local fluctuations about a large-scale equilibrium state. Plumes are drawn at random from a probability distribution function (pdf) that defines the chance of finding a plume of given cloud-base mass flux within each model grid box. The normalization of the pdf is given by the ensemble-mean mass flux, and this is computed with a CAPE closure method. The characteristics of each plume produced are determined using an adaptation of the plume model from the Kain-Fritsch parameterization. Initial tests in the single-column version of the Unified Model verify that the scheme is effective in producing the desired distributions of convective variability without adversely affecting the mean state.
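To make the sampling step concrete, here is a minimal sketch of drawing a random plume ensemble for one grid box. The exponential form of the pdf, the Poisson plume count, and all numerical values are illustrative assumptions, not the scheme's exact specification.

```python
# Draw a fluctuating plume ensemble whose mean recovers the large-scale
# (CAPE-closure) ensemble-mean mass flux. Exponential pdf and numbers are
# illustrative assumptions.
import numpy as np

def draw_plumes(mean_total_flux, mean_plume_flux, rng):
    """Sample per-plume cloud-base mass fluxes for one grid box."""
    n_expected = mean_total_flux / mean_plume_flux   # plume count recovering the mean
    n_plumes = rng.poisson(n_expected)               # fluctuating number of plumes
    # per-plume cloud-base mass flux from an (assumed) exponential pdf
    return rng.exponential(mean_plume_flux, size=n_plumes)

rng = np.random.default_rng(0)
fluxes = draw_plumes(mean_total_flux=0.05, mean_plume_flux=0.002, rng=rng)  # kg m-2 s-1
print(len(fluxes), fluxes.sum())  # fluctuates about the large-scale equilibrium
```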
Abstract:
We present an analysis of a cusp ion step, observed by the Defense Meteorological Satellite Program (DMSP) F10 spacecraft, between two poleward-moving events of enhanced ionospheric electron temperature, observed by the European Incoherent Scatter (EISCAT) radar. From the ions detected by the satellite, the variation of the reconnection rate is computed for assumed distances along the open-closed field line separatrix from the satellite to the X line, d0. Comparison with the onset times of the associated ionospheric events allows this distance to be estimated, but with an uncertainty due to the determination of the low-energy cutoff of the ion velocity distribution function, f(v). Nevertheless, the reconnection site is shown to be on the dayside magnetopause, consistent with the reconnection model of the cusp during southward interplanetary magnetic field (IMF). Analysis of the time series of the distribution function at constant energies, f(t), shows that the best estimate of the distance d0 is 14.5±2 R_E. This is consistent with various magnetopause observations of the signatures of reconnection for southward IMF. The ion precipitation is used to reconstruct the field-parallel part of the Cowley-D ion distribution function injected into the open low-latitude boundary layer in the vicinity of the X line. From this reconstruction, the field-aligned component of the magnetosheath flow is found to be only −55±65 km s^-1 near the X line, which means either that the reconnection X line is near the stagnation region at the nose of the magnetosphere, or that it is closely aligned with the magnetosheath flow streamline that is orthogonal to the magnetosheath field, or both. In addition, the sheath Alfvén speed at the X line is found to be 220±45 km s^-1, and the speed with which newly opened field lines are ejected from the X line is 165±30 km s^-1. We show that the inferred magnetic field, plasma density, and temperature of the sheath near the X line are consistent with a near-subsolar reconnection site and confirm that the magnetosheath field makes a large angle (>58°) with the X line.
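The distance estimate rests on an ion time-of-flight argument: ions injected at the X line at time t0 with field-aligned speed v reach the spacecraft, a distance d0 along the field line, at t = t0 + d0/v, so the low-energy cutoff at each observation time constrains d0. A minimal sketch of that relation, with illustrative times rather than the DMSP F10 event values:

```python
# Time-of-flight relation behind the d0 estimate: the slowest ion that can
# have arrived by elapsed time t traveled at v = d0 / t. Times below are
# illustrative, not the observed event timings.
R_E = 6.371e6  # Earth radius, m

def cutoff_speed(d0_re, t_since_opening):
    """Lowest ion speed (km/s) that can have arrived after t seconds."""
    return d0_re * R_E / t_since_opening / 1e3

# e.g. for the best-estimate distance d0 = 14.5 R_E:
for t in (200.0, 400.0, 800.0):
    print(t, cutoff_speed(14.5, t))   # cutoff drifts to lower speeds with time
```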
Abstract:
The environment where galaxies are found heavily influences their evolution. Close groupings, like those in the cores of galaxy clusters or compact groups, evolve in ways far more dramatic than their isolated counterparts. We have conducted a multi-wavelength study of Hickson Compact Group 7 (HCG 7), consisting of four giant galaxies: three spirals and one lenticular. We use Hubble Space Telescope (HST) imaging to identify and characterize the young and old star cluster populations. We find young massive clusters (YMCs) mostly in the three spirals, while the lenticular features a large, unimodal population of globular clusters (GCs) but no detectable clusters with ages less than a few Gyr. The spatial and approximate age distributions of the ~300 YMCs and ~150 GCs thus hint at a regular star formation history in the group over a Hubble time. While at first glance the HST data show the galaxies as undisturbed, our deep ground-based, wide-field imaging that extends the HST coverage reveals faint signatures of stellar material in the intragroup medium (IGM). We do not, however, detect the IGM in H I or Chandra X-ray observations, signatures that would be expected to arise from major mergers. Despite this fact, we find that the H I gas content of the individual galaxies, and of the group as a whole, is about a third of the expected abundance. The appearance of quiescence is challenged by spectroscopy that reveals an intense ionization continuum in one galaxy nucleus, and post-burst characteristics in another. Our spectroscopic survey of dwarf galaxy members yields a single dwarf elliptical galaxy in an apparent stellar tidal feature. Based on all this information, we suggest an evolutionary scenario for HCG 7, whereby the galaxies convert most of their available gas into stars without the influence of major mergers and ultimately result in a dry merger. As the conditions governing compact groups are reminiscent of galaxies at intermediate redshift, we propose that HCGs are appropriate for studying galaxy evolution at z ~ 1-2.
Abstract:
The recent astronomical observations indicate that the universe has zero spatial curvature, is accelerating, and that its matter-energy content comprises roughly 30% matter (baryons + dark matter) and 70% dark energy, a relativistic component with negative pressure. However, in order to build more realistic models it is necessary to consider the evolution of small density perturbations to explain the richness of observed structures on the scale of galaxies and galaxy clusters. The structure formation process was first described by Press and Schechter (PS) in 1974, by means of the galaxy cluster mass function. The PS formalism assumes a Gaussian distribution for the primordial density perturbation field. Besides a serious normalization problem, such an approach does not explain the recent cluster X-ray data, and it also disagrees with the most up-to-date computational simulations. In this thesis, we discuss several applications of the nonextensive (non-Gaussian) q-statistics, proposed in 1988 by C. Tsallis, with special emphasis on the cosmological process of large-scale structure formation. Initially, we investigate the statistics of the primordial fluctuation field of the density contrast, since the most recent data from the Wilkinson Microwave Anisotropy Probe (WMAP) indicate a deviation from Gaussianity. We assume that such deviations may be described by the nonextensive statistics, because it reduces to the Gaussian distribution in the limit of the free parameter q = 1, thereby allowing a direct comparison with the standard theory. We study its application to a galaxy cluster catalog based on the ROSAT All-Sky Survey (hereafter HIFLUGCS). We conclude that the standard Gaussian model applied to HIFLUGCS does not agree with the most recent data independently obtained by WMAP. Using the nonextensive statistics, we obtain values much more closely aligned with the WMAP results. We also demonstrate that the Burr distribution corrects the normalization problem. The cluster mass function formalism was also investigated in the presence of dark energy; in this case, constraints on several cosmic parameters were also obtained. The nonextensive statistics was further applied to two distinct problems: (i) the plasma probe and (ii) the description of Bremsstrahlung radiation (the primary radiation from X-ray clusters), a problem of considerable interest in astrophysics. In another line of development, using supernova data and the gas mass fraction of galaxy clusters, we discuss a redshift variation of the equation-of-state parameter, considering two distinct expansions. An interesting aspect of this work is that the results do not require a prior on the mass parameter, as usually occurs in analyses involving only supernova data. Finally, we obtain a new estimate of the Hubble parameter through a joint analysis involving the Sunyaev-Zel'dovich effect (SZE), X-ray data from galaxy clusters, and baryon acoustic oscillations. We show that the degeneracy of the observational data with respect to the mass parameter is broken when the signature of the baryon acoustic oscillations, as given by the Sloan Digital Sky Survey (SDSS) catalog, is considered. Our analysis, based on the SZE/X-ray data for a sample of 25 galaxy clusters with triaxial morphology, yields a Hubble parameter in good agreement with independent studies from the Hubble Space Telescope project and the recent WMAP estimates.
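As a concrete anchor for the q → 1 limit mentioned above, here is a minimal sketch of an unnormalized Tsallis q-Gaussian; the normalization constant is omitted and the values of beta and q are illustrative.

```python
# Unnormalized Tsallis q-Gaussian for density-contrast fluctuations:
# reduces to exp(-beta x^2), the ordinary Gaussian, as q -> 1.
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential (used here with x <= 0); equals exp(x) when q == 1."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)   # cutoff gives compact support for q < 1
    return base ** (1.0 / (1.0 - q))

def q_gaussian(x, q, beta=1.0):
    return q_exponential(-beta * x**2, q)

x = np.linspace(-4.0, 4.0, 9)
print(q_gaussian(x, q=1.0))    # Gaussian limit
print(q_gaussian(x, q=1.25))   # q > 1: heavier tails than the Gaussian
```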
Abstract:
Within a QCD-based eikonal model with a dynamical infrared gluon mass scale we discuss how the small-x behavior of the gluon distribution function at moderate Q² is directly related to the rise of total hadronic cross sections. In this model the rise of total cross sections is driven by gluon-gluon semihard scattering processes, where the small-x gluon distribution function exhibits the power-law behavior xg(x, Q²) = h(Q²) x^(-ε). Assuming that the Q² scale is proportional to the dynamical gluon mass scale, we show that the values of h(Q²) obtained in this model are compatible with an earlier result based on a specific nonperturbative Pomeron model. We discuss the implications of this picture for the behavior of input valence-like gluon distributions at low resolution scales.
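For orientation, the power-law form above means the gluon density grows steadily as x decreases; a tiny sketch with a placeholder normalization h and a soft-Pomeron-like exponent ε ≈ 0.08 (both illustrative, not the model's fitted values):

```python
# Small-x power-law rise of the gluon distribution, xg(x, Q^2) = h(Q^2) x^(-eps).
# h and eps below are illustrative placeholders, not the fitted model values.
def xg(x, h=1.0, eps=0.08):
    return h * x**(-eps)

for x in (1e-2, 1e-4, 1e-6):
    print(x, xg(x))   # the distribution rises as x -> 0, driving the cross-section rise
```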
Abstract:
Parametric VaR (Value-at-Risk) is widely used due to its simplicity and easy calculation. However, the normality assumption, often used in the estimation of the parametric VaR, does not provide satisfactory estimates for risk exposure. Therefore, this study suggests a method for computing the parametric VaR based on goodness-of-fit tests using the empirical distribution function (EDF) for extreme returns, and compares the feasibility of this method for the banking sector in an emerging market and in a developed one. The paper also discusses possible theoretical contributions in related fields like enterprise risk management (ERM). © 2013 Elsevier Ltd.
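To ground the procedure, a minimal sketch under stated assumptions: candidate tail distributions are fitted to the extreme returns, ranked with a Kolmogorov-Smirnov statistic (an EDF-based goodness-of-fit test), and the parametric VaR is read off the best-fitting law. The candidate set, the 95% tail threshold, the 99% level, and the synthetic return series are all illustrative; note the quantile shown is that of the fitted tail law, not a full unconditional VaR.

```python
# EDF-based selection of a parametric tail law for VaR. Candidate set,
# thresholds, and data are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
returns = stats.t.rvs(df=4, size=2500, random_state=rng)   # stand-in for bank returns
losses = -returns
tail = losses[losses > np.quantile(losses, 0.95)]          # extreme losses only

candidates = {"genpareto": stats.genpareto, "gamma": stats.gamma,
              "lognorm": stats.lognorm, "weibull_min": stats.weibull_min}

best_name, best_dist, best_ks = None, None, np.inf
for name, dist in candidates.items():
    params = dist.fit(tail)
    ks = stats.kstest(tail, name, args=params).statistic   # EDF-based GoF statistic
    if ks < best_ks:
        best_name, best_dist, best_ks = name, dist(*params), ks

var_99 = best_dist.ppf(0.99)   # 99% quantile of the best-fitting tail law
print(best_name, round(best_ks, 4), round(var_99, 3))
```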
Abstract:
This paper proposes a new multi-objective estimation of distribution algorithm (EDA) based on joint modeling of objectives and variables. This EDA uses a multi-dimensional Bayesian network as its probabilistic model. In this way it can capture the dependencies between objectives and between variables and objectives, as well as the dependencies between variables learnt in other Bayesian network-based EDAs. This model leads to a problem decomposition that helps the proposed algorithm find better trade-off solutions to the multi-objective problem. In addition to approximating the Pareto set, the algorithm is also able to estimate the structure of the multi-objective problem. To apply the algorithm to many-objective problems, it includes four different ranking methods proposed in the literature for this purpose. The algorithm is applied to the set of walking fish group (WFG) problems, and its optimization performance is compared with an evolutionary algorithm and another multi-objective EDA. The experimental results show that the proposed algorithm performs significantly better on many of the problems across different objective-space dimensions, and achieves results comparable to the other algorithms on the rest.
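The joint-modeling idea can be sketched compactly. The paper's probabilistic model is a multi-dimensional Bayesian network; the stand-in below uses a single joint Gaussian over the concatenated (variables, objectives) vector and a toy bi-objective problem instead of the WFG suite, so it illustrates only the mechanics of modeling objectives and variables together and sampling new candidates conditioned on attractive objective values.

```python
# Deliberately simplified sketch of joint (variables, objectives) modeling
# in a multi-objective EDA. Joint Gaussian and toy problem are stand-ins.
import numpy as np

rng = np.random.default_rng(2)
n_var = 5

def objectives(x):                                   # toy bi-objective problem
    return np.stack([np.sum(x**2, axis=1), np.sum((x - 1.0)**2, axis=1)], axis=1)

def nondominated(f):
    keep = np.ones(len(f), dtype=bool)
    for i in range(len(f)):
        dominates_i = np.all(f <= f[i], axis=1) & np.any(f < f[i], axis=1)
        keep[i] = not dominates_i.any()
    return keep

x = rng.uniform(-2.0, 2.0, size=(200, n_var))
for _ in range(30):
    f = objectives(x)
    sel = nondominated(f)
    if sel.sum() < n_var + 3:                        # too few points to fit the model
        sel = np.argsort(f.sum(axis=1))[:50]         # fallback: scalarized top quarter
    joint = np.hstack([x, f])[sel]
    mu = joint.mean(axis=0)
    cov = np.cov(joint, rowvar=False) + 1e-6 * np.eye(n_var + 2)
    # sample new variables conditioned on better-than-average objective values
    gain = cov[:n_var, n_var:] @ np.linalg.inv(cov[n_var:, n_var:])
    mu_c = mu[:n_var] + gain @ (0.9 * mu[n_var:] - mu[n_var:])
    cov_c = cov[:n_var, :n_var] - gain @ cov[:n_var, n_var:].T
    cov_c = 0.5 * (cov_c + cov_c.T)                  # re-symmetrize numerically
    x = rng.multivariate_normal(mu_c, cov_c, size=200, check_valid="ignore")
print(objectives(x).mean(axis=0))                    # both objectives driven down
```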
Abstract:
Greenhouse gas emission reduction is the pillar of the Kyoto Protocol and one of the main goals of European Union (EU) energy policy. National reduction targets for EU member states and an overall target for the EU-15 (8%) were set by the Kyoto Protocol. This reduction target is based on emissions in the reference year (1990) and must be reached by 2012. EU energy policy does not set any national targets, only an overall reduction target of 20% by 2020. This paper transfers the global greenhouse gas emission reduction targets in both these documents to the transport sector and specifically to CO2 emissions. It proposes a nonlinear distribution method with objective, dynamic targets for reducing CO2 emissions in the transport sector, according to the context and characteristics of each geographical area. First, we analyse CO2 emissions from transport in the reference year (1990) and their evolution from 1990 to 2007. We then propose a nonlinear methodology for distributing dynamic CO2 emission reduction targets. We have applied the proposed distribution function for 2012 and 2020 at two territorial levels (EU member states and Spanish autonomous regions). The weighted distribution is based on per capita CO2 emissions and CO2 emissions per gross domestic product. Finally, we show the weighted targets found for each EU member state and each Spanish autonomous region, compare them with the real achievements to date, and forecast the situation for the years the Kyoto and EU goals are to be met. The results underline the need for 'weighted' decentralised decisions to be made at different territorial levels with a view to achieving a common goal, so that relative convergence of all the geographical areas is reached over time. Copyright © 2011 John Wiley & Sons, Ltd.
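A minimal sketch of the weighted-distribution idea under stated assumptions: a common reduction target is shared out so that areas with higher per-capita emissions and higher emissions per unit of GDP take on larger cuts. The multiplicative combination of the two indicators and the sample figures are illustrative, not the paper's actual nonlinear form or data.

```python
# Share an overall transport-CO2 cut across areas, weighting by per-capita
# emissions and emissions per GDP. Weighting form and data are illustrative.
import numpy as np

def weighted_targets(co2, pop, gdp, overall_cut):
    """Allocate an overall CO2 cut (same units as co2) across areas."""
    per_capita = co2 / pop
    per_gdp = co2 / gdp
    # normalize each indicator to its mean, then combine multiplicatively
    w = (per_capita / per_capita.mean()) * (per_gdp / per_gdp.mean())
    share = co2 * w / np.sum(co2 * w)   # heavier emitters cut proportionally more
    return overall_cut * share

co2 = np.array([120.0, 80.0, 40.0])     # Mt CO2 from transport (illustrative)
pop = np.array([60.0, 45.0, 10.0])      # million inhabitants
gdp = np.array([2.1, 1.5, 0.3])         # trillion EUR
print(weighted_targets(co2, pop, gdp, overall_cut=0.2 * co2.sum()))
```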
Abstract:
A reduced set of measurement geometries allows the spectral reflectance of special effect coatings to be predicted for any other geometry. A physical model based on flake-related parameters has been used to determine nonredundant measurement geometries for the complete description of the spectral bidirectional reflectance distribution function (BRDF). The analysis of experimental spectral BRDF was carried out by means of principal component analysis. From this analysis, a set of nine measurement geometries was proposed to characterize special effect coatings. It was shown that, for two different special effect coatings, these geometries provide a good prediction of their complete color shift.
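A sketch of the analysis route, assuming the measured spectral BRDF is stacked as a geometries-by-wavelengths table before principal component analysis; the data below are random stand-ins, and keeping nine components simply mirrors the nine measurement geometries proposed in the paper.

```python
# PCA on a stacked spectral-BRDF table: each row is one measurement
# geometry, each column one wavelength. Data are random stand-ins.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_geometries, n_wavelengths = 100, 31
brdf = rng.gamma(2.0, 1.0, size=(n_geometries, n_wavelengths))  # placeholder data

pca = PCA(n_components=9)
scores = pca.fit_transform(brdf)                 # one 9-vector per geometry
print(np.cumsum(pca.explained_variance_ratio_))  # variance captured by 9 components
recon = pca.inverse_transform(scores)            # spectra rebuilt from 9 components
print(np.abs(recon - brdf).max())                # reconstruction error
```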
Abstract:
Blue whiting (Micromesistius poutassou, http://www.marinespecies.org/aphia.php?p=taxdetails&id=126439) is a small mesopelagic planktivorous gadoid found throughout the North-East Atlantic. This dataset contains the results of a model-based analysis of larvae captured by the Continuous Plankton Recorder (CPR) during the period 1951-2005. The observations are analysed using Generalised Additive Models (GAMs) of the spatial, seasonal and interannual variation in the occurrence of larvae. The best-fitting model is chosen using the Akaike Information Criterion (AIC). The probability of occurrence in the Continuous Plankton Recorder is then normalised and converted to a probability distribution function in space (UTM projection Zone 28) and season (day of year). The best-fitting model splits the distribution into two separate spawning grounds north and south of a dividing line at 53 N. The probability distribution is therefore normalised in these two regions separately (i.e. the space-time integral over each of the two regions is 1). The modelled outputs are on a UTM Zone 28 grid; however, for convenience, the latitude ("lat") and longitude ("lon") of each of these grid points are also included as variables in the NetCDF file. The assignment of each grid point to either the Northern or Southern component (defined here as north/south of 53 N) is also included as a further variable ("component"). Finally, the day of year ("doy") is stored as the number of days elapsed from and including January 1 (i.e. doy = 1 on January 1); the year is divided into 180 grid points.
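A minimal sketch of that two-region normalization step, with an illustrative grid and stand-in probabilities; the real outputs are on a UTM Zone 28 grid, whereas the sketch assumes equal-area cells for simplicity.

```python
# Normalize a probability field separately over the northern and southern
# components (split at 53 N) so each integrates to 1 over space and season.
# Grid, cell areas, and field values are illustrative.
import numpy as np

rng = np.random.default_rng(4)
lat = np.linspace(44, 62, 40)            # grid latitudes
doy = np.arange(1, 181)                  # 180 seasonal grid points
p = rng.random((lat.size, doy.size))     # stand-in occurrence probabilities

cell_area = 1.0                          # equal-area cells assumed for the sketch
north = lat >= 53.0
for mask in (north, ~north):
    total = p[mask].sum() * cell_area    # space-time integral of this component
    p[mask] /= total                     # component now integrates to 1
print(p[north].sum(), p[~north].sum())   # both print 1.0
```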
Abstract:
A new approach based on the nonlocal density functional theory to determine the pore size distribution (PSD) of activated carbons and the energetic heterogeneity of the pore walls is proposed. The energetic heterogeneity is modeled with an energy distribution function (EDF) describing the distribution of the solid-fluid potential well depth (this distribution is a Dirac delta function for an energetically homogeneous surface). The approach allows the simultaneous determination of the PSD (assuming slit-shaped pores) and the EDF from nitrogen or argon isotherms at their respective boiling points, by using a set of local isotherms calculated for a range of pore widths and solid-fluid potential well depths. It is found that the structure of the pore wall surface differs significantly from that of graphitized carbon black. This could be attributed to defects in the crystalline structure of the surface, active oxide centers, the finite size of the pore walls (in either wall thickness or pore length), and so forth. Those factors depend on the precursor and on the carbonization and activation process, and hence provide a fingerprint for each adsorbent. The approach allows very accurate correlation of the experimental adsorption isotherm and leads to PSDs that are simpler and more realistic than those obtained with the original nonlocal density functional theory.
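The simultaneous determination can be sketched as a discretized inversion: the measured isotherm is a non-negative combination of local isotherms indexed by (pore width, well depth) pairs, and the PSD and EDF are the marginals of the fitted weights. The Langmuir-like kernel below is a toy stand-in for the NLDFT local isotherms, and the grids and data are illustrative.

```python
# Joint PSD/EDF recovery by non-negative least squares over a kernel of
# local isotherms. The kernel form and all grids are illustrative stand-ins.
import numpy as np
from scipy.optimize import nnls

p = np.logspace(-5, 0, 60)                 # relative pressures
widths = np.linspace(0.4, 4.0, 15)         # pore widths, nm
depths = np.linspace(8.0, 16.0, 6)         # well depths, kJ/mol (assumed grid)

def local_isotherm(p, w, eps):
    """Toy stand-in for an NLDFT local isotherm in a slit pore."""
    k = np.exp(eps / 4.0) / w              # deeper wells / narrower pores fill first
    return k * p / (1.0 + k * p)

# kernel matrix: one column per (width, depth) pair
K = np.column_stack([local_isotherm(p, w, e) for w in widths for e in depths])
true_x = np.zeros(K.shape[1])
true_x[17], true_x[40] = 0.7, 0.3          # synthetic "true" pore structure
isotherm = K @ true_x + 1e-3 * np.random.default_rng(5).standard_normal(p.size)

x, residual = nnls(K, isotherm)            # joint weights over (width, depth)
psd = x.reshape(widths.size, depths.size).sum(axis=1)  # marginal over depths -> PSD
edf = x.reshape(widths.size, depths.size).sum(axis=0)  # marginal over widths -> EDF
print(residual, psd.round(3), edf.round(3))
```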
Using patients' and rheumatologists' opinions to specify a short form of the WOMAC function subscale
Abstract:
Background: The WOMAC (Western Ontario and McMaster Universities) function subscale is widely used in clinical trials of hip and knee osteoarthritis. Reducing the number of items of the subscale would enhance efficiency and compliance, particularly for use in clinical practice applications. Objective: To develop a short form of the WOMAC function subscale based on patients' and experts' opinions (WOMAC function short form). Methods: WOMAC function subscale data (Likert version) were obtained from 1218 outpatients with painful hip or knee osteoarthritis. These patients and their rheumatologists selected the five items that they considered most in need of improvement. The rheumatologists were asked to select the five items for which patients in general are the most impaired. Items that were least important to patients and experts, those with a high proportion of missing data, and those with a response distribution showing a floor or ceiling effect were excluded, along with one of each pair of items with a correlation coefficient >0.75. Results: The WOMAC function short form included items 1, 2, 3, 6, 7, 8, 9, and 15 of the long form. The short form did not differ substantially from the long form in responsiveness (standardised response mean of 0.84 v 0.80). Conclusions: A short form of the WOMAC function subscale was developed according to the views of patients and rheumatologists, based on the responses of 1218 patients and 399 rheumatologists. The clinical relevance and applicability of this WOMAC function short form require further evaluation.
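The statistical exclusion rules can be sketched directly on a response matrix (rows = patients, columns = the 17 WOMAC function items). The synthetic responses and the missingness and floor/ceiling thresholds below are illustrative assumptions; only the >0.75 correlation cutoff comes from the abstract.

```python
# Apply item-exclusion rules: high missingness, floor/ceiling response
# distributions, and one of each highly correlated pair. Data and most
# thresholds are illustrative; the 0.75 cutoff is from the paper.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
items = [f"item_{i}" for i in range(1, 18)]
df = pd.DataFrame(rng.integers(0, 5, size=(300, 17)).astype(float), columns=items)
df[rng.random((300, 17)) < 0.02] = np.nan                      # sprinkle missing data

drop = set()
missing = df.isna().mean()
drop |= set(missing[missing > 0.05].index)                     # high missingness
modal_share = df.apply(lambda s: s.value_counts(normalize=True).iloc[0])
drop |= set(modal_share[modal_share > 0.80].index)             # floor/ceiling response
corr = df.corr().abs()
for i, a in enumerate(items):
    for b in items[i + 1:]:
        if corr.loc[a, b] > 0.75 and b not in drop:            # keep one of each pair
            drop.add(b)
print(sorted(set(items) - drop))                               # surviving items
```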
Abstract:
Evolutionary algorithms perform optimization using a population of sample solution points. An interesting development has been to view population-based optimization as the process of evolving an explicit, probabilistic model of the search space. This paper investigates a formal basis for continuous, population-based optimization in terms of a stochastic gradient descent on the Kullback-Leibler divergence between the model probability density and the objective function, represented as an unknown density of assumed form. This leads to an update rule that is related to and compared with previous theoretical work, a continuous version of the population-based incremental learning algorithm, and the generalized mean-shift clustering framework. Experimental results are presented that demonstrate the dynamics of the new algorithm on a set of simple test problems.
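A minimal sketch of one instantiation of this idea, assuming a fixed-covariance Gaussian model and a toy objective treated as an unnormalized target density: samples drawn from the model are importance-weighted by the objective, and the model mean follows the resulting stochastic estimate of the KL gradient. The paper's full treatment also derives updates for other model parameters; the objective, step size, and fixed variance here are illustrative.

```python
# Stochastic gradient descent on KL(target || model) for a Gaussian model
# mean. Objective, step size, and fixed unit variance are illustrative.
import numpy as np

rng = np.random.default_rng(7)

def objective(x):                        # toy target, maximal at (3, 3)
    return np.exp(-np.sum((x - 3.0)**2, axis=1))

mu, sigma, eta = np.zeros(2), 1.0, 0.5
for _ in range(200):
    x = mu + sigma * rng.standard_normal((50, 2))          # sample from the model
    logp = -np.sum((x - mu)**2, axis=1) / (2 * sigma**2)   # model log-density (up to const)
    w = objective(x) / np.exp(logp)                        # importance weights f/p
    w = w / w.sum()                                        # self-normalized
    # gradient step on KL wrt mu: move toward the weighted mean displacement
    mu += eta * (w @ (x - mu))
print(mu)                                # approaches the optimum at (3, 3)
```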