978 resultados para STARS FUNDAMENTAL PARAMETERS


Relevância:

80.00%

Publicador:

Resumo:

Redshift Space Distortions (RSD) are an apparent anisotropy in the distribution of galaxies due to their peculiar motion. These features are imprinted in the correlation function of galaxies, which describes how these structures are distributed around each other. RSD can be represented by a distortion parameter $\beta$, which is strictly related to the growth of cosmic structures. For this reason, measurements of RSD can be exploited to place constraints on cosmological parameters such as, for example, the neutrino mass. Neutrinos are neutral subatomic particles that come in three flavours: the electron, the muon and the tau neutrino. Their mass differences can be measured in oscillation experiments. Information on the absolute scale of the neutrino mass can come from cosmology, since neutrinos leave a characteristic imprint on the large-scale structure of the universe. The aim of this thesis is to provide constraints on the accuracy with which the neutrino mass can be estimated when exploiting measurements of RSD. In particular, we want to describe how the error on the neutrino mass estimate depends on three fundamental parameters of a galaxy redshift survey: the density of the catalogue, the bias of the sample considered and the volume observed. In doing this we make use of the BASICC simulation, from which we extract a series of dark matter halo catalogues characterized by different values of bias, density and volume. These mock data are analysed via a Markov Chain Monte Carlo procedure in order to estimate the neutrino mass fraction, using the software package CosmoMC, which has been suitably modified. In this way we are able to extract a fitting formula describing our measurements, which can be used to forecast the precision reachable with this kind of observation in future surveys such as Euclid.
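For orientation, the linear-theory relation that connects the distortion parameter to the growth of structure, as commonly assumed in analyses of this kind (the abstract does not spell it out), reads

$\beta = f(z)/b$, with $f(z) \equiv d\ln D/d\ln a \simeq \Omega_m(z)^{0.545}$,

where $b$ is the linear bias of the tracer and $D$ the linear growth factor. Since massive neutrinos suppress the growth rate, a measurement of $\beta$ for a sample of known bias translates into a constraint on the neutrino mass fraction.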

Relevância:

80.00%

Publicador:

Resumo:

Background: Although evolutionary models of cooperation build on the intuition that costs to the donor and benefits to the receiver are the most general fundamental parameters, it is largely unknown how they affect the decision of animals to cooperate with an unrelated social partner. Here we test experimentally whether costs to the donor and the need of the receiver determine the amount of help provided by unrelated rats in an iterated prisoner's dilemma game. Results: Fourteen unrelated Norway rats were alternately presented with a cooperating or defecting partner for whom they could provide food via a mechanical apparatus. The direct costs of this task and the need of the receiver were manipulated in two separate experiments. Rats provided more food to cooperative partners than to defectors (direct reciprocity). The propensity to discriminate between helpful and non-helpful social partners was contingent on costs: an experimental increase, in steps of one newton, of the resistance to pull food for the social partner reduced the help provided to defectors more strongly than the help returned to cooperators. Furthermore, test rats provided more help to hungry receivers that were light or in poor condition, which might suggest empathy, whereas this relationship was inverted when experimental partners were satiated. Conclusions: In a prisoner's dilemma situation rats seem to take account of their own costs and the potential benefits to a receiver when deciding whether to help a social partner, which confirms the predictions of reciprocal cooperation. Thus, factors that had been believed to be largely confined to human social behaviour apparently influence the behaviour of other social animals as well, despite widespread scepticism. Therefore our results shed new light on the biological basis of reciprocity.

Relevância:

80.00%

Publicador:

Resumo:

The emissions, filtration and oxidation characteristics of a diesel oxidation catalyst (DOC) and a catalyzed particulate filter (CPF) in a Johnson Matthey catalyzed continuously regenerating trap (CCRT®) were studied using computational models. Experimental data needed to calibrate the models were obtained from characterization experiments with raw exhaust sampling on a Cummins ISM 2002 engine with variable geometry turbocharging (VGT) and programmed exhaust gas recirculation (EGR). The experiments were performed at 20, 40, 60 and 75% of full load (1120 Nm) at rated speed (2100 rpm), with and without the DOC upstream of the CPF, in order to study the effect of temperature and of CPF-inlet NO2 concentrations on particulate matter oxidation in the CCRT®. A previously developed computational model was used to determine the kinetic parameters describing the oxidation of HCs, CO and NO in the DOC and the pressure drop across it. The model was calibrated at five temperatures in the range of 280–465 °C and exhaust volumetric flow rates of 0.447–0.843 act-m3/sec. The downstream HC, CO and NO concentrations were predicted by the DOC model to within ±3 ppm. The HC and CO oxidation kinetics in this temperature and flow rate range can be represented by one 'apparent' activation energy and pre-exponential factor, whereas the NO oxidation kinetics require 'apparent' activation energies and pre-exponential factors in two regimes. The DOC pressure drop was always predicted within 0.5 kPa by the model. The MTU 1-D 2-layer CPF model was enhanced in several ways to better model the performance of the CCRT®. A model was developed to simulate the oxidation of particulate inside the filter wall. A particulate cake layer filtration model, which describes particle filtration in terms of more fundamental parameters, was developed and coupled to the wall oxidation model. To better model the particulate oxidation kinetics, a model accounting for the NO2 produced in the washcoat of the CPF was developed. The overall 1-D 2-layer model can be used to predict the pressure drop of the exhaust gas across the filter, the evolution of particulate mass inside the filter, the particulate mass oxidized, the filtration efficiency and the particle number distribution downstream of the CPF. The model was used to better understand the internal performance of the CCRT® by determining the components of the total pressure drop across the filter, by classifying the total particulate matter into layer I, layer II and the filter wall, and by identifying the means of oxidation, i.e. by O2, by NO2 entering the filter and by NO2 produced in the filter. The CPF model was calibrated at four temperatures in the range of 280–465 °C and exhaust volumetric flow rates of 0.447–0.843 act-m3/sec, in CPF-only and CCRT® (DOC+CPF) configurations. The clean filter wall permeability was determined to be 2.00E-13 m2, which is in agreement with values in the literature for cordierite filters. The particulate packing density in the filter wall had values between 2.92 kg/m3 and 3.95 kg/m3 for all the loads. The mean pore size of the catalyst-loaded filter wall was found to be 11.0 µm. The particulate cake packing densities and permeabilities ranged from 131 kg/m3 to 134 kg/m3 and from 0.42E-14 m2 to 2.00E-14 m2, respectively, and are in agreement with the Peclet number correlations in the literature. Particulate cake layer porosities determined from the particulate cake layer filtration model ranged between 0.841 and 0.814 and decreased with load, which is about 0.1 lower than values from experiments and from more complex discrete-particle simulations in the literature. The thickness of layer I was kept constant at 20 µm. The model kinetics in the CPF-only and CCRT® configurations showed that no 'catalyst effect' with O2 was present. The kinetic parameters for the NO2-assisted oxidation of particulate in the CPF were determined from the simulation of transient temperature-programmed oxidation data in the literature. It was determined that the thermal and NO2 kinetic parameters do not change with temperature, exhaust flow rate or NO2 concentration; however, different kinetic parameters are used for particulate oxidation in the wall and on the wall. Model results showed that oxidation of particulate in the pores of the filter wall can cause disproportionate decreases in the filter pressure drop with respect to particulate mass. The wall oxidation model, together with the particulate cake filtration model, was developed to model the sudden and rapid decreases in pressure drop across the CPF. The combined particulate cake and wall filtration models result in higher particulate filtration efficiencies than the wall filtration model alone, with overall filtration efficiencies of 98–99% being predicted by the model. The pre-exponential factors for oxidation by NO2 did not change with temperature or NO2 concentration because of the NO2 wall production model. In both CPF-only and CCRT® configurations, the model showed NO2 and layer I to be the dominant means and the dominant physical location of particulate oxidation, respectively. However, at temperatures of 280 °C, NO2 is not a significant oxidizer of particulate matter, which is in agreement with studies in the literature. The model showed that 8.6 and 81.6% of the CPF-inlet particulate matter was oxidized after 5 hours at 20 and 75% load in the CCRT® configuration. In the CPF-only configuration at the same loads, the model showed that after 5 hours, 4.4 and 64.8% of the inlet particulate matter was oxidized. The increase in NO2 concentration across the DOC contributes significantly to the oxidation of particulate in the CPF and is supplemented by the oxidation of NO to NO2 by the catalyst in the CPF, which increases the particulate oxidation rates. From the model, it was determined that the catalyst in the CPF modestly increases the particulate oxidation rates, by 4.5–8.3% in the CCRT® configuration. Hence, the catalyst loading in the CPF of the CCRT® could possibly be reduced without significantly decreasing particulate oxidation rates, leading to catalyst cost savings and better engine performance due to lower exhaust backpressure.
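As a minimal illustration of what the 'apparent' activation energies and pre-exponential factors above represent, an Arrhenius-type rate constant k = A·exp(-Ea/RT) can be evaluated as in the Python sketch below; the numerical values are placeholders for illustration, not the calibrated DOC/CPF kinetics of this work.

    import math

    R = 8.314  # universal gas constant, J/(mol K)

    def arrhenius_rate(A, Ea, T):
        """Arrhenius rate constant k = A * exp(-Ea / (R * T)).

        A  : pre-exponential factor (units depend on the rate expression)
        Ea : apparent activation energy, J/mol
        T  : temperature, K
        """
        return A * math.exp(-Ea / (R * T))

    # Placeholder values only -- not the calibrated parameters of the study.
    for T_celsius in (280, 350, 465):
        T = T_celsius + 273.15
        print(T_celsius, arrhenius_rate(A=1.0e6, Ea=80.0e3, T=T))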

Relevância:

80.00%

Publicador:

Resumo:

The response of high-speed bridges at resonance, particularly under flexural vibrations, is currently a subject of research for many scientists and engineers. The topic is of great interest because such behaviour is not unlikely to occur, due to the elevated operating speeds of modern trains, which in many cases equal or even exceed 300 km/h ([1,2]). The present paper addresses the evolution of the wheel-rail contact forces during resonance situations in simply supported bridges. Based on a dimensionless formulation of the equations of motion presented in [4], very similar to the one introduced by Klasztorny and Langer in [3], a parametric study is conducted and the contact forces in realistic situations are analysed in detail. The effects of rail and wheel irregularities are not included in the model. The bridge is idealised as an Euler-Bernoulli beam, while the train is simulated by a system consisting of rigid bodies, springs and dampers. The situations in which a severe reduction of the contact force could take place are identified and compared with typical situations in actual bridges. To this end, the simply supported bridge is excited at resonance by means of a theoretical train consisting of 15 equidistant axles. The mechanical characteristics of all axles (unsprung mass, semi-sprung mass and primary suspension system) are identical. This theoretical train permits the identification of the key parameters influencing the wheel-rail contact forces. In addition, a real case of a 17.5 m bridge traversed by the Eurostar train is analysed and checked against the theoretical results. The influence of three fundamental parameters is investigated in detail: a) the ratio of the fundamental frequency of the bridge to the natural frequency of the primary suspension of the vehicle; b) the ratio of the total mass of the bridge to the semi-sprung mass of the vehicle; and c) the ratio between the length of the bridge and the characteristic distance between consecutive axles. The main conclusions derived from the investigation are the following. The wheel-rail contact forces undergo oscillations during the passage of the axles over the bridge; during resonance, these oscillations are more severe for the rear wheels than for the front ones. If L denotes the span of a simply supported bridge and d the characteristic distance between consecutive groups of loads, the lower the value of L/d, the greater the oscillations of the contact forces at resonance; above a certain value of L/d, no likelihood of loss of wheel-rail contact has been detected. The ratio between the frequency of the primary suspension of the vehicle and the fundamental frequency of the bridge is referred to as the frequency ratio, and the ratio of the semi-sprung mass of the vehicle (mass of the bogie) to the total mass of the bridge as the mass ratio. For any given frequency ratio, the greater the mass ratio, the greater the oscillations of the contact forces at resonance. The oscillations of the contact forces at resonance, and therefore the likelihood of loss of wheel-rail contact, present a minimum for frequency ratios approximately between 0.5 and 1; for lower or higher values of the frequency ratio the oscillations of the contact forces increase. Neglecting the possible effects of torsional vibrations, metal or composite bridges with a low linear mass have been found to be the ones where the contact forces may suffer the most severe oscillations. If single-track, simply supported, composite or metal bridges were used in high-speed lines, and damping ratios below 1% were expected, the minimum contact forces at resonance could drop to dangerous values. Nevertheless, this kind of structure is very unusual in modern high-speed railway lines.
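Because the inline symbols were lost from the text above, the three dimensionless parameters can be restated compactly; the symbol names below are chosen here for illustration and are not necessarily those of the paper:

$\eta = f_{\rm susp}/f_{\rm bridge}$ (frequency ratio), $\mu = m_{\rm bogie}/M_{\rm bridge}$ (mass ratio), $L/d$ (ratio of span to characteristic axle spacing).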

Relevância:

80.00%

Publicador:

Resumo:

This paper deals with the assessment of the contribution of the second flexural mode to the dynamic behaviour of simply supported railway bridges. Alluding to the works of other authors, some references suggest that the dynamic behaviour of simply supported bridges could be adequately represented by taking into account only the contribution of the fundamental flexural mode. On the other hand, the European Rail Research Institute (ERRI) proposes that the second mode should also be included whenever the associated natural frequency is lower than 30 Hz. This investigation endeavours to clarify the question as far as possible by establishing whether the maximum response of the bridge, in terms of displacements, accelerations and bending moments, can be computed accurately without taking account of the contribution of the second mode. To this end, a dimensionless formulation of the equations of motion of a simply supported beam traversed by a series of equally spaced moving loads is presented. This formulation brings to light the fundamental parameters governing the behaviour of the beam: the damping ratio, the dimensionless speed $\alpha = VT/L$, and the L/d ratio (L stands for the span of the beam, V for the speed of the train, T represents the fundamental period of the bridge and d symbolises the distance between consecutive loads). Assuming a damping ratio equal to 1%, which is a usual value for prestressed high-speed bridges, a parametric analysis is conducted over realistic ranges of values of $\alpha$ and L/d. The results can be extended to any simply supported bridge subjected to a train of equally spaced loads by virtue of the so-called Similarity Formulae, whose validity can be derived from the dimensionless formulation mentioned above. In the parametric analysis the maximum response of the bridge is obtained for one thousand values of speed that cover the range from the fourth resonance of the first mode to the first resonance of the second mode. The response at twenty-one different locations along the span of the beam is compared in order to decide whether the maximum can be accurately computed with the sole contribution of the fundamental mode.
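For context, resonance of mode n occurs when the natural frequency f_n coincides with an integer multiple of the load-passing frequency V/d, i.e. at speeds V = f_n·d/i. The short Python sketch below lists these resonant speeds; the frequencies and axle spacing are assumed example values, not those of the paper.

    def resonant_speeds(f_n, d, i_max=4):
        """Resonant speeds V = f_n * d / i for a train of equally spaced loads.

        f_n   : natural frequency of the mode, Hz
        d     : distance between consecutive loads, m
        i_max : highest resonance order considered
        """
        return [f_n * d / i for i in range(1, i_max + 1)]

    # Assumed example values (illustration only); for a simply supported beam f2 = 4*f1.
    f1, f2, d = 4.0, 16.0, 18.7
    print("1st-mode resonant speeds (m/s):", resonant_speeds(f1, d))
    print("2nd-mode resonant speeds (m/s):", resonant_speeds(f2, d))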

Relevância:

80.00%

Publicador:

Resumo:

This final-year project describes and analyzes a comprehensive study of the effect of the vibrations produced by the surface blasting carried out in the "Tercer Juego de Esclusas" (Third Set of Locks) project executed for the expansion of the Panama Canal. A total of 53 records were collected, the data being generated by the monitoring of 7 seismographs during 10 production blasts carried out in 2010. The vibratory phenomenon is described by two fundamental parameters, the peak particle velocity (PPV) and the dominant frequency, which characterize how damaging it can be with respect to its influence on civil structures; the aim is therefore to characterize and, above all, predict them, which allows their proper control. Accordingly, the study consists of two parts: the first describes the behaviour of the ground by estimating the attenuation law of the peak particle velocity by means of ordinary least-squares regression; the second details a procedure, open to validation, for the prediction of the dominant frequency and of the pseudo-velocity response spectrum (PVRS) based on the theory of Newmark & Hall. The following have been obtained: (i) the attenuation law of the ground for different degrees of reliability, (ii) blast design tools based on the charge-distance relationship, (iii) the demonstration that the PPV values follow a log-normal distribution, (iv) the map of PPV isolines for the study area, (v) a detailed and valid technique for predicting the dominant frequency and the response spectrum, (vi) mathematical formulations of the amplification factors for displacement, velocity and acceleration, and (vii) the map of amplification isolines for the study area. The results obtained provide useful information for use in the design and control of subsequent blasting in the project.
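As a sketch of the least-squares estimation of the attenuation law, assuming the usual square-root scaled-distance form PPV = K·(D/sqrt(Q))^(-alpha) (the exact functional form adopted in the project is not stated above), the fit could be carried out as below; the data are placeholders, not the 53 records of the study.

    import numpy as np

    def fit_attenuation_law(ppv, distance, charge):
        """Fit PPV = K * (D / sqrt(Q))**(-alpha) by ordinary least squares in log space.

        ppv      : measured peak particle velocities, mm/s
        distance : blast-to-seismograph distances, m
        charge   : maximum charge per delay, kg
        Returns (K, alpha).
        """
        scaled_distance = distance / np.sqrt(charge)
        slope, intercept = np.polyfit(np.log(scaled_distance), np.log(ppv), 1)
        return np.exp(intercept), -slope

    # Placeholder records for illustration only.
    ppv = np.array([12.0, 6.5, 3.1, 1.8])
    dist = np.array([100.0, 180.0, 300.0, 450.0])
    q = np.array([50.0, 50.0, 60.0, 60.0])
    K, alpha = fit_attenuation_law(ppv, dist, q)
    print(f"PPV = {K:.1f} * (D/sqrt(Q))^(-{alpha:.2f})")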

Relevância:

80.00%

Publicador:

Resumo:

Fission product yields are fundamental parameters for several nuclear engineering calculations, and in particular for burn-up/activation problems. The impact of their uncertainties was widely studied in the past and evaluations were released, although they are still incomplete. Recently, the nuclear community expressed the need for full fission yield covariance matrices in order to produce inventory calculation results that take into account the complete uncertainty data. In this work, we studied and applied a Bayesian/generalised least-squares method for covariance generation, and compared the generated uncertainties to the original data stored in the JEFF-3.1.2 library. Then, we focused on the effect of fission yield covariance information on fission pulse decay heat results for thermal fission of 235U. Calculations were carried out using different codes (ACAB and ALEPH-2) after introducing the new covariance values. Results were compared with those obtained with the uncertainty data currently provided by the library. The uncertainty quantification was performed with the Monte Carlo sampling technique. Indeed, correlations between fission yields strongly affect the statistics of decay heat. Introduction: Nowadays, any engineering calculation performed in the nuclear field should be accompanied by an uncertainty analysis. In such an analysis, different sources of uncertainties are taken into account. Works such as those performed under the UAM project (Ivanov, et al., 2013) treat nuclear data as a source of uncertainty, in particular cross-section data, for which uncertainties given in the form of covariance matrices are already provided in the major nuclear data libraries. Meanwhile, fission yield uncertainties were often neglected or treated shallowly, because their effects were considered of second order compared to cross-sections (Garcia-Herranz, et al., 2010). However, the Working Party on International Nuclear Data Evaluation Co-operation (WPEC)
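A minimal sketch of the Monte Carlo sampling step mentioned above, assuming the generated covariance matrix is available as a dense array and that correlated yield perturbations are drawn from a multivariate normal distribution; the arrays below are toy placeholders, not JEFF-3.1.2 data, and the interface to ACAB or ALEPH-2 is not shown.

    import numpy as np

    def sample_correlated_yields(mean_yields, covariance, n_samples=1000, seed=0):
        """Draw correlated fission-yield samples from a multivariate normal.

        mean_yields : 1-D array of best-estimate yields
        covariance  : full covariance matrix of the yields
        """
        rng = np.random.default_rng(seed)
        samples = rng.multivariate_normal(mean_yields, covariance, size=n_samples)
        return np.clip(samples, 0.0, None)   # physical yields cannot be negative

    # Toy three-nuclide example (placeholder numbers).
    y = np.array([0.060, 0.032, 0.011])
    cov = 1.0e-6 * np.array([[4.0, 1.0, -0.5],
                             [1.0, 2.0, -0.3],
                             [-0.5, -0.3, 1.0]])
    samples = sample_correlated_yields(y, cov)
    print("sample means:", samples.mean(axis=0))
    print("sample stds :", samples.std(axis=0))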

Relevância:

80.00%

Publicador:

Resumo:

This final-year project starts from SABOR, a program used in the laboratory sessions of the sixth-semester course Antennas and Electromagnetic Compatibility, which is to be updated so that it runs on the newer Windows operating systems (Windows 7 and subsequent versions). The main objective is to design and implement new functionalities, as well as to develop improvements and correct errors in the existing ones. For a better understanding, a tool has been created in the MATLAB environment to analyze one of the most common types of aperture antenna currently in use: horns. This tool is a graphical interface whose inputs are the elementary design variables of the aperture, such as the dimensions of the horn itself or the general parameters common to all horns. In turn, the software generates some of the fundamental output parameters of the antennas: directivity, beamwidth, phase centre and spillover. For the correct development of the software, numerous tests were performed in order to debug and correct errors with respect to the previous version of SABOR. Emphasis has also been placed on the program's usability, so that it is more intuitive and avoids unnecessary complexity. The type of antenna studied is the horn, which consists of a waveguide in which the cross-sectional area increases progressively up to an open end that behaves as an aperture. Horns are used extensively in commercial satellites for global coverage from geostationary orbits, but their most common use is as the radiating element for reflector antennas. The types of horn examined in the tool are: H-plane sectoral, E-plane sectoral, pyramidal, conical (circular), corrugated conical and corrugated pyramidal. The project is written so that it can serve as theoretical and practical documentation for the whole SABOR software. Therefore, in addition to reviewing the theory of the horns analyzed, the document presents the information related to object-oriented programming in the MATLAB environment, whose purpose is to acquire a new way of thinking about the process of decomposing problems and developing programming solutions. Finally, a self-help manual has been created to support the software, and the results of several tests are included in order to show all the details of its operation, as well as the conclusions and future lines of work.
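As a reminder of how the directivity reported by such a tool relates to the horn aperture, the textbook aperture-antenna relation (not necessarily the exact expressions implemented in SABOR) is

$D = \dfrac{4\pi}{\lambda^{2}} A_{\rm eff} = \dfrac{4\pi}{\lambda^{2}} \varepsilon_{\rm ap} A_{\rm phys}$,

where $A_{\rm phys}$ is the physical aperture area and $\varepsilon_{\rm ap}$ the aperture efficiency (of the order of 0.5 for a typical pyramidal horn).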

Relevância:

80.00%

Publicador:

Resumo:

A microtiter-based assay system is described in which DNA hairpin probes with dangling ends and single-stranded, linear DNA probes were immobilized and compared on the basis of their ability to capture single-stranded target DNA. Hairpin probes consisted of a 16 bp duplex stem linked by a T2-biotin·dT-T2 loop. The third base was a biotinylated uracil (UB), necessary for coupling to avidin-coated microtiter wells. The capture region of the hairpin was a 3′ dangling end composed of either 16 or 32 bases. Fundamental parameters of the system, such as probe density and the avidin adsorption capacity of the plates, were characterized. The target DNA consisted of 65 bases whose 3′ end was complementary to the dangling end of the hairpin or to the linear probe sequence. The assay system was employed to measure the time dependence and thermodynamic stability of target hybridization with hairpin and linear probes. Target molecules were either labeled with 5′-FITC or radiolabeled with [γ-33P]ATP and captured by either linear or hairpin probes affixed to the solid support. Over the range of target concentrations from 10 to 640 pmol, hybridization rates increased with increasing target concentration, but varied for the different probes examined. Hairpin probes displayed higher rates of hybridization and larger equilibrium amounts of captured targets than linear probes. At 25 and 45°C, rates of hybridization were more than twice as great for the hairpin as for the linear capture probes. Hairpin-target complexes were also more thermodynamically stable. Binding free energies were evaluated from the observed equilibrium constants for complex formation. Results showed the order of stability of the probes to be: hairpins with 32 base dangling ends > hairpin probes with 16 base dangling ends > 16 base linear probes > 32 base linear probes. The physical characteristics of hairpins could offer substantial advantages as nucleic acid capture moieties in solid-support-based hybridization systems.
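The binding free energies quoted above follow from the observed equilibrium constants through the standard thermodynamic relation (a textbook formula, not specific to this assay):

$\Delta G^{\circ} = -RT \ln K_{\rm eq}$,

with $R$ the gas constant and $T$ the absolute temperature at which hybridization was carried out.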

Relevância:

80.00%

Publicador:

Resumo:

The determination of the three-dimensional layout of galaxies is critical to our understanding of the evolution of galaxies and the structures in which they lie, to our determination of the fundamental parameters of cosmology, and to our understanding of both the past and future histories of the universe at large. The mapping of the large-scale structure in the universe via the determination of galaxy redshifts (Doppler shifts) is a rapidly growing industry, thanks to technological developments in detectors and spectrometers at radio and optical wavelengths. First-order application of the redshift-distance relation (Hubble's law) allows the analysis of the large-scale distribution of galaxies on scales of hundreds of megaparsecs. Locally, the large-scale structure is very complex, but the overall topology is not yet clear. Comparison of the observed redshifts with those expected on the basis of other distance estimates allows mapping of the gravitational field and the underlying total density distribution. The next decade holds great promise for our understanding of the character of large-scale structure and its origin.
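The first-order redshift-distance relation referred to above is, in its usual low-redshift form (a standard relation, not a result of this article):

$v \simeq cz = H_{0} d$, hence $d \simeq cz/H_{0}$ for $z \ll 1$.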

Relevância:

80.00%

Publicador:

Resumo:

Context. VISTA Variables in the Vía Láctea (VVV) is one of six ESO Public Surveys using the 4 meter Visible and Infrared Survey Telescope for Astronomy (VISTA). The VVV survey covers the Milky Way bulge and an adjacent section of the disk, and one of the principal objectives is to search for new star clusters within previously unreachable obscured parts of the Galaxy. Aims. The primary motivation behind this work is to discover and analyze obscured star clusters in the direction of the inner Galactic disk and bulge. Methods. Regions of the inner disk and bulge covered by the VVV survey were visually inspected using composite JHKS color images to select new cluster candidates on the basis of apparent overdensities. DR1, DR2, CASU, and point spread function photometry of 10 × 10 arcmin fields centered on each candidate cluster were used to construct color–magnitude and color–color diagrams. Follow-up spectroscopy of the brightest members of several cluster candidates was obtained in order to clarify their nature. Results. We report the discovery of 58 new infrared cluster candidates. Fundamental parameters such as age, distance, and metallicity were determined for 20 of the most populous clusters.
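For reference, the cluster distances quoted among the fundamental parameters typically follow from the extinction-corrected distance modulus; a generic near-infrared form (not necessarily the exact calibration adopted in the paper) is

$(m - M)_{0} = m_{K_s} - M_{K_s} - A_{K_s} = 5\log_{10} d - 5$, with $d$ in parsecs.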

Relevância:

80.00%

Publicador:

Resumo:

The Free Core Nutation (FCN) is a free mode of the Earth's rotation caused by the different material characteristics of the Earth's core and mantle. This causes the rotational axes of those layers to diverge slightly from each other, resulting in a wobble of the Earth's rotation axis comparable to nutations. In this paper we focus on estimating empirical FCN models using the observed nutations derived from the VLBI sessions between 1993 and 2013. Assuming a fixed value for the oscillation period, the time-variable amplitudes and phases are estimated by means of multiple sliding window analyses. The effects of using different a priori Earth Rotation Parameters (ERP) in the derivation of the models are also addressed. The optimal choice of the fundamental parameters of the model, namely the window width and the step size of its shift, is sought by performing a thorough experimental analysis using real data. These analyses lead to the derivation of a model with a temporal resolution higher than that of the models currently available, with a sliding window reduced to 400 days and a day-by-day shift. It is shown that this new model increases the accuracy of the modeling of the observed Earth's rotation. In addition, according to our computations, empirical models determined using USNO Finals as a priori ERP present a slightly lower Weighted Root Mean Square (WRMS) of residuals than those using IERS 08 C04 over the whole period of VLBI observations. The model is also validated through comparisons with other recognized models, and the level of agreement among them is satisfactory. Let us remark that our estimates give rise to the lowest residuals and seem to reproduce the FCN signal in more detail.
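A minimal sketch of the sliding-window least-squares step described above, assuming the celestial pole offsets are available as a single time series and that the FCN period is held fixed; the period value, window parameters and variable names are ours, for illustration only.

    import numpy as np

    P_FCN = -430.2   # assumed fixed FCN period in solar days (retrograde), illustrative value

    def sliding_fcn_fit(t, dx, window=400.0, step=1.0):
        """Estimate time-variable FCN amplitude and phase by sliding-window least squares.

        t  : observation epochs, days
        dx : celestial pole offset series (e.g. dX residuals), mas
        Returns arrays of window centres, amplitudes and phases.
        """
        w = 2.0 * np.pi / P_FCN
        centres, amps, phases = [], [], []
        start, t_end = t.min(), t.max()
        while start + window <= t_end:
            m = (t >= start) & (t < start + window)
            if m.sum() > 10:   # require enough sessions in the window
                A = np.column_stack([np.cos(w * t[m]), np.sin(w * t[m]), np.ones(m.sum())])
                (a_c, a_s, _), *_ = np.linalg.lstsq(A, dx[m], rcond=None)
                centres.append(start + window / 2.0)
                amps.append(np.hypot(a_c, a_s))
                phases.append(np.arctan2(a_s, a_c))
            start += step
        return np.array(centres), np.array(amps), np.array(phases)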

Relevância:

80.00%

Publicador:

Resumo:

We discuss the construction of a photometric redshift catalogue of luminous red galaxies (LRGs) from the Sloan Digital Sky Survey (SDSS), emphasizing the principal steps necessary for constructing such a catalogue: (i) photometrically selecting the sample, (ii) measuring photometric redshifts and their error distributions, and (iii) estimating the true redshift distribution. We compare two photometric redshift algorithms for these data and find that they give comparable results. Calibrating against the SDSS and SDSS-2dF (Two Degree Field) spectroscopic surveys, we find that the photometric redshift accuracy is σ ≈ 0.03 for redshifts less than 0.55 and worsens at higher redshift (≈ 0.06 for z < 0.7). These errors are caused by photometric scatter, as well as systematic errors in the templates, filter curves and photometric zero-points. We also parametrize the photometric redshift error distribution with a sum of Gaussians and use this model to deconvolve the errors from the measured photometric redshift distribution to estimate the true redshift distribution. We pay special attention to the stability of this deconvolution, regularizing the method with a prior on the smoothness of the true redshift distribution. The methods that we develop are applicable to general photometric redshift surveys.
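A schematic of the convolution relation underlying the deconvolution step described above, assuming the photometric redshift error distribution is modelled as a sum of Gaussians; the mixture parameters and the trial true distribution below are placeholders, not the calibrated values of the catalogue. In practice the trial true distribution would be adjusted, under the smoothness prior, until the predicted photometric redshift distribution matches the observed one.

    import numpy as np

    def gaussian(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

    def error_kernel(dz, weights, means, sigmas):
        """Error distribution p(z_phot - z_true) modelled as a sum of Gaussians."""
        return sum(w * gaussian(dz, m, s) for w, m, s in zip(weights, means, sigmas))

    def predicted_photoz_distribution(z_grid, n_true, weights, means, sigmas):
        """Convolve a trial true redshift distribution with the error kernel."""
        dz = z_grid[:, None] - z_grid[None, :]      # z_phot - z_true on a grid
        kernel = error_kernel(dz, weights, means, sigmas)
        dz_step = z_grid[1] - z_grid[0]
        return kernel @ n_true * dz_step

    # Placeholder two-component mixture and trial N(z) for illustration only.
    z = np.linspace(0.2, 0.8, 301)
    n_true = gaussian(z, 0.5, 0.08)
    n_phot = predicted_photoz_distribution(z, n_true, [0.8, 0.2], [0.0, 0.02], [0.03, 0.06])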

Relevância:

80.00%

Publicador:

Resumo:

We present new measurements of the luminosity function (LF) of luminous red galaxies (LRGs) from the Sloan Digital Sky Survey (SDSS) and the 2dF SDSS LRG and Quasar (2SLAQ) survey. We have carefully quantified, and corrected for, uncertainties in the K and evolutionary corrections, differences in the colour selection methods, and the effects of photometric errors, thus ensuring we are studying the same galaxy population in both surveys. Using a limited subset of 6326 SDSS LRGs (with 0.17 < z < 0.24) and 1725 2SLAQ LRGs (with 0.5 < z < 0.6), for which the matching colour selection is most reliable, we find no evidence for any additional evolution in the LRG LF, over this redshift range, beyond that expected from a simple passive evolution model. This lack of additional evolution is quantified using the comoving luminosity density of SDSS and 2SLAQ LRGs brighter than M_{0.2r} - 5 log h_{0.7} = -22.5, which are 2.51 ± 0.03 × 10^{-7} L_⊙ Mpc^{-3} and 2.44 ± 0.15 × 10^{-7} L_⊙ Mpc^{-3}, respectively (< 10 per cent uncertainty). We compare our LFs to the COMBO-17 data and find excellent agreement over the same redshift range. Together, these surveys show no evidence for additional evolution (beyond passive) in the LF of LRGs brighter than M_{0.2r} - 5 log h_{0.7} = -21 (or brighter than ~L_*). We test our SDSS and 2SLAQ LFs against a simple 'dry merger' model for the evolution of massive red galaxies and find that at least half of the LRGs at z ≃ 0.2 must already have been well assembled (with more than half their stellar mass) by z ≃ 0.6. This limit is barely consistent with recent results from semi-analytical models of galaxy evolution.

Relevância:

80.00%

Publicador:

Resumo:

The Private Finance Initiative (PFI) has become one of the UK's most contentious public policies. Despite New Labour's advocacy of PFI as a means of achieving better value for money, criticisms of PFI have centred on key issues such as a lack of cost effectiveness, exaggerated pricing of risk transfers, excessive private sector profits, inflexibility and cumbersome administrative arrangements. Nevertheless, PFI has persisted as a key infrastructure procurement method in the UK and has been supported as such by successive governments, as well as influencing policy in the Republic of Ireland and other European nations. This paper explores this paradoxical outcome in relation to the role played in the UK by the National Audit Office (NAO). Under pressure to justify its support for PFI, the Blair government sought support for its policies by encouraging the NAO to investigate issues relating to PFI as well as specific PFI projects. It would have been expected that, in fulfilling its role as independent auditor, the NAO would have examined whether PFI projects could have been delivered more efficiently, effectively or economically through other means. Yet, in line with earlier research, we find evidence that the NAO failed to comprehensively assess key issues such as the value for money of PFI projects, and in so doing effectively acted as a legitimator of PFI policy. Using concepts relating to legitimacy theory and the idea of framing, our paper examines 67 NAO private finance reports published between 1997 and 2011, with the goal of identifying the preferences, values and ideology underpinning the NAO's view of PFI during this period. Our analysis suggests that the NAO sought to legitimise existing PFI practices via a selective framing of problems and questions. Utilising a longitudinal approach, our analysis further suggests that this pattern of selective framing persisted over an extended time period during which fundamental parameters of the policy (such as contract length, to name one of the most important issues) were rarely addressed. Overall, the NAO's supportive stance towards PFI seems to have relied on (1) a focus on positive aspects of PFI, such as on-time delivery or lessons learned, and (2) positive comments on aspects of PFI that were criticised elsewhere, such as the lack of flexibility of the underlying contractual arrangements. Our paper highlights the possibility that, rather than providing a critical assessment of existing policies, national auditing bodies can contribute to the creation of legitimatory environments. In terms of accounting research, we would suggest that the objectivity and independence of accounting watchdogs should not be taken for granted, and that instead a critical investigation of the biases which can characterise these bodies can contribute to a deeper understanding of the nature of lobbying networks in the modern state.