999 results for LUMINOSITY


Relevance: 10.00%

Abstract:

This thesis quantifies the chromatic variation that varnishes produce in the colour of different types of construction timber, and derives a mathematical model for predicting the colour of the treated wood. The performance of sixteen supposedly colourless varnishes is analysed, applied to twenty types of timber, both angiosperms and gymnosperms, of different densities and latitudes. Both materials are in frequent use in the construction field and easy to find in the shops and warehouses of both sectors. Chromatic decomposition techniques, using a reflection optical microscope, are employed to obtain a graphic range of histograms with numerical values of luminosity and chromatic composition, confirming that the varnishes sold as colourless are not completely colourless but tend to shift towards one of the basic colours. In the experimental procedure, the 16 varnishes are applied to the 20 types of timber and histograms are obtained from two photographic campaigns carried out five years apart, yielding not only the colour variation that each varnish produces on the original wood but also the influence of five years of ageing. The thesis identifies the most suitable varnish for each type of timber, namely the one producing the smallest chromatic variation, and derives a mathematical model that predicts the final colour of the treated timber as a function of the initial colour of the unvarnished wood. Finally, it recommends which products to use on each type of timber on the basis of its initial colour.
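
The abstract does not state the form of the prediction model; as a sketch of the idea, one could fit a per-channel affine map from unvarnished to varnished colour by least squares. The variable names, placeholder measurements, and the linear form below are assumptions, not the thesis's actual model:

```python
import numpy as np

# Mean colour of each wood sample before varnishing, e.g.
# (luminosity, R, G, B) histogram means -- placeholder values.
initial = np.array([
    [72.0, 180.0, 150.0, 110.0],
    [55.0, 140.0, 100.0,  70.0],
    [81.0, 200.0, 175.0, 140.0],
])

# Colour of the same samples after applying one varnish.
final = np.array([
    [68.0, 185.0, 145.0,  95.0],
    [50.0, 148.0,  97.0,  60.0],
    [77.0, 204.0, 170.0, 128.0],
])

# Fit final = X @ coef per varnish by least squares (with intercept).
X = np.hstack([initial, np.ones((len(initial), 1))])
coef, *_ = np.linalg.lstsq(X, final, rcond=None)

def predict(colour):
    """Predict the varnished colour from an unvarnished one."""
    return np.append(colour, 1.0) @ coef

print(predict(np.array([60.0, 160.0, 120.0, 90.0])))
```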

Relevance: 10.00%

Abstract:

The Large Hadron Collider is the world's largest and most powerful particle accelerator, and one of the largest research projects ever undertaken. Its operation is divided into phases: the first runs from 2009 until 2020, when the second phase, consisting of a series of upgrades, will begin. One of those upgrades is to increase the collision rate, i.e. the luminosity; this is the main objective of one of the most important projects carrying out the upgrades, the Hi-Lumi LHC project. The luminosity increase is to be achieved by replacing the NbTi superconducting magnets at the two main interaction points with magnets made of a new material, Nb3Sn. Before implementing this change, many aspects must be analysed; one of them is the quality of the induced magnetic field. The tool used so far has been ROXIE, software developed at CERN by S. Russenschuck. One of the main features of the programme is its time-transient analysis, which is based on three magnetization models. These are quite precise for fields above 1.5 T, but not very accurate for lower fields. The aim of this project is therefore to evaluate a fourth, more accurate model, the classical Preisach model of hysteresis, in order to better analyse the quality of the field induced in the new Nb3Sn magnets.
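
The classical Preisach model describes hysteresis as a weighted superposition of elementary relay operators, each switching up at a threshold α and down at a threshold β ≤ α. A minimal discrete sketch follows; the uniform weight density and threshold grid are illustrative assumptions, not the parameters that would be fitted for Nb3Sn in ROXIE:

```python
import numpy as np

class PreisachModel:
    """Discrete classical Preisach model: a weighted triangular grid of
    relay hysterons, each switching up at alpha and down at beta <= alpha."""

    def __init__(self, n=50, h_max=1.0):
        grid = np.linspace(-h_max, h_max, n)
        self.alpha, self.beta = np.meshgrid(grid, grid, indexing="ij")
        self.mask = self.alpha >= self.beta          # Preisach half-plane
        self.weight = self.mask / self.mask.sum()    # uniform density (assumed)
        self.state = np.where(self.mask, -1.0, 0.0)  # all relays start down

    def apply(self, h):
        """Switch every relay for applied field h; return magnetisation."""
        self.state = np.where(self.mask & (h >= self.alpha), 1.0, self.state)
        self.state = np.where(self.mask & (h <= self.beta), -1.0, self.state)
        return float((self.weight * self.state).sum())

# Drive the model around a field cycle: the output lags the input,
# tracing a hysteresis loop.
model = PreisachModel()
fields = np.concatenate([np.linspace(0.0, 1.0, 50),
                         np.linspace(1.0, -1.0, 100),
                         np.linspace(-1.0, 1.0, 100)])
loop = [model.apply(h) for h in fields]
```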

Relevance: 10.00%

Abstract:

This project carries out a practical study of two scenarios involving devices related to the Internet of Things; it can also be viewed as a machine-to-machine (M2M) communication solution. M2M communication involves a central system able to connect to other systems in several places, so that the central system can collect data from, or send data to, each remote location for processing. The first scenario consists of configuring and assembling a microcontroller board known as Waspmote, which collects atmospheric variables through a set of sensors and sends the data to a Meshlium multiprotocol router using ZigBee, a technology oriented towards sensor networks. The aim of this assembly is to install a weather station on the university campus and to store and manage its data. In the second scenario, two open-hardware devices, an Arduino with GPRS capability and a Raspberry Pi connected to the wired network, send data such as temperature and luminosity to Xively, a social network of sensors. The devices are managed on this free platform, which allows registering devices, storing and displaying data in real time, and consulting them via the web or through a mobile application developed for this project using the functions offered by Xively. An Android application was designed that lets a user consult data and manage sensors, abstracting away the technical complexity and bringing the connected objects, in this case sensors, closer to the user. The configuration and installation process of all the devices is described in detail, and the concepts needed to understand the communication technologies, ZigBee and HTTP, are explained; HTTP operates at the application level, making requests or sending data while managing capacity and therefore saving resources.
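
To illustrate the HTTP side of such a setup, a device could push its latest readings to a REST feed with a single request; the endpoint URL, feed ID, and API-key header below are hypothetical placeholders rather than Xively's documented API:

```python
import json
import urllib.request

# Hypothetical REST endpoint and credentials -- placeholders only.
FEED_URL = "https://example.com/v2/feeds/123456.json"
API_KEY = "YOUR_API_KEY"

reading = {
    "datastreams": [
        {"id": "temperature", "current_value": "21.5"},
        {"id": "luminosity", "current_value": "830"},
    ]
}

req = urllib.request.Request(
    FEED_URL,
    data=json.dumps(reading).encode("utf-8"),
    headers={"X-ApiKey": API_KEY, "Content-Type": "application/json"},
    method="PUT",  # update the feed with the latest values
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 200 on success
```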

Relevance: 10.00%

Abstract:

This paper presents a communication interface between low-cost supervisory mobile robots and a domestic Wireless Sensor Network (WSN) based on the ZigBee protocol, with devices from different manufacturers. The communication interface allows control of, and communication with, other network devices using the same protocol. The robot can receive information from sensor devices (temperature, humidity, luminosity) and send commands to actuator devices (lights, shutters, thermostats) from different manufacturers. The architecture of the system and the interfaces and devices needed to establish the communication are described in the paper.
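
The abstract gives no implementation details; as a sketch of the kind of routing layer such an interface implies, with hypothetical message fields and a generic radio driver (none of these names come from the paper):

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Message:
    """Hypothetical reading from a ZigBee-style home network device."""
    device_id: str
    kind: str        # "temperature", "humidity", "luminosity", ...
    payload: float

class RobotGateway:
    """Routes incoming sensor readings to handlers and issues
    actuator commands through a radio driver object."""

    def __init__(self, radio):
        self.radio = radio
        self.handlers: Dict[str, Callable[[Message], None]] = {}

    def on(self, kind: str, handler: Callable[[Message], None]) -> None:
        self.handlers[kind] = handler

    def receive(self, msg: Message) -> None:
        if msg.kind in self.handlers:
            self.handlers[msg.kind](msg)

    def command(self, device_id: str, action: str) -> None:
        # e.g. command("living-room-light", "on")
        self.radio.send(device_id, action)
```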

Relevance: 10.00%

Abstract:

The atrium incorporated into buildings is a spatial device that spread globally at an early date, being adopted by local architectures in much of the world. Its widespread use was favoured first by the rapid evolution of steel and glass technology from the nineteenth century onwards, and second by the later development of reinforced concrete. Another reason for its acceptance in contemporary architecture is social: the striking cavity of the space, with its large dimensions, allows a multiplicity of previously unthinkable uses in its interior. Inside the atrium, daylight is key to the many experiences the space hosts and is perhaps the most valued environmental condition, since it conveys a sense of well-being by connecting us visually with the natural environment. For this reason, following the hypothetico-deductive method, the effects of the geometric configuration, the roof, and the orientation on daylighting performance at ground-floor level were evaluated, using a model extracted from the inventory of atrium buildings constructed in Santiago de Chile over the last thirty years, which is developed in Chapter 2. The quantitative analysis of the inventoried buildings, considering the dimensions of the atria, is presented in Chapter 3; the construction aspects, materials, and interior-environment characteristics of each building were classified at the same time. At this stage, the study variables describing the geometric proportions of the atrium cavity were also identified: the plan aspect ratio (PAR), the section aspect ratio (SAR), and the cavity indices known as the well index (WI), aspect ratio (AR), and room index (RI). The test model was extracted from the analysis of all these parameters. Chapter 4 focuses on daylighting: the relevant concepts and the behaviour of light in the atrium were reviewed using a physical scale model built to record illuminance under the sunny sky of the city.
The model was then rebuilt in a virtual environment, relating the variables determined by the geometry of the cavity and the upper enclosure; different transparencies and opening proportions were examined, evaluating, in short, a progressive closing of the openings and verifying light entry and availability at floor level, with the aim of providing useful guidelines for the early stage of architectural design. For the daylighting analysis, different calculation methods were reviewed in order to evaluate illuminance levels on a horizontal plane inside the atrium. The first was the Daylight Factor (DF), the ratio of the illuminance at an interior evaluation point to the simultaneous exterior illuminance under an overcast sky, whose results revealed the high luminosity of the city's overcast sky. Recent dynamic metrics were also evaluated which, based on the city's extensive meteorological records, give the percentage of hours in which a required illuminance standard is met, known as daylight autonomy (DA), or, better still, in which the interior of the atrium stays within a range of visual comfort, the useful daylight illuminance (UDI).
Chapter 5 presents the criteria applied to the study model and each of the analysis variants, together with the background and provenance of the climate records used in the simulations, which were carried out with the Daysim program driven by Radiance. These simulations made it possible to evaluate the daylighting performance and the precision of each result, checking the availability of daylight through a matrix of cases. The results were then discussed by comparing the data obtained with each of the simulation methodologies applied. Finally, the conclusions and future lines of work are set out: the former concern the dominance of the four-sided atrium, the influence of the degree of enclosure of the roof, and its relation to the height of the cavity. More specifically, the illuminance measurements under the sunny summer sky clarified the use of the climate-based simulation tool and method which, given its recent development, points to future work deepening the dynamic evaluation of illuminance, contrasted with monitored case studies.
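
The three metrics are straightforward to compute from simulation output. A minimal sketch, assuming hourly interior illuminances in lux and, for DF, a paired exterior illuminance under overcast sky; the 300-lux DA threshold and 100-3000 lux UDI band are common conventions, not values fixed by the thesis:

```python
def daylight_factor(e_interior, e_exterior):
    """DF (%): interior / exterior illuminance under overcast sky."""
    return 100.0 * e_interior / e_exterior

def daylight_autonomy(hourly_lux, threshold=300.0):
    """DA (%): fraction of hours meeting the illuminance threshold."""
    met = sum(1 for e in hourly_lux if e >= threshold)
    return 100.0 * met / len(hourly_lux)

def useful_daylight_illuminance(hourly_lux, low=100.0, high=3000.0):
    """UDI (%): fraction of hours within the visual-comfort band."""
    useful = sum(1 for e in hourly_lux if low <= e <= high)
    return 100.0 * useful / len(hourly_lux)

# Example with a toy series of hourly simulated illuminances (lux):
hours = [50, 250, 800, 1500, 3500, 2200, 600, 90]
print(daylight_factor(450.0, 9000.0))       # 5.0 %
print(daylight_autonomy(hours))             # 62.5 %
print(useful_daylight_illuminance(hours))   # 62.5 %
```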

Relevance: 10.00%

Abstract:

The physical validity of the hypothesis of (redshift-dependent) luminosity evolution in galaxies is tested by statistical analysis of an intensively studied complete high-redshift sample of normal galaxies. The necessity of the evolution hypothesis in the frame of big-bang cosmology is confirmed at a high level of statistical significance; however, this evolution is quantitatively just as predicted by chronometric cosmology, in which there is no such evolution. Since there is no direct observational means to establish the evolution postulated in big-bang studies of higher-redshift galaxies, and the chronometric predictions involve no adjustable parameters (in contrast to the two in big-bang cosmology), the hypothesized evolution appears from the standpoint of conservative scientific methodology as a possible theoretical artifact.

Relevance: 10.00%

Abstract:

Observers have found a small number of lithium-depleted halo stars in the temperature range of the Spite plateau. The current status of the mass-loss hypothesis for producing the observed lithium dip in Population (Pop) I stars is briefly discussed and extended to Pop II stars as a possible explanation for these halo objects. Based on detections of F-type main-sequence variables, mass loss is assumed to occur in a narrow temperature region corresponding to this “instability strip.” As Pop II main-sequence stars evolve to the blue, they enter this narrow temperature region, then move back through the lower-temperature area of the Spite plateau. If 0.05 M⊙ (solar mass) or more has been lost, they will show lithium depletion. This hypothesis affects the lithium-to-beryllium abundance, the ratio of high- to low-lithium stars, and the luminosity function. Constraints on the mass-loss hypothesis due to these effects are discussed. Finally, mass loss in this temperature range would operate in stars near the turnoff of metal-poor globular clusters, resulting in apparent ages 2 to 3 Gyr (gigayears) older than they actually are.

Relevance: 10.00%

Abstract:

We review two new methods to determine the age of globular clusters (GCs). These two methods are more accurate than the classical isochrone fitting technique. The first method is based on the morphology of the horizontal branch and is independent of the distance modulus of the globular cluster. The second method uses a careful binning of the stellar luminosity function and determines simultaneously the distance and age of the GC. We find that the oldest galactic GCs have an age of 13.5 ± 2 gigayears (Gyr). The absolute minimum age for the oldest GCs is 10.5 Gyr (with 99% confidence) and the maximum 16.0 Gyr (with 99% confidence). Therefore, an Einstein–de Sitter universe (Ω = 1) is not totally ruled out if the Hubble constant is about 65 ± 10 km s−1 Mpc−1.
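
That consistency can be checked against the standard age of a matter-dominated universe; the following worked evaluation is ours, not quoted from the paper:

```latex
% Age of an Einstein--de Sitter universe (\Omega = 1):
t_0 \;=\; \frac{2}{3H_0}
    \;\approx\; \frac{2}{3}\times\frac{978}{65}\ \mathrm{Gyr}
    \;\approx\; 10.0\ \mathrm{Gyr}
    \qquad (H_0 = 65\ \mathrm{km\,s^{-1}\,Mpc^{-1}})
% This sits just below the 10.5 Gyr (99% confidence) minimum age of
% the oldest clusters, hence "not totally ruled out".
```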

Relevance: 10.00%

Abstract:

On fine scales, caustics produced with white light show vividly colored diffraction fringes. For caustics described by the elementary catastrophes of singularity theory, the colors are characteristic of the type of singularity. We study the diffraction colors of the fold and cusp catastrophes. The colors can be simulated computationally as the superposition of monochromatic patterns for different wavelengths. Far from the caustic, where the luminosity contrast is negligible, the fringe colors persist; an asymptotic theory explains why. Experiments with caustics produced by refraction through irregular bathroom-window glass show good agreement with theory. Colored fringes near the cusp reveal fine lines that are not present in any of the monochromatic components; these lines are explained in terms of partial decoherence between rays with widely differing path differences.
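
The monochromatic fringe profile of a fold caustic is the squared Airy function, with a fringe scale that grows as λ^(2/3), so the coloured pattern can be simulated by summing profiles over wavelength with rough RGB weights, as the abstract describes. A minimal sketch, in which the wavelength grid and Gaussian RGB sensitivities are simplifying assumptions:

```python
import numpy as np
from scipy.special import airy

def fold_intensity(x, lam):
    """Squared Airy profile of a fold caustic; the fringe scale grows
    as lam**(2/3) (lam normalised to 1 at 550 nm)."""
    ai, *_ = airy(-x / lam ** (2.0 / 3.0))
    return ai ** 2

def rgb_weights(lam_nm):
    """Crude Gaussian RGB sensitivity curves centred on 610/550/465 nm."""
    centres = np.array([610.0, 550.0, 465.0])
    return np.exp(-((lam_nm - centres) / 50.0) ** 2)

x = np.linspace(-5, 15, 2000)          # distance across the caustic
image = np.zeros((x.size, 3))
for lam_nm in np.linspace(400, 700, 31):
    lam = lam_nm / 550.0               # normalised wavelength
    image += np.outer(fold_intensity(x, lam), rgb_weights(lam_nm))
image /= image.max()                   # normalise RGB for display
```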

Relevance: 10.00%

Abstract:

Predictions for the apparent velocity statistics under simple beaming models are presented and compared to the observations. The potential applications for tests of unification models and for cosmology (source counts, measurements of the Hubble constant H0 and the deceleration parameter q0) are discussed. First results from a large homogeneous survey are presented. The data do not show compelling evidence for the existence of intrinsically different populations of galaxies, BL Lacertae objects, or quasars. Apparent velocities β_app in the range 1-5 h⁻¹, where h = H0/100 km s⁻¹ Mpc⁻¹ [1 megaparsec (Mpc) = 3.09 × 10²² m], occur with roughly equal frequency; higher values, up to β_app = 10 h⁻¹, are rather more scarce than appeared to be the case from earlier work, which evidently concentrated on sources that are not representative of the general population. The β_app distribution suggests that there might be a skewed distribution of Lorentz factors over the sample, with a peak at γ_b ≈ 2 h⁻¹ and a tail up to at least γ_b ≈ 10 h⁻¹. There appears to be a clearly rising upper envelope to the β_app distribution when plotted as a function of observed 5-GHz luminosity; a combination of source counts and the apparent velocity statistics in a larger sample could provide much insight into the properties of radio jet sources.
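
These statistics rest on the standard superluminal-motion relation for a blob moving at speed βc at angle θ to the line of sight; the γ = 2 evaluation below is ours, for illustration:

```latex
\beta_{\mathrm{app}}
  = \frac{\beta\sin\theta}{1-\beta\cos\theta},
\qquad
\max_{\theta}\,\beta_{\mathrm{app}}
  = \beta\gamma
  = \sqrt{\gamma^{2}-1}
  \quad\text{(attained at } \cos\theta=\beta\text{)}
% For gamma_b ~ 2 the maximum apparent speed is sqrt(3) ~ 1.7c,
% which is why a peak of the Lorentz-factor distribution near 2
% is consistent with modest apparent velocities being most common.
```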

Relevance: 10.00%

Abstract:

I investigate the issue of whether the various subclasses of radio-loud galaxies are intrinsically the same but have been classified differently mainly due to their being viewed from different directions. Evidence for the two key elements of this popular version of the "unified scheme (US)," relativistic jets and nuclear tori, is updated. The case for the torus opening angle increasing with the radio luminosity of the active galactic nucleus (AGN) is freshly argued. Radio-loud AGN are particularly suited for testing the US, since their structures and polarization properties on different scales, as well as their overall radio sizes, provide useful statistical indicators of the relative orientations of their various subclasses. I summarize recent attempts to bring under a single conceptual framework the USs developed for radio-moderate [Fanaroff-Riley type I (FRI)] and radio-powerful (FRII) AGN. By focusing on FRII radio sources, I critically examine the recent claims of conflict with the US, based on the statistics of radio-size measurements for large, presumably orientation-independent, samples with essentially complete optical identifications. Possible ways of reconciling these results, and also the ones based on very-long-baseline radio interferometry polarimetric observations, with the US are pointed out. By incorporating a highly plausible temporal evolution of radio source properties into the US, I outline a scenario that allows the median linear size of quasars to approach, or even exceed, that of radio galaxies, as samples with decreasing radio luminosity are observed. Thus, even though a number of issues remain to be fully resolved, the scope of unified models continues to expand.

Relevance: 10.00%

Abstract:

The difficulties perceived in the orientation-based unified scheme models, when confronted with the observational data, are pointed out. It is shown that in meter-wavelength selected samples, which presumably are largely free of an orientation bias, the observed numbers of quasars versus radio galaxies are not in accordance with the expectations of the unified scheme models. The observed number ratios seem to depend heavily on the redshift, flux density, or radio luminosity levels of the selected sample. This cannot be explained within the simple orientation-based unified scheme with a fixed average value (≈ 45°) of the half-opening angle of the obscuring torus that supposedly surrounds the nuclear optical continuum and the broad-line regions. Further, the large differences seen between radio galaxies and quasars in their size distributions in the luminosity-redshift plane could not be accommodated even if I were to postulate some suitable cosmological evolution of the opening angle of the torus. Some further implications of these observational results for the recently proposed modified versions of the unified scheme model are pointed out.

Relevance: 10.00%

Abstract:

The compact steep-spectrum sources (CSSs) are an interesting class of objects which are of subgalactic dimensions; they occur more frequently in high-frequency surveys because their spectra often turn over at lower frequencies. We have estimated the symmetry parameters of a well-defined sample of CSSs and compared these with the larger 3CR sources of similar luminosity to understand the evolution and the consistency of CSSs with the unified scheme. We suggest that the majority of CSSs are likely to be young sources advancing outward through an asymmetric, inhomogeneous environment to form the larger ones. The radio properties of the CSSs are consistent with the unified scheme, where the axes of the quasars are seen closer to the line of sight while the radio galaxies lie closer to the plane of the sky. We discuss how radio polarization observations may be used to probe whether the physical conditions in the central regions of the CSSs are different from the larger ones. We present a simple scenario where the depolarization and high rotation measures seen in many CSSs can be consistent with the low rotation measures of cores in the more extended quasars and suggest further observations to test this scenario.

Relevance: 10.00%

Abstract:

We describe the characteristics of the rapidly rotating molecular disk in the nucleus of the mildly active galaxy NGC4258. The morphology and kinematics of the disk are delineated by the point-like water-vapor emission sources at 1.35-cm wavelength. High angular resolution [200 μas (microarcseconds), corresponding to 0.006 parsec (pc) at 6.4 million pc] and high spectral resolution (0.2 km s⁻¹, or ν/Δν = 1.4 × 10⁶) with the Very Long Baseline Array allow precise definition of the disk. The disk is very thin, but slightly warped, and is viewed nearly edge-on. The masers show that the disk is in nearly perfect Keplerian rotation within the observable range of radii of 0.13-0.26 pc. The approximately random deviations from the Keplerian rotation curve among the high-velocity masers are approximately 3.5 km s⁻¹ (rms). These deviations may be due to the masers lying off the midline by about ±4° or to variations in the inclination of the disk by ±4°. The lack of systematic deviations indicates that the disk has a mass of <4 × 10⁶ solar masses (M⊙). The gravitational binding mass is 3.5 × 10⁷ M⊙, which must lie within the inner radius of the disk and requires a mass density of >4 × 10⁹ M⊙ pc⁻³. If the central mass were in the form of a star cluster with a density distribution such as a Plummer model, the central mass density would be 4 × 10¹² M⊙ pc⁻³. The lifetime of such a cluster would be short with respect to the age of the galaxy [Maoz, E. (1995) Astrophys. J. Lett. 447, L91-L94]. Therefore, the central mass may be a black hole. The disk as traced by the systemic-velocity features is unresolved in the vertical direction, indicating that its scale height is <0.0003 pc (hence the ratio of thickness to radius, H/R, is <0.0025). For a disk in hydrostatic equilibrium the quadrature sum of the sound speed and Alfvén velocity is <2.5 km s⁻¹, so that the temperature of the disk must be <1000 K and the toroidal magnetic field component must be <250 mG. If the molecular mass density in the disk is 10¹⁰ cm⁻³, then the disk mass is approximately 10⁴ M⊙, and the disk is marginally stable as defined by the Toomre stability parameter Q (Q = 6 at the inner edge and 1 at the outer edge). The inward drift velocity is predicted to be <0.007 km s⁻¹ for a viscosity parameter of 0.1, and the accretion rate is <7 × 10⁻⁵ M⊙ yr⁻¹. At this value the accretion would be sufficient to power the nuclear x-ray source of 4 × 10⁴⁰ erg s⁻¹ (1 erg = 0.1 μJ). The volume of individual maser components may be as large as 10⁴⁶ cm³, based on the velocity gradients, which is sufficient to supply the observed luminosity. The pump power undoubtedly comes from the nucleus, perhaps in the form of x-rays. The warp may allow the pump radiation to penetrate the disk obliquely [Neufeld, D. A. & Maloney, P. R. (1995) Astrophys. J. Lett. 447, L17-L19]. A total of 15 H2O megamasers have been identified out of >250 galaxies searched. Galaxy NGC4258 may be the only case where conditions are optimal to reveal a well-defined nuclear disk. Future measurement of proper motions and accelerations for NGC4258 will yield an accurate distance and a more precise definition of the dynamics of the disk.
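
The quoted binding mass follows directly from Keplerian dynamics at the inner edge of the disk. A quick check, assuming orbital speeds of roughly 1000 km s⁻¹ for the high-velocity masers (a figure not stated in this abstract):

```python
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30        # solar mass, kg
PC = 3.086e16           # parsec, m

v = 1.0e6               # ~1000 km/s orbital speed (assumed)
r = 0.13 * PC           # inner radius of the maser disk

# Keplerian enclosed mass: M = v^2 r / G
m = v**2 * r / G
print(m / M_SUN)        # ~3e7 solar masses, close to the quoted 3.5e7
```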

Relevance: 10.00%

Abstract:

Observations of complete flux-density-limited samples of powerful extragalactic radio sources by very-long-baseline interferometry enable us to study the evolution of these objects over the range of linear scales from 1 parsec to 15 kiloparsecs (1 parsec = 3.09 × 10¹⁸ cm). The observations are consistent with the unifying hypothesis that compact symmetric objects evolve into compact steep-spectrum doubles, which in turn evolve into large-scale Fanaroff-Riley class II objects. It is suggested that this is the primary evolutionary track of powerful extragalactic radio sources. In this case there must be significant luminosity evolution in these objects, but little velocity evolution, as they expand from 1 parsec to several hundred kiloparsecs in overall size.