944 results for Fibonacci series and golden ratio
Abstract:
Polymer nanocomposites, specifically nanoclay-reinforced polymers, have attracted great interest as matrix materials for high-temperature composite applications. Nanocomposites require relatively low dispersant loads to achieve significant property enhancements. These enhancements are mainly a consequence of the interfacial effects that result from dispersing the silicate nanolayers in the polymer matrix and of the high in-plane strength, stiffness and aspect ratio of the lamellar nanoparticles. Montmorillonite (MMT) clay, modified with organic onium ions with long alkyl chains, such as the Cloisites, has been widely used to obtain nanocomposites. The presence of reactive groups in the organic onium ions can form chemical bonds with the polymer matrix, which favours a very high degree of exfoliation of the clay platelets in the nanocomposite (1,2).
Abstract:
Lately, several researchers have pointed out that climate change is expected to increase temperatures and lower rainfall in Mediterranean regions, while simultaneously increasing the intensity of extreme rainfall events. These changes could have consequences for the rainfall regime, erosion, sediment transport and water quality, soil management, and new designs of diversion ditches. Climate change is expected to result in increasingly unpredictable and variable rainfall, in amount and timing, changing seasonal patterns and increasing the frequency of extreme weather events. Consequently, the evolution of the frequency and intensity of drought periods is of great importance, as many processes in agro-ecosystems will be affected by them. Recognising the complex and important consequences of an increasing frequency of extreme droughts in the Ebro River basin, our aim is to study the evolution of drought events at this site statistically, with emphasis on their occurrence and intensity. For this purpose, fourteen meteorological stations were selected based on the length of the rainfall series and the climatic classification, so as to obtain a representative untreated dataset for the river basin. Daily rainfall series from 1957 to 2002 were obtained from each meteorological station, and no-rain period frequencies, as numbers of consecutive dry days, were extracted. Based on these data, we study changes in the probability distribution in several sub-periods. Moreover, we used the Standardized Precipitation Index (SPI) to identify drought events on a yearly scale, and we then used this index to fit log-linear models to the contingency tables between the SPI index and the sub-periods; this adjustment was carried out with the help of ANOVA inference.
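The extraction of no-rain periods as runs of consecutive dry days, as described above, can be sketched as follows (the dry-day threshold of 0.1 mm and the sample data are illustrative assumptions, not values from the study):

```python
import numpy as np

def dry_spell_lengths(daily_rain, threshold=0.1):
    """Lengths (in days) of consecutive no-rain runs in a daily rainfall series."""
    dry = np.asarray(daily_rain) < threshold
    lengths, run = [], 0
    for is_dry in dry:
        if is_dry:
            run += 1          # extend the current dry spell
        elif run:
            lengths.append(run)  # a rainy day closes the spell
            run = 0
    if run:
        lengths.append(run)      # series may end mid-spell
    return lengths

rain = [0, 0, 5.2, 0, 0, 0, 1.1, 0, 0]   # daily rainfall in mm
print(dry_spell_lengths(rain))  # -> [2, 3, 2]
```

The resulting spell lengths are what feed the sub-period frequency analysis.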
Abstract:
The aim of this research project is to compare two mathematical techniques for polynomial approximation: approximation under the least-squares criterion and uniform ("minimax") approximation. Both the current copper market, with its fluctuations over time, and the different mathematical models and computer programs available are described. Matlab® was selected as the software tool, since its mathematical library is very extensive and widely used, and its programming language is powerful enough to develop the required programs. Different approximating polynomials have been obtained from a sample (a historical series) that records the variation of the copper price in recent years. The complete historical series and two significant sections of it have been analysed. The results obtained include values of interest for other projects.
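The contrast between the two criteria can be illustrated with a small sketch: a least-squares polynomial fit on equispaced samples versus interpolation at Chebyshev nodes, a standard near-minimax surrogate. The exponential test function and the degree are illustrative stand-ins; the project itself worked in Matlab® on copper price data.

```python
import numpy as np

x = np.linspace(-1, 1, 200)
f = np.exp(x)          # stand-in target function (not the price series)
deg = 3

# Criterion 1: least-squares fit on equispaced samples
p_ls = np.polynomial.Polynomial.fit(x, f, deg)

# Criterion 2 (near-minimax): interpolation at the Chebyshev nodes
k = np.arange(deg + 1)
nodes = np.cos((2 * k + 1) * np.pi / (2 * (deg + 1)))
p_cheb = np.polynomial.Polynomial.fit(nodes, np.exp(nodes), deg)

# compare the maximum (uniform) errors of the two fits
err_ls = np.max(np.abs(p_ls(x) - f))
err_cheb = np.max(np.abs(p_cheb(x) - f))
print(err_ls, err_cheb)
```

The least-squares fit minimises the sum of squared residuals, while the Chebyshev-node fit keeps the error more uniform across the interval, which is the behaviour the minimax criterion formalises.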
Abstract:
The calibration results (the transfer function) of an anemometer equipped with several cup rotors were analyzed and correlated with the aerodynamic forces measured on the isolated cups in a wind tunnel. The correlation was based on a Fourier analysis of the normal-to-the-cup aerodynamic force. Three different cup shapes were studied: typical conical cups, elliptical cups and porous cups (truncated-cone shape). Results indicated a good correlation between the anemometer factor, K, and the ratio between the first two coefficients in the Fourier series decomposition of the normal-to-the-cup aerodynamic force.
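The correlation described rests on the first coefficients of a Fourier decomposition of the normal-to-the-cup force. A minimal sketch, using a synthetic periodic force signal with made-up harmonic amplitudes rather than measured data:

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
# synthetic normal-to-the-cup force: mean plus first two harmonics
# (coefficients 0.8 and 0.2 are illustrative, not wind-tunnel values)
force = 0.5 + 0.8 * np.cos(theta) + 0.2 * np.cos(2 * theta)

c = np.fft.rfft(force) / len(theta)
a = 2 * c[1:].real            # cosine coefficients a_1, a_2, ...

ratio = a[0] / a[1]           # ratio of the first two Fourier coefficients
print(round(ratio, 3))        # -> 4.0
```

This ratio is the quantity the abstract correlates with the anemometer factor K.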
Abstract:
This Final-Year Project deals with the topic of Knowledge Discovery in numerical time series, addressing time series analysis from the viewpoint of the semantics of the series. Most of the research conducted to date in the field of time series analysis recommends analysing the values of the series numerically. This provides good results but prevents the conclusions from being formulated in a way that allows justification and interpretation of the results. Thus, the purpose of this project is to create an application that allows the analysis of time series from a qualitative point of view rather than a quantitative one. This way, all the relevant elements of the time series will be gathered for future studies. The design of a mechanism to extract the information of interest from the time series is the first step towards achieving the proposed objective. To do this, the set of relevant behaviours in the domain is first formalised; these behaviours will be the symbols shown in the application's output. The designed and implemented method transforms a numerical time series into a symbolic sequence that captures all the semantics of the original time series and is more intuitive and easier to interpret. Once a mechanism for transforming the numerical series into symbolic sequences is available, analysis tasks can be posed over those symbol sequences. Although this project does not cover such a post-analysis of these series, it proposes different fields in which research can be done in the future: for instance, measuring the similarity between two symbolic sequences as a starting point for comparison, or creating reference models for further analysis of time series.
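A minimal sketch of the numeric-to-symbolic transformation idea (the three-symbol alphabet and the flatness tolerance are assumptions; the project's actual domain symbols are not specified in the abstract):

```python
import numpy as np

def symbolize(series, flat_tol=0.5):
    """Map consecutive differences of a numeric series to symbols:
    U (up), D (down), F (flat within the tolerance)."""
    out = []
    for d in np.diff(np.asarray(series, dtype=float)):
        if d > flat_tol:
            out.append('U')
        elif d < -flat_tol:
            out.append('D')
        else:
            out.append('F')
    return ''.join(out)

print(symbolize([1, 3, 3.2, 2, 2.1]))  # -> 'UFDF'
```

Once a series is reduced to such a string, similarity measures and reference models can be defined over symbol sequences instead of raw values, as the abstract proposes for future work.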
Abstract:
Telecommunication systems working in the millimetre band can be severely affected by various atmospheric impairments, such as attenuation due to gases, clouds, and tropospheric scintillation. An adequate characterisation is essential in the design and implementation of these systems. This Final Degree Project aims at a long-term statistical study of time series of tropospheric scintillation on slant-path communication links in the Ka band at 19.7 GHz. As a starting point, experimental data from the 19.7 GHz Ka-band beacon of the Eutelsat Hot Bird 13A satellite are available, collected over seven years between July 2006 and June 2013. In addition, the practical application of the ITU-R P.618-10 method for tropospheric scintillation modelling of Earth-space telecommunication systems is used as the theoretical reference. This information allows us to examine the validity of the practical application of the ITU-R P.1853-1 method for the synthesis of tropospheric scintillation time series. In this synthesizer, the time series of integrated water vapour content in a vertical column will be replaced by actual data from the ERA-Interim and GNSS meteorological databases in order to test the impact of this change.
The first part of the Project begins with an exposition of the theoretical foundations of the various impairments that affect propagation in a satellite link, including the most important prediction models. Subsequently, it presents the theoretical foundations that describe time series, as well as their application to the modelling of communication links. Finally, the specific resources used in the experiment are described. The second part of the Project starts with the data analysis process, which yields results that characterise tropospheric scintillation in the absence of precipitation, or dry scintillation, for the three study cases: the experimental data, the P.618-10 model, and the P.1853-1 synthesizer with its modifications. Because the same conditions of frequency, location, climate and analysis period are maintained throughout, comparing the results allows us to draw the relevant conclusions and propose future lines of research.
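Scintillation time-series synthesizers of the kind discussed are commonly built around low-pass-filtered Gaussian noise. A first-order filter sketch of that idea follows; the corner frequency, sampling rate and target standard deviation are all illustrative assumptions, not the ITU-R P.1853-1 parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, fs = 10000, 1.0            # number of samples and sampling rate (Hz), assumed
fc = 0.1                      # filter corner frequency (Hz), assumed
white = rng.standard_normal(n)

# first-order low-pass filter as a stand-in for the synthesizer's shaping filter
alpha = np.exp(-2 * np.pi * fc / fs)
scint = np.empty(n)
scint[0] = white[0]
for i in range(1, n):
    scint[i] = alpha * scint[i - 1] + (1 - alpha) * white[i]

# scale to a target scintillation standard deviation (dB), assumed value
scint *= 0.3 / scint.std()
print(round(scint.std(), 2))  # -> 0.3
```

In the actual method, the target standard deviation would itself be driven by meteorological inputs such as the integrated water vapour content, which is exactly the quantity the project substitutes with ERA-Interim and GNSS data.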
Abstract:
This Thesis presents a methodology for the statistical analysis of pipe breaks in water distribution networks. The methodology studies the relationship between pipe breaks and water pressure, and proposes a pressure management procedure to reduce the number of breaks that occur in such networks. One of the manifestations of the deterioration of water supply systems is frequent pipe breaks. System failures are one of the major challenges faced by water utilities, owing to their associated social, economic and environmental costs; for all these reasons, water utilities aim to reduce break occurrence as far as possible. Water distribution networks can be divided into areas or sectors, which facilitates the control of the network. These areas may be independent or isolated by valves, as is usually the case in more developed countries, or they may be hydraulically interconnected. The implementation of pressure management strategies is usually carried out through pressure-reducing valves (PRVs). These valves are installed at the head of the sectors and, although the inflow may vary significantly, they control the downstream pressure. The most popular methods of pressure management are pressure reduction, which is the most common form of control, pressure sustaining, prevention and/or alleviation of pressure surges or large variations in pressure, and level/altitude control. From 2005 onwards, the effects of pressure management on burst frequencies have become more widely recognized in the technical literature. This Thesis proposes a pressure management strategy that controls the ranges of the pressure indicators most influential on the probability of pipe breaks. Operating pressure in a sector is characterised by means of a pressure indicator at the head of the sector, as head losses are relatively small and topographical differences were accounted for at the design stage.
The pressure indicator, which may be defined as a statistic calculated from the time series of head pressure over a specific time window, may provide the information water utilities need to make decisions aimed at reducing pipe breaks in water distribution networks. The first part of the methodology presented in this Thesis identifies the pressure indicators that have the greatest impact on the probability of pipe breaks. To establish whether a pressure indicator influences the probability of pipe breaks, the proposed methodology compares estimates of the cumulative distribution functions (CDFs) of the indicator in two situations: when they are conditioned to the occurrence of a pipe break (a rare event), and when they are not (normal operation). Water utilities usually have a history of failures limited to recent periods, and it is difficult to obtain precise information about an underground network. Therefore, the use of distribution functions is proposed to address the imprecision of the recorded data. CDFs derived from the time series of pressure indicators (normal operation) and CDFs of indicator values at times coincident with a reported pipe break (conditioned to breaks) are compared. If all estimated CDFs are drawn from the same population, there is no reason to infer that the studied indicator clearly influences the probability of the rare event. However, when it is statistically proven that the estimated CDFs do not come from the same population, the analysed indicator may have an influence on the occurrence of pipe breaks.
Because the number of indicator values used to estimate the CDF conditioned to breaks is much lower than the number of indicator values used to estimate the CDF of the unconditional pressure series, and because the results depend on the size of the compared samples, CDFs are also estimated from random sets of the same size sampled from the unconditional indicator values. The comparison between the estimated CDFs of these random sets and the estimated CDF conditioned to breaks then establishes whether the indicator is influential on the probability of pipe breaks. Pressure indicators depend on various parameters. A sensitivity analysis and a robust statistical test determine the parameter values for which the indicator is most influential on the probability of pipe breaks. At the same time, indicators can be calculated according to two model parameters, the anticipation time and the window width. The anticipation time is the time (in hours) between the end of the period used to compute the pressure indicator and the break. The window width is the number of instantaneous pressure values required to calculate the pressure indicator and is a multiple of 24 hours, as water pressure has a daily cyclical behaviour. A sensitivity analysis of the model parameters explains when the pressure indicator is most influential on the probability of pipe breaks. The second part of the methodology presents a Bayesian diagnostic model. This kind of model belongs to the class of statistical predictive models, which start from historical data, represent break behaviour and patterns in water mains, and use Bayes' theorem to condition the probability of failure on specific system characteristics. Bayes' theorem allows comparing the break-conditioned CDF and the unconditional CDF of the indicators and determining when the probability of pipe breaks increases for certain pressure indicator ranges.
A probability ratio (RP) is defined; when it exceeds unity, it establishes that the probability of breaks increases for certain ranges of the pressure indicator. The first part of the methodology is applied to the water distribution network of Madrid (Spain) and to that of Panama City (Panama). After data filtering, the methodology can be applied to 15 sectors in Madrid and to two areas, called corregimientos, in Panama City. The results show that, in both systems, the indicators most influential on the probability of pipe breaks are the pressure range, which is the difference between the maximum and minimum pressure, and the pressure variability, which corresponds to the statistical property of the standard deviation. These indicators therefore capture the dispersion of the data and the persistence of pressure variation, and may be related to fatigue in the strength of materials. The second part of the methodology has been applied to the indicators found to be influential on the probability of pipe breaks in the Madrid network; the main conclusion is that the probability of pipe breaks increases for extreme values of the pressure range indicator and of the pressure variability indicator. Finally, a pressure management strategy is recommended that limits those ranges of the influential pressure indicators that increase the probability of breaks. The methodology presented here is general, may be applied to other water distribution networks, and could help water utilities reduce the number of system failures through pressure management.
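The core comparison, an indicator's empirical CDF under normal operation versus at break times, together with a probability ratio over an indicator interval, can be sketched as follows. The Gaussian samples and the interval are synthetic stand-ins for real pressure-indicator data, and the shift in the break-time sample is an assumption made so the example has an effect to detect.

```python
import numpy as np

rng = np.random.default_rng(1)
# unconditional indicator values (normal operation) vs. values at break times
normal_op = rng.normal(0.0, 1.0, 5000)   # synthetic pressure-range indicator
at_breaks = rng.normal(0.8, 1.0, 60)     # shifted sample: breaks favour high values

def ecdf(sample, x):
    """Empirical CDF of `sample` evaluated at x."""
    return np.searchsorted(np.sort(sample), x, side='right') / len(sample)

def prob_ratio(a, b):
    """P(indicator in [a, b] | break) / P(indicator in [a, b])."""
    p_break = ecdf(at_breaks, b) - ecdf(at_breaks, a)
    p_all = ecdf(normal_op, b) - ecdf(normal_op, a)
    return p_break / p_all

rp = prob_ratio(1.0, 3.0)
print(rp > 1.0)   # the high-range interval is over-represented at break times
```

A ratio above unity over an interval flags that interval as one where break probability increases, which is the criterion the recommended pressure management strategy acts on.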
Abstract:
A relation between the cost of energy, COE, the maximum allowed tip speed, and the rated wind speed is obtained for wind turbines with a given goal rated power. The wind regime is characterised by the corresponding parameters of the probability density function of wind speed. The non-dimensional characteristics of the rotor (the number of blades and the blade radial distributions of local solidity, twist angle, and airfoil type) play the role of parameters in the mentioned relation. The COE is estimated using a cost model commonly used by designers. This cost model requires basic design data such as the rotor radius and the ratio between the hub height and the rotor radius. Certain design options, DO, related to the technology of the power plant, tower and blades are also required as inputs. The function obtained for the COE can be explored to find those values of rotor radius that give rise to minimum cost of energy for a given wind regime as the tip speed limitation changes. The analysis reveals that iso-COE lines evolve parallel to iso-radius lines for large values of the limit tip speed, but that this is not the case for small values of the tip speed limit. It is concluded that, as the tip speed limit decreases, the optimum decision for keeping minimum COE values can be: a) reducing the rotor radius for sites with a high Weibull scale parameter, or b) increasing the rotor radius for sites with a low Weibull scale parameter.
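The energy side of a COE estimate of this kind can be sketched by integrating a simple power curve against a Weibull wind-speed density. The cost figure, power curve shape, and Weibull parameters below are illustrative assumptions, not the paper's cost model.

```python
import numpy as np

def weibull_pdf(v, k, c):
    """Weibull probability density of wind speed v (shape k, scale c)."""
    return (k / c) * (v / c) ** (k - 1) * np.exp(-(v / c) ** k)

def aep(rated_v, rated_p, k=2.0, c=7.0, cut_in=3.0, cut_out=25.0):
    """Annual energy production (kWh) from a cubic power curve and a
    Weibull wind regime, integrated numerically."""
    v = np.linspace(0.0, 30.0, 3001)
    p = np.where((v >= cut_in) & (v < rated_v),
                 rated_p * (v / rated_v) ** 3, 0.0)   # cubic region
    p = np.where((v >= rated_v) & (v <= cut_out), rated_p, p)  # rated region
    dv = v[1] - v[0]
    return float(np.sum(p * weibull_pdf(v, k, c)) * dv * 8760)

energy = aep(rated_v=12.0, rated_p=2000.0)   # 2 MW turbine, illustrative
coe = 500000.0 / energy                      # assumed annualized cost / AEP
print(energy > 0 and coe > 0)
```

Exploring COE as the rotor radius (and with it the rated wind speed for a fixed rated power) varies is how the minimum-COE radius is located in the analysis described above.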
Abstract:
This study aims to apply the principles of Value Investing to twenty-four mining companies and, based on this fundamental analysis, to develop a rating to classify those companies. For this purpose, we performed a multivariate statistical study to compare the correlations between each fundamental ratio and the stock revalorisation one, three and five years ahead. MATLAB and EXCEL were used to process the data. The statistical methods used are a Pearson correlation matrix and a cross-pairs scattering study. The analysis showed that it is possible to apply the Value Investing methodology to mining companies with positive results, although the fit of the correlations suggests using longer time series and a larger number of companies to gain reliability when testing these hypotheses.
From the studies performed, it follows that good fundamentals significantly influence stock market revalorisation at 3 and 5 years, and that the longer the period under study, the better the fit.
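The Pearson correlation matrix at the heart of the study can be computed directly; the data below are illustrative placeholders, not the twenty-four companies' actual ratios or returns.

```python
import numpy as np

# rows: companies; columns: a fundamental ratio (assumed) and stock
# revalorisation at 1, 3 and 5 years (illustrative values)
data = np.array([
    [0.8,  0.02, 0.10, 0.25],
    [1.2,  0.01, 0.15, 0.30],
    [0.5,  0.03, 0.05, 0.12],
    [2.0, -0.02, 0.20, 0.45],
    [1.5,  0.00, 0.18, 0.38],
])

# Pearson correlation matrix between the columns (variables)
corr = np.corrcoef(data, rowvar=False)
print(corr.shape)                        # (4, 4)
print(np.allclose(np.diag(corr), 1.0))   # True: each variable correlates 1 with itself
```

Reading down the first column of `corr` gives the correlation between the fundamental ratio and each revalorisation horizon, which is the comparison the abstract describes for 1-, 3- and 5-year windows.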
Abstract:
Recent major advances in x-ray imaging and spectroscopy of clusters have allowed the determination of their mass and mass profile out to ≈1/2 the virial radius. In rich clusters, most of the baryonic mass is in the gas phase, and the ratio of mass in gas/stars varies by a factor of 2–4. The baryonic fractions vary by a factor of ≈3 from cluster to cluster and almost always exceed 0.09 h50^(-3/2), and thus are in fundamental conflict with the assumption of Ω = 1 and the results of big bang nucleosynthesis. The derived Fe abundances are 0.2–0.45 solar, and the abundances of O and Si for low-redshift systems are 0.6–1.0 solar. This distribution is consistent with an origin in pure type II supernovae. The amount of light and energy produced by these supernovae is very large, indicating their importance in influencing the formation of clusters and galaxies. The lack of evolution of Fe out to a redshift of z ≈ 0.4 argues for very early enrichment of the cluster gas. Groups show a wide range of abundances, 0.1–0.5 solar. The results of an x-ray survey indicate that the contribution of groups to the mass density of the universe is likely to be larger than 0.1 h50^(-2). Many of the very poor groups have large x-ray halos and are filled with small galaxies whose velocity dispersion is a good match to the x-ray temperatures.
Resumo:
The base following stop codons in mammalian genes is strongly biased, suggesting that it might be important for the termination event. This proposal has been tested experimentally, both in vivo, using the human type I iodothyronine deiodinase mRNA and the recoding event at its internal UGA codon, and in vitro, by measuring the ability of each of the 12 possible 4-base stop signals to direct the eukaryotic polypeptide release factor to release a model peptide, formylmethionine, from the ribosome. The internal UGA in the deiodinase mRNA is used as a codon for incorporation of selenocysteine into the protein. Changing the base following this UGA codon affected the ratio of termination to selenocysteine incorporation in vivo at this codon: 1:3 (C or U) and 3:1 (A or G). These UGAN sequences show the same order of termination efficiency as was found with the in vitro termination assay (4th base: A ≈ G ≫ C ≈ U). The in vitro termination efficiency varied in the same manner over a 70-fold range for the UAAN series and over an 8-fold range for the UGAN and UAGN series. There is a correlation between the strength of the signals and how frequently they occur at natural termination sites. Together these data suggest that the base following the stop codon influences translational termination efficiency as part of a larger termination signal in the expression of mammalian genes.
Resumo:
Society is increasingly demanding about the quality of the products it consumes and concerned with health benefits. In this context, the objective was to evaluate the effect of including canola oil at different levels in the diet of dairy cows on butter and mozzarella, seeking healthier products for human consumption. Eighteen Holstein cows in mid-lactation, with an average production of 22 (± 4) kg of milk/day, were distributed in two contemporary 3x3 Latin squares and received the experimental diets: T1, control (0% oil inclusion); T2, 3% canola oil inclusion; and T3, 6% canola oil inclusion. The lipid profile was determined by gas chromatography; nutritional quality was assessed through equations based on the fatty acids obtained from the lipid profile, physicochemical analyses followed the Instituto Adolfo Lutz methodology, and microbiological analyses were also performed. Problems occurred during milk processing that changed the butter manufacturing technology, yielding a different product, cream, instead of butter, and compromised the microbiological quality of both the cream and the mozzarella. Canola oil inclusion in the lactation diet quadratically reduced short-chain fatty acids and quadratically increased long-chain, unsaturated, and monounsaturated fatty acids in the mozzarella. The saturated/unsaturated fatty acid ratio (SFA/UFA) and the omega-6/omega-3 ratio, as well as the atherogenicity and thrombogenicity indices of the mozzarella, decreased linearly by 25.68%, 31.35%, 32.12%, and 21.78%, respectively, when comparing T1 and T3.
In the cream, there was a linear reduction in short- and medium-chain fatty acids, saturated fatty acids, and the SFA/UFA ratio, by 41.07%, 23.82%, 15.91%, and 35.59%, respectively, while long-chain, unsaturated, and monounsaturated fatty acids increased linearly by 41.40%, 28.24%, and 32.07%, in that order, when comparing T1 with T3. The atherogenicity and thrombogenicity indices decreased linearly, while the h/H index (ratio of hypocholesterolemic to hypercholesterolemic fatty acids) increased linearly. The physicochemical composition of both products and the mozzarella yield showed no significant effect of canola oil inclusion, except for the crude protein of the mozzarella, which increased linearly, and the fat content of the cream, which showed a quadratic effect. Microbiological analyses showed very high microorganism counts, indicating that the products lacked microbiological quality, as a consequence of the absence of pasteurization of the cream and the low efficiency of the heat treatment applied to the milk used for mozzarella production. It is concluded that adding canola oil to the diet of lactating cows yields mozzarella and cream that are healthier for human consumption, with a lipid profile richer in unsaturated fatty acids, including the omega-3 series and oleic acid; however, due to processing problems, the products obtained are not fit for consumption because of their lack of microbiological quality.
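The atherogenicity index cited above is conventionally computed from the fatty-acid profile. A minimal sketch of the formulation commonly attributed to Ulbricht & Southgate (1991) follows, with purely illustrative input values, not the study's measured profiles.

```python
def atherogenicity_index(c12, c14, c16, mufa, pufa):
    """Atherogenicity index AI = (C12:0 + 4*C14:0 + C16:0) / (MUFA + PUFA),
    in the formulation commonly attributed to Ulbricht & Southgate (1991).
    Inputs are in g/100 g of total fatty acids."""
    return (c12 + 4.0 * c14 + c16) / (mufa + pufa)

# Purely illustrative values, not the study's measured profiles
ai = atherogenicity_index(c12=0.1, c14=1.2, c16=28.0, mufa=30.0, pufa=5.0)
print(round(ai, 2))   # 0.94
```

A lower AI indicates a profile considered less atherogenic, which is why the linear reductions reported above are read as a health improvement.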
Resumo:
Aims. We present a detailed study of the two Sun-like stars KIC 7985370 and KIC 7765135, to determine their activity level, spot distribution, and differential rotation. Both stars were previously discovered by us to be young stars and were observed by the NASA Kepler mission. Methods. The fundamental stellar parameters (v sin i, spectral type, T_eff, log g, and [Fe/H]) were derived from optical spectroscopy by comparison with both standard-star and synthetic spectra. The spectra of the targets allowed us to study the chromospheric activity based on the emission in the core of hydrogen Hα and Ca ii infrared triplet (IRT) lines, which was revealed by the subtraction of inactive templates. The high-precision Kepler photometric data spanning 229 days were then fitted with a robust spot model. Model selection and parameter estimation were performed in a Bayesian manner, using a Markov chain Monte Carlo method. Results. We find that both stars are Sun-like (of G1.5 V spectral type) and have an age of about 100–200 Myr, based on their lithium content and kinematics. Their youth is confirmed by their high level of chromospheric activity, which is comparable to that displayed by the early G-type stars in the Pleiades cluster. The Balmer decrement and flux ratio of their Ca ii-IRT lines suggest that the formation of the core of these lines occurs mainly in optically thick regions that are analogous to solar plages. The spot model applied to the Kepler photometry requires at least seven persistent spots in the case of KIC 7985370 and nine spots in the case of KIC 7765135 to provide a satisfactory fit to the data. The assumption of the longevity of the star spots, whose area is allowed to evolve with time, is at the heart of our spot-modelling approach. On both stars, the surface differential rotation is Sun-like, with the high-latitude spots rotating slower than the low-latitude ones.
We found, for both stars, a rather high value of the equator-to-pole differential rotation (dΩ ≈ 0.18 rad d^-1), which disagrees with the predictions of some mean-field models of differential rotation for rapidly rotating stars. Our results agree instead with previous works on solar-type stars and other models that predict a higher latitudinal shear, increasing with equatorial angular velocity, that can vary during the magnetic cycle.
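The Bayesian fitting strategy described above can be illustrated with a toy Metropolis-Hastings sampler. This is not the authors' multi-spot model: it fits only a single sinusoidal modulation with unknown amplitude and period to synthetic photometry, under flat priors, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 50.0, 500)                      # time in days
true_amp, true_period, sigma = 0.02, 5.0, 0.005
flux = (1.0 - true_amp * np.cos(2 * np.pi * t / true_period)
        + rng.normal(0.0, sigma, t.size))            # synthetic light curve

def log_post(theta):
    """Log-posterior: Gaussian likelihood, flat priors on (amp, period)."""
    amp, period = theta
    if not (0.0 < amp < 1.0 and 1.0 < period < 20.0):
        return -np.inf
    model = 1.0 - amp * np.cos(2 * np.pi * t / period)
    return -0.5 * np.sum((flux - model) ** 2) / sigma**2

theta = np.array([0.025, 5.05])                      # starting guess
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [0.001, 0.003])   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:         # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples)[5000:]                   # discard burn-in
print(samples.mean(axis=0))                          # close to (0.02, 5.0)
```

The real analysis adds model selection over the number of spots and spot-area evolution, but the accept/reject core is the same.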
Resumo:
This paper examines the sources of real exchange rate (RER) volatility in eighty countries around the world over the period 1970 to 2011. Our main goal is to explore the role of nominal exchange rate regimes and financial crises in explaining RER volatility. To that end, we employ two complementary procedures: detecting structural breaks in the RER series and decomposing volatility into its permanent and transitory components. The results confirm that exchange rate volatility increases during global financial crises and, using a de facto exchange rate classification, reveal an inverse relationship between the degree of flexibility of the exchange rate regime and RER volatility.
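The permanent/transitory split can be illustrated very roughly. The abstract does not specify the paper's econometric procedure (e.g. a component-GARCH model), so this sketch simply separates the squared returns of a simulated series into a slow rolling-mean component and its residual.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
# Simulated RER returns with slowly varying "true" volatility
true_vol = 0.01 + 0.005 * np.sin(np.linspace(0, 6, n))
rer_returns = rng.normal(0.0, true_vol)

sq = rer_returns ** 2                    # squared returns as a volatility proxy
window = 50
permanent = np.convolve(sq, np.ones(window) / window, mode="same")
transitory = sq - permanent              # deviations around the long-run level

# The two components add back to the squared returns by construction
print(sq.mean(), permanent.mean() + transitory.mean())
```

In the paper's setting, crisis periods would show up as level shifts in the permanent component and spikes in the transitory one.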
Resumo:
We present some of the first science data from the new Keck/MOSFIRE instrument to test the effectiveness of different AGN/SF diagnostics at z ~ 1.5. MOSFIRE spectra were obtained in three H-band multi-slit masks in the GOODS-S field, resulting in 2 hr exposures of 36 emission-line galaxies. We compare X-ray data with the traditional emission-line ratio diagnostics and the alternative mass-excitation and color-excitation diagrams, combining new MOSFIRE infrared data with previous HST/WFC3 infrared spectra (from the 3D-HST survey) and multiwavelength photometry. We demonstrate that a high [O III]/Hβ ratio is insufficient as an active galactic nucleus (AGN) indicator at z > 1. For the four X-ray-detected galaxies, the classic diagnostics ([O III]/Hβ versus [N II]/Hα and [S II]/Hα) remain consistent with the X-ray AGN/SF classification. The X-ray data also suggest that "composite" galaxies (with intermediate AGN/SF classification) host bona fide AGNs. Nearly two-thirds of the z ~ 1.5 emission-line galaxies have nuclear activity detected by either X-rays or the classic diagnostics. Compared to the X-ray and line-ratio classifications, the mass-excitation method remains effective at z > 1, but we show that the color-excitation method requires a new calibration to successfully identify AGNs at these redshifts.
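The classic [O III]/Hβ versus [N II]/Hα diagnostic mentioned above is usually applied via the Kewley et al. (2001) maximum-starburst curve. A minimal classifier is sketched below; the line ratios are illustrative, not the MOSFIRE measurements.

```python
def above_kewley(log_nii_ha, log_oiii_hb):
    """True if a point lies above the Kewley+01 maximum-starburst curve:
    log([O III]/Hb) = 0.61 / (log([N II]/Ha) - 0.47) + 1.19."""
    if log_nii_ha >= 0.47:            # right of the curve's asymptote -> AGN
        return True
    curve = 0.61 / (log_nii_ha - 0.47) + 1.19
    return log_oiii_hb > curve

print(above_kewley(-0.5, 1.0))   # True: falls in the AGN region
print(above_kewley(-0.5, 0.0))   # False: star-forming region
```

The first example also illustrates the abstract's caveat: a high [O III]/Hβ alone places a point above the curve, yet at z > 1 such ratios can arise without an AGN.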