991 results for CNPQ::CIENCIAS EXATAS E DA TERRA: CIÊNCIAS CLIMÁTICAS


Relevance:

100.00%

Publisher:

Abstract:

Debris discs are commonly detected orbiting main-sequence stars, but little is known about their fate as stars evolve through the subgiant and giant stages. Jones (2008) found strong evidence for the presence of mid-IR excess in G and K stars of luminosity class III, using photometric data from the Two-Micron All-Sky Survey (2MASS) and GLIMPSE catalogues. While the origin of these excesses remains uncertain, it is plausible that they arise from debris discs around these stars. The present study provides an unprecedented survey in the search for mid-IR excess among single and binary F-, G- and K-type evolved stars of luminosity classes IV, III, II and Ib. For this study, we use WISE and 2MASS photometric data for a sample of 3000 evolved stars, complete up to a visual magnitude of 6.5. As a major result, we found that the frequency of evolved stars showing mid-IR WISE excess increases from luminosity classes IV and III to luminosity classes II and Ib. In addition, there is no clear difference in the presence of IR excess between binary and single stars for any of the analyzed luminosity classes.
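A minimal sketch of how such a photometric excess search can be framed, assuming a simple colour-cut criterion on Ks − W3; the 0.3 mag threshold and the sample values are illustrative, not those used in the study:

```python
# Hedged sketch: flagging candidate mid-IR excess stars from 2MASS/WISE
# photometry. The colour threshold and data below are illustrative only.

def has_mid_ir_excess(ks_mag, w3_mag, threshold=0.3):
    """Flag a star as a mid-IR excess candidate when its Ks - W3 colour
    exceeds `threshold` mag (FGK photospheres sit near Ks - W3 = 0)."""
    return (ks_mag - w3_mag) > threshold

def excess_frequency(stars):
    """Fraction of stars in `stars` (list of (Ks, W3) pairs) flagged."""
    flagged = sum(has_mid_ir_excess(ks, w3) for ks, w3 in stars)
    return flagged / len(stars)

sample = [(6.1, 6.05), (5.8, 5.2), (7.0, 6.9)]  # (Ks, W3) in mag
print(excess_frequency(sample))  # one of the three stars exceeds the cut
```

In a real survey the threshold would be set per colour index from the photometric uncertainties rather than fixed by hand.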

Relevance:

100.00%

Publisher:

Abstract:

A significant observational effort has been directed to investigating the nature of the so-called dark energy. In this dissertation we derive constraints on dark energy models using three different observables: measurements of the Hubble rate H(z) (compiled by Meng et al. in 2015); distance moduli of 580 Type Ia supernovae (Union2.1 compilation, 2011); and observations of baryon acoustic oscillations (BAO) and the cosmic microwave background (CMB), combined in the so-called CMB/BAO ratio for six BAO peaks (one peak determined from 6dFGS survey data, two from the SDSS and three from WiggleZ). The statistical analysis used was the minimum-χ² method (marginalized or minimized over h whenever possible) to constrain the cosmological parameters Ωm, ω and δω0. These tests were applied to two parameterizations of the parameter ω of the dark energy equation of state, p = ωρ (here, p is the pressure and ρ is the energy density of the component). In one, ω is considered constant and less than -1/3, known as the XCDM model; in the other, the equation-of-state parameter varies with redshift, which we call the GS model. This last model is based on arguments arising from the theory of cosmological inflation. For comparison, an analysis of the ΛCDM model was also performed. Comparing cosmological models with different observations leads to different best-fit configurations. Thus, to rank the observational viability of the different theoretical models we use two information criteria, the Bayesian information criterion (BIC) and the Akaike information criterion (AIC). The Fisher matrix tool was incorporated into our tests to provide the uncertainties of the parameters of each theoretical model. We found that the complementarity of the tests is necessary so that the parameter spaces are not degenerate. Through the minimization process we found (at 68% confidence), for the XCDM model, best-fit parameters Ωm = 0.28 ± 0.012 and ωX = −1.01 ± 0.052, while for the GS model the best fits are Ωm = 0.28 ± 0.011 and δω0 = 0.00 ± 0.059. Performing a marginalization we found (at 68% confidence) the same values: for the XCDM model, Ωm = 0.28 ± 0.012 and ωX = −1.01 ± 0.052; for the GS model, Ωm = 0.28 ± 0.011 and δω0 = 0.00 ± 0.059.
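The χ²-minimization step for the constant-ω (XCDM) model can be sketched as follows. The H(z) points are synthetic placeholders (not the Meng et al. compilation), H0 is held fixed, and a brute-force grid stands in for a proper minimizer:

```python
import numpy as np

# Hedged sketch of the chi-square fit for XCDM, where
# E(z)^2 = Om*(1+z)^3 + (1-Om)*(1+z)^(3*(1+w)).
# Data below are synthetic, generated from Om=0.28, w=-1, H0=70.

def hubble_xcdm(z, h0, om, w):
    return h0 * np.sqrt(om * (1 + z)**3 + (1 - om) * (1 + z)**(3 * (1 + w)))

def chi2(h0, om, w, z, h_obs, sigma):
    return np.sum(((h_obs - hubble_xcdm(z, h0, om, w)) / sigma)**2)

z = np.array([0.1, 0.5, 1.0])
h_obs = hubble_xcdm(z, 70.0, 0.28, -1.0)   # noiseless synthetic H(z)
sigma = np.array([3.0, 5.0, 8.0])          # illustrative uncertainties

# Grid minimisation over (Om, w) with H0 fixed.
oms = np.linspace(0.1, 0.5, 81)
ws = np.linspace(-1.5, -0.5, 101)
grid = [(chi2(70.0, om, w, z, h_obs, sigma), om, w) for om in oms for w in ws]
best = min(grid)
print(best[1], best[2])  # recovers Om ~ 0.28, w ~ -1 on noiseless data
```

The real analysis would marginalize or minimize over h as well and propagate uncertainties via the Fisher matrix, as the abstract describes.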

Relevance:

100.00%

Publisher:

Abstract:

Survival models deal with the modelling of time-to-event data. In certain situations, a share of the population can no longer be subject to the occurrence of the event. In this context, cure fraction models emerged. Among the models that incorporate a cured fraction, one of the best known is the promotion time model. In the present study we discuss hypothesis testing in the promotion time model with a Weibull distribution for the failure times of susceptible individuals. Hypothesis testing in this model may be performed based on the likelihood ratio, gradient, score or Wald statistics. The critical values are obtained from asymptotic approximations, which may result in size distortions in finite sample sizes. This study proposes bootstrap corrections to the aforementioned tests and a Bartlett bootstrap correction to the likelihood ratio statistic in the Weibull promotion time model. Using Monte Carlo simulations we compared the finite-sample performances of the proposed corrections with those of the usual tests. The numerical evidence favors the proposed corrected tests. At the end of the work an empirical application is presented.
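The bootstrap-correction idea can be illustrated on a deliberately simpler problem — a likelihood ratio test for an exponential rate rather than the full Weibull promotion time model, which requires cure-fraction and censoring machinery; every number here is illustrative:

```python
import math
import random

# Hedged sketch of a parametric-bootstrap p-value for a likelihood
# ratio statistic, on a toy exponential model (NOT the thesis's
# Weibull promotion time model).

def lrt_stat(sample, rate0=1.0):
    """Likelihood ratio statistic for H0: rate = rate0, exponential data."""
    n = len(sample)
    mle = n / sum(sample)                          # MLE of the rate
    ll = lambda r: n * math.log(r) - r * sum(sample)
    return 2.0 * (ll(mle) - ll(rate0))

def bootstrap_p_value(sample, rate0=1.0, b=200, seed=42):
    """Resample under H0 and compare bootstrap statistics with the
    observed one, instead of trusting the asymptotic chi-square law."""
    rng = random.Random(seed)
    observed = lrt_stat(sample, rate0)
    hits = 0
    for _ in range(b):
        boot = [rng.expovariate(rate0) for _ in range(len(sample))]
        if lrt_stat(boot, rate0) >= observed:
            hits += 1
    return (hits + 1) / (b + 1)

gen = random.Random(1)
data = [gen.expovariate(1.0) for _ in range(30)]   # data generated under H0
p = bootstrap_p_value(data)
print(p)
```

The same scheme — simulate under the null at the fitted parameters, recompute the statistic, compare — is what the abstract's bootstrap corrections apply to the likelihood ratio, gradient, score and Wald tests.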

Relevance:

100.00%

Publisher:

Abstract:

This research work aims to study the algebraic theory of monic matrix polynomials, as well as the definitions, concepts and properties concerning block eigenvalues, block eigenvectors and solvents of P(X). We investigate the main relations between the matrix polynomial and the companion and Vandermonde matrices. We study the construction of matrix polynomials with given solvents and the extension of the Power Method to calculate block eigenvalues and solvents of P(X). Through the relationship between the dominant block eigenvalue of the companion matrix and the dominant solvent of P(X), it is possible to obtain convergence of the algorithm to the dominant solvent of the matrix polynomial. We illustrate with numerical examples for different cases of convergence.
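One concrete way to approximate a dominant solvent of a monic quadratic matrix polynomial is the fixed-point iteration X ← −B1 − B0·X⁻¹, a Bernoulli-type scheme in the spirit of the power-method extension mentioned above (not necessarily the thesis's exact algorithm); the solvents below are chosen diagonal so both are known in advance:

```python
import numpy as np

# Hedged sketch: dominant solvent of P(X) = X^2 + B1 X + B0 via the
# iteration X_{k+1} = -B1 - B0 X_k^{-1}. A solvent S satisfies
# S^2 + B1 S + B0 = 0, hence S = -B1 - B0 S^{-1} is a fixed point.

S1 = np.diag([3.0, 4.0])     # dominant solvent (largest eigenvalues)
S2 = np.diag([1.0, 2.0])     # secondary solvent
B1 = -(S1 + S2)
B0 = S1 @ S2                 # S1, S2 commute, so both solve P(X) = 0

def dominant_solvent(b1, b0, x0, iters=60):
    x = x0
    for _ in range(iters):
        x = -b1 - b0 @ np.linalg.inv(x)
    return x

X = dominant_solvent(B1, B0, np.eye(2) * 5.0)
print(np.round(X, 6))                    # close to diag(3, 4)
residual = X @ X + B1 @ X + B0           # P evaluated at the iterate
print(np.linalg.norm(residual))          # near zero
```

Convergence to the dominant rather than the secondary solvent mirrors the companion-matrix result in the abstract: the iteration amplifies the block eigenvalue of largest modulus, at a linear rate governed by the eigenvalue gap.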

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

The maintenance and evolution of software systems has become a highly critical task over recent years due to the diversity and high demand of features, devices and users. Understanding and analyzing how new changes impact the quality attributes of the architecture of such systems is an essential prerequisite to avoid the deterioration of their quality during their evolution. This thesis proposes an automated approach for analyzing variation of the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and software repository mining techniques to provide an automated way to reveal potential sources — commits and issues — of performance variation in scenarios during the evolution of software systems. The approach defines four phases: (i) preparation — choosing the scenarios and preparing the target releases; (ii) dynamic analysis — determining the performance of scenarios and methods by computing their execution times; (iii) variation analysis — processing and comparing the dynamic analysis results for different releases; and (iv) repository mining — identifying issues and commits associated with the detected performance variation. Empirical studies were performed to evaluate the approach from different perspectives. An exploratory study analyzed the feasibility of applying the approach to systems from different domains to automatically identify source code elements with performance variation and the changes that affected such elements during an evolution. This study analyzed three systems: (i) SIGAA — a web system for academic management; (ii) ArgoUML — a UML modeling tool; and (iii) Netty — a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty. 
In this study 21 releases were analyzed (seven from each system), totaling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket and 9 for Jetty. Additionally, feedback was obtained from eight developers of these systems through an online form. Finally, in the last study, a regression model for performance was developed to indicate the properties of commits that are most likely to cause performance degradation. Overall, 997 commits were mined, of which 103 were retrieved from degraded source code elements and 19 from optimized ones, while 875 had no impact on execution time. The number of days before the release and the day of the week proved to be the most relevant variables of performance-degrading commits in our model. The area under the Receiver Operating Characteristic (ROC) curve of the regression model is 60%, which means that using the model to decide whether a commit will cause degradation is 10% better than a random decision.
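The variation-analysis phase can be sketched as a comparison of a scenario's execution times between two releases; the 20% effect-size threshold and the timing values are illustrative, not the statistical criterion used in the thesis:

```python
from statistics import mean

# Hedged sketch of the variation-analysis phase: compare a scenario's
# execution times in two releases and flag a significant change.
# The threshold and data are illustrative only.

def performance_variation(times_old, times_new, threshold=0.2):
    """Return (relative_change, flagged). A scenario is flagged when its
    mean execution time shifts by more than `threshold` (here 20%)."""
    m_old, m_new = mean(times_old), mean(times_new)
    change = (m_new - m_old) / m_old
    return change, abs(change) > threshold

old = [102, 98, 101, 99]        # ms, scenario runs in release N
new = [131, 128, 133, 130]      # ms, same scenario in release N+1
change, flagged = performance_variation(old, new)
print(f"{change:+.1%} flagged={flagged}")   # ~ +30.5% regression
```

A flagged scenario would then feed the repository-mining phase, which searches the commit and issue history between the two releases for likely causes.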

Relevance:

100.00%

Publisher:

Abstract:

The main aim of this investigation is to propose the notion of uniform and strong primeness in a fuzzy environment. First, the concepts of fuzzy strongly prime and fuzzy uniformly strongly prime ideals are proposed and investigated. As an additional tool, the concept of t/m systems for the fuzzy environment gives an alternative way to deal with primeness in the fuzzy setting. Second, a fuzzy version of the correspondence theorem and the radical of a fuzzy ideal are proposed. Finally, a new concept of prime ideal for quantales is proposed, which enables us to deal with primeness in a noncommutative setting.
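For context, these are the classical (crisp) definitions that the fuzzy notions extend, as standard in the ring-theory literature rather than quoted from the work itself:

```latex
% Background (crisp) definitions; standard in ring theory.
A ring $R$ is \emph{strongly prime} if for every nonzero $x \in R$
there is a finite subset $F \subseteq R$ (an \emph{insulator} for $x$)
such that $xFy = 0$ implies $y = 0$.
The ring $R$ is \emph{uniformly strongly prime} if a single finite
insulator $F$ works for all elements:
$xFy = 0$ implies $x = 0$ or $y = 0$.
```

The fuzzy versions replace the ideal or the set membership by a fuzzy subset and ask for the analogous insulator condition at the level of membership degrees.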

Relevance:

100.00%

Publisher:

Abstract:

In this work, the following coordination compounds were synthesized in aqueous solution: [Ni(LDP)(H2O)2Cl2].2H2O, [Co(LDP)Cl2].3H2O, [Ni(CDP)Cl2].4H2O, [Co(CDP)Cl2].4H2O, [Ni(BDZ)2Cl2].4H2O and [Co(BDZ)2Cl2(H2O)2]. These complexes were synthesized by stoichiometric addition of the ligand to the respective metal chloride solutions. Precipitation occurred after drying the solvent at room temperature. The characterization and the proposed structures were obtained using conventional analysis methods such as elemental analysis (CHN), Fourier-transform infrared absorption spectroscopy (FTIR), X-ray diffraction by the powder method, and the thermoanalytical techniques TG/DTG (thermogravimetry/derivative thermogravimetry) and DSC (differential scanning calorimetry). These techniques provided information on the dehydration, coordination modes, thermal behavior, composition and structure of the synthesized compounds. From the TG curves, it was possible to establish the general formula of each synthesized compound. X-ray diffraction analysis showed that four of the synthesized complexes — those obtained from L-dopa and carbidopa — did not exhibit a crystalline structure, whereas crystalline structures were obtained for the complexes derived from benzimidazole. The spectra in the infrared region suggested monodentate coordination of the ligand to the metal centers through its amine group for all complexes. The TG-DTG and DSC curves provided important information on the behavior and thermal decomposition of the synthesized compounds. The molar conductivity data indicated that the solutions of the formed complexes behave as non-electrolytes, which implies that the chlorine is coordinated to the central atom in the complexes.

Relevance:

100.00%

Publisher:

Abstract:

Lubricants and middle distillate cutting stocks typically contain large amounts of n-paraffins, which raise their freezing point and impair their fluidity. Accordingly, the removal of long-chain n-paraffins from lubricant oils and diesel is essential to obtain a product with good cold-flow properties. The development of new catalysts that exhibit thermal stability and catalytic activity for the hydroisomerization reaction is still a challenge. Thus, silicoaluminophosphates (SAPO) were synthesized by different routes. Post-synthesis treatments were also used to obtain hybrid structures, and other syntheses were carried out with mesoporous templates (soft and hard templates). The SAPOs were then impregnated with an H2PtCl6 solution by the incipient wetness method, and their catalytic activities in hydroisomerization and hydrocracking reactions of hexadecane were assessed. Besides the SAPOs, niobium phosphate (NbP) was also impregnated with platinum and evaluated in the same reaction. After impregnation, these catalysts were characterized by X-ray diffraction (XRD), nitrogen adsorption, infrared spectroscopy with adsorbed pyridine (IR-Py), scanning electron microscopy (SEM) and 29Si nuclear magnetic resonance (29Si NMR). The XRD characterization results showed that it was possible to obtain mesoporous SAPOs; however, for the syntheses with soft templates, the structure collapsed after removal of the organic template. Even so, these catalysts were active. It was possible to obtain hybrid materials through the synthesis of SAPO-11 with hard templates and by means of post-synthesis treatment of SAPO-11 samples. Moreover, NbP showed XRD patterns characteristic of amorphous materials, had high acidity and was active in the conversion of hexadecane.
