901 results for INITIAL MASS FUNCTION


Relevance:

80.00%

Publisher:

Abstract:

The Northern HIPASS catalogue (NHICAT) is the northern extension of the HIPASS catalogue, HICAT. This extension adds the sky area in the declination (Dec.) range +2° < δ < +25° 30′ to HICAT's Dec. range of −90° < δ < +2°. HIPASS is a blind H I survey using the Parkes Radio Telescope, covering 71 per cent of the sky (including this northern extension) and a heliocentric velocity range of −1280 to 12 700 km s⁻¹. The entire Virgo Cluster region has been observed in the Northern HIPASS. The galaxy catalogue, NHICAT, contains 1002 sources with v_hel > 300 km s⁻¹. Sources with −300 < v_hel < 300 km s⁻¹ were excluded to avoid contamination by Galactic emission. In total, the entire HIPASS survey has found 5317 galaxies identified purely by their H I content. The full galaxy catalogue is publicly available at http://hipass.aus-vo.org.

Relevance:

80.00%

Publisher:

Abstract:

We derive observed Hα and R-band luminosity densities of an H I-selected sample of nearby galaxies using the SINGG sample to be ℓ′_Hα = (9.4 ± 1.8) × 10³⁸ h₇₀ erg s⁻¹ Mpc⁻³ for Hα and ℓ′_R = (4.4 ± 9.7) × 10³⁷ h₇₀ erg s⁻¹ Å⁻¹ Mpc⁻³ in the R band. This R-band luminosity density is approximately 70% of that found by the Sloan Digital Sky Survey. This leads to a local star formation rate density of log(ρ̇_SFR [M☉ yr⁻¹ Mpc⁻³]) = −1.80 (+0.13, −0.07) (random) ± 0.03 (systematic) + log(h₇₀) after applying a mean internal extinction correction of 0.82 mag. The gas cycling time of this sample is found to be t_gas = 7.5 (+1.3, −2.1) Gyr, and the volume-averaged equivalent width of the SINGG galaxies is EW(Hα) = 28.8 (+7.2, −4.7) Å (21.2 (+4.2, −3.5) Å without internal dust correction). As with similar surveys, these results imply that ρ̇_SFR(z) decreases drastically from z ≈ 1.5 to the present. A comparison of the dynamical masses of the SINGG galaxies evaluated at their optical limits with their stellar and H I masses shows significant evidence of downsizing: the most massive galaxies have a larger fraction of their mass locked up in stars compared with H I, while the opposite is true for less massive galaxies. We show that the application of the Kennicutt star formation law to a galaxy having the median orbital time at the optical limit of this sample results in a star formation rate decay with cosmic time similar to that given by the ρ̇_SFR(z) evolution. This implies that the ρ̇_SFR(z) evolution is primarily due to the secular evolution of galaxies, rather than interactions or mergers. This is consistent with the morphologies predominantly seen in the SINGG sample.
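For a quick consistency check on these numbers (treating the calibration as an assumption here, since the abstract does not state it): applying the standard Kennicutt (1998) conversion SFR [M☉ yr⁻¹] = 7.9 × 10⁻⁴² L(Hα) [erg s⁻¹] to the Hα luminosity density, together with the 0.82 mag mean internal extinction correction, reproduces the quoted star formation rate density:

```latex
% Illustrative arithmetic, assuming the Kennicutt (1998) H-alpha calibration.
\dot{\rho}_{\mathrm{SFR}}
  \simeq 7.9\times10^{-42}\,\ell'_{\mathrm{H}\alpha}\,10^{0.4\,A_{\mathrm{H}\alpha}}
  = 7.9\times10^{-42}\times 9.4\times10^{38}\times 10^{0.4\times0.82}
  \approx 1.6\times10^{-2}\; h_{70}\,M_{\odot}\,\mathrm{yr^{-1}\,Mpc^{-3}},
```

whose logarithm is approximately −1.80, in agreement with the value quoted above.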

Relevance:

80.00%

Publisher:

Abstract:

Orthogonal frequency division multiplexing (OFDM) is becoming a fundamental technology in future generation wireless communications. Call admission control is an effective mechanism to guarantee resilient, efficient, and quality-of-service (QoS) services in wireless mobile networks. In this paper, we present several call admission control algorithms for OFDM-based wireless multiservice networks. Call connection requests are differentiated into narrow-band calls and wide-band calls. For either class of calls, the traffic process is characterized as a batch arrival process, since each call may request multiple subcarriers to satisfy its QoS requirement. The batch size is a random variable following a probability mass function (PMF) with a realistic maximum value. In addition, the service times for wide-band and narrow-band calls are different. Following this, we perform a tele-traffic queueing analysis for OFDM-based wireless multiservice networks. Formulae for the key performance metrics, call blocking probability and bandwidth utilization, are developed. Numerical investigations are presented to demonstrate the interaction between key parameters and performance metrics. The performance tradeoff among different call admission control algorithms is discussed. Moreover, the analytical model has been validated by simulation. The methodology as well as the results provide an efficient tool for planning next-generation OFDM-based broadband wireless access systems.
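As an illustration of the kind of system analyzed here, the following is a minimal Monte Carlo sketch (not the paper's analytical queueing model) of a complete-sharing admission policy for a pool of subcarriers shared by narrow-band and wide-band batch arrivals; all rates, batch-size PMFs, and the subcarrier budget are invented for the example.

```python
import heapq
import random

random.seed(1)

C = 64                      # total subcarriers (assumed value)
SIM_CALLS = 200_000         # number of call arrivals to simulate

# class name -> (arrival rate, mean holding time, batch-size PMF over subcarriers)
CLASSES = {
    "narrow": (1.0, 1.0, {1: 0.6, 2: 0.4}),
    "wide":   (0.4, 2.0, {4: 0.5, 6: 0.3, 8: 0.2}),
}

def sample_batch(pmf):
    r, acc = random.random(), 0.0
    for k, p in pmf.items():
        acc += p
        if r <= acc:
            return k
    return k

in_use = 0
blocked = {c: 0 for c in CLASSES}
offered = {c: 0 for c in CLASSES}
events = []                                   # (time, kind, payload)
for cls, (lam, _, _) in CLASSES.items():
    heapq.heappush(events, (random.expovariate(lam), "arrival", cls))

arrivals = 0
while arrivals < SIM_CALLS:
    t, kind, info = heapq.heappop(events)
    if kind == "arrival":
        arrivals += 1
        lam, hold, pmf = CLASSES[info]
        heapq.heappush(events, (t + random.expovariate(lam), "arrival", info))
        need = sample_batch(pmf)
        offered[info] += 1
        if in_use + need <= C:                # complete-sharing admission rule
            in_use += need
            heapq.heappush(events, (t + random.expovariate(1.0 / hold),
                                    "departure", need))
        else:
            blocked[info] += 1
    else:                                     # departure frees its subcarriers
        in_use -= info

for cls in CLASSES:
    print(cls, "blocking probability ~", blocked[cls] / offered[cls])
```

Different admission rules (e.g., reserving subcarriers for wide-band calls) can be compared by changing only the admission test, which is the kind of tradeoff the paper studies analytically.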

Relevance:

80.00%

Publisher:

Abstract:

The aim of this paper is to establish some mixture distributions that arise in stochastic processes. Some basic functions associated with the probability mass function of the mixture distributions, such as the k-th moments, the characteristic function, and the factorial moments, are computed. Further, we obtain a three-term recurrence relation for each established mixture distribution.
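As a standard worked example of this kind of construction (not necessarily one of the specific mixtures established in the paper), mixing a Poisson law over a gamma-distributed rate yields the negative binomial PMF, whose moments follow directly by conditioning:

```latex
% Illustrative Poisson-gamma mixture; the paper's own mixtures and its
% three-term recurrences may differ.
% If N \mid \Lambda=\lambda \sim \mathrm{Poisson}(\lambda) and
% \Lambda \sim \mathrm{Gamma}(r,\beta) (rate parametrization), then
P(N=k)=\int_0^\infty e^{-\lambda}\frac{\lambda^k}{k!}\,
        \frac{\beta^{r}\lambda^{r-1}e^{-\beta\lambda}}{\Gamma(r)}\,d\lambda
      =\binom{k+r-1}{k}\Big(\frac{\beta}{1+\beta}\Big)^{r}
        \Big(\frac{1}{1+\beta}\Big)^{k},\qquad k=0,1,2,\dots
```

By iterated expectation E[N] = E[Λ] = r/β and Var(N) = E[Λ] + Var(Λ) = r/β + r/β², while successive probabilities satisfy P(N = k) = ((k + r − 1)/k) (1/(1 + β)) P(N = k − 1), the simplest instance of the recurrence-type relations discussed in the paper.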

Relevance:

80.00%

Publisher:

Abstract:

Pavel T. Stoynov - This work considers the negative binomial distribution, also known as the Pólya distribution. We assume that the mixing distribution is a weighted gamma distribution. The probabilities in some particular cases are derived. The Panjer recursion formulas are given.
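For context, the ordinary negative binomial belongs to Panjer's (a, b, 0) class, so its probabilities can be generated recursively; the sketch below shows that textbook recursion (with arbitrarily chosen parameters, and not the weighted-gamma variant studied in the paper).

```python
# Textbook Panjer (a, b, 0) recursion for a negative binomial counting
# distribution; parameters are illustrative, not taken from the paper.
def negbin_pmf_panjer(r, beta, kmax):
    """P(N = 0..kmax) for the negative binomial in the actuarial (r, beta)
    parametrization (mean r*beta), via p_k = (a + b/k) p_{k-1}."""
    a = beta / (1.0 + beta)
    b = (r - 1.0) * beta / (1.0 + beta)
    p = [(1.0 + beta) ** (-r)]            # p_0 = (1 + beta)^(-r)
    for k in range(1, kmax + 1):
        p.append((a + b / k) * p[-1])
    return p

if __name__ == "__main__":
    pmf = negbin_pmf_panjer(r=2.5, beta=1.2, kmax=50)
    print(sum(pmf))        # close to 1 once kmax is large enough
    print(pmf[:5])
```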

Relevance:

80.00%

Publisher:

Abstract:

2010 Mathematics Subject Classification: 60E05, 62P05.

Relevance:

80.00%

Publisher:

Abstract:

Although the Standard Cosmological Model is generally accepted by the scientific community, a number of issues remain unresolved. From the observable characteristics of the structures in the Universe, it should be possible to impose constraints on the cosmological parameters. Cosmic voids (CV) are a major component of the large-scale structure (LSS) and have been shown to possess great potential for constraining dark energy (DE) and testing theories of gravity, but a gap between CV observations and theory still persists. A theoretical model for the statistical distribution of voids as a function of size exists (SvdW); however, the SvdW model has been unsuccessful in reproducing the results obtained from cosmological simulations. This undermines the possibility of using voids as cosmological probes. The goal of this thesis work is to close the gap between theoretical predictions and measured distributions of cosmic voids. We develop an algorithm to identify voids in simulations consistently with theory, and we inspect the possibilities offered by a recently proposed refinement of the SvdW model (the Vdn model, Jennings et al., 2013). Comparing void catalogues to theory, we validate the Vdn model, finding that it is reliable over a large range of radii, at all the redshifts considered and for all the cosmological models inspected. We then search for a size-function model for voids identified in a distribution of biased tracers. We find that naively applying the same procedure used for the unbiased tracers to a halo mock distribution does not provide successful results, suggesting that the Vdn model needs to be reconsidered when dealing with biased samples. We therefore test two alternative extensions of the model and find that two scaling relations exist: both the dark matter void radii and the underlying dark matter density contrast scale with the halo-defined void radii. We use these findings to develop a semi-analytical model which gives promising results.
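For reference, a minimal sketch of the excursion-set multiplicity function at the heart of the SvdW/Vdn approach is given below; it evaluates the commonly quoted two-barrier series form (as given, e.g., in Jennings et al. 2013), with the barrier values and the series truncation chosen for illustration rather than taken from this thesis.

```python
import numpy as np

def svdw_multiplicity(sigma, delta_v=-2.71, delta_c=1.686, jmax=500):
    """Two-barrier excursion-set multiplicity f_lnsigma(sigma) used in the
    SvdW/Vdn void size functions. Barrier values and the series cut-off
    jmax are illustrative defaults, not the thesis settings."""
    sigma = np.atleast_1d(sigma).astype(float)
    D = abs(delta_v) / (delta_c + abs(delta_v))
    x = D * sigma / abs(delta_v)
    j = np.arange(1, jmax + 1)[:, None]          # series index, broadcast over sigma
    terms = (2.0 * j * np.pi * x**2 * np.sin(j * np.pi * D)
             * np.exp(-0.5 * (j * np.pi * x) ** 2))
    return terms.sum(axis=0)

# Example evaluation: the multiplicity is exponentially suppressed both for
# very small and for very large sigma.
print(svdw_multiplicity([0.5, 1.0, 2.0]))
```

The Vdn refinement keeps this multiplicity but maps it to a number density in a volume-conserving way, which is what the void catalogues built in this work are compared against.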

Relevance:

80.00%

Publisher:

Abstract:

The population of naive T cells in the periphery is best described by determining both its T cell receptor diversity, or number of clonotypes, and the sizes of its clonal subsets. In this paper, we make use of a previously introduced mathematical model of naive T cell homeostasis, to study the fate and potential of naive T cell clonotypes in the periphery. This is achieved by the introduction of several new stochastic descriptors for a given naive T cell clonotype, such as its maximum clonal size, the time to reach this maximum, the number of proliferation events required to reach this maximum, the rate of contraction of the clonotype during its way to extinction, as well as the time to a given number of proliferation events. Our results show that two fates can be identified for the dynamics of the clonotype: extinction in the short-term if the clonotype experiences too hostile a peripheral environment, or establishment in the periphery in the long-term. In this second case the probability mass function for the maximum clonal size is bimodal, with one mode near one and the other mode far away from it. Our model also indicates that the fate of a recent thymic emigrant (RTE) during its journey in the periphery has a clear stochastic component, where the probability of extinction cannot be neglected, even in a friendly but competitive environment. On the other hand, a greater deterministic behaviour can be expected in the potential size of the clonotype seeded by the RTE in the long-term, once it escapes extinction.
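To make the two fates concrete, here is a minimal Gillespie-style birth-death sketch of a single clonotype. It is not the homeostasis model used in the paper (whose per-cell rates depend on competition for self-peptide stimuli), just a generic illustration in which extinction and establishment both occur and the maximum clonal size is recorded per trajectory.

```python
import random

random.seed(0)

def clonotype_trajectory(birth=1.05, death=1.0, n0=1, n_cap=500, t_max=200.0):
    """One clonotype as a linear birth-death process (illustrative rates only).
    Returns (maximum size reached, whether the clonotype went extinct)."""
    n, t, n_max = n0, 0.0, n0
    while 0 < n < n_cap and t < t_max:
        t += random.expovariate((birth + death) * n)
        if random.random() < birth / (birth + death):
            n += 1
        else:
            n -= 1
        n_max = max(n_max, n)
    return n_max, n == 0

runs = [clonotype_trajectory() for _ in range(5000)]
extinct = [m for m, dead in runs if dead]
print("extinction fraction:", len(extinct) / len(runs))
# The maxima split into a mode near 1 (early extinction) and a mode at the
# cap (establishment), echoing the bimodality described above.
if extinct:
    print("median max size among extinct runs:",
          sorted(extinct)[len(extinct) // 2])
```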

Relevance:

80.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size, despite huge increases in the value of n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
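To fix ideas, a latent class (single-layer PARAFAC-type) model writes the joint PMF of p categorical variables as a nonnegative low-rank tensor, P(x1, ..., xp) = Σ_h ν_h Π_j λ_{j,h}(x_j). The sketch below builds such a tensor for a toy example; all dimensions and parameters are invented, and the collapsed Tucker decomposition proposed in Chapter 2 is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

k = 3                        # number of latent classes (toy choice)
levels = [2, 3, 4]           # numbers of categories of the p = 3 variables

nu = rng.dirichlet(np.ones(k))                             # class weights
lam = [rng.dirichlet(np.ones(d), size=k) for d in levels]  # lam[j][h] = P(x_j = . | class h)

# Joint PMF as a sum over classes of outer products of class-conditional marginals.
pmf = np.zeros(levels)
for h in range(k):
    outer = np.ones(())
    for j in range(len(levels)):
        outer = np.multiply.outer(outer, lam[j][h])
    pmf += nu[h] * outer

print(pmf.shape, pmf.sum())   # (2, 3, 4); sums to 1, so it is a valid probability tensor
```

The number of classes k plays the role of the nonnegative rank that Chapter 2 relates to the support of a log-linear model for the same table.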

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
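The flavour of such approximations can be illustrated with the generic Laplace (Gaussian) approximation to a posterior, shown below for a toy Poisson log-linear model with a Gaussian prior. Note this is only the standard mode-plus-Hessian construction, not the optimal Gaussian approximation under Diaconis--Ylvisaker priors derived in Chapter 4; the model, prior, and data are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Toy Poisson log-linear model: y_i ~ Poisson(exp(x_i' beta)), Gaussian prior on beta.
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([0.5, 0.3, -0.2])
y = rng.poisson(np.exp(X @ beta_true))
prior_prec = np.eye(p) / 10.0            # N(0, 10 I) prior, an arbitrary choice

def neg_log_post(beta):
    eta = X @ beta
    return -(y @ eta - np.exp(eta).sum()) + 0.5 * beta @ prior_prec @ beta

beta_map = minimize(neg_log_post, np.zeros(p), method="BFGS").x

# Gaussian approximation: mean = posterior mode, covariance = inverse Hessian
# of the negative log posterior at the mode (available in closed form here).
W = np.exp(X @ beta_map)                 # Poisson curvature weights at the mode
cov = np.linalg.inv(X.T @ (W[:, None] * X) + prior_prec)
print("MAP:", beta_map.round(3))
print("approximate posterior s.d.:", np.sqrt(np.diag(cov)).round(3))
```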

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
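The central observable in this framework, inter-exceedance waiting times, is simple to extract from a time-indexed series; the sketch below computes them for a simulated AR(1) series with a high empirical threshold. The series, the threshold level, and any subsequent fitting are placeholders, not the max-stable velocity-process machinery of Chapter 5.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder series: an AR(1) process standing in for time-indexed data.
n, phi = 10_000, 0.7
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

u = np.quantile(x, 0.98)                  # high threshold (placeholder level)
exceed_times = np.flatnonzero(x > u)      # indices of threshold exceedances
waiting_times = np.diff(exceed_times)     # gaps between successive exceedances

# Clustering of extremes shows up as an excess of short waiting times relative
# to the roughly geometric gaps an independent series would produce.
print("number of exceedances:", exceed_times.size)
print("mean / median waiting time:", waiting_times.mean(), np.median(waiting_times))
```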

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
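For readers unfamiliar with these samplers, below is a compact sketch of the truncated-normal (Albert-Chib) data augmentation Gibbs sampler for probit regression with a flat prior; the Polya-Gamma sampler for logit links has the same two-block structure with a different latent variable. The data are simulated with a very low success rate to mimic the rare-event regime in which Chapter 7 shows these chains mix slowly; the sample size, coefficients, and chain length are illustrative.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(3)

# Simulated rare-event probit data (illustrative sizes and coefficients).
n, p = 5000, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-2.5, 0.5])          # intercept chosen so successes are rare
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(int)
print("observed successes:", y.sum(), "of", n)

XtX_inv = np.linalg.inv(X.T @ X)
chol = np.linalg.cholesky(XtX_inv)

beta = np.zeros(p)
draws = np.empty((2000, p))
for it in range(draws.shape[0]):
    # 1. Latent utilities z_i | beta, y: normal around X beta, truncated to
    #    (0, inf) if y_i = 1 and (-inf, 0) if y_i = 0.
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)    # standardized lower bounds
    hi = np.where(y == 1, np.inf, -mu)     # standardized upper bounds
    z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # 2. beta | z with a flat prior: N((X'X)^{-1} X'z, (X'X)^{-1}).
    beta = XtX_inv @ (X.T @ z) + chol @ rng.standard_normal(p)
    draws[it] = beta

# Lag-1 autocorrelation of the intercept chain; values near 1 indicate slow mixing.
b0 = draws[500:, 0]
print("lag-1 autocorrelation:", np.corrcoef(b0[:-1], b0[1:])[0, 1])
```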

Relevance:

80.00%

Publisher:

Abstract:

A recently developed novel biomass fuel pellet, the Q’ Pellet, offers significant improvements over conventional white pellets, with characteristics comparable to those of coal. The Q’ Pellet was initially created at bench scale using a proprietary die and punch design, in which the biomass was torrefied in-situ and then compressed. To bring the benefits of the Q’ Pellet to a commercial level, it must be capable of being produced in a continuous process at a competitive cost. A prototype machine was previously constructed in a first effort to assess continuous processing of the Q’ Pellet. The prototype torrefied biomass in a separate, ex-situ reactor and transported it into a rotary compression stage. Upon evaluation, parts of the prototype were found to be unsuccessful and required a redesign of the material transport method as well as the compression mechanism. A process was developed in which material was torrefied ex-situ and extruded in a pre-compression stage. The extruded biomass overcame multiple handling issues that had been experienced with un-densified biomass, facilitating efficient material transport. Biomass was extruded directly into a novel re-designed pelletizing die, which incorporated a removable cap, ejection pin and a die spring to accommodate a repeatable continuous process. Although after several uses the die required manual intervention due to minor design and manufacturing quality limitations, the system clearly demonstrated the capability of producing the Q’ Pellet in a continuous process. Q’ Pellets produced by the pre-compression method and pelletized in the re-designed die had an average dry basis gross calorific value of 22.04 MJ/kg, pellet durability index of 99.86% and dried to 6.2% of its initial mass following 24 hours submerged in water. This compares well with literature results of 21.29 MJ/kg, 100% pellet durability index and <5% mass increase in a water submersion test. These results indicate that the methods developed herein are capable of producing Q’ Pellets in a continuous process with fuel properties competitive with coal.

Relevance:

80.00%

Publisher:

Abstract:

A supernova (SN) is the explosion of a star at the end of its lifetime. SNe are classified into two types, namely type I and type II, through their optical spectra. Based on their explosion mechanism, they are categorised into core-collapse supernovae (CCSNe) and thermonuclear supernovae. The CCSNe group, which includes types IIP, IIn, IIL, IIb, Ib, and Ic, is produced when a massive star with an initial mass of more than 8 M⊙ explodes due to the collapse of its iron core. On the other hand, thermonuclear SNe originate from white dwarfs (WDs) made of carbon and oxygen in a binary system. Infrared astronomy covers observations of astronomical objects in infrared radiation. The infrared sky is not completely dark and it is variable. Observations of SNe in the infrared give different information than optical observations. Data reduction is required to correct the raw data for, e.g., unusable pixels and sky background. In this project, the NOTCam package in IRAF was used for the data reduction. For measuring magnitudes of SNe, the aperture photometry method with the Gaia program was used. In this Master’s thesis, near-infrared (NIR) observations of three supernovae of type IIn (namely LSQ13zm, SN 2009ip and SN 2011jb), one type IIb (SN 2012ey), in addition to one type Ic (SN 2012ej) and one type IIP (SN 2013gd) are studied with emphasis on luminosity and colour evolution. All observations were done with the Nordic Optical Telescope (NOT). Here, we used the classification by Mattila & Meikle (2001) [76], where the SNe are differentiated by their infrared light curves into two groups, namely ’ordinary’ and ’slowly declining’. The light curves and colour evolution of these supernovae were obtained in the J, H and Ks bands. In this study, our data, combined with other observations, provide evidence to categorize LSQ13zm, SN 2012ej and SN 2012ey as being part of the ordinary type. We found interesting NIR behaviour of SN 2011jb, which led it to be classified as a slowly declining type.

Relevance:

80.00%

Publisher:

Abstract:

This work was developed at the company Amorim & Irmãos, SA and had two main objectives. The first focused on the analysis of the surface treatment process of cork stoppers, seeking an alternative product to the one currently implemented at the company and its optimization. The second objective was the development of a new method for determining in-bottle absorption that would allow its determination without knowledge of the initial mass of the stopper. To achieve the first objective, twelve chemical products were studied in comparison with the one currently used, the goal being to obtain extraction forces between 15 and 20 daN. After the surface treatment was carried out with each product, several laboratory tests were performed, namely: extraction forces, tube sealing, in-bottle absorption, capillarity, risk analysis of the amount of product added, and risk analysis of the treatment distribution time. A global analysis of the results showed that product T13, although it gives extraction forces at the lower limit of the desired range, provides good stabilization. Products T5 and T6 are good alternatives to the currently implemented product (T8), although some care is needed in their handling. Since product T5 performed poorly in the in-bottle absorption test, it cannot be used for more distant markets (USA, Australia and South Africa) due to the risk of wine migration through the cork stopper. Since product T6 showed irregular behaviour in the risk analysis of the amount of product added and in the risk analysis of product distribution, close attention must be paid to the amount inserted in the drum as well as to the distribution time. To achieve the second objective, in-bottle absorption was determined by the current method and compared with the new method. Although the standard deviation is approximately 0.85, it can be stated that the new method for determining in-bottle absorption is an effective method that can be approved by the company. In this way, it was possible to resolve this issue and allow the quality control laboratory to determine the in-bottle absorption of wine bottles coming from customers.

Relevance:

80.00%

Publisher:

Abstract:

Microwave reduction testing using activated charcoal as a reducing agent was performed on a sample of Black Thor chromite ore from the Ring of Fire deposit in Northern Ontario. First, a thermodynamic model was constructed for the system, with activity coefficients for several species taken from the literature. The model predicted chromium grades of 61.60% and recoveries of 93.43% for a 15% carbon addition. Next, reduction testing on the chromite ore was performed at increasing power levels and reduction times, using air, argon, and vacuum atmospheres. The reduced product had maximum grades of 72.89% and recoveries of 80.37%. These maximum values were obtained in the same test, in which an argon atmosphere was used, with a carbon addition of 15%, a nominal power level of 1200 W (actual 1171 W), and a time of 400 seconds. During this test, 17.53% of the initial mass was lost as gas, and a carbon grade of 1.95% was found for the sintered core product. Additional work is recommended to further purify the sintered core product and to reduce more of the initial sample. A different reagent scheme or a two-step reduction/separation process could be implemented.

Relevance:

80.00%

Publisher:

Abstract:

We introduce a covariant approach in Minkowski space for the description of quarks and mesons that exhibits both chiral-symmetry breaking and confinement. In a simple model for the interquark interaction, the quark mass function is obtained and used in the calculation of the pion form factor. We study the effects of the mass function and the different quark pole contributions on the pion form factor.

Relevance:

80.00%

Publisher:

Abstract:

The production of activated carbons (ACs) involves two main steps: the carbonization of the carbonaceous raw materials at temperatures below 1073 K in the absence of oxygen, and the activation, carried out at temperatures up to 1173 K, most commonly at 1073 K. In our study we used the most common industrial and consumer solid waste, namely PET, alone or blended with another synthetic polymer, PAN. By mixing the two polymers in different ratios, an improvement in the yield of the AC production was found and some textural properties were enhanced in comparison with the ACs prepared from each polymer separately. When all the samples were subjected to the carbonization (pyrolysis) step, the PAN-PET mixture (1:1 w/w) yielded around 31.9%, between the values obtained with PET (16.9%) and PAN (42.6%) separately. The combined activation with CO2 at 1073 K, carried out isothermally, produced ACs with a lower burn-off degree than those attained with PET or PAN alone, but with similar chemical and textural properties. The resulting ACs are microporous in nature; as the activation time increases, the PET-PAN mixture ACs develop a better porous structure than the ACs prepared from PAN. The ACs prepared from the PET-PAN mixture have basic surface characteristics, with a pHpzc around 10.5, which is an important characteristic for future applications in the removal of acidic pollutants from the liquid or gaseous phase. In this study we used FTIR to determine the main functional groups on the surface of the activated carbons. The adsorbents prepared from PAN fibres present an IR spectrum with characteristics similar to those obtained with PET wastes, but with fewer peaks and bands of lower intensity, in particular for the PAN-8240 sample. This is reflected in the stretching and deformation modes of the NH bond in the ranges 3100–3300 cm⁻¹ and 1520–1650 cm⁻¹, respectively. Also, stretching modes associated with C–N and C=N can contribute to the profile of the IR spectrum around 1170 cm⁻¹ and 1585–1770 cm⁻¹. TGA was used to study the loss of precursor mass with increasing temperature. The results showed a different mass-loss behaviour for each precursor. PAN degradation started at almost 573 K and, at 1073 K, PAN preserved more than 40% of its initial mass. PET degradation started at 650 K, but at 1073 K it had lost 80% of its initial mass. The PET-PAN mixture (1:1 w/w) showed a thermogravimetric profile between those of the two polymers tested individually, with a final mass slightly below 30%. From a chemical point of view, the carbonisation of PET mainly occurs in one step between 650 and 775 K.