67 results for GALAXIES: FUNDAMENTAL PARAMETERS
Abstract:
Low concentrations of elements in geochemical analyses have the peculiarity of being compositional data and, for a given level of significance, are likely to be beyond the capabilities of laboratories to distinguish between minute concentrations and complete absence, thus preventing laboratories from reporting extremely low concentrations of the analyte. Instead, what is reported is the detection limit, which is the minimum concentration that conclusively differentiates between presence and absence of the element. A spatially distributed exhaustive sample is employed in this study to generate unbiased sub-samples, which are further censored to observe the effect that different detection limits and sample sizes have on the inference of population distributions starting from geochemical analyses having specimens below detection limit (nondetects). The isometric logratio transformation is used to convert the compositional data in the simplex to samples in real space, thus allowing the practitioner to properly borrow from the large source of statistical techniques valid only in real space. The bootstrap method is used to numerically investigate the reliability of inferring several distributional parameters employing different forms of imputation for the censored data. The case study illustrates that, in general, best results are obtained when imputations are made using the distribution best fitting the readings above detection limit, and exposes the problems of other more widely used practices. When the sample is spatially correlated, it is necessary to combine the bootstrap with stochastic simulation.
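As a rough illustration of the workflow described (not the study's code; the detection limit, the lognormal fit and the two-part composition are illustrative assumptions), the sketch below imputes nondetects from a distribution fitted to the readings above the detection limit, maps the data to real space with the isometric logratio transform, and bootstraps a distributional parameter:

```python
# Sketch only: impute nondetects from a fitted lognormal truncated at the
# detection limit, apply the two-part isometric logratio (ilr) transform,
# and bootstrap the mean ilr coordinate. Data and parameters are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true = rng.lognormal(mean=1.0, sigma=0.8, size=500)   # analyte concentration (ppm)
dl = 2.0                                              # assumed detection limit (ppm)
censored = np.where(true < dl, np.nan, true)          # nondetects reported as missing

# Fit a lognormal to the readings above the detection limit
# (a proper left-truncated fit is skipped here for brevity).
obs = censored[~np.isnan(censored)]
shape, loc, scale = stats.lognorm.fit(obs, floc=0)

def impute(x):
    """Replace nondetects with draws from the fitted distribution below dl."""
    out = x.copy()
    n_cens = int(np.isnan(out).sum())
    p_dl = stats.lognorm.cdf(dl, shape, loc=0, scale=scale)
    u = rng.uniform(0.0, p_dl, size=n_cens)            # truncate the fit at dl
    out[np.isnan(out)] = stats.lognorm.ppf(u, shape, loc=0, scale=scale)
    return out

def ilr_two_part(ppm, total=1e6):
    """ilr coordinate of the two-part composition (analyte, remainder in ppm)."""
    return np.sqrt(0.5) * np.log(ppm / (total - ppm))

filled = impute(censored)
boot = [ilr_two_part(rng.choice(filled, size=filled.size, replace=True)).mean()
        for _ in range(1000)]
print("bootstrap mean of the ilr coordinate:",
      round(float(np.mean(boot)), 4), "+/-", round(float(np.std(boot)), 4))
```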
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has grown to account for up to 70% of the trading volume in some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, owing to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics necessary for building quantitative strategies. We also contrast these models with real market data sampled at one-minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast as a well-defined scientific predictor if the signal they generate passes the test of being a Markov time; that is, we can tell whether the signal has occurred or not by examining the information up to the current time, or, more technically, whether the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to stochastic differential equations such as the well-known mean-reverting Ornstein-Uhlenbeck SDE and its variations. A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor and yet lack any economic value, being considered useless from a practical point of view. This is why this project could not be complete without a backtesting of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is the reason we emphasize the calibration process of the strategies' parameters to adapt them to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented in MATLAB from scratch as part of this thesis. No other mathematical or statistical software was used.
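As a toy illustration of the mean-reversion side (the thesis code itself was written in MATLAB and is not reproduced here; all parameters and thresholds below are assumptions), the following sketch simulates an Ornstein-Uhlenbeck spread and applies a naive z-score entry/exit rule:

```python
# Sketch only: exact discretization of an Ornstein-Uhlenbeck spread plus a
# simple mean-reversion trading signal. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
theta, mu, sigma, dt, n = 5.0, 0.0, 0.3, 1.0 / 252, 2520   # assumed OU parameters

# Exact discretization of dX = theta * (mu - X) dt + sigma dW
x = np.empty(n)
x[0] = mu
a = np.exp(-theta * dt)
sd = sigma * np.sqrt((1 - a**2) / (2 * theta))
for t in range(1, n):
    x[t] = mu + a * (x[t - 1] - mu) + sd * rng.standard_normal()

# Mean-reversion rule: short the spread above +2 sigma, long below -2 sigma.
z = (x - mu) / (sigma / np.sqrt(2 * theta))    # z-score w.r.t. the stationary std
position = np.where(z > 2, -1, np.where(z < -2, 1, 0))
pnl = position[:-1] * np.diff(x)               # position at t applied to move t -> t+1
print("cumulative P&L of the toy strategy:", float(pnl.sum()))
```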
Abstract:
An alternative approach to fundamental concepts of general physics is proposed. We demonstrate that the electrostatic potential energy of a discrete or continuous system of charges should be regarded as stored by the charges and not by the field. It is found that there is a possibility that the electric field carries no energy density, and likewise the magnetic field. It is found that there is no direct relation between the electric or magnetic energy and photons. An alternative derivation of the blackbody radiation formula is proposed. It is also found that the zero-point energy of electromagnetic radiation may not exist.
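For reference, these are the standard textbook forms of electrostatic energy whose interpretation the abstract disputes: the pair-interaction sum over discrete charges and, for a continuous charge density, the equivalent potential-energy and field-energy integrals (the discrete sum omits self-energy terms):

```latex
% Charge-based (pair-interaction) form versus field-energy form.
U \;=\; \frac{1}{2}\sum_{i\neq j}\frac{q_i q_j}{4\pi\varepsilon_0\, r_{ij}},
\qquad
U \;=\; \frac{1}{2}\int \rho(\mathbf{r})\,\phi(\mathbf{r})\,\mathrm{d}V
   \;=\; \frac{\varepsilon_0}{2}\int \lvert\mathbf{E}(\mathbf{r})\rvert^{2}\,\mathrm{d}V .
```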
Abstract:
For the standard kernel density estimate, it is known that one can tune the bandwidth such that the expected L1 error is within a constant factor of the optimal L1 error (obtained when one is allowed to choose the bandwidth with knowledge of the density). In this paper, we pose the same problem for variable bandwidth kernel estimates where the bandwidths are allowed to depend upon the location. We show in particular that for positive kernels on the real line, for any data-based bandwidth, there exists a density for which the ratio of expected L1 error over optimal L1 error tends to infinity. Thus, the problem of tuning the variable bandwidth in an optimal manner is "too hard". Moreover, from the class of counterexamples exhibited in the paper, it appears that placing conditions on the densities (monotonicity, convexity, smoothness) does not help.
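As a toy illustration of the objects involved (not the paper's construction), the sketch below compares a fixed-bandwidth Gaussian kernel estimate with a variable-bandwidth one in which each data point receives its k-nearest-neighbour distance as bandwidth, and measures the L1 error against the known generating density; the kernel, k and the bandwidth rules are assumptions:

```python
# Sketch only: fixed vs. sample-point variable bandwidth Gaussian KDE,
# with L1 error computed against the known standard normal density.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.standard_normal(400)                 # true density: standard normal
grid = np.linspace(-5, 5, 1001)

def kde(x, grid, h):
    """Gaussian KDE; h may be a scalar or one bandwidth per data point."""
    h = np.broadcast_to(np.asarray(h, dtype=float), x.shape)
    u = (grid[:, None] - x[None, :]) / h[None, :]
    return (stats.norm.pdf(u) / h[None, :]).mean(axis=1)

h_fixed = 1.06 * data.std() * data.size ** (-1 / 5)       # Silverman's rule
k = 30
h_var = np.sort(np.abs(data[:, None] - data[None, :]), axis=1)[:, k]  # k-NN distances

dx = grid[1] - grid[0]
true_f = stats.norm.pdf(grid)
for name, h in [("fixed bandwidth", h_fixed), ("k-NN variable bandwidth", h_var)]:
    l1 = np.abs(kde(data, grid, h) - true_f).sum() * dx   # Riemann-sum L1 error
    print(f"{name:25s} L1 error: {l1:.3f}")
```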
Abstract:
Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: a) those that use weights that involve area-specific estimates of bias and variance; and b) those that use weights that involve a common variance and a common squared bias estimate for all the areas. We assess their precision and discuss alternatives to optimizing composite estimation in applications.
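For illustration, here is a minimal sketch of a composite estimator of type a): a convex combination of a direct area estimate and a synthetic (overall-mean) estimate, weighted by area-specific estimates of variance and squared bias. The data, the choice of synthetic estimator and the weighting rule are illustrative assumptions, not the paper's exact estimators:

```python
# Sketch only: composite = w * direct + (1 - w) * synthetic, with a crude
# area-specific weight built from estimated variance and squared bias.
import numpy as np

rng = np.random.default_rng(3)
areas = {f"area_{i}": rng.normal(loc=mu, scale=2.0, size=n)
         for i, (mu, n) in enumerate([(10, 8), (12, 5), (15, 30), (11, 12)])}

overall = np.concatenate(list(areas.values()))
synthetic = overall.mean()                 # indirect (synthetic) estimator, same for all areas

for name, y in areas.items():
    direct = y.mean()
    var_direct = y.var(ddof=1) / y.size                               # variance of the direct part
    sq_bias_synth = max((synthetic - direct) ** 2 - var_direct, 0.0)  # crude squared-bias estimate
    denom = sq_bias_synth + var_direct
    w = sq_bias_synth / denom if denom > 0 else 1.0                   # MSE-motivated weight
    composite = w * direct + (1 - w) * synthetic
    print(f"{name}: direct={direct:.2f}  synthetic={synthetic:.2f}  "
          f"w={w:.2f}  composite={composite:.2f}")
```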
Abstract:
Many dynamic revenue management models divide the sale period into a finite number of periods T and assume, invoking a fine-enough grid of time, that each period sees at most one booking request. These Poisson-type assumptions restrict the variability of the demand in the model, but researchers and practitioners have been willing to overlook this for the benefit of tractability of the models. In this paper, we criticize this model from another angle. Estimating the discrete finite-period model poses problems of indeterminacy and non-robustness: arbitrarily fixing T leads to arbitrary control values, while estimating T from data adds an additional layer of indeterminacy. To counter this, we first propose an alternate finite-population model that avoids this problem of fixing T and allows a wider range of demand distributions, while retaining the useful marginal-value properties of the finite-period model. The finite-population model still requires jointly estimating the market size and the parameters of the customer purchase model without observing no-purchases. Estimation of market size when no-purchases are unobservable has rarely been attempted in the marketing or revenue management literature. Indeed, we point out that it is akin to the classical statistical problem of estimating the parameters of a binomial distribution with unknown population size and success probability, and hence likely to be challenging. However, when the purchase probabilities are given by a functional form such as a multinomial-logit model, we propose an estimation heuristic that exploits the specification of the functional form, the variety of the offer sets in a typical RM setting, and qualitative knowledge of arrival rates. Finally, we perform simulations to show that the estimator is very promising in obtaining unbiased estimates of the population size and the model parameters.
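To make the cited difficulty concrete, here is a small sketch (synthetic data, not the paper's heuristic) of maximum-likelihood estimation for a binomial sample when both N and p are unknown; the profile likelihood in N is typically very flat, which is the kind of indeterminacy the abstract alludes to:

```python
# Sketch only: profile the binomial log-likelihood over N with p profiled out,
# and report how many values of N are nearly as good as the maximizer.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
true_N, true_p = 75, 0.3
counts = rng.binomial(true_N, true_p, size=20)     # observed counts (e.g. purchases)

profile = []
for N in range(int(counts.max()), 400):
    p_hat = counts.mean() / N                      # MLE of p given N
    ll = stats.binom.logpmf(counts, N, p_hat).sum()
    profile.append((ll, N, p_hat))

ll_star, N_star, p_star = max(profile)
near = [N for ll, N, _ in profile if ll_star - ll < 1.0]
print(f"profile MLE: N={N_star}, p={p_star:.3f}, loglik={ll_star:.2f}")
print(f"{len(near)} values of N within 1 log-lik unit of the optimum "
      f"({min(near)}..{max(near)})")
```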
Characterization of intonation in Karṇāṭaka music by parametrizing context-based Svara Distributions
Abstract:
Intonation is a fundamental music concept that has a special relevance in Indian art music. It is characteristic of the rāga and intrinsic to the musical expression of the performer. Describing intonation is of importance to several information retrieval tasks, such as the development of rāga and artist similarity measures. In our previous work, we proposed a compact representation of intonation based on the parametrization of the pitch histogram of a performance and demonstrated the usefulness of this representation through an exploratory rāga recognition task in which we classified 42 vocal performances belonging to 3 rāgas using the parameters of a single svara. In this paper, we extend this representation to employ context-based svara distributions, which are obtained with a different approach to finding the pitches belonging to each svara. We quantitatively compare this method with our previous one, discuss its advantages, and outline the melodic analysis to be carried out in future work.
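As a rough sketch of such a parametrization (an assumption for illustration, not the authors' implementation; the tonic, svara positions and window width are made up), one can fold a pitch track to one octave in cents, build the pitch histogram, and summarize the distribution around one svara:

```python
# Sketch only: pitch histogram in cents and simple parameters (mean, spread,
# skewness, mass) for the distribution around one svara.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
tonic_hz = 146.8                                     # assumed tonic frequency

# Synthetic pitch track (Hz) scattered around three svara positions (in cents).
cents_true = rng.choice([0, 200, 386], size=5000, p=[0.5, 0.3, 0.2])
pitch_hz = tonic_hz * 2 ** ((cents_true + rng.normal(0, 20, 5000)) / 1200)

cents = (1200 * np.log2(pitch_hz / tonic_hz)) % 1200    # fold to one octave
hist, edges = np.histogram(cents, bins=120, range=(0, 1200), density=True)
print("histogram peak near", float(edges[hist.argmax()]), "cents")

# Parametrize the svara nearest to 200 cents within a +/-60 cent circular window.
svara_pos = 200
dist = np.abs((cents - svara_pos + 600) % 1200 - 600)   # circular distance in cents
window = cents[dist < 60]
print(f"svara at {svara_pos} cents: mean={window.mean():.1f}, std={window.std():.1f}, "
      f"skew={stats.skew(window):.2f}, mass={window.size / cents.size:.2f}")
```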
Abstract:
A digital game was created as a resource for cognitive learning and was afterwards used in primary schools in order to survey its active users. The methods used to collect data were observation, in-depth interviews and focus groups. The main aim of this study is to gather the points of view of different primary school teachers. The conclusions show how the members of the study group perceive the use of digital games in the classroom.
Abstract:
Nowadays, market demands and competitiveness force industries to modernize and automate all of their production processes. In these processes, data and control parameters are fundamental quantities to verify. This final degree project (TFC) sets out to build a digital input module to manage the data received from an automated process. The objective of this TFC has been to design a digital input module capable of handling data from any type of automated process and transmitting it to a master via a Modbus communication bus. The project, however, has focused on the specific case of an automated wood-treatment process. The development of this system comprises the circuit design, the fabrication of the board, the data-acquisition software, and the implementation of the Modbus protocol. The whole input module is controlled by a PIC 18F4520 microcontroller. The design is a multi-platform system intended to adapt to any automated process, and some of its most relevant features are: isolated multi-voltage inputs, leakage monitoring, relay outputs, and external data memory, among others. As conclusions, the proposed objectives have been achieved successfully. A robust, reliable, versatile and highly competitive design has been obtained. At the academic level, knowledge has been broadened in the fields of design and programming.
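To illustrate the protocol side (a host-side sketch, not the TFC firmware, which runs on the PIC 18F4520), the following shows how a Modbus RTU master could frame a "Read Discrete Inputs" (function 0x02) request to such a module, including the standard Modbus CRC-16:

```python
# Sketch only: Modbus RTU request framing with the standard CRC-16 (poly 0xA001).
def modbus_crc16(frame: bytes) -> bytes:
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc.to_bytes(2, "little")        # CRC is transmitted low byte first

def read_discrete_inputs_request(slave: int, start: int, count: int) -> bytes:
    """Build slave addr + function 0x02 + start addr + quantity + CRC."""
    body = bytes([slave, 0x02]) + start.to_bytes(2, "big") + count.to_bytes(2, "big")
    return body + modbus_crc16(body)

# Example: ask slave 1 for 16 digital inputs starting at address 0.
print(read_discrete_inputs_request(1, 0, 16).hex(" "))
```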
Abstract:
The development and tests of an iterative reconstruction algorithm for emission tomography based on Bayesian statistical concepts are described. The algorithm uses the entropy of the generated image as a prior distribution, can be accelerated by the choice of an exponent, and converges uniformly to feasible images by the choice of one adjustable parameter. A feasible image has been defined as one that is consistent with the initial data (i.e. an image that, if it were truly the source of radiation in a patient, could have generated the initial data by the Poisson process that governs radioactive disintegration). The fundamental ideas of Bayesian reconstruction are discussed, along with the use of an entropy prior with an adjustable contrast parameter, the use of likelihood with data increment parameters as conditional probability, and the development of the new fast maximum a posteriori with entropy (FMAPE) algorithm by the successive substitution method. It is shown that in the maximum likelihood estimator (MLE) and FMAPE algorithms, the only correct choice of initial image for the iterative procedure in the absence of a priori knowledge about the image configuration is a uniform field.
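For context, here is a minimal sketch of the standard MLEM (maximum-likelihood EM) update for emission tomography, started from a uniform initial image as the abstract recommends; the system matrix and data are tiny synthetic stand-ins, and the FMAPE entropy prior and acceleration exponent are not included:

```python
# Sketch only: multiplicative MLEM iterations on a toy emission problem.
import numpy as np

rng = np.random.default_rng(6)
n_pix, n_det = 16, 40
A = rng.uniform(size=(n_det, n_pix))            # toy system (projection) matrix
x_true = rng.gamma(2.0, 1.0, size=n_pix)        # toy activity distribution
y = rng.poisson(A @ x_true)                     # Poisson-distributed counts

x = np.full(n_pix, y.sum() / A.sum())           # uniform initial image
sens = A.sum(axis=0)                            # sensitivity (column sums of A)
for _ in range(200):
    ratio = y / np.clip(A @ x, 1e-12, None)     # measured / forward-projected counts
    x *= (A.T @ ratio) / sens                   # multiplicative MLEM update

print("relative error:", float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
```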
Abstract:
We present a set of photometric data concerning two distant clusters of galaxies: Cl 1613+3104 (z=0.415) and Cl 1600+4109 (z=0.540). The photometric survey extends to a field of about 4' x 3'. It was performed in 3 filters: Johnson B, and Thuan-Gunn g and r. The sample includes 679 objects in the field of Cl 1613+3104 and 334 objects in Cl 1600+4109.
Abstract:
Redshifts for 100 galaxies in 10 clusters of galaxies are presented based on data obtained between March 1984 and March 1985 from Calar Alto, La Palma, and ESO, and on data from Mauna Kea. Data for individual galaxies are given, and the accuracy of the velocities of the four instruments is discussed. Comparison with published data shows the present velocities to be shifted by +4.0 km/s on average, with a standard deviation in the difference of 89.7 km/s, consistent with the rms errors of the redshift measurements, which range from 50 to 100 km/s.
Abstract:
Spectroscopic and photometric observations in a 6 arcmin x 6 arcmin field centered on the rich cluster of galaxies Abell 2390 are presented. The photometry concerns 700 objects and the spectroscopy 72 objects. The redshift survey shows that the mean redshift of the cluster is 0.232. An original method for automatic determination of the spectral type of galaxies is presented.
Abstract:
uvby H-beta photometry has been obtained for a sample of 93 selected main sequence A stars. The purpose was to determine accurate effective temperatures, surface gravities, and absolute magnitudes for an individual determination of ages and parallaxes, which have to be included in a more extensive work analyzing the kinematic properties of A V stars. Several calibrations and methods to determine the above-mentioned parameters have been reviewed, allowing the design of a new algorithm for their determination. The results obtained using this procedure were tested in a previous paper using uvby H-beta data from the Hauck and Mermilliod catalogue, and by comparing the resulting temperatures, surface gravities and absolute magnitudes with empirical determinations of these parameters.
Abstract:
We present new photometric and spectroscopic observations of objects in the field of the cluster of galaxies Abell 2218. The photometric survey, centered on the cluster core, extends to a field of about 4 x 4 arcmin. It was performed in 5 bands (B, g, r, i and z filters). This sample, which includes 729 objects, is about three times larger than the survey made by Butcher and collaborators (Butcher et al., 1983; Butcher and Oemler, 1984) in the same central region of the field. Only 228 objects appear in both catalogues since our survey covers a smaller region. The spectral range covered by our filters is wider and the photometry is much deeper, up to magnitude 27 in r. The spectroscopic survey concerns 66 objects, on a field comparable to that of Butcher and collaborators. From our observations we calculate the mean redshift of the cluster, 0.1756, and its velocity dispersion, 1370 km/s. The spectral types are determined for many galaxies in the sample by comparing their spectra with synthetic ones from Rocca-Volmerange and Guiderdoni (1988).
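As an illustration of how such quantities are commonly derived (synthetic redshifts here; the paper's own member selection and estimator may differ), a short sketch that computes a cluster mean redshift and rest-frame velocity dispersion from member redshifts:

```python
# Sketch only: mean redshift and line-of-sight velocity dispersion from
# synthetic cluster-member redshifts.
import numpy as np

c_km_s = 299_792.458
rng = np.random.default_rng(7)

# Synthetic member redshifts around z ~ 0.1756 with sigma_v ~ 1370 km/s.
z_cl_true, sigma_true = 0.1756, 1370.0
z = z_cl_true + (1 + z_cl_true) * rng.normal(0.0, sigma_true / c_km_s, size=50)

z_cl = z.mean()                                  # cluster mean redshift
v_rest = c_km_s * (z - z_cl) / (1 + z_cl)        # rest-frame peculiar velocities
sigma_v = v_rest.std(ddof=1)                     # velocity dispersion (km/s)
print(f"mean z = {z_cl:.4f},  sigma_v = {sigma_v:.0f} km/s")
```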