962 results for Subsequential Completeness
Abstract:
The VISTA near-infrared survey of the Magellanic System (VMC) will provide deep YJKs photometry reaching stars at the oldest turn-off point throughout the Magellanic Clouds (MCs). As part of the preparation for the survey, we aim to assess the accuracy in the star formation history (SFH) that can be expected from VMC data, in particular for the Large Magellanic Cloud (LMC). To this aim, we first simulate VMC images containing not only the LMC stellar populations but also the foreground Milky Way (MW) stars and background galaxies. The simulations cover the whole range of density of LMC field stars. We then perform aperture photometry on these simulated images, assess the expected levels of photometric errors and incompleteness, and apply the classical technique of SFH recovery based on the reconstruction of colour-magnitude diagrams (CMDs) via the minimisation of a chi-squared-like statistic. We verify that the foreground MW stars are accurately recovered by the minimisation algorithms, whereas the background galaxies can be largely eliminated from the CMD analysis owing to their particular colours and morphologies. We then evaluate the expected errors in the recovered star formation rate as a function of stellar age, SFR(t), starting from models with a known age-metallicity relation (AMR). It turns out that, for a given sky area, the random errors for ages older than ~0.4 Gyr seem to be independent of the crowding. This can be explained by a counterbalancing effect between the loss of stars due to decreased completeness and the gain of stars due to increased stellar density. For a spatial resolution of ~0.1 deg², the random errors in SFR(t) will be below 20% for this wide range of ages. On the other hand, due to the lower stellar statistics for stars younger than ~0.4 Gyr, the outer LMC regions will require larger areas to achieve the same level of accuracy in the SFR(t).
If we consider the AMR as unknown, the SFH-recovery algorithm is able to accurately recover the input AMR, at the price of increasing the random errors in the SFR(t) by a factor of about 2.5. Experiments of SFH recovery performed for varying distance modulus and reddening indicate that these parameters can be determined with (relative) accuracies of Δ(m-M)_0 ~ 0.02 mag and ΔE(B-V) ~ 0.01 mag, for each individual field over the LMC. The propagation of these errors into the SFR(t) implies systematic errors below 30%. This level of accuracy in the SFR(t) can reveal significant imprints of the dynamical evolution of this unique and nearby stellar system, as well as possible signatures of the past interaction between the MCs and the MW.
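The CMD-based SFH recovery via a chi-squared-like statistic can be illustrated with a toy sketch (the Hess diagrams and the simple Pearson form of the statistic are assumptions for the example, not the VMC pipeline):

```python
import numpy as np

def cmd_chi2(observed, model, eps=1e-9):
    """Chi-squared-like statistic between an observed and a model
    colour-magnitude Hess diagram (2-D histograms of star counts).
    A simple Pearson form; the actual analysis may use a
    Poisson-likelihood variant."""
    observed = np.asarray(observed, dtype=float)
    model = np.asarray(model, dtype=float)
    return np.sum((observed - model) ** 2 / (model + eps))

# Toy Hess diagrams: the model is a linear combination of two
# partial (single-age) models whose amplitudes play the role of SFR(t)
young = np.array([[4.0, 1.0], [0.5, 0.1]])
old = np.array([[0.5, 2.0], [3.0, 4.0]])
obs = 2.0 * young + 1.0 * old

# Brute-force grid search over the two SFR amplitudes
best = min(((a, b) for a in np.linspace(0, 4, 41)
            for b in np.linspace(0, 4, 41)),
           key=lambda ab: cmd_chi2(obs, ab[0] * young + ab[1] * old))
print(best)  # close to (2.0, 1.0), the input amplitudes
```

Real SFH-recovery codes minimise over many partial models (one per age-metallicity bin) with dedicated optimisers rather than a grid.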
Abstract:
We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS-DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: 14 <= r <= 21 (85.2%) and r >= 19 (82.1%). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT, and Ball et al. We find that our FT classifier is comparable to or better in completeness over the full magnitude range 15 <= r <= 21, with much lower contamination than all but the Ball et al. classifier. At the faintest magnitudes (r > 19), our classifier is the only one that maintains high completeness (> 80%) while simultaneously achieving low contamination (similar to 2.5%). We also examine the SDSS parametric classifier (psfMag - modelMag) to see if the dividing line between stars and galaxies can be adjusted to improve the classifier. We find that currently stars in close pairs are often misclassified as galaxies, and suggest a new cut to improve the classifier. Finally, we apply our FT classifier to separate stars from galaxies in the full set of 69,545,326 SDSS photometric objects in the magnitude range 14 <= r <= 21.
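The completeness and contamination figures used to rank the classifiers are simple ratios over the confusion counts; a sketch with hypothetical counts chosen to mimic the quoted faint-end numbers:

```python
def completeness(true_pos, false_neg):
    # Fraction of real galaxies recovered by the classifier
    return true_pos / (true_pos + false_neg)

def contamination(true_pos, false_pos):
    # Fraction of objects classified as galaxies that are actually stars
    return false_pos / (true_pos + false_pos)

# Hypothetical counts for a faint magnitude bin (r > 19)
tp, fn, fp = 820, 180, 21
print(round(completeness(tp, fn), 3))   # 0.82
print(round(contamination(tp, fp), 3))  # 0.025
```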
Abstract:
Recently, we have found an additional spin-orbit (SO) interaction in quantum wells with two subbands [Bernardes et al., Phys. Rev. Lett. 99, 076603 (2007)]. This new SO term is nonzero even in symmetric geometries, as it arises from the intersubband coupling between confined states of distinct parities, and its strength is comparable to that of the ordinary Rashba term. Starting from the 8x8 Kane model, here we present a detailed derivation of this new SO Hamiltonian and the corresponding SO coupling. In addition, within the self-consistent Hartree approximation, we calculate the strength of this new SO coupling for realistic symmetric modulation-doped wells with two subbands. We consider gated structures with either a constant areal electron density or a constant chemical potential. In the parameter range studied, both models give similar results. By considering the effects of an external applied bias, which breaks the structural inversion symmetry of the wells, we also calculate the strength of the resulting induced Rashba couplings within each subband. Interestingly, we find that for double wells the Rashba couplings for the first and second subbands interchange signs abruptly across the zero bias, while the intersubband SO coupling exhibits a resonant behavior near this symmetric configuration. For completeness we also determine the strength of the Dresselhaus couplings and find them essentially constant as a function of the applied bias.
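For reference, the ordinary single-subband Rashba and linear Dresselhaus couplings mentioned above have the standard textbook forms (this is background, not the new intersubband term derived in the paper):

```latex
H_R = \alpha \left( \sigma_x k_y - \sigma_y k_x \right), \qquad
H_D = \beta \left( \sigma_x k_x - \sigma_y k_y \right)
```

with α set by structural inversion asymmetry and β by bulk inversion asymmetry. The intersubband term of Bernardes et al. instead couples confined states of distinct parities across the two subbands, which is why it survives in symmetric wells where α vanishes.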
Abstract:
Corresponding to the updated flow pattern map presented in Part I of this study, an updated general flow-pattern-based flow boiling heat transfer model was developed for CO2, using the Cheng-Ribatski-Wojtan-Thome [L. Cheng, G. Ribatski, L. Wojtan, J.R. Thome, New flow boiling heat transfer model and flow pattern map for carbon dioxide evaporating inside horizontal tubes, Int. J. Heat Mass Transfer 49 (2006) 4082-4094; L. Cheng, G. Ribatski, L. Wojtan, J.R. Thome, Erratum to: 'New flow boiling heat transfer model and flow pattern map for carbon dioxide evaporating inside tubes' [Heat Mass Transfer 49 (21-22) (2006) 4082-4094], Int. J. Heat Mass Transfer 50 (2007) 391] flow boiling heat transfer model as the starting basis. The flow boiling heat transfer correlation in the dryout region was updated. In addition, a new mist flow heat transfer correlation for CO2 was developed based on the CO2 data, and a heat transfer method for bubbly flow was proposed for completeness' sake. The updated general flow boiling heat transfer model for CO2 covers all flow regimes and is applicable to a wider range of conditions for horizontal tubes: tube diameters from 0.6 to 10 mm, mass velocities from 50 to 1500 kg/m² s, heat fluxes from 1.8 to 46 kW/m² and saturation temperatures from -28 to 25 °C (reduced pressures from 0.21 to 0.87). The updated general flow boiling heat transfer model was compared to a new experimental database containing 1124 data points (790 more than in the previous model [Cheng et al., 2006, 2007]). Good agreement between the predicted and experimental data was found in general, with 71.4% of the entire database, and 83.2% of the database without the dryout and mist flow data, predicted within ±30%.
However, the predictions for the dryout and mist flow regions were less satisfactory, due to the limited number of data points, the higher inaccuracy of such data, scatter of up to 40% in some data sets, significant discrepancies from one experimental study to another, and the difficulties associated with predicting the inception and completion of dryout around the perimeter of horizontal tubes. (C) 2007 Elsevier Ltd. All rights reserved.
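The ±30% agreement figures quoted above are simple to compute; a minimal sketch (the coefficient values are invented for illustration, not taken from the paper's database):

```python
import numpy as np

def fraction_within(predicted, measured, tol=0.30):
    """Fraction of data points whose predicted heat transfer
    coefficient lies within +/- tol of the measured value."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    rel_err = np.abs(predicted - measured) / measured
    return np.mean(rel_err <= tol)

# Hypothetical heat transfer coefficients in kW/(m^2 K)
measured = np.array([10.0, 12.0, 8.0, 15.0, 9.0])
predicted = np.array([11.0, 9.0, 8.5, 21.0, 9.5])
print(fraction_within(predicted, measured))  # 0.8
```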
Abstract:
There are many techniques for electricity market price forecasting. However, most of them are designed for expected-price analysis rather than price spike forecasting. An effective method of predicting the occurrence of spikes has not yet appeared in the literature. In this paper, a data-mining-based approach is presented to give a reliable forecast of the occurrence of price spikes. Combined with the spike-value prediction techniques developed by the same authors, the proposed approach aims at providing a comprehensive tool for price spike forecasting. Feature selection techniques are first described to identify the attributes relevant to the occurrence of spikes. A brief introduction to the classification techniques is given for completeness. Two algorithms, the support vector machine and a probability classifier, are chosen as the spike occurrence predictors and are discussed in detail. Realistic market data are used to test the proposed model, with promising results.
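As a toy illustration of the probability-classifier idea, one can estimate the conditional probability of a spike from historical records and threshold it (the records and the single binary feature below are invented; the paper's classifiers use real market attributes and richer feature sets):

```python
# Toy probability classifier: predict a price spike when
# P(spike | high_demand) estimated from history exceeds 0.5.
history = [  # (high_demand, spike) -- hypothetical training records
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True), (False, False),
]

def spike_probability(high_demand):
    matching = [spike for demand, spike in history if demand == high_demand]
    return sum(matching) / len(matching)

print(spike_probability(True))   # 2/3 -> spike predicted
print(spike_probability(False))  # 1/4 -> no spike predicted
```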
Abstract:
With the advent of multi-fibre spectrographs such as the 'Two-Degree Field' (2dF) instrument at the Anglo-Australian Telescope, quasar surveys that are free of any preselection of candidates, and of any biases this implies, have become possible for the first time. The first of these is being undertaken as part of the Fornax Spectroscopic Survey, a survey of the area around the Fornax Cluster of galaxies, which aims to obtain the spectra of all objects in the magnitude range 16.5 < b_j < 19.7. To date, 3679 objects in the central π deg² area have been successfully identified from their spectral characteristics. Of these, 71 are found to be quasars, 61 with redshifts 0.3 < z < 2.2 and 10 with redshifts z > 2.2. Using this complete quasar sample, a new determination of quasar number counts is made, enabling an independent check of existing quasar surveys. Cumulative counts per square degree at a magnitude limit of b_j < 19.5 are found to be 11.5 +/- 2.2 for 0.3 < z < 2.2, 2.22 +/- 0.93 for z > 2.2 and 13.7 +/- 3.1 for z > 0.3. Given the likely detection of extra quasars in the Fornax survey, we make a more detailed examination of existing quasar selection techniques. First, looking at the use of a stellar criterion, four of the 71 quasars are 'non-stellar' on the basis of the automated plate measuring facility (APM) b_j classification; however, inspection shows all are consistent with stellar images but misclassified owing to image confusion. Examining the ultraviolet-excess and multicolour selection techniques, for the selection criteria investigated, ultraviolet excess would find 69 +/- 6 per cent of our 0.3 < z < 2.2 quasars and only 50(+14/-18) per cent of our z > 2.2 quasars, while the completeness level for multicolour selection is found to be 90(+3/-4) per cent for 0.3 < z < 2.2 quasars and 80(+14/-12) per cent for z > 2.2 quasars.
The extra quasars detected by our all-object survey thus have unusually red, star-like colours, and this appears to be a result of the continuum shape rather than of any emission features. An intrinsic dust extinction model may, at least partly, account for the red colours.
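The cumulative counts per square degree and their errors follow from simple Poisson statistics; a sketch with hypothetical numbers (the 36-quasar count is invented for illustration; only the ~π deg² field area echoes the survey):

```python
import math

def surface_density(n_objects, area_deg2):
    """Cumulative counts per square degree with a Poisson error."""
    density = n_objects / area_deg2
    error = math.sqrt(n_objects) / area_deg2
    return density, error

# Hypothetical: 36 quasars over a ~pi deg^2 survey area
d, e = surface_density(36, math.pi)
print(round(d, 1), round(e, 1))  # 11.5 1.9
```

For small counts, asymmetric confidence intervals (hence the +/- pairs quoted in the abstract) replace the symmetric sqrt(N) error.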
The Las Campanas/AAT rich cluster survey - I. Precision and reliability of the photometric catalogue
Abstract:
The Las Campanas Observatory and Anglo-Australian Telescope Rich Cluster Survey (LARCS) is a panoramic imaging and spectroscopic survey of an X-ray luminosity-selected sample of 21 clusters of galaxies at 0.07 < z < 0.16. Charge-coupled device (CCD) imaging was obtained in B and R of typically 2-degree-wide regions centred on the 21 clusters, and the galaxy sample selected from the imaging is being used for an on-going spectroscopic survey of the clusters with the 2dF spectrograph on the Anglo-Australian Telescope. This paper presents the reduction of the imaging data and the photometric analysis used in the survey. Based on an overlapping area of 12.3 deg², we compare the CCD-based LARCS catalogue with the photographic galaxy catalogue from the APM used as the input to the 2dF Galaxy Redshift Survey (2dFGRS), down to the completeness limit of the GRS/APM catalogue, b_J = 19.45. This comparison confirms the reliability of the photometry across our mosaics and between the clusters in our survey, and also provides useful information concerning the properties of the GRS/APM. The stellar contamination in the GRS/APM galaxy catalogue is confirmed to be around 5-10 per cent, as originally estimated. However, using the superior sensitivity and spatial resolution of the LARCS survey, evidence is found for four distinct populations of galaxies that are systematically omitted from the GRS/APM catalogue. The characteristics of the 'missing' galaxy populations are described, reasons for their absence are examined, and the impact they will have on the conclusions drawn from the 2dF Galaxy Redshift Survey is discussed.
Abstract:
Qu-Prolog is an extension of Prolog which performs meta-level computations over object languages, such as predicate calculi and lambda calculi, which have object-level variables, and quantifier or binding symbols creating local scopes for those variables. As in Prolog, the instantiable (meta-level) variables of Qu-Prolog range over object-level terms, and in addition other Qu-Prolog syntax denotes the various components of the object-level syntax, including object-level variables. Further, the meta-level operation of substitution into object-level terms is directly represented by appropriate Qu-Prolog syntax. Again as in Prolog, the driving mechanism in Qu-Prolog computation is a form of unification, but this is substantially more complex than for Prolog because of Qu-Prolog's greater generality, and especially because substitution operations are evaluated during unification. In this paper, the Qu-Prolog unification algorithm is specified, formalised and proved correct. Further, the analysis of the algorithm is carried out in a framework which straightforwardly allows the 'completeness' of the algorithm to be proved: though fully explicit answers to unification problems are not always provided, no information is lost in the unification process.
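For readers unfamiliar with unification, a minimal first-order sketch in Python (Qu-Prolog's actual algorithm additionally handles object-level variables, quantifiers, substitution evaluation and an occurs check, all omitted in this toy version):

```python
def is_var(t):
    # Convention: capitalised strings are meta-level variables
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings to their current value
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    """Return a substitution unifying t1 and t2, or None on clash.
    Terms: str = variable (capitalised) or constant,
    tuple = compound term (functor, arg1, ...)."""
    if subst is None:
        subst = {}
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        return {**subst, t1: t2}  # no occurs check in this sketch
    if is_var(t2):
        return {**subst, t2: t1}
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and len(t1) == len(t2) and t1[0] == t2[0]):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None  # functor or constant clash

# f(X, g(Y)) unified with f(a, g(b))
print(unify(("f", "X", ("g", "Y")), ("f", "a", ("g", "b"))))
```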
Abstract:
The Edinburgh-Cape Blue Object Survey is a major survey to discover blue stellar objects brighter than B ~ 18 in the southern sky. It is planned to cover an area of sky of 10,000 deg² with |b| > 30 degrees and delta < 0 degrees. The blue stellar objects are selected by automatic techniques from U and B pairs of UK Schmidt Telescope plates scanned with the COSMOS measuring machine. Follow-up photometry and spectroscopy are being obtained with the SAAO telescopes to classify objects brighter than B = 16.5. This paper describes the survey, the techniques used to extract the blue stellar objects, the photometric methods and accuracy, the spectroscopic classification, and the limits and completeness of the survey.
Abstract:
Diagnosis involves a complex and overlapping series of steps, each of which may be a source of error and of variability between clinicians. This variation may involve the ability to elicit relevant information from the client or animal, the accuracy, objectivity and completeness of relevant memory stores, and psychological attributes including tolerance for uncertainty and willingness to engage in constructive self-criticism. The diagnostic acumen of an individual clinician may not be constant, varying with external and personal factors, with different clients and cases, and with the use made of tests. In relation to clients, variations may occur in the ability to gain their confidence, to ask appropriate questions and to evaluate accurately both verbal and nonverbal responses. Tests may introduce problems of accuracy, validity, sensitivity, specificity, interpretation and general appropriateness for the case. Continuing effectiveness as a diagnostician therefore requires constant attention to the maintenance of adequate and up-to-date skills and knowledge relating to the animals and their diseases and to tests, and of sensitive interpersonal skills.
Abstract:
Loblolly pine (Pinus taeda L.) seeds from sources with a mild climate under maritime influence (North Carolina) required shorter moist chilling to achieve maximum germination vigor than seeds from sources with a harsher continental climate (Oklahoma). Solid matrix priming (SMP) for 6 d was as effective as up to 60 d of moist chilling in improving the rapidity, synchrony and completeness of germination for three of the four families studied. SMP after moist chilling further increased the rapidity, synchrony and completeness of germination. The benefit of SMP was greatest for non-stratified seeds, and the benefit decreased with length of moist chilling. In general, delaying planting for one week after SMP had minor effects on germination when seeds were kept in the SMP matrix at 4 °C. Delayed planting after SMP can increase the germination rapidity and synchrony of seeds that have received long moist chilling, and can reduce the benefit of SMP in non-moist-chilled seeds.
Abstract:
Breast cancer is the leading malignant neoplasm affecting women in Brazil. It is today a disease of major importance for national public health, motivating broad discussion of measures to promote early diagnosis and to reduce its morbidity and mortality. This research has three objectives, whose results are organised as articles. The first objective was to analyse the completeness of Mortality Information System data on deaths from breast cancer in women in Espírito Santo, the Southeast region and Brazil (1998 to 2007). A descriptive analytical study based on secondary data was carried out, analysing the absolute number and percentage of unfilled variables on the death certificates. A score was adopted to grade the degrees of incompleteness. The results for the variables sex and age were excellent for Espírito Santo, the Southeast and Brazil alike. The completion of the variables race/colour, educational level and marital status shows problems in Espírito Santo. While in the Southeast and Brazil the variables race/colour and education show a decreasing trend in incompleteness, in Espírito Santo the trend remains stable. For the marital status variable, incompleteness shows an increasing trend in the state of Espírito Santo. The second objective was to analyse the evolution of breast cancer mortality rates in women in Espírito Santo from 1980 to 2007. This is a time-series study whose death data were obtained from the Mortality Information System and whose population estimates by age and calendar year came from the Brazilian Institute of Geography and Statistics (IBGE). Age-specific mortality coefficients were calculated annually. The trend analysis was performed by standardising the mortality rates by the direct method, taking the IBGE 2000 census population as the standard.
During the study period there were 2,736 deaths from breast cancer. The mortality coefficient over this period ranged from 3.41 to 10.99 per 100,000 women. The results indicate an upward trend in breast cancer mortality over the series (p = 0.001, with growth of 75.42%). All age groups from 30 years onwards showed a statistically significant upward mortality trend (p = 0.001). The growth percentages increased with age, from 48.4% in the 40-49 age group to 92.3% in the group aged 80 and over. The third objective was a spatial analysis of female breast cancer deaths in the state of Espírito Santo from 2003 to 2007, examining the spatial correlations of this mortality and municipal components. The setting was the state of Espírito Santo, composed of 78 municipalities. For the data analysis, a Bayesian approach (global and local EBest methods) was used to correct the epidemiological rates. Moran's I index was calculated for spatial dependence at the global level, together with the local Moran statistic. The highest rates are concentrated in 19 municipalities belonging to the following microregions: Metropolitana (Fundão, Vitória, Vila Velha, Viana, Cariacica and Guarapari), Metrópole Expandida Sul (Anchieta, Alfredo Chaves), Pólo Cachoeiro (Vargem Alta, Rio Novo do Sul, Mimoso do Sul, Cachoeiro de Itapemirim, Castelo, Jerônimo Monteiro, Bom Jesus do Norte, Apiacá and Muqui) and Caparaó (Alegre and São José do Calçado). The Bayesian estimation results (Moran's I) for female breast cancer deaths in the state of Espírito Santo, based on the raw and adjusted data, indicate significant spatial correlation for the local map (I = 0.573; p = 0.001) and the global map (I = 0.118; p = 0.039). The raw data show no spatial correlation (I = 0.075; p = 0.142).
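Global Moran's I, the spatial-correlation statistic used in the third objective, can be sketched as follows (the rates and the 4-node neighbour matrix are invented for illustration; real analyses use the 78-municipality contiguity matrix and Bayesian-smoothed rates):

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for rates x and a spatial weights matrix w:
    I = (n / sum(w)) * sum_ij w_ij z_i z_j / sum_i z_i^2,
    where z are deviations from the mean rate."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    n = len(x)
    z = x - x.mean()
    num = n * np.sum(w * np.outer(z, z))
    den = w.sum() * np.sum(z ** 2)
    return num / den

# Hypothetical rates on a 4-node chain (neighbours share an edge);
# the two high-rate nodes are adjacent, so I should come out positive
rates = [2.0, 2.5, 9.0, 10.0]
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(morans_i(rates, w))  # > 0: neighbours have similar rates
```

Significance (the quoted p-values) is usually assessed by random permutation of the rates over the map.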
Abstract:
This work aims to discuss the emergence of a research programme in economics concerned with the analysis of information asymmetries, the epistemological differences involved, and the implications in terms of Pareto-optimal equilibrium, in contrast to the standard neoclassical approach. To this end, it was necessary to highlight the method of both paradigms; however, it was equally necessary to discuss the philosophy/epistemology of science involved, which serves as the basis for an approach to paradigm shifts in science. In chapter 1, we discuss the epistemology of science through three authors: Popper, Kuhn and Lakatos. We define the set of hypotheses that can be associated with the method employed by the Neoclassical School, drawing on the philosophy of science proposed by Lakatos. Next, in chapter 2, we give an extended exposition of the neoclassical method, defining the axioms inherent in well-behaved preferences, presenting Walrasian general equilibrium algebraically, illustrating the relaxation of auxiliary hypotheses of the neoclassical model following Friedman and, finally, applying the neoclassical toolkit to the relaxation of the auxiliary hypothesis of perfect information, based on the model developed by Grossman & Stiglitz (1976) as well as on the mathematical extension developed in the present work. Finally, we close this dissertation with chapter 3, in which we essentially present the main contributions of authors such as Stiglitz, Akerlof and Arrow concerning markets permeated by asymmetric information and opportunistic behaviour. We seek to show the consequences for the market itself, reaching results in which the market ceases to exist.
We present the second part of the Grossman & Stiglitz model, emphasising the imperfect nature of the price system and its inability to convey all the information about goods to the set of agents, and, finally, we discuss various topics related to the economics of information.
Abstract:
Introduction: Prostate cancer is the second most frequent cancer among men in every region of Brazil. Approximately 62% of the cases diagnosed worldwide occur in men aged 65 or over, age being the only established risk factor. Objectives: To study the trend in the completeness of the Mortality Information System (SIM) for the variables age, race/colour, educational level and marital status from 2000 to 2010 in Espírito Santo, the Southeast region and Brazil; and to analyse the trend in prostate cancer mortality in the historical series for the state of Espírito Santo (ES) from 1980 to 2010. Methods: A descriptive study was carried out based on secondary data on all prostate cancer deaths obtained from the SIM, together with data from the Brazilian Institute of Geography and Statistics (IBGE) available in DATASUS, the informatics department of the SUS (the Unified Health System), for ES, the Southeast region and Brazil from 1980 to 2010. The variables considered were age, race/colour, educational level and marital status. The absolute number and the percentage of unfilled fields on the death certificates (DOs), which are the SIM's source of information, were calculated for the selected areas (ES, Southeast region and Brazil). The analysis was performed with the Statistical Package for the Social Sciences (SPSS), version 18.0. An inferential analysis with curve fitting was carried out on the percentages of missing data for the demographic variables available in DATASUS (marital status, education, race/colour). For the trend analysis, the mortality coefficient was calculated. The equations of the best model and the goodness-of-fit statistics (R² and the p-value of the F test of model adequacy) were obtained from SPSS, version 18.0. Results: From 2000 to 2010, incompleteness of the race/colour and educational-level variables showed a decreasing trend for Brazil.
The marital status variable stood out for showing an increasing trend in incompleteness in ES, the Southeast region and Brazil. From 1980 to 2010, 3,561 deaths were recorded in ES. The historical series shows an increasing trend in prostate cancer mortality. Conclusion: This work is of great importance for the study of prostate cancer in Brazil. Growing incompleteness was identified in the marital status field, while incompleteness of the race/colour variable was decreasing, though the data quality remained poor. Actions are needed to improve the data collection process through the training of registrars. The results showed an upward mortality trend, calling for government actions, strategies and policies aimed at comprehensive men's health care.
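The trend analysis described in the Methods (fitting a curve to the mortality series and reporting R²) can be sketched for the linear case; all numbers below are hypothetical:

```python
import numpy as np

def linear_trend(years, rates):
    """Least-squares linear trend of a mortality series, with R^2."""
    years = np.asarray(years, dtype=float)
    rates = np.asarray(rates, dtype=float)
    slope, intercept = np.polyfit(years, rates, 1)
    fitted = slope * years + intercept
    ss_res = np.sum((rates - fitted) ** 2)
    ss_tot = np.sum((rates - rates.mean()) ** 2)
    return slope, 1.0 - ss_res / ss_tot

# Hypothetical coefficients per 100,000 men at sample years
years = [1980, 1990, 2000, 2010]
rates = [4.0, 7.0, 7.5, 10.0]
slope, r2 = linear_trend(years, rates)
print(slope * 10, r2)  # growth per decade, and R^2 of the fit
```

SPSS additionally reports the p-value of an F test on the fitted model; the same can be obtained in Python with scipy.stats.linregress.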
Abstract:
In this paper we present a Constraint Logic Programming (CLP) based model and a hybrid solving method for the scheduling of maintenance activities in the power transmission network. The model is distinguished from others not only by its completeness but also by the way it models and solves the electrical constraints. Specifically, we present an efficient filtering algorithm for the electrical constraints. Furthermore, the solving method improves on the efficiency of pure CLP methods by integrating a type of local search technique with CLP. To test the approach, we compare the method's results with those of another method on a 24-bus network, which considers 42 tasks and 24 maintenance periods.