922 results for High-frequency data
Abstract:
Most multidimensional projection techniques rely on distance (dissimilarity) information between data instances to embed high-dimensional data into a visual space. When data are endowed with Cartesian coordinates, an extra computational effort is necessary to compute the needed distances, making multidimensional projection prohibitive in applications dealing with interactivity and massive data. The novel multidimensional projection technique proposed in this work, called Part-Linear Multidimensional Projection (PLMP), has been tailored to handle multivariate data represented in Cartesian high-dimensional spaces, requiring only distance information between pairs of representative samples. This characteristic renders PLMP faster than previous methods when processing large data sets while still being competitive in terms of precision. Moreover, knowing the range of variation for data instances in the high-dimensional space, we can make PLMP a truly streaming data projection technique, a trait absent in previous methods.
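The part-linear idea described above can be sketched in a few lines: project a small set of representative samples with a distance-based method, then fit a linear map that carries every other point to the visual space. This is an illustrative reconstruction, not the authors' implementation; it assumes classical MDS for the sample layout, and all names are hypothetical.

```python
import numpy as np

def plmp_sketch(X, n_samples=50, seed=0):
    """Part-linear projection sketch: distances are needed only for
    the representative samples; everything else is a linear map."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n_samples, replace=False)
    S = X[idx]                            # representative samples

    # Classical MDS on the samples (the only distance computation).
    D2 = ((S[:, None, :] - S[None, :, :]) ** 2).sum(-1)
    J = np.eye(n_samples) - 1.0 / n_samples
    B = -0.5 * J @ D2 @ J                 # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    Y_s = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))  # 2-D sample layout

    # Least-squares linear map Phi with S @ Phi ~ Y_s, applied to all data.
    Phi, *_ = np.linalg.lstsq(S, Y_s, rcond=None)
    return X @ Phi

X = np.random.default_rng(1).normal(size=(1000, 10))
Y = plmp_sketch(X)
print(Y.shape)  # (1000, 2)
```

Because the expensive distance step touches only the samples, the per-point cost of projecting the full data set is a single matrix product, which is what makes a streaming variant plausible.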
Abstract:
We present the first results of a study investigating the processes that control the concentrations and sources of Pb and particulate matter in the atmosphere of São Paulo City, Brazil. Aerosols were collected with high temporal resolution (3 hours) during a four-day period in July 2005. The highest Pb concentrations measured coincided with large fireworks during celebration events and with periods of heavy traffic. Our high-resolution data highlight the impact that a singular transient event can have on air quality, even in a megacity. Under meteorological conditions non-conducive to pollutant dispersion, Pb and particulate matter concentrations accumulated during the night, leading to the highest concentrations in aerosols collected early in the morning of the following day. The stable isotopes of Pb suggest that, despite low Pb concentrations in fuels, emissions from traffic remain an important source of Pb in São Paulo City due to the large vehicle fleet.
Abstract:
Several studies indicate that molecular variants of HPV-16 differ in geographic distribution and in the risk associated with persistent infection and the development of high-grade cervical lesions. In the present study, the frequency of HPV-16 variants was determined in 81 biopsies from women with cervical intraepithelial neoplasia grade III or invasive cervical cancer from the city of Belém, Northern Brazil. Host DNAs were also genotyped in order to analyze the ethnicity-related distribution of these variants. Nine different HPV-16 LCR variants belonging to four phylogenetic branches were identified. Among these, two new isolates were characterized. The most prevalent HPV-16 variant detected was the Asian-American B-2, followed by the European B-12 and the European prototype. Infections by multiple variants were observed in both invasive cervical cancer and cervical intraepithelial neoplasia grade III cases. The analysis of a specific polymorphism within the E6 viral gene was performed in a subset of 76 isolates. The E6-350G polymorphism was significantly more frequent in Asian-American variants. The HPV-16 variability detected followed the same pattern as the genetic ancestry observed in Northern Brazil, with European, Amerindian, and African roots. Although African ancestry was higher among women infected by the prototype, no correlation between ethnic origin and HPV-16 variants was found. These results corroborate previous data showing a high frequency of Asian-American variants in cervical neoplasia among women of multiethnic origin.
Abstract:
Good data quality with high complexity is often seen as important. Intuition says that the higher the accuracy and complexity of the data, the better the analytic solutions become, provided the increasing computing time can be handled. However, for most practical computational problems, high-complexity data mean that computation times become too long or that the heuristics used to solve the problem have difficulty reaching good solutions. This is stressed even further as the size of the combinatorial problem increases. Consequently, we often need simplified data to deal with complex combinatorial problems. In this study we address the question of how the complexity and accuracy of a network affect the quality of heuristic solutions for different sizes of the combinatorial problem. We evaluate this question by applying the commonly used p-median model, which finds the optimal locations in a network of p supply points that serve n demand points. To this end, we vary both the accuracy (the number of nodes) of the network and the size of the combinatorial problem (p). The investigation is conducted by means of a case study in Dalecarlia, a region in Sweden with an asymmetrically distributed population (15,000 weighted demand points). To locate 5 to 50 supply points we use the national transport administration's official road network (NVDB), which consists of 1.5 million nodes. To find the optimal location we start with 500 candidate nodes in the network and increase the number of candidate nodes in steps up to 67,000 (aggregated from the 1.5 million nodes). To find the optimal solution we use a simulated annealing algorithm with adaptive tuning of the temperature. The results show only a limited improvement in the optimal solutions when the accuracy of the road network increases and the combinatorial problem is simple (low p). When the combinatorial problem is complex (large p), the improvements from increasing the accuracy of the road network are much larger. The results also show that the choice of the best accuracy of the network depends on the complexity of the combinatorial problem (varying p).
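The p-median heuristic used above can be sketched as follows. This is a minimal illustration, not the study's code: it uses a plain geometric cooling schedule rather than the paper's adaptive temperature tuning, and the data and names are synthetic.

```python
import numpy as np

def p_median_sa(dist, weights, p, n_iter=2000, t0=1.0, cooling=0.995, seed=0):
    """Simulated-annealing heuristic for the p-median problem.
    dist[i, j]: distance from demand point i to candidate site j."""
    rng = np.random.default_rng(seed)
    n_cand = dist.shape[1]
    current = rng.choice(n_cand, size=p, replace=False)

    def cost(sel):
        # Weighted total distance to each demand point's nearest open site.
        return float((weights * dist[:, sel].min(axis=1)).sum())

    cur_cost = cost(current)
    best, best_cost, t = current.copy(), cur_cost, t0
    for _ in range(n_iter):
        # Swap move: replace one open site with a closed candidate.
        cand = current.copy()
        closed = np.setdiff1d(np.arange(n_cand), cand)
        cand[rng.integers(p)] = rng.choice(closed)
        c = cost(cand)
        # Accept improvements always, worse moves with Boltzmann probability.
        if c < cur_cost or rng.random() < np.exp((cur_cost - c) / t):
            current, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand.copy(), c
        t *= cooling  # geometric cooling (the paper tunes t adaptively)
    return best, best_cost

rng = np.random.default_rng(1)
pts, sites = rng.random((200, 2)), rng.random((30, 2))
d = np.linalg.norm(pts[:, None] - sites[None, :], axis=2)
sel, c = p_median_sa(d, np.ones(200), p=5)
print(sorted(sel), round(c, 3))
```

The accuracy/complexity trade-off in the study corresponds to growing `n_cand` (more candidate nodes) while varying `p`.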
Abstract:
This article analyses the processes of reducing language in textchats produced by non-native speakers of English. We propose that forms are reduced because of their high frequency and because of the discourse context. A wide variety of processes are attested in the literature, and we find different forms of clippings in our data, including mixtures of different clippings, homophone respellings, phonetic respellings including informal oral forms, initialisms (but no acronyms), and mixtures of clipping together with homophone and phonetic respellings. Clippings were the most frequent process (especially back-clippings and initialisms), followed by homophone respellings. There were different ways of metalinguistically marking reduction, but capitalisation was by far the most frequent. There is much individual variation in the frequencies of the different processes, although most fell within a normal distribution. The fact that non-native speakers seem to generally follow the reduction patterns of native speakers suggests that reduction is a universal process.
Abstract:
This work analyses the turbidite reservoirs of the Namorado Field, Campos Basin (RJ), presenting a new evolutionary model for the interval between the upper Albian and the Cenomanian in the field area. The tools used in this study were seismic interpretation in a three-dimensional environment with the VoxelGeo® software and facies analysis combined with well-log data from the field. The analysis allowed the individualization and subsequent three-dimensional visualization of a meandering paleochannel at the base of the studied interval, a feature not reported in previous interpretations of this reservoir. From the seismic and facies analyses it was possible to build a depositional model in which four distinct turbidite systems were defined, contained in two 3rd-order sequences. These turbidite systems would therefore be associated with 4th-order sequences, interpreted as parasequences inserted in the two 3rd-order cycles. The 3rd-order sequences that encompass the Namorado Field reservoirs would represent high-frequency intervals in the stratigraphic record, within the drowning context (2nd order) of the Campos Basin. From the characteristics of the depositional trough observed for the Namorado Field, it can be concluded that the system as a whole was deposited in a channel complex associated with delta-front systems. These channels were probably carved by hyperpycnal flows generated by catastrophic floods. The information from this study provides a better understanding of the genesis of the hydrocarbon-bearing turbidite deposits in the studied interval, whose occurrence is related to stages of relative sea-level fall.
Abstract:
This article lays the groundwork for research on the relationship between poverty, the distribution of resources, and the operation of capital markets in Brazil. The main objective is to support the implementation of policies that strengthen the capital of the poor. The availability of new data sources has created unprecedented conditions for analyzing asset ownership and poverty in Brazilian metropolitan areas. The assessment of resource distribution was structured around three items: physical capital, human capital, and social capital. The empirical strategy is to analyze three different types of impact that increasing the assets of the poor can have on social welfare. The first part of the article assesses the ownership of different types of capital across the income distribution. This exercise can be seen as an extension of income-based poverty measures through the incorporation of the direct effects of asset ownership on social welfare. The second part describes the income-generating impact that asset ownership can have on the poor. We study how the accumulation of different types of capital affects income-based poverty indices using logistic regressions. The third part studies the effect that increasing the assets of the poor has on improving poor individuals' ability to cope with adverse income shocks. We study the interaction between income dynamics, capital market imperfections, and financial behavior over different time horizons. The long-run questions concern low-frequency income fluctuations and the life cycle of asset ownership, studied with cohort analysis. The short-run questions concern the behavior of the poor and the welfare losses incurred in coping with high-frequency gaps between income and desired consumption.
The analysis of income dynamics and poverty combines income panel data with qualitative data on households' short-run financial behavior.
Abstract:
Using intraday data for the most actively traded stocks on the São Paulo Stock Market (BOVESPA) index, this study considers two recently developed models from the literature on the estimation and prediction of realized volatility: the Heterogeneous Autoregressive Model of Realized Volatility (HAR-RV), developed by Corsi (2009), and the Mixed Data Sampling model (MIDAS-RV), developed by Ghysels et al. (2004). Using measures to compare in-sample and out-of-sample forecasts, better results were obtained with the MIDAS-RV model for in-sample forecasts. For out-of-sample forecasts, however, there was no statistically significant difference between the models. We also found evidence that the use of realized volatility induces distributions of standardized returns that are closer to normal.
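The HAR-RV specification mentioned above can be illustrated with a simple OLS fit: tomorrow's realized volatility is regressed on daily, weekly, and monthly averages of past realized volatility. This is a sketch on synthetic data, assuming the standard aggregation horizons of Corsi (2009); all names are illustrative.

```python
import numpy as np

def har_rv_fit(rv):
    """Fit the HAR-RV model by OLS:
    RV_{t+1} = b0 + bd*RV_t + bw*mean(RV_{t-4..t}) + bm*mean(RV_{t-21..t})."""
    rv = np.asarray(rv, dtype=float)
    ts = range(21, len(rv) - 1)                       # usable time indices
    daily = rv[21:-1]
    weekly = np.array([rv[t - 4:t + 1].mean() for t in ts])
    monthly = np.array([rv[t - 21:t + 1].mean() for t in ts])
    X = np.column_stack([np.ones_like(daily), daily, weekly, monthly])
    y = rv[22:]                                       # one-step-ahead target
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [b0, bd, bw, bm]

rng = np.random.default_rng(0)
rv = np.abs(rng.normal(0.01, 0.002, size=500))        # synthetic realized vol
b0, bd, bw, bm = har_rv_fit(rv)
print([round(v, 4) for v in (b0, bd, bw, bm)])
```

The appeal of the model is that this cascade of three horizons captures long-memory-like persistence while remaining an ordinary linear regression.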
Abstract:
The real exchange rate is an important macroeconomic price that affects economic activity, interest rates, domestic prices, and trade and investment flows, among other variables. Methodologies have been developed in the empirical exchange rate misalignment literature to evaluate whether a real effective exchange rate is overvalued or undervalued. There is a vast body of literature on the determinants of long-term real exchange rates and on empirical strategies to implement the equilibrium norms obtained from theoretical models. This study seeks to contribute to this literature by showing that it is possible to calculate the misalignment from a mixed-frequency cointegrated vector error correction framework. An empirical exercise using United States real exchange rate data is performed. The results suggest that the model with mixed-frequency data is preferred to the models with same-frequency variables.
Abstract:
To assess the value of high-resolution electrocardiography in the diagnosis of arrhythmogenic right ventricular cardiomyopathy of the Boxer, 20 dogs with no evidence of structural heart disease on Doppler echocardiographic evaluation were grouped according to the frequency of ventricular arrhythmias, assessed by 24-hour ambulatory electrocardiography, and underwent high-resolution electrocardiographic examination. The variables evaluated were the duration of the filtered QRS complex, the duration of the low-amplitude signals (below 40 µV) in the last 40 milliseconds of the QRS complex, and the root mean square voltage of the last 40 milliseconds of the QRS complex (RMS40). No significant differences were observed between the groups for the variables studied. The results of this study therefore suggest that high-resolution electrocardiography is not a useful diagnostic aid for arrhythmogenic right ventricular cardiomyopathy in Boxer dogs without evident myocardial changes or systolic dysfunction.
Abstract:
There have recently been major advances in the aerospace area with regard to rocket launches for research, experiments, telemetry systems, remote sensing, radar systems (tracking and monitoring), satellite communication systems, and the insertion of satellites into orbit. This work concerns the application of two microstrip antennas, a circular cylindrical ring-type antenna and a cylindrical rectangular antenna, to the structure of a rocket or missile to obtain telemetry data, operating in the 2 to 4 GHz range (S-band). The theoretical analysis was developed with the Transverse Transmission Line method, a rigorous spectral-domain analysis method, for use on rockets and missiles. The analysis considers propagation in the "ρ" direction, transverse to the dielectric interfaces "z" and "φ" in cylindrical coordinates, thus obtaining the general equations of the electromagnetic fields [1]. To obtain results, simulations, and analyses of the structure under study, the HFSS (High Frequency Structure Simulator) program, which uses the finite element method, was employed. With the theory developed, computational resources were used for the numerical calculations, using Fortran PowerStation, Scilab, and Wolfram Mathematica®. The prototype was built using ULTRALAM® 3850 from Rogers Corporation as the substrate, with an aluminum plate as the supporting cylindrical structure. The agreement between the measured and simulated results validates the established procedures. Conclusions and suggestions for continuing this work are presented.
Abstract:
In the absence of selective availability, which was turned off on May 1, 2000, the ionosphere can be the largest source of error in GPS positioning and navigation. Its effect on the GPS observables is a code delay and a phase advance. The magnitude of this error is affected by the local time of day, season, solar cycle, geographic location of the receiver, and the Earth's magnetic field. As is well known, the ionosphere is the main drawback for high-accuracy positioning with single-frequency receivers, whether for point positioning or for relative positioning over medium and long baselines. The ionospheric effects were investigated in the determination of point positioning and relative positioning using single-frequency data. A model represented by a Fourier series was implemented, and its parameters were estimated from data collected at the active stations of the RBMC (Brazilian Network for Continuous Monitoring of GPS satellites). The input data were the pseudorange observables filtered by the carrier phase. Quality control was implemented in order to analyse the adjustment and to validate the significance of the estimated parameters. Experiments were carried out in the equatorial region using data collected from dual-frequency receivers. In order to validate the model, the estimated values were compared with ground truth. For point positioning and for relative positioning over baselines of approximately 100 km, the discrepancies indicated error reductions of more than 80% and 50%, respectively, compared with processing without the ionospheric model. These results indicate that more research is needed in order to support L1 GPS users in the equatorial region.
Abstract:
When GNSS receivers capable of collecting dual-frequency data are available, it is possible to eliminate the first-order ionospheric effect in the data processing through the ionosphere-free linear combination. However, the second- and third-order ionospheric effects still remain. The first-, second- and third-order ionospheric effects are directly proportional to the total electron content (TEC), although the second- and third-order effects are influenced, respectively, by the geomagnetic field and the maximum electron density. In recent years, the international scientific community has given more attention to these effects, and several works have shown that they must be taken into account for high-precision GNSS positioning. We present a software tool called RINEX_HO that was developed to correct GPS observables for second- and third-order ionospheric effects. RINEX_HO requires a RINEX observation file as input, computes the second- and third-order ionospheric effects, and applies the corrections to the original GPS observables, creating a corrected RINEX file. The mathematical models implemented to compute these effects are presented, as well as the transformations involving the Earth's magnetic field. The use of TEC from global ionospheric maps and of TEC calculated from raw pseudorange measurements or from pseudoranges smoothed by phase is also investigated.
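The ionosphere-free combination referred to above removes only the first-order term, which scales as 1/f². A minimal sketch for pseudoranges follows; the constants are the nominal GPS L1/L2 frequencies, and the example observations are synthetic.

```python
# Ionosphere-free linear combination for dual-frequency GPS pseudoranges.
# It cancels the first-order ionospheric delay (proportional to TEC/f^2);
# the second- and third-order terms that RINEX_HO targets remain.
F1 = 1575.42e6  # GPS L1 frequency, Hz
F2 = 1227.60e6  # GPS L2 frequency, Hz

def iono_free(p1, p2):
    """P_IF = (f1^2 * P1 - f2^2 * P2) / (f1^2 - f2^2)."""
    return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

# Example: the same TEC produces a larger first-order delay on L2,
# scaled by (f1/f2)^2 relative to L1.
rho = 22_000_000.0              # geometric range plus non-dispersive terms, m
i1 = 5.0                        # first-order ionospheric delay on L1, m
p1 = rho + i1
p2 = rho + i1 * (F1 / F2) ** 2
print(iono_free(p1, p2))        # close to rho: the 5 m delay cancels
```

The price of the combination is amplified measurement noise, which is one reason higher-order corrections are applied to the original observables rather than folded into this step.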
Abstract:
INTRODUCTION: The biomechanical demands arising from the high degree of difficulty of its movements make artistic gymnastics (AG) a sport with a high risk of injury, so the factors related to injuries must be controlled. OBJECTIVE: To analyze the occurrence of injuries in artistic gymnastics, associating them with risk factors specific to the sport and to the athlete, based on a referred-morbidity survey. METHODS: Fifty-four gymnasts, recruited at random, were interviewed and classified by competitive level into two categories: regional and national. A referred-morbidity survey (RMS) was used to gather data on the nature of the injury, the body region, and the gymnastic apparatus. The data were organized and presented as frequency distributions, and the variables were analyzed for association using Goodman's test for contrasts between multinomial populations, with P < 0.05 considered significant. RESULTS: Injury during the season was reported by 39 (71.70%) athletes, 22 (56.41%) women and 17 (43.59%) men. In the male and female regional categories and the female national category, most injuries were articular, corresponding to 55.56%, 50%, and 45.45% of the total, respectively. In the female national category, the lower limbs were the most frequently reported site (68.18%), and in both female categories the injuries occurred on vaulting apparatus (79.41%), whereas in the male national category most injuries occurred on support and suspension apparatus (72%). CONCLUSIONS: There is a high frequency of injuries, mainly affecting joints and lower limbs, with vaulting apparatus the most frequently reported; it was also observed that the greater the technical performance demands, the higher the injury frequency.
Abstract:
The aim of this study was to evaluate the effect of the high-frequency oral oscillation technique (with the Shaker device), applied at different expiratory pressures (EP), on autonomic function and cardiorespiratory parameters. Data were collected from 20 healthy young volunteers (21.6±1.3 years), who rested for an initial 10 minutes and then performed three sets of ten expirations through the device (with a 2-minute rest interval between sets) at three different EPs, free pressure (FP), 10 (P10) and 20 (P20) cmH2O, followed by a further 10 minutes of rest. The data were analyzed statistically at a 5% significance level. After application of the technique, a significant difference was found in the heart rate variability indices at FP and a significant increase in systolic blood pressure at P20. No differences were found in diastolic blood pressure, respiratory rate, or peripheral oxygen saturation before, during, or after the technique at the different EPs. Perceived exertion increased significantly across the sets at FP and P20 and between P10 and P20 within each set. Heart rate (HR) increased and decreased in synchrony with the inspiratory and expiratory movements, respectively. Changes in the autonomic modulation of the heart were observed at FP. In this population, the technique applied at the different EPs analyzed produced changes in HR behavior, in perceived exertion and, at FP, in the autonomic modulation of the heart.