950 results for High frequency data
Abstract:
A severe complication of spinal cord injury is loss of bladder function (neurogenic bladder), which is characterized by loss of bladder sensation and voluntary control of micturition (urination), and spontaneous hyperreflexive voiding against a closed sphincter (detrusor-sphincter dyssynergia). A sacral anterior root stimulator at low frequency can drive volitional bladder voiding, but surgical rhizotomy of the lumbosacral dorsal roots is needed to prevent spontaneous voiding and dyssynergia. However, rhizotomy is irreversible and eliminates sexual function, and the stimulator gives no information on bladder fullness. We designed a closed-loop neuroprosthetic interface that measures bladder fullness and prevents spontaneous voiding episodes without the need for dorsal rhizotomy in a rat model. To obtain bladder sensory information, we implanted teased dorsal roots (rootlets) within the rat vertebral column into microchannel electrodes, which provided signal amplification and noise suppression. As long as they were attached to the spinal cord, these rootlets survived for up to 3 months and contained axons and blood vessels. Electrophysiological recordings showed that half of the rootlets propagated action potentials, with firing frequency correlated to bladder fullness. When the bladder became full enough to initiate spontaneous voiding, high-frequency/amplitude sensory activity was detected. Voiding was abolished using a high-frequency depolarizing block to the ventral roots. A ventral root stimulator initiated bladder emptying at low frequency and prevented unwanted contraction at high frequency. These data suggest that sensory information from the dorsal root together with a ventral root stimulator could form the basis for a closed-loop bladder neuroprosthetic.
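The closed loop described above reduces to a simple decision rule: read afferent firing from the dorsal rootlets, apply a high-frequency block to the ventral roots when spontaneous voiding is imminent, and apply low-frequency stimulation to void on demand. Below is a minimal sketch of that control logic only; every name, threshold, and frequency is a hypothetical placeholder, not a value from the paper.

```python
# Illustrative closed-loop controller sketch for a bladder neuroprosthetic.
# All thresholds and frequencies below are hypothetical placeholders.
FIRING_THRESHOLD_HZ = 40.0   # afferent rate taken to signal imminent voiding
BLOCK_FREQ_HZ = 10000.0      # high-frequency depolarizing block
DRIVE_FREQ_HZ = 20.0         # low-frequency stimulation that drives voiding

def control_step(dorsal_firing_rate_hz: float, void_requested: bool):
    """One loop iteration: read dorsal-root afferent activity and
    choose the ventral-root stimulation mode."""
    if void_requested:
        return ("stimulate", DRIVE_FREQ_HZ)   # volitional bladder emptying
    if dorsal_firing_rate_hz > FIRING_THRESHOLD_HZ:
        return ("block", BLOCK_FREQ_HZ)       # suppress spontaneous voiding
    return ("idle", 0.0)
```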
Abstract:
We propose and analyse a hybrid numerical–asymptotic hp boundary element method (BEM) for time-harmonic scattering of an incident plane wave by an arbitrary collinear array of sound-soft two-dimensional screens. Our method uses an approximation space enriched with oscillatory basis functions, chosen to capture the high-frequency asymptotics of the solution. We provide a rigorous frequency-explicit error analysis which proves that the method converges exponentially as the number of degrees of freedom N increases, and that to achieve any desired accuracy it is sufficient to increase N in proportion to the square of the logarithm of the frequency as the frequency increases (standard BEMs require N to increase at least linearly with frequency to retain accuracy). Our numerical results suggest that fixed accuracy can in fact be achieved at arbitrarily high frequencies with a frequency-independent computational cost, when the oscillatory integrals required for implementation are computed using Filon quadrature. We also show how our method can be applied to the complementary ‘breakwater’ problem of propagation through an aperture in an infinite sound-hard screen.
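The closing remark about Filon quadrature is the key implementation detail: oscillatory integrals of the form ∫ f(x)e^{ikx} dx are evaluated by interpolating only the smooth factor f, so the cost does not grow with the wavenumber k. A minimal piecewise-linear Filon rule, offered as a sketch of the idea rather than the authors' hp implementation, might look like this (assuming k ≠ 0):

```python
import numpy as np

def filon_linear(f, a, b, k, n=16):
    """Piecewise-linear Filon rule for I = ∫_a^b f(x) e^{ikx} dx:
    f is interpolated linearly on each panel and its product with the
    oscillatory factor is integrated exactly, so accuracy does not
    degrade as k grows (assumes k != 0)."""
    x = np.linspace(a, b, n + 1)
    fx = f(x)
    h = x[1] - x[0]
    ik = 1j * k
    m0 = (np.exp(ik * h) - 1.0) / ik          # exact ∫_0^h e^{ikt} dt
    m1 = h * np.exp(ik * h) / ik - m0 / ik    # exact ∫_0^h t e^{ikt} dt
    total = 0.0 + 0.0j
    for j in range(n):
        c0 = fx[j]
        c1 = (fx[j + 1] - fx[j]) / h          # slope of the linear interpolant
        total += np.exp(ik * x[j]) * (c0 * m0 + c1 * m1)
    return total

# example: a smooth amplitude at a high wavenumber, on a coarse mesh
value = filon_linear(np.cos, 0.0, 1.0, k=500.0, n=64)
```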
Abstract:
Advances in hardware technologies allow data to be captured and processed in real time, and the resulting high-throughput data streams require novel data mining approaches. The research area of Data Stream Mining (DSM) develops data mining algorithms that can analyse these continuous streams of data in real time. The creation and real-time adaptation of classification models from data streams is one of the most challenging DSM tasks. Current classifiers for streaming data address this problem by using incremental learning algorithms. However, even though these algorithms are fast, they are challenged by high-velocity data streams, where data instances arrive at a fast rate. This is problematic when applications require little or no delay between changes in the patterns of the stream and the absorption of these patterns by the classifier. The scalability problems that traditional data mining algorithms for static (non-streaming) datasets face with Big Data have been addressed through the development of parallel classifiers. However, there is very little work on the parallelisation of data stream classification techniques. In this paper we investigate K-Nearest Neighbours (KNN) as the basis for a real-time adaptive and parallel methodology for scalable data stream classification tasks.
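As a point of reference for the kind of classifier being parallelised, a minimal sequential sliding-window KNN stream classifier can be written in a few lines; this illustrates the basic idea only, not the parallel methodology proposed in the paper. Expiring old instances from the window is what lets the classifier adapt to changing patterns in the stream.

```python
from collections import deque
import numpy as np

class SlidingWindowKNN:
    """Minimal KNN stream classifier over a fixed-size sliding window
    (an illustration of the underlying idea, not the paper's design)."""

    def __init__(self, k=5, window_size=1000):
        self.k = k
        self.window = deque(maxlen=window_size)  # oldest instances expire

    def learn(self, x, y):
        """Absorb one labelled instance from the stream."""
        self.window.append((np.asarray(x, dtype=float), y))

    def predict(self, x):
        """Majority vote among the k nearest instances in the window."""
        x = np.asarray(x, dtype=float)
        nearest = sorted(self.window, key=lambda p: np.linalg.norm(p[0] - x))
        votes = [y for _, y in nearest[: self.k]]
        return max(set(votes), key=votes.count)
```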
Abstract:
In the past three decades, Brazil has undergone rapid changes in major social determinants of health and in the organisation of health services. In this report, we examine how these changes have affected indicators of maternal health, child health, and child nutrition. We use data from vital statistics, population censuses, demographic and health surveys, and published reports. In the past three decades, infant mortality rates have reduced substantially, decreasing by 5.5% a year in the 1980s and 1990s, and by 4.4% a year since 2000 to reach 20 deaths per 1000 livebirths in 2008. Neonatal deaths account for 68% of infant deaths. Stunting prevalence among children younger than 5 years decreased from 37% in 1974-75 to 7% in 2006-07. Regional differences in stunting and child mortality also decreased. Access to most maternal-health and child-health interventions increased sharply to almost universal coverage, and regional and socioeconomic inequalities in access to such interventions were notably reduced. The median duration of breastfeeding increased from 2.5 months in the 1970s to 14 months by 2006-07. Official statistics show stable maternal mortality ratios during the past 10 years, but modelled data indicate a yearly decrease of 4%, a trend which might not have been noticeable in official reports because of improvements in death registration and the increased number of investigations into deaths of women of reproductive age. The reasons behind Brazil's progress include: socioeconomic and demographic changes (economic growth, reduction in income disparities between the poorest and wealthiest populations, urbanisation, improved education of women, and decreased fertility rates); interventions outside the health sector (a conditional cash transfer programme and improvements in water and sanitation); vertical health programmes in the 1980s (promotion of breastfeeding, oral rehydration, and immunisations); creation of a tax-funded national health service in 1988 (coverage of which expanded to reach the poorest areas of the country through the Family Health Program in the mid-1990s); and implementation of many national and state-wide programmes to improve child health and child nutrition and, to a lesser extent, to promote women's health. Nevertheless, substantial challenges remain, including overmedicalisation of childbirth (nearly 50% of babies are delivered by caesarean section), maternal deaths caused by illegal abortions, and a high frequency of preterm deliveries.
Abstract:
We report the analysis of a uniform sample of 31 light curves of the nova-like variable UU Aqr with eclipse-mapping techniques. The data were combined to derive eclipse maps of the average steady-light component, the long-term brightness changes, and the low- and high-frequency flickering components. The long-term variability responsible for the "low-brightness" and "high-brightness" states is explained in terms of the response of a viscous disk to changes of 20%-50% in the mass transfer rate from the donor star. Low- and high-frequency flickering maps are dominated by emission from two asymmetric arcs reminiscent of those seen in the outbursting dwarf nova IP Peg, and they are similarly interpreted as manifestations of a tidally induced spiral shock wave in the outer regions of a large accretion disk. The asymmetric arcs are also seen in the map of the steady light, in addition to the broad brightness distribution of a roughly steady-state disk. The arcs account for 25% of the steady-light flux and are a long-lasting feature in the accretion disk of UU Aqr. We infer an opening angle of 10 ± 3 degrees for the spiral arcs. The results suggest that the flickering in UU Aqr is caused by turbulence generated after the collision of disk gas with the density-enhanced spiral wave in the accretion disk.
Abstract:
Sea surface gradients derived from the Geosat and ERS-1 satellite altimetry geodetic missions were integrated with marine gravity data from the National Geophysical Data Center and Brazilian national surveys. Using the least squares collocation method, models of free-air gravity anomaly and geoid height were calculated for the coast of Brazil with a resolution of 2′ x 2′. The integration of satellite and shipborne data showed better statistical results in regions near the coast than using satellite data only, suggesting an improvement over the state-of-the-art global gravity models. Furthermore, these results were obtained with considerably less input information than was used by those reference models. The least squares collocation presented a very low content of high-frequency noise in the predicted gravity anomalies, which may be considered essential to improve the high-resolution representation of the gravity field in regions of ocean-continent transition.
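For readers unfamiliar with least squares collocation: it predicts a signal at new points from noisy observations l via s_hat = C_ps (C_ss + D)^(-1) l, where the C matrices are signal covariances and D is the noise covariance. A compact sketch with an assumed Gaussian covariance model follows; the covariance parameters are arbitrary placeholders, not values from the study.

```python
import numpy as np

def lsc_predict(obs_xy, obs_vals, pred_xy, var=1.0, corr_len=0.5, noise=0.01):
    """Least squares collocation sketch: s_hat = C_ps (C_ss + D)^-1 l,
    with an assumed Gaussian covariance model."""
    obs_xy, pred_xy = np.asarray(obs_xy), np.asarray(pred_xy)

    def cov(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)   # squared distances
        return var * np.exp(-d2 / (2.0 * corr_len ** 2))

    C_ss = cov(obs_xy, obs_xy) + noise * np.eye(len(obs_xy))  # D: observation noise
    C_ps = cov(pred_xy, obs_xy)
    return C_ps @ np.linalg.solve(C_ss, np.asarray(obs_vals))
```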
Abstract:
Visualization of high-dimensional data requires a mapping to a visual space. Whenever the goal is to preserve similarity relations, a frequent strategy is to use 2D projections, which afford intuitive interactive exploration, e.g., by users locating and selecting groups and gradually drilling down to individual objects. In this paper, we propose a framework for projecting high-dimensional data to 3D visual spaces, based on a generalization of the Least-Square Projection (LSP). We compare projections to 2D and 3D visual spaces both quantitatively and through a user study considering certain exploration tasks. The quantitative analysis confirms that 3D projections outperform 2D projections in terms of precision. The user study indicates that certain tasks can be more reliably and confidently answered with 3D projections. Nonetheless, as 3D projections are displayed on 2D screens, interaction is more difficult. Therefore, we incorporate suitable interaction functionalities into a framework that supports 3D transformations, predefined optimal 2D views, coordinated 2D and 3D views, and hierarchical 3D cluster definition and exploration. For visually encoding data clusters in a 3D setup, we employ color coding of projected data points as well as four types of surface renderings. A second user study evaluates the suitability of these visual encodings. Several examples illustrate the framework's applicability to both visual exploration of multidimensional abstract (non-spatial) data and the feature space of multi-variate spatial data.
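The 2D-versus-3D comparison can be reproduced in spirit with any distance-preserving projection simply by changing the target dimension. The sketch below uses classical MDS as a generic stand-in for the paper's LSP generalization, not as its implementation:

```python
import numpy as np

def classical_mds(D, dim=3):
    """Classical MDS: embed points in `dim` dimensions from a pairwise
    distance matrix D; compare dim=2 against dim=3 projections."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]           # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```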
Abstract:
In Information Visualization, adding and removing data elements can strongly impact the underlying visual space. We have developed an inherently incremental technique (incBoard) that maintains a coherent disposition of elements from a dynamic multidimensional data set on a 2D grid as the set changes. Here, we introduce a novel layout (incSpace) that uses pairwise similarity from grid neighbors, as defined in incBoard, to reposition elements on the visual space, free from constraints imposed by the grid. The board continues to be updated and can be displayed alongside the new space. As similar items are placed together, while dissimilar neighbors are moved apart, it supports users in the identification of clusters and subsets of related elements. Densely populated areas identified in the incSpace can be efficiently explored with the corresponding incBoard visualization, which is not susceptible to occlusion. The solution remains inherently incremental and maintains a coherent disposition of elements, even for fully renewed sets. The algorithm considers relative positions for the initial placement of elements, and raw dissimilarity to fine-tune the visualization. It has low computational cost, with complexity depending only on the size of the currently viewed subset, V. Thus, a data set of size N can be sequentially displayed in O(N) time, reaching O(N²) only if the complete set is simultaneously displayed.
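A rough sketch of the incremental placement idea, not the incBoard/incSpace algorithms themselves: a new element is first positioned near its most similar neighbours and then locally adjusted so that screen distance approximates dissimilarity, at a cost independent of the total set size.

```python
import numpy as np

def place_incrementally(positions, data, new_item, k=3, steps=20, lr=0.1):
    """Place one new element: start at the centroid of its k most
    similar neighbours, then descend a local stress so that its 2D
    distance to each neighbour roughly matches the raw dissimilarity.
    Cost depends only on k and steps, not on the full set size."""
    diss = np.linalg.norm(data - new_item, axis=1)   # raw dissimilarities
    nn = np.argsort(diss)[:k]                        # k most similar elements
    pos = positions[nn].mean(axis=0)                 # initial placement
    for _ in range(steps):
        for j in nn:
            delta = pos - positions[j]
            d = np.linalg.norm(delta) + 1e-9
            pos -= lr * (d - diss[j]) * delta / d    # move toward target distance
    return pos
```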
Abstract:
Most multidimensional projection techniques rely on distance (dissimilarity) information between data instances to embed high-dimensional data into a visual space. When data are endowed with Cartesian coordinates, an extra computational effort is necessary to compute the needed distances, making multidimensional projection prohibitive in applications dealing with interactivity and massive data. The novel multidimensional projection technique proposed in this work, called Part-Linear Multidimensional Projection (PLMP), has been tailored to handle multivariate data represented in Cartesian high-dimensional spaces, requiring only distance information between pairs of representative samples. This characteristic renders PLMP faster than previous methods when processing large data sets while still being competitive in terms of precision. Moreover, knowing the range of variation for data instances in the high-dimensional space, we can make PLMP a truly streaming data projection technique, a trait absent in previous methods.
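The trait that makes PLMP fast and streaming-friendly is that, once a linear map has been fitted to a small set of representative samples, every other instance is projected by a single matrix product. A simplified sketch of that scheme (not the paper's exact formulation):

```python
import numpy as np

def plmp_like(X, sample_idx, Y_sample):
    """Fit a linear map Phi so that X[sample_idx] @ Phi ~ Y_sample,
    where Y_sample holds 2D positions of the representative samples
    obtained by any accurate (expensive) projection; then map the
    whole data set with one matrix product."""
    Phi, *_ = np.linalg.lstsq(X[sample_idx], Y_sample, rcond=None)
    return X @ Phi, Phi

# Streaming use: a newly arriving instance x (a row vector) is
# projected as x @ Phi, with no distance computations against the
# existing data set.
```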
Abstract:
We present the first results of a study investigating the processes that control concentrations and sources of Pb and particulate matter in the atmosphere of Sao Paulo City, Brazil. Aerosols were collected with high temporal resolution (3 hours) during a four-day period in July 2005. The highest Pb concentrations measured coincided with large fireworks during celebration events and with periods of heavy traffic. Our high-resolution data highlight the impact that a singular transient event can have on air quality, even in a megacity. Under meteorological conditions non-conducive to pollutant dispersion, Pb and particulate matter accumulated during the night, leading to the highest concentrations in aerosols collected early in the morning of the following day. The stable isotopes of Pb suggest that emissions from traffic remain an important source of Pb in Sao Paulo City due to the large traffic fleet, despite low Pb concentrations in fuels.
Abstract:
Several studies indicate that molecular variants of HPV-16 have different geographic distributions and risks associated with persistent infection and the development of high-grade cervical lesions. In the present study, the frequency of HPV-16 variants was determined in 81 biopsies from women with cervical intraepithelial neoplasia grade III or invasive cervical cancer from the city of Belem, Northern Brazil. Host DNAs were also genotyped in order to analyze the ethnicity-related distribution of these variants. Nine different HPV-16 LCR variants belonging to four phylogenetic branches were identified. Among these, two new isolates were characterized. The most prevalent HPV-16 variant detected was the Asian-American B-2, followed by the European B-12 and the European prototype. Infections by multiple variants were observed in both invasive cervical cancer and cervical intraepithelial neoplasia grade III cases. The analysis of a specific polymorphism within the E6 viral gene was performed in a subset of 76 isolates. The E6-350G polymorphism was significantly more frequent in Asian-American variants. The HPV-16 variability detected followed the same pattern as the genetic ancestry observed in Northern Brazil, with European, Amerindian and African roots. Although African ancestry was higher among women infected by the prototype, no correlation between ethnic origin and HPV-16 variants was found. These results corroborate previous data showing a high frequency of Asian-American variants in cervical neoplasia among women of multiethnic origin.
Abstract:
Good data quality with high complexity is often seen as important. Intuition says that the higher the accuracy and complexity of the data, the better the analytic solutions become, provided the increased computing time can be handled. However, for most practical computational problems, high-complexity data means that computation times become too long or that the heuristics used to solve the problem have difficulty reaching good solutions. This is stressed even further when the size of the combinatorial problem increases. Consequently, we often need simplified data to deal with complex combinatorial problems. In this study we address the question of how the complexity and accuracy of a network affect the quality of heuristic solutions for different sizes of the combinatorial problem. We evaluate this question by applying the commonly used p-median model, which finds the optimal locations in a network of p supply points that serve n demand points. To do so, we vary both the accuracy (the number of nodes) of the network and the size of the combinatorial problem (p). The investigation is conducted by means of a case study in Dalecarlia, a region in Sweden with an asymmetrically distributed population (15,000 weighted demand points). To locate 5 to 50 supply points we use the national transport administration's official road network (NVDB), which consists of 1.5 million nodes. To find the optimal location we start with 500 candidate nodes in the network and increase the number of candidate nodes in steps up to 67,000 (aggregated from the 1.5 million nodes). To find the optimal solution we use a simulated annealing algorithm with adaptive tuning of the temperature. The results show that there is limited improvement in the optimal solutions when the accuracy of the road network increases and the combinatorial problem is simple (low p). When the combinatorial problem is complex (large p), the improvements from increasing the accuracy of the road network are much larger. The results also show that the choice of the best network accuracy depends on the complexity of the combinatorial problem (varying p).
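For concreteness, a plain simulated-annealing heuristic for the p-median objective is sketched below. The study's adaptive temperature tuning is replaced here by a simple geometric cooling schedule, and `dist` is assumed to be a (demand x candidate) distance matrix.

```python
import math
import random
import numpy as np

def p_median_sa(dist, p, iters=20000, t0=1.0, cooling=0.9995):
    """Simulated annealing for the p-median problem: pick p candidate
    columns of `dist` minimizing the total distance from every demand
    row to its nearest chosen supply point."""
    n_candidates = dist.shape[1]
    current = random.sample(range(n_candidates), p)
    cost = dist[:, current].min(axis=1).sum()
    best, best_cost, t = list(current), cost, t0
    for _ in range(iters):
        cand = list(current)
        cand[random.randrange(p)] = random.randrange(n_candidates)  # swap one median
        if len(set(cand)) == p:                     # reject duplicate medians
            new_cost = dist[:, cand].min(axis=1).sum()
            # accept improvements always, uphill moves with Boltzmann probability
            if new_cost < cost or random.random() < math.exp((cost - new_cost) / t):
                current, cost = cand, new_cost
                if new_cost < best_cost:
                    best, best_cost = list(cand), new_cost
        t *= cooling                                # geometric cooling schedule
    return best, best_cost
```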
Abstract:
This article analyses the processes of reducing language in textchats produced by non-native speakers of English. We propose that forms are reduced because of their high frequency and because of the discourse context. A wide variety of processes are attested in the literature, and we find different forms of clipping in our data, including mixtures of different clippings, homophone respellings, phonetic respellings (including informal oral forms), initialisms (but no acronyms), and mixtures of clipping together with homophone and phonetic respellings. Clipping was the most frequent process (especially back-clippings and initialisms), followed by homophone respellings. There were different ways of metalinguistically marking reduction, but capitalisation was by far the most frequent. There is much individual variation in the frequencies of the different processes, although most fell within a normal distribution. The fact that non-native speakers seem generally to follow the reduction patterns of native speakers suggests that reduction is a universal process.
Abstract:
This work analyses the turbidite reservoirs of the Namorado Field, Campos Basin (Rio de Janeiro), presenting a new evolutionary model for the interval between the upper Albian and the Cenomanian in the area of that field. The tools used in this study consisted of seismic interpretation in a three-dimensional environment with the VoxelGeo® software, and of facies analysis combined with well-log data from the field. The analysis allowed the individualization, and subsequent three-dimensional visualization, of a meandering paleochannel at the base of the studied interval, a feature not reported in previous interpretations of this reservoir. As a result of the seismic and facies analyses, it was possible to elaborate a depositional model in which four distinct turbidite systems were defined, included in two 3rd-order sequences. These turbidite systems would therefore be associated with the 4th-order sequences, which are interpreted as parasequences inserted in the two 3rd-order cycles. The 3rd-order sequences, which encompass the reservoirs of the Namorado Field, would represent high-frequency intervals in the stratigraphic record, within the context of the drowning (2nd order) of the Campos Basin. From the characteristics of the depositional trough observed for the Namorado Field, it is possible to conclude that the system as a whole was deposited in a channel complex, together with delta-front systems. These channels were probably carved by hyperpycnal flows formed during catastrophic floods. The information from this study provided a better understanding of the genesis of the hydrocarbon-bearing turbidite deposits in the studied interval, whose occurrence is related to stages of relative sea-level fall.
Abstract:
This article establishes a basis for research on the relationship between poverty, the distribution of resources, and the operation of capital markets in Brazil. The main objective is to support the implementation of policies that strengthen the capital of the poor. The availability of new data sources has created unprecedented conditions for an analysis of asset ownership and poverty in Brazilian metropolitan areas. The assessment of resource distribution was structured around three items: physical capital, human capital, and social capital. The empirical strategy is to analyse three different types of impact that increasing the assets of the poor can have on social welfare. The first part of the article assesses the ownership of different types of capital across the income distribution. This exercise can be seen as an extension of income-based poverty measures that incorporates the direct effects of asset ownership on welfare. The second part describes the income-generating impact that asset ownership can have on the poor. We study how the accumulation of different types of capital affects income-based poverty indices using logistic regressions. The third part studies the effect that increasing the assets of the poor has on improving poor individuals' ability to cope with adverse income shocks. We study the interaction between income dynamics, capital market imperfections, and financial behaviour, taking different time horizons into account. The long-term questions concern the study of low-frequency income fluctuations and the life cycle of asset ownership using cohort analysis. The short-term questions concern the behaviour of the poor and the welfare losses involved in coping with high-frequency gaps between income and desired consumption. The analysis of income and poverty dynamics combines panel income data with qualitative data on households' short-term financial behaviour.