922 results for High-frequency data
Abstract:
Nested clade phylogeographic analysis (NCPA) is a popular method for reconstructing the demographic history of spatially distributed populations from genetic data. Although some parts of the analysis are automated, there is no unique and widely followed algorithm for doing this in its entirety, beginning with the data, and ending with the inferences drawn from the data. This article describes a method that automates NCPA, thereby providing a framework for replicating analyses in an objective way. To do so, a number of decisions need to be made so that the automated implementation is representative of previous analyses. We review how the NCPA procedure has evolved since its inception and conclude that there is scope for some variability in the manual application of NCPA. We apply the automated software to three published datasets previously analyzed manually and replicate many details of the manual analyses, suggesting that the current algorithm is representative of how a typical user will perform NCPA. We simulate a large number of replicate datasets for geographically distributed, but entirely random-mating, populations. These are then analyzed using the automated NCPA algorithm. Results indicate that NCPA tends to give a high frequency of false positives. In our simulations we observe that 14% of the clades give a conclusive inference that a demographic event has occurred, and that 75% of the datasets have at least one clade that gives such an inference. This is mainly due to the generation of multiple statistics per clade, of which only one is required to be significant to apply the inference key. We survey the inferences that have been made in recent publications and show that the most commonly inferred processes (restricted gene flow with isolation by distance and contiguous range expansion) are those that are commonly inferred in our simulations. However, published datasets typically yield a richer set of inferences with NCPA than obtained in our random-mating simulations, and further testing of NCPA with models of structured populations is necessary to examine its accuracy.
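To see how per-clade false positives compound at the dataset level, a rough calculation helps (the clade count m = 10 below is a hypothetical illustration, not a figure from the study): if each clade independently yields a spurious conclusive inference with probability p, the chance that a dataset contains at least one such clade is

\[
P(\text{at least one false inference}) \;=\; 1 - (1 - p)^{m}, \qquad 1 - (1 - 0.14)^{10} \approx 0.78,
\]

which is of the same order as the 75% of simulated datasets reported above; the independence assumption is only approximate, since clades within a dataset are nested.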
Abstract:
Although extensively studied within the lidar community, the multiple scattering phenomenon has always been considered a rare curiosity by radar meteorologists. Until a few years ago its appearance had only been associated with two- or three-body-scattering features (e.g. hail flares and mirror images) involving highly reflective surfaces. Recent atmospheric research aimed at a better understanding of the water cycle and the role played by clouds and precipitation in affecting the Earth's climate has driven the deployment of high-frequency radars in space. Examples are the TRMM 13.5 GHz, the CloudSat 94 GHz, the upcoming EarthCARE 94 GHz, and the GPM dual 13-35 GHz radars. These systems are able to detect the vertical distribution of hydrometeors and thus provide crucial feedback for radiation and climate studies. The shift towards higher frequencies increases the sensitivity to hydrometeors, improves the spatial resolution and reduces the size and weight of the radar systems. On the other hand, higher frequency radars are affected by stronger extinction, especially in the presence of large precipitating particles (e.g. raindrops or hail particles), which may eventually drive the signal below the minimum detection threshold. In such circumstances the interpretation of the radar equation via the single scattering approximation may be problematic. Errors will be large when the radiation emitted from the radar, after interacting more than once with the medium, still contributes substantially to the received power. This is the case if the transport mean free path becomes comparable with the instrument footprint (determined by the antenna beam-width and the platform altitude). This situation resembles what has already been experienced in lidar observations, but with a predominance of wide- versus small-angle scattering events. At millimeter wavelengths, hydrometeors diffuse radiation rather isotropically compared to the visible or near-infrared region, where scattering is predominantly in the forward direction. A complete understanding of radiation transport modeling and data analysis methods under wide-angle multiple scattering conditions is mandatory for a correct interpretation of echoes observed by space-borne millimeter radars. This paper reviews the status of research in this field. Different numerical techniques currently implemented to account for higher order scattering are reviewed and their weaknesses and strengths highlighted. Examples of simulated radar backscattering profiles are provided, with particular emphasis given to situations in which the multiple scattering contributions become comparable to or overwhelm the single scattering signal. We show evidence of multiple scattering effects from airborne and from CloudSat observations, i.e. unique signatures which cannot be explained by single scattering theory. Ideas on how to identify and tackle multiple scattering effects are discussed. Finally, perspectives and suggestions for future work are outlined. This work represents a reference guide for studies focused on modeling the radiation transport and on interpreting data from high-frequency space-borne radar systems that probe highly opaque scattering media such as thick ice clouds or precipitating clouds.
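The criterion quoted above, that multiple scattering matters once the transport mean free path becomes comparable to the instrument footprint, can be turned into a back-of-the-envelope check. The sketch below uses hypothetical values for platform altitude, antenna beamwidth and mean free path; none of them are taken from the paper.

```python
import math

def footprint_diameter_m(altitude_m: float, beamwidth_deg: float) -> float:
    """Approximate footprint diameter of a nadir-pointing radar:
    platform altitude times the (small) antenna beamwidth in radians."""
    return altitude_m * math.radians(beamwidth_deg)

# Illustrative spaceborne W-band configuration (assumed values).
footprint = footprint_diameter_m(altitude_m=700e3, beamwidth_deg=0.12)

# Assumed transport mean free path inside a dense precipitating cloud.
transport_mfp_m = 1.0e3

print(f"footprint       ~ {footprint:.0f} m")
print(f"mean free path  ~ {transport_mfp_m:.0f} m")
if transport_mfp_m <= footprint:
    print("wide-angle multiple scattering may contribute to the echo")
else:
    print("single-scattering interpretation is likely adequate")
```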
Abstract:
In situ precipitation measurements can differ greatly in space and time. Taking into account the limited spatial–temporal representativity and the uncertainty of a single station is important for validating mesoscale numerical model results as well as for interpreting remote sensing data. In situ precipitation data from a high-resolution network in North-Eastern Germany are analysed to determine their temporal and spatial representativity. For the dry year 2003, precipitation amounts were available with 10 min resolution from 14 rain gauges distributed over an area of 25 km × 25 km around the Meteorological Observatory Lindenberg (Richard-Aßmann Observatory). Our analysis reveals that short-term (up to 6 h) precipitation events dominate (94% of all events) and that the distribution is skewed, with a high frequency of very low precipitation amounts. Long-lasting precipitation events are rare (6% of all precipitation events), but account for nearly 50% of the annual precipitation. The spatial representativity of a single-site measurement increases slightly for longer measurement intervals, and the variability decreases. Hourly precipitation amounts are representative for an area of 11 km × 11 km. Daily precipitation amounts appear to be reliable with an uncertainty factor of 3.3 for an area of 25 km × 25 km, and weekly and monthly precipitation amounts have uncertainties of factors of 2 and 1.4, respectively, when compared to 25 km × 25 km mean values.
Abstract:
A new boundary integral operator is introduced for the solution of the sound-soft acoustic scattering problem, i.e., for the exterior problem for the Helmholtz equation with Dirichlet boundary conditions. We prove that this integral operator is coercive in L2(Γ) (where Γ is the surface of the scatterer) for all Lipschitz star-shaped domains. Moreover, the coercivity is uniform in the wavenumber k = ω/c, where ω is the frequency and c is the speed of sound. The new boundary integral operator, which we call the “star-combined” potential operator, is a slight modification of the standard combined potential operator, and is shown to be as easy to implement as the standard one. Additionally, to the authors' knowledge, it is the only second-kind integral operator for which convergence of the Galerkin method in L2(Γ) is proved without smoothness assumptions on Γ except that it is Lipschitz. The coercivity of the star-combined operator implies frequency-explicit error bounds for the Galerkin method for any approximation space. In particular, these error estimates apply to several hybrid asymptotic-numerical methods developed recently that provide robust approximations in the high-frequency case. The proof of coercivity of the star-combined operator critically relies on an identity first introduced by Morawetz and Ludwig in 1968, supplemented further by more recent harmonic analysis techniques for Lipschitz domains.
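For context, the step from coercivity to error bounds is the standard Lax–Milgram/Céa argument (generic notation, not taken from the paper): if the boundary integral operator A satisfies

\[
\big|\langle A v, v \rangle_{L^2(\Gamma)}\big| \;\ge\; \gamma\, \|v\|_{L^2(\Gamma)}^2 \quad \text{for all } v \in L^2(\Gamma),
\]

with γ > 0 independent of the wavenumber k, then for any approximation space V_N ⊂ L²(Γ) the Galerkin solution v_N is quasi-optimal,

\[
\|v - v_N\|_{L^2(\Gamma)} \;\le\; \frac{\|A\|}{\gamma}\, \inf_{w_N \in V_N} \|v - w_N\|_{L^2(\Gamma)},
\]

so k-explicit bounds on ‖A‖ and γ translate directly into the frequency-explicit error estimates mentioned above.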
Abstract:
Aim: To develop a list of prescribing indicators specific to the hospital setting that would facilitate the prospective collection of high-severity and/or high-frequency prescribing errors, which are also amenable to electronic clinical decision support (CDS). Method: A three-stage consensus technique (electronic Delphi) was carried out with 20 expert pharmacists and physicians across England. Participants were asked to score prescribing errors using a 5-point Likert scale for their likelihood of occurrence and the severity of the most likely outcome. These were combined to produce risk scores, from which median scores were calculated for each indicator across the participants in the study. The degree of consensus between the participants was defined as the proportion that gave a risk score in the same category as the median. Indicators were included if a consensus of 80% or more was achieved. Results: A total of 80 prescribing errors were identified by consensus as being high or extreme risk. The most common drug classes named within the indicators were antibiotics (n=13), antidepressants (n=8), nonsteroidal anti-inflammatory drugs (n=6), and opioid analgesics (n=6). The most frequent error types identified as high or extreme risk were those classified as clinical contraindications (n=29/80). Conclusion: 80 high-risk prescribing errors in the hospital setting have been identified by an expert panel. These indicators can serve as the basis for a standardised, validated tool for the collection of data in both paper-based and electronic prescribing processes, as well as to assess the impact of electronic decision support implementation or development.
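A minimal sketch of the scoring and consensus rule described above (the combination rule, the binning of risk scores into categories, and the panel scores are assumptions for illustration, not taken from the study):

```python
import statistics

def risk_scores(likelihood, severity):
    """Combine per-participant 5-point Likert ratings into risk scores
    (assumed rule: likelihood multiplied by severity)."""
    return [l * s for l, s in zip(likelihood, severity)]

def category(score):
    """Assumed binning of risk scores into ordinal risk categories."""
    if score >= 20:
        return "extreme"
    if score >= 12:
        return "high"
    if score >= 6:
        return "moderate"
    return "low"

def keep_indicator(likelihood, severity, threshold=0.80):
    """Keep an indicator if >= 80% of participants fall in the same
    risk category as the panel median score."""
    scores = risk_scores(likelihood, severity)
    median_cat = category(statistics.median(scores))
    agreement = sum(category(s) == median_cat for s in scores) / len(scores)
    return agreement >= threshold, median_cat, agreement

# Hypothetical panel of 20 experts scoring one candidate indicator.
likelihood = [4] * 17 + [5, 5, 3]
severity   = [5] * 17 + [5, 5, 4]
print(keep_indicator(likelihood, severity))  # (True, 'extreme', 0.95)
```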
Abstract:
This chapter highlights similarities and differences of equity and fixed-income markets and provides an overview of the characteristics of European government bond market trading and liquidity. Most existing studies focus on the U.S. market. This chapter presents the institutional details of the MTS market, which is the largest European electronic platform for trading government, quasi-government, asset-backed, and corporate fixed-income securities. It reviews the main features of high-frequency fixed-income data and the methods for measuring market liquidity. Finally, the chapter shows how liquidity differs across European countries, how liquidity varies with the structure of the market, and how liquidity has changed during the recent liquidity and sovereign crises.
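As an illustration of the kind of liquidity measure computed from high-frequency quote data, the sketch below derives quoted and relative spreads from a few hypothetical best bid/ask observations; it is a generic example, not the chapter's specific MTS methodology.

```python
import pandas as pd

# Hypothetical best bid/ask quotes for one bond over a short interval.
quotes = pd.DataFrame({
    "bid": [99.95, 99.96, 99.94, 99.97],
    "ask": [100.05, 100.04, 100.06, 100.03],
})

quotes["quoted_spread"] = quotes["ask"] - quotes["bid"]            # price units
quotes["mid"] = (quotes["ask"] + quotes["bid"]) / 2
quotes["relative_spread_bp"] = 1e4 * quotes["quoted_spread"] / quotes["mid"]

# Time-averaged liquidity measures for the interval.
print(quotes[["quoted_spread", "relative_spread_bp"]].mean())
```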
Abstract:
JASMIN is a super-data-cluster designed to provide a high-performance, high-volume data analysis environment for the UK environmental science community. Thus far JASMIN has been used primarily by the atmospheric science and earth observation communities, both to support their direct scientific workflows and to curate data products in the STFC Centre for Environmental Data Archival (CEDA). Initial JASMIN configuration and first experiences are reported here. Useful improvements in scientific workflow are presented. It is clear from the explosive growth in stored data and use that there was a pent-up demand for a suitable big-data analysis environment. This demand is not yet satisfied, in part because JASMIN does not yet have enough compute, the storage is fully allocated, and not all software needs are met. Plans to address these constraints are introduced.
Abstract:
A severe complication of spinal cord injury is loss of bladder function (neurogenic bladder), which is characterized by loss of bladder sensation and voluntary control of micturition (urination), and spontaneous hyperreflexive voiding against a closed sphincter (detrusor-sphincter dyssynergia). A sacral anterior root stimulator at low frequency can drive volitional bladder voiding, but surgical rhizotomy of the lumbosacral dorsal roots is needed to prevent spontaneous voiding and dyssynergia. However, rhizotomy is irreversible and eliminates sexual function, and the stimulator gives no information on bladder fullness. We designed a closed-loop neuroprosthetic interface that measures bladder fullness and prevents spontaneous voiding episodes without the need for dorsal rhizotomy in a rat model. To obtain bladder sensory information, we implanted teased dorsal roots (rootlets) within the rat vertebral column into microchannel electrodes, which provided signal amplification and noise suppression. As long as they were attached to the spinal cord, these rootlets survived for up to 3 months and contained axons and blood vessels. Electrophysiological recordings showed that half of the rootlets propagated action potentials, with firing frequency correlated to bladder fullness. When the bladder became full enough to initiate spontaneous voiding, high-frequency/amplitude sensory activity was detected. Voiding was abolished using a high-frequency depolarizing block to the ventral roots. A ventral root stimulator initiated bladder emptying at low frequency and prevented unwanted contraction at high frequency. These data suggest that sensory information from the dorsal root together with a ventral root stimulator could form the basis for a closed-loop bladder neuroprosthetic. Copyright © 2013, American Association for the Advancement of Science
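The closed-loop logic described above can be summarised in a conceptual sketch; the thresholds, stimulation frequencies and the single control function are hypothetical placeholders, not parameters of the published system.

```python
# Conceptual closed-loop controller: sensory firing rate recorded from the
# dorsal rootlets gates stimulation of the ventral roots.

FULLNESS_THRESHOLD_HZ = 30.0   # assumed firing rate signalling a full bladder
BLOCK_FREQ_HZ = 1000.0         # assumed high-frequency depolarising block
VOID_FREQ_HZ = 20.0            # assumed low-frequency stimulation for voiding

def control_step(firing_rate_hz: float, void_requested: bool) -> str:
    """Decide the ventral-root stimulation mode for one control cycle."""
    if void_requested:
        # Volitional emptying: low-frequency ventral root stimulation.
        return f"stimulate ventral roots at {VOID_FREQ_HZ:.0f} Hz (void)"
    if firing_rate_hz >= FULLNESS_THRESHOLD_HZ:
        # Bladder full, spontaneous voiding imminent: apply the block.
        return f"apply {BLOCK_FREQ_HZ:.0f} Hz block to ventral roots"
    return "no stimulation"

# Example control cycles with hypothetical sensor readings.
for rate, request in [(5.0, False), (35.0, False), (35.0, True)]:
    print(rate, request, "->", control_step(rate, request))
```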
Abstract:
We propose and analyse a hybrid numerical–asymptotic hp boundary element method (BEM) for time-harmonic scattering of an incident plane wave by an arbitrary collinear array of sound-soft two-dimensional screens. Our method uses an approximation space enriched with oscillatory basis functions, chosen to capture the high-frequency asymptotics of the solution. We provide a rigorous frequency-explicit error analysis which proves that the method converges exponentially as the number of degrees of freedom N increases, and that to achieve any desired accuracy it is sufficient to increase N in proportion to the square of the logarithm of the frequency as the frequency increases (standard BEMs require N to increase at least linearly with frequency to retain accuracy). Our numerical results suggest that fixed accuracy can in fact be achieved at arbitrarily high frequencies with a frequency-independent computational cost, when the oscillatory integrals required for implementation are computed using Filon quadrature. We also show how our method can be applied to the complementary ‘breakwater’ problem of propagation through an aperture in an infinite sound-hard screen.
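The headline complexity claim can be written compactly (C and C' are generic constants): the enriched space needs

\[
N_{\mathrm{HNA}}(k) \;\lesssim\; C\,(\log k)^2
\qquad\text{versus}\qquad
N_{\mathrm{standard}}(k) \;\gtrsim\; C'\,k
\]

degrees of freedom to maintain a fixed accuracy, so increasing the frequency a hundredfold multiplies the standard BEM's N by roughly 100, while the hybrid space grows only from (log k)² to (log 100k)².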
Abstract:
Advances in hardware technologies allow data to be captured and processed in real time, and the resulting high-throughput data streams require novel data mining approaches. The research area of Data Stream Mining (DSM) is developing data mining algorithms that allow us to analyse these continuous streams of data in real time. The creation and real-time adaptation of classification models from data streams is one of the most challenging DSM tasks. Current classifiers for streaming data address this problem by using incremental learning algorithms. However, even though these algorithms are fast, they are challenged by high-velocity data streams, in which data instances arrive at a fast rate. This is problematic if the application requires little or no delay between changes in the patterns of the stream and the absorption of these patterns by the classifier. The scalability problems of traditional data mining algorithms for static (non-streaming) datasets on Big Data have been addressed through the development of parallel classifiers. However, there is very little work on the parallelisation of data stream classification techniques. In this paper we investigate K-Nearest Neighbours (KNN) as the basis for a real-time adaptive and parallel methodology for scalable data stream classification tasks.
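A minimal sliding-window KNN stream classifier illustrates the general idea; it is a generic sketch, not the parallel methodology proposed in the paper, and the window size, k, and toy concept are arbitrary. One natural way to parallelise the neighbour search, in the spirit of the paper's goal, would be to partition the window across workers and merge their local neighbour lists.

```python
from collections import deque
import numpy as np

class SlidingWindowKNN:
    """Classify incoming instances against the most recent labelled window."""

    def __init__(self, k: int = 5, window: int = 1000):
        self.k = k
        self.buffer = deque(maxlen=window)  # (features, label) pairs

    def partial_fit(self, x: np.ndarray, y) -> None:
        """Absorb a newly labelled instance; old instances fall out of the window."""
        self.buffer.append((x, y))

    def predict(self, x: np.ndarray):
        """Majority vote among the k nearest neighbours in the current window."""
        if not self.buffer:
            return None
        X = np.stack([f for f, _ in self.buffer])
        labels = [l for _, l in self.buffer]
        nearest = np.argsort(np.linalg.norm(X - x, axis=1))[: self.k]
        votes = [labels[i] for i in nearest]
        return max(set(votes), key=votes.count)

# Prequential (test-then-train) evaluation on a toy stream.
clf = SlidingWindowKNN(k=3, window=200)
rng = np.random.default_rng(0)
correct, n = 0, 500
for _ in range(n):
    x = rng.normal(size=2)
    y = int(x.sum() > 0)        # toy concept
    correct += int(clf.predict(x) == y)
    clf.partial_fit(x, y)
print("prequential accuracy:", correct / n)
```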
Abstract:
In the past three decades, Brazil has undergone rapid changes in major social determinants of health and in the organisation of health services. In this report, we examine how these changes have affected indicators of maternal health, child health, and child nutrition. We use data from vital statistics, population censuses, demographic and health surveys, and published reports. In the past three decades, infant mortality rates have reduced substantially, decreasing by 5.5% a year in the 1980s and 1990s, and by 4.4% a year since 2000 to reach 20 deaths per 1000 livebirths in 2008. Neonatal deaths account for 68% of infant deaths. Stunting prevalence among children younger than 5 years decreased from 37% in 1974-75 to 7% in 2006-07. Regional differences in stunting and child mortality also decreased. Access to most maternal-health and child-health interventions increased sharply to almost universal coverage, and regional and socioeconomic inequalities in access to such interventions were notably reduced. The median duration of breastfeeding increased from 2.5 months in the 1970s to 14 months by 2006-07. Official statistics show stable maternal mortality ratios during the past 10 years, but modelled data indicate a yearly decrease of 4%, a trend which might not have been noticeable in official reports because of improvements in death registration and the increased number of investigations into deaths of women of reproductive age. The reasons behind Brazil's progress include: socioeconomic and demographic changes (economic growth, reduction in income disparities between the poorest and wealthiest populations, urbanisation, improved education of women, and decreased fertility rates), interventions outside the health sector (a conditional cash transfer programme and improvements in water and sanitation), vertical health programmes in the 1980s (promotion of breastfeeding, oral rehydration, and immunisations), creation of a tax-funded national health service in 1988 (coverage of which expanded to reach the poorest areas of the country through the Family Health Program in the mid-1990s), and implementation of many national and state-wide programmes to improve child health and child nutrition and, to a lesser extent, to promote women's health. Nevertheless, substantial challenges remain, including overmedicalisation of childbirth (nearly 50% of babies are delivered by caesarean section), maternal deaths caused by illegal abortions, and a high frequency of preterm deliveries.
Abstract:
We report the analysis of a uniform sample of 31 light curves of the nova-like variable UU Aqr with eclipse-mapping techniques. The data were combined to derive eclipse maps of the average steady-light component, the long-term brightness changes, and the low- and high-frequency flickering components. The long-term variability responsible for the "low-brightness" and "high-brightness" states is explained in terms of the response of a viscous disk to changes of 20%-50% in the mass transfer rate from the donor star. Low- and high-frequency flickering maps are dominated by emission from two asymmetric arcs reminiscent of those seen in the outbursting dwarf nova IP Peg, and they are similarly interpreted as manifestations of a tidally induced spiral shock wave in the outer regions of a large accretion disk. The asymmetric arcs are also seen in the map of the steady light aside from the broad brightness distribution of a roughly steady-state disk. The arcs account for 25% of the steady-light flux and are a long-lasting feature in the accretion disk of UU Aqr. We infer an opening angle of 10 degrees +/- 3 degrees for the spiral arcs. The results suggest that the flickering in UU Aqr is caused by turbulence generated after the collision of disk gas with the density-enhanced spiral wave in the accretion disk.
Abstract:
Sea surface gradients derived from the Geosat and ERS-1 satellite altimetry geodetic missions were integrated with marine gravity data from the National Geophysical Data Center and Brazilian national surveys. Using the least squares collocation method, models of free-air gravity anomaly and geoid height were calculated for the coast of Brazil with a resolution of 2′ × 2′. The integration of satellite and shipborne data showed better statistical results in regions near the coast than using satellite data only, suggesting an improvement when compared to the state-of-the-art global gravity models. Furthermore, these results were obtained with considerably less input information than was used by those reference models. The least squares collocation presented a very low content of high-frequency noise in the predicted gravity anomalies. This may be considered essential to improve the high resolution representation of the gravity field in regions of ocean-continent transition. © 2010 Elsevier Ltd. All rights reserved.
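The least squares collocation prediction underlying these grids takes the standard textbook form (generic notation, not the paper's):

\[
\hat{s} \;=\; C_{s\ell}\,\bigl(C_{\ell\ell} + C_{nn}\bigr)^{-1}\,\ell ,
\]

where ℓ collects the observed sea surface gradients and shipborne anomalies, C_{ℓℓ} is their signal covariance matrix, C_{nn} the noise covariance, and C_{sℓ} the cross-covariance between the observations and the quantity being predicted (free-air anomaly or geoid height). The noise covariance term damps poorly determined short-wavelength components, which is consistent with the low high-frequency noise content noted above.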
Abstract:
Visualization of high-dimensional data requires a mapping to a visual space. Whenever the goal is to preserve similarity relations a frequent strategy is to use 2D projections, which afford intuitive interactive exploration, e.g., by users locating and selecting groups and gradually drilling down to individual objects. In this paper, we propose a framework for projecting high-dimensional data to 3D visual spaces, based on a generalization of the Least-Square Projection (LSP). We compare projections to 2D and 3D visual spaces both quantitatively and through a user study considering certain exploration tasks. The quantitative analysis confirms that 3D projections outperform 2D projections in terms of precision. The user study indicates that certain tasks can be more reliably and confidently answered with 3D projections. Nonetheless, as 3D projections are displayed on 2D screens, interaction is more difficult. Therefore, we incorporate suitable interaction functionalities into a framework that supports 3D transformations, predefined optimal 2D views, coordinated 2D and 3D views, and hierarchical 3D cluster definition and exploration. For visually encoding data clusters in a 3D setup, we employ color coding of projected data points as well as four types of surface renderings. A second user study evaluates the suitability of these visual encodings. Several examples illustrate the framework's applicability for both visual exploration of multidimensional abstract (non-spatial) data as well as the feature space of multi-variate spatial data.
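A generic way to quantify the precision gain from an extra projection dimension is to compare the residual stress of 2D and 3D embeddings of the same data. The sketch below uses classical multidimensional scaling from scikit-learn as a stand-in; it is not the authors' Least-Square Projection, and the synthetic dataset is arbitrary.

```python
from sklearn.datasets import make_blobs
from sklearn.manifold import MDS

# Synthetic high-dimensional data with cluster structure (illustrative only).
X, _ = make_blobs(n_samples=200, n_features=10, centers=5, random_state=0)

# Lower stress means pairwise distances are preserved more faithfully.
for dim in (2, 3):
    mds = MDS(n_components=dim, random_state=0)
    mds.fit(X)
    print(f"{dim}D projection stress: {mds.stress_:.1f}")
```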
Abstract:
In Information Visualization, adding and removing data elements can strongly impact the underlying visual space. We have developed an inherently incremental technique (incBoard) that maintains a coherent disposition of elements from a dynamic multidimensional data set on a 2D grid as the set changes. Here, we introduce a novel layout that uses pairwise similarity from grid neighbors, as defined in incBoard, to reposition elements on the visual space, free from constraints imposed by the grid. The board continues to be updated and can be displayed alongside the new space. As similar items are placed together, while dissimilar neighbors are moved apart, it supports users in the identification of clusters and subsets of related elements. Densely populated areas identified in the incSpace can be efficiently explored with the corresponding incBoard visualization, which is not susceptible to occlusion. The solution remains inherently incremental and maintains a coherent disposition of elements, even for fully renewed sets. The algorithm considers relative positions for the initial placement of elements, and raw dissimilarity to fine tune the visualization. It has low computational cost, with complexity depending only on the size of the currently viewed subset, V. Thus, a data set of size N can be sequentially displayed in O(N) time, reaching O(N²) only if the complete set is simultaneously displayed.
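A toy version of the incremental grid placement idea (not the published incBoard algorithm; the distance metric, search order and data are arbitrary): each arriving element is placed in a free cell adjacent to the already-placed element it most resembles, so similar items end up as grid neighbours and the cost of each insertion depends only on the elements currently on the board.

```python
import numpy as np

def place_incrementally(items: np.ndarray):
    """Assign each item an integer 2D grid cell, one item at a time."""
    positions = {}        # item index -> (row, col)
    occupied = set()
    for i, x in enumerate(items):
        if not positions:
            cell = (0, 0)
        else:
            # Most similar already-placed item (Euclidean distance).
            nearest = min(positions, key=lambda j: np.linalg.norm(items[j] - x))
            r, c = positions[nearest]
            # First free cell in an expanding neighbourhood around it.
            cell = next(
                (r + dr, c + dc)
                for radius in range(1, 100)
                for dr in range(-radius, radius + 1)
                for dc in range(-radius, radius + 1)
                if (r + dr, c + dc) not in occupied
            )
        positions[i] = cell
        occupied.add(cell)
    return positions

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 1, (5, 4)), rng.normal(5, 1, (5, 4))])
print(place_incrementally(data))
```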