129 results for Data Streams Distribution


Relevance: 30.00%

Publisher:

Abstract:

Anxiety disorders are increasingly acknowledged as a global health issue; however, an accurate picture of prevalence across populations is lacking. Empirical data are incomplete and inconsistent, so alternative means of estimating prevalence are required to inform estimates for the new Global Burden of Disease Study 2010. We used a Bayesian meta-regression approach which included empirical epidemiological data, expert prior information, study covariates and population characteristics. Global and regional point prevalence estimates for anxiety disorders in 2010 are reported. Point prevalence of anxiety disorders differed by up to three-fold across world regions, ranging between 2.1% (1.8-2.5%) in East Asia and 6.1% (5.1-7.4%) in North Africa/Middle East. Anxiety was more common in Latin America, in high-income regions, and in regions with a history of recent conflict. There was considerable uncertainty around estimates, particularly for regions where no data were available. Future research is required to examine whether variations in regional distributions of anxiety disorders are substantive differences or an artefact of cultural or methodological differences. This is particularly imperative in regions where anxiety is consistently reported to be less common, and in those where it appears elevated but uncertainty prevents the reporting of conclusive estimates.


This thesis addresses the voltage violation problem, the most critical issue associated with high levels of photovoltaic (PV) penetration in electricity distribution networks. A coordinated control algorithm using reactive power from the PV inverter and integrated battery energy storage is developed and investigated in different network scenarios. Probable variations in solar generation, end-user participation and network parameters are also considered. Furthermore, a unified data model and a well-defined communication protocol, which ensure smooth coordination between all components during operation of the algorithm, are described. Finally, the thesis incorporates the uncertainties of solar generation using probabilistic load flow analysis.
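The thesis's coordinated algorithm is not reproduced here, but the reactive-power side of such schemes is often built on a volt-var droop characteristic. A minimal sketch, assuming a simple linear droop with a deadband (the function name, slope and deadband values are illustrative, not taken from the thesis):

```python
def droop_reactive_power(v_pu, q_max, v_ref=1.0, slope=2.0, deadband=0.01):
    """Illustrative volt-var droop: absorb reactive power when the local
    voltage (per unit) is high, inject it when low, clamped to the
    inverter's reactive capability q_max. Not the thesis's algorithm."""
    dv = v_pu - v_ref
    if abs(dv) <= deadband:
        return 0.0
    # shift by the deadband edge so the output is continuous
    edge = deadband if dv > 0 else -deadband
    q = -slope * (dv - edge) * q_max
    return max(-q_max, min(q_max, q))
```

In a coordinated scheme like the one described, battery storage would cover any remaining violation once the inverter's reactive capability is exhausted.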


Long-term measurements of particle number size distribution (PNSD) produce a very large number of observations, and their analysis requires an efficient approach in order to produce results in the least possible time and with maximum accuracy. Clustering techniques are a family of sophisticated methods which have recently been employed to analyse PNSD data; however, very little information is available comparing the performance of different clustering techniques on such data. This study applies several clustering techniques (K-means, PAM, CLARA and SOM) to PNSD data, in order to identify and apply the optimum technique to PNSD data measured at 25 sites across Brisbane, Australia. A new method, based on the Generalised Additive Model (GAM) with a basis of penalised B-splines, was proposed to parameterise the PNSD data, and the temporal weight of each cluster was also estimated using the GAM. In addition, each cluster was associated with its possible source based on the results of this parameterisation, together with the characteristics of each cluster. The performances of the four clustering techniques were compared using the Dunn index and Silhouette width validation values, and the K-means technique was found to have the highest performance, with five clusters being the optimum. The diurnal occurrence of each cluster was used together with other air quality parameters, temporal trends and the physical properties of each cluster in order to attribute each cluster to its source and origin. The five clusters were attributed to three major sources and origins: regional background particles, photochemically induced nucleated particles and vehicle-generated particles. Overall, clustering was found to be an effective technique for attributing each particle size spectrum to its source, and the GAM was suitable for parameterising the PNSD data. These two techniques can help researchers immensely in analysing PNSD data for characterisation and source apportionment purposes.
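The cluster-number comparison described above can be reproduced in miniature. The sketch below implements a basic K-means (with deterministic farthest-point initialisation, an assumption for reproducibility; the study's exact settings are not given) and the average Silhouette width used to score partitions:

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Basic K-means with deterministic farthest-point initialisation."""
    centres = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centres], axis=0)
        centres.append(X[np.argmax(d)])
    centres = np.array(centres)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centres[None]) ** 2).sum(axis=2), axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centres[j] for j in range(k)])
        if np.allclose(new, centres):
            break
        centres = new
    return labels, centres

def silhouette_width(X, labels):
    """Mean Silhouette width: (b - a) / max(a, b) averaged over points."""
    D = np.sqrt(((X[:, None, :] - X[None]) ** 2).sum(axis=2))
    scores = []
    for i in range(len(X)):
        own = (labels == labels[i])
        own[i] = False
        a = D[i, own].mean() if own.any() else 0.0  # cohesion
        b = min(D[i, labels == c].mean()            # separation
                for c in set(labels.tolist()) - {labels[i]})
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

In the study's workflow each observation would be a GAM-parameterised size distribution rather than the 2-D points used here, and the width would be computed for several candidate cluster counts.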


An expanding education market, targeted through ‘bridging material’ enabling cineliteracies, has the potential to provide Australian producers with increased distribution opportunities, educators with targeted teaching aids and students with enhanced learning outcomes. For Australian documentary producers, the key to unlocking the potential of the education sector is engaging with its curriculum-based requirements at the earliest stages of pre-production. Two key mechanisms can lead to effective educational engagement: the established area of study guides produced in association with the Australian Teachers of Media (ATOM), and the emerging area of philanthropic funding coordinated by the Documentary Australia Foundation (DAF). DAF has acted as a key financial and cultural philanthropic bridge between individuals, foundations, corporations and the Australian documentary sector for over 14 years. DAF does not make or commission films but, through the management and receipt of grants and donations, provides ‘expertise, information, guidance and resources to help each sector work together to achieve their goals’. The DAF application process also requires film-makers to detail their ‘Education and Outreach Strategy’ for each film, with 582 films registered and 39 completed as of June 2014. These education strategies, which range from detailed to cursory, offer valuable insights into the Australian documentary sector's historical and current expectations of education as a receptive and dynamic audience for quality factual content. A recurring film-maker education strategy found in the DAF data is engagement with ATOM to create a study guide for the film. This study guide then acts as ‘bridging material’ between content and education audience. The frequency of this effort suggests these study guides enable greater educator engagement with content and increased interest in, and distribution of, the film to educators. The paper ‘Education paths for documentary distribution: DAF, ATOM and the study guides that bind them’ addresses issues arising out of the changing needs of the education sector and the impact that targeting ‘cineliteracy’ outcomes may have for Australian documentary distribution.


A number of online algorithms have been developed that have small additional loss (regret) compared to the best “shifting expert”. In this model, there is a set of experts and the comparator is the best partition of the trial sequence into a small number of segments, where the expert of smallest loss is chosen in each segment. The regret is typically defined for worst-case data/loss sequences. There has been a recent surge of interest in online algorithms that combine good worst-case guarantees with much improved performance on easy data. A practically relevant class of easy data is the case when the loss of each expert is iid and there is a gap between the mean losses of the best and second-best experts. In the full information setting, the FlipFlop algorithm by De Rooij et al. (2014) combines the best of the iid-optimal Follow-The-Leader (FL) and the worst-case-safe Hedge algorithms, whereas in the bandit information case SAO by Bubeck and Slivkins (2012) competes with the iid-optimal UCB and the worst-case-safe EXP3. We ask the same question for the shifting expert problem. First, we ask what simple and efficient algorithms exist for the shifting experts problem when the loss sequence in each segment is iid with respect to a fixed but unknown distribution. Second, we ask how to efficiently unite the performance of such algorithms on easy data with worst-case robustness. A particularly intriguing open problem is the case when the comparator shifts within a small subset of experts from a large set, under the assumption that the losses in each segment are iid.
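For concreteness, the standard baseline for the shifting-experts model above is Herbster and Warmuth's Fixed-Share forecaster: Hedge's exponential-weights update followed by a "share" step that keeps some weight on every expert so the algorithm can track a shifting comparator. A minimal sketch (the learning rate eta and share rate alpha are illustrative choices, not tuned values from the text):

```python
import numpy as np

def fixed_share(losses, eta=2.0, alpha=0.05):
    """Fixed-Share forecaster for shifting experts.
    losses: (T, N) array of per-trial expert losses in [0, 1].
    Returns the algorithm's cumulative expected loss."""
    T, N = losses.shape
    w = np.full(N, 1.0 / N)
    total = 0.0
    for t in range(T):
        total += float(w @ losses[t])       # expected loss under current weights
        w = w * np.exp(-eta * losses[t])    # exponential-weights (Hedge) update
        w /= w.sum()
        w = (1 - alpha) * w + alpha / N     # share step: enables tracking shifts
    return total
```

Setting alpha = 0 recovers plain Hedge, which adapts slowly after a shift because the previously-best expert's weight has been driven exponentially close to zero.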


Background Historically, the paper hand-held record (PHR) has been used for sharing information between hospital clinicians, general practitioners and pregnant women in a maternity shared-care environment. Recently, in alignment with a national e-health agenda, an electronic health record (EHR) was introduced at an Australian tertiary maternity service to replace the PHR for the collection and transfer of data. The aim of this study was to examine and compare the completeness of clinical data collected in a PHR and an EHR. Methods We undertook a comparative cohort design study to determine differences in completeness between data collected from maternity records in two phases. Phase 1 data were collected from the PHR and Phase 2 data from the EHR. Records were compared for completeness of best practice variables collected. The primary outcome was the presence of best practice variables and the secondary outcomes were the differences in individual variables between the records. Results Ninety-four percent of paper medical charts were available in Phase 1 and 100% of records from an obstetric database in Phase 2. No PHR or EHR had a complete dataset of best practice variables. The variables with significant improvement in completeness of data documented in the EHR, compared with the PHR, were urine culture, glucose tolerance test, nuchal screening, morphology scans, folic acid advice, tobacco smoking, illicit drug assessment and domestic violence assessment (p = 0.001). Additionally, the documentation of immunisations (pertussis, hepatitis B, varicella, fluvax) was markedly improved in the EHR (p = 0.001). The variables of blood pressure, proteinuria, blood group, antibody, rubella and syphilis status showed no significant differences in completeness of recording. Conclusion This is the first paper to report on the comparison of clinical data collected on a PHR and an EHR in a maternity shared-care setting. The use of an EHR demonstrated significant improvements in the collection of best practice variables. Additionally, the data in the EHR were more available to relevant clinical staff with the appropriate log-in, and more easily retrieved, than from the PHR. This study contributes to an under-researched area: determining the quality of data collected in patient records.


Relative abundance data is common in the life sciences, but appreciation that it needs special analysis and interpretation is scarce. Correlation is popular as a statistical measure of pairwise association but should not be used on data that carry only relative information. Using timecourse yeast gene expression data, we show how correlation of relative abundances can lead to conclusions opposite to those drawn from absolute abundances, and that its value changes when different components are included in the analysis. Once all absolute information has been removed, only a subset of those associations will reliably endure in the remaining relative data, specifically, associations where pairs of values behave proportionally across observations. We propose a new statistic φ to describe the strength of proportionality between two variables and demonstrate how it can be straightforwardly used instead of correlation as the basis of familiar analyses and visualization methods.
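The proportionality statistic can be illustrated concretely. A common formulation of the idea (used here as an assumption about φ's exact form) is the variance of the log-ratio scaled by the variance of one component, so the statistic approaches 0 as y approaches a constant multiple of x:

```python
import numpy as np

def phi(x, y):
    """Strength of proportionality between two strictly positive vectors:
    var(log(x/y)) / var(log(x)). Near 0 when y is (almost) proportional to x."""
    lx, ly = np.log(np.asarray(x, float)), np.log(np.asarray(y, float))
    return float(np.var(lx - ly) / np.var(lx))
```

Unlike correlation, this quantity depends only on ratios, so it is unchanged by the overall scaling that relative abundance data cannot capture.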


This paper presents an efficient non-iterative method for distribution state estimation using the conditional multivariate complex Gaussian distribution (CMCGD). In the proposed method, the mean and standard deviation (SD) of the state variables are obtained in one step, considering load uncertainties, measurement errors, and load correlations. First, the bus voltages, branch currents, and injection currents are represented by the MCGD using direct load flow and a linear transformation. Then, the mean and SD of the bus voltages, or other states, are calculated using the CMCGD and the estimation-of-variance method. The mean and SD of pseudo measurements, as well as spatial correlations between pseudo measurements, are modeled based on historical data for different levels of the load duration curve. The proposed method can handle load uncertainties without using time-consuming approaches such as Monte Carlo simulation. Simulation results for two case studies, a six-bus and a realistic 747-bus distribution network, show the effectiveness of the proposed method in terms of speed, accuracy, and quality compared with the conventional approach.
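The conditioning step at the heart of such a method uses the standard conditional formulas of a multivariate Gaussian. A minimal real-valued sketch (the paper works with complex-valued quantities; this illustrative helper is real for brevity):

```python
import numpy as np

def conditional_gaussian(mu, cov, idx_obs, x_obs):
    """Mean and covariance of the unobserved block of a multivariate
    Gaussian, conditioned on observed components x_obs at positions idx_obs."""
    n = len(mu)
    idx_un = [i for i in range(n) if i not in idx_obs]
    S_uu = cov[np.ix_(idx_un, idx_un)]
    S_uo = cov[np.ix_(idx_un, idx_obs)]
    S_oo = cov[np.ix_(idx_obs, idx_obs)]
    gain = S_uo @ np.linalg.inv(S_oo)
    mu_c = mu[idx_un] + gain @ (x_obs - mu[idx_obs])   # conditional mean
    cov_c = S_uu - gain @ S_uo.T                       # conditional covariance
    return mu_c, cov_c
```

The single linear update of the prior mean by the observed measurements mirrors the one-step, non-iterative character the abstract describes.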


With the extensive use of rating systems on the web, and their significance in users' decision-making processes, the need for more accurate aggregation methods has emerged. The naïve aggregation method, the simple mean, is no longer adequate for providing accurate reputation scores for items [6]; hence, several studies have been conducted to provide more accurate alternative aggregation methods. Most current reputation models do not consider the distribution of ratings across the different possible rating values. In this paper, we propose a novel reputation model which generates more accurate reputation scores for items by deploying the normal distribution over ratings. Experiments show promising results for our proposed model over state-of-the-art ones on both sparse and dense datasets.
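The paper's exact weighting scheme is not reproduced above, so the following is only a hypothetical sketch in its spirit: each rating level is weighted by its frequency multiplied by its density under a normal fitted to the ratings, which pulls the score away from outlier ratings relative to the simple mean. The function name, the rating levels, and the weighting rule itself are all assumptions for illustration.

```python
import numpy as np

def normal_weighted_score(ratings, levels=(1, 2, 3, 4, 5)):
    """Hypothetical sketch (not the paper's model): weight each rating
    level by count(level) * normal_density(level), then take the
    weighted mean of the levels."""
    r = np.asarray(ratings, dtype=float)
    mu, sd = r.mean(), r.std() or 1.0
    lv = np.asarray(levels, dtype=float)
    counts = np.array([(r == v).sum() for v in lv], dtype=float)
    dens = np.exp(-0.5 * ((lv - mu) / sd) ** 2)  # unnormalised normal density
    w = counts * dens
    return float((w @ lv) / w.sum())
```

Compared with the simple mean, ratings far from the bulk of the distribution receive lower weight, which is the kind of distribution-aware behaviour the abstract motivates.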


Species distribution models (SDMs) are considered to exemplify pattern-based, rather than process-based, models of a species' response to its environment. Hence, when used to map species distribution, the purpose of SDMs can be viewed as interpolation: species response is measured at a few sites in the study region, and the aim is to interpolate species response at intermediate sites. Increasingly, however, SDMs are also being used to extrapolate species-environment relationships beyond the limits of the study region as represented by the training data. Regardless of whether SDMs are used for interpolation or extrapolation, the debate over how to implement them focusses on evaluating the quality of the SDM, both ecologically and mathematically. This paper proposes a framework that draws together useful tools previously employed to address uncertainty in habitat modelling, and outlines how these tools, together with existing frameworks for addressing uncertainty in modelling more generally, inform the development of a broader framework for addressing uncertainty specifically when building habitat models. We focus on extrapolation rather than interpolation, where the emphasis on predictive performance is diluted by concerns for robustness and ecological relevance. We are cognisant of the dangers of excessively propagating uncertainty. Thus, although the framework provides a smorgasbord of approaches, it is intended that the exact menu selected for a particular application be small in size and target the most important sources of uncertainty. We conclude with some guidance on a strategic approach to identifying these important sources of uncertainty. Whilst various aspects of uncertainty in SDMs have previously been addressed, either as the main aim of a study or as a necessary element of constructing SDMs, this is the first paper to provide a more holistic view.


We propose a new information-theoretic metric, the symmetric Kullback-Leibler divergence (sKL-divergence), to measure the difference between two water diffusivity profiles in high angular resolution diffusion imaging (HARDI). Water diffusivity profiles are modeled as probability density functions on the unit sphere, and the sKL-divergence is computed from a spherical harmonic series, which greatly reduces computational complexity. Adjustment of the orientation of diffusivity functions is essential when the image is being warped, so we propose a fast algorithm to determine the principal direction of diffusivity functions using principal component analysis (PCA). We compare sKL-divergence with other inner-product based cost functions using synthetic samples and real HARDI data, and show that the sKL-divergence is highly sensitive in detecting small differences between two diffusivity profiles and therefore shows promise for applications in the nonlinear registration and multisubject statistical analysis of HARDI data.
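For discrete distributions (e.g., diffusivity profiles sampled at a finite set of directions on the sphere), the sKL-divergence reduces to the sum of the two directed KL divergences. A minimal sketch (the eps smoothing is an implementation convenience, not from the paper, and the paper's spherical-harmonic computation is not reproduced here):

```python
import numpy as np

def skl_divergence(p, q, eps=1e-12):
    """Symmetric KL divergence between two discrete distributions:
    sKL(p, q) = KL(p || q) + KL(q || p)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```

The symmetry, which the directed KL divergence lacks, is part of what makes this usable as a matching cost between two diffusivity profiles during registration.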


Heritability of brain anatomical connectivity has been studied with diffusion-weighted imaging (DWI) mainly by modeling each voxel's diffusion pattern as a tensor (e.g., to compute fractional anisotropy), but this method cannot accurately represent the many crossing connections present in the brain. We hypothesized that different brain networks (i.e., their component fibers) might have different heritability, and we investigated brain connectivity using High Angular Resolution Diffusion Imaging (HARDI) in a cohort of 328 twins, including 70 pairs of monozygotic and 91 pairs of dizygotic twins. Water diffusion was modeled in each voxel with a Fiber Orientation Distribution (FOD) function to study heritability for multiple fiber orientations in each voxel. Precision was estimated in a test-retest experiment on a sub-cohort of 39 subjects and was taken into account when computing the heritability of FOD peaks using an ACE model on the monozygotic and dizygotic twins. Our results confirmed the overall heritability of the major white matter tracts but also identified differences in heritability between connectivity networks. Inter-hemispheric connections tended to be more heritable than intra-hemispheric and cortico-spinal connections. The highly heritable tracts were found to connect particular cortical regions, such as the medial frontal cortices, the postcentral and paracentral gyri, and the right hippocampus.


This paper describes part of an engineering study undertaken to demonstrate that a multi-megawatt photovoltaic (PV) generation system could be connected to a rural 11 kV feeder without creating power quality issues for other consumers. The paper concentrates solely on the voltage regulation aspect of the study, as this was its most innovative part. The study was carried out using the time-domain software package PSCAD/EMTDC. The software model included real-time data input of actual measured load and scaled PV generation data, along with real-time substation voltage regulator and PV inverter reactive power control. The model outputs plot real-time voltage, current and power variations across the daily load and PV generation profiles. Other aspects of the study not described in the paper include the analysis of harmonics, voltage flicker, power factor, voltage unbalance and system losses.


Distribution Revolution is a collection of interviews with leading film and TV professionals concerning the many ways that digital delivery systems are transforming the entertainment business. These interviews provide lively insider accounts from studio executives, distribution professionals, and creative talent of the tumultuous transformation of film and TV in the digital era. The first section features interviews with top executives at major Hollywood studios, providing a window into the big-picture concerns of media conglomerates with respect to changing business models, revenue streams, and audience behaviors. The second focuses on innovative enterprises that are providing path-breaking models for new modes of content creation, curation, and distribution—creatively meshing the strategies and practices of Hollywood and Silicon Valley. And the final section offers insights from creative talent whose professional practices, compensation, and everyday working conditions have been transformed over the past ten years. Taken together, these interviews demonstrate that virtually every aspect of the film and television businesses is being affected by the digital distribution revolution, a revolution that has likely just begun. Interviewees include:

• Gary Newman, Chairman, 20th Century Fox Television
• Kelly Summers, Former Vice President, Global Business Development and New Media Strategy, Walt Disney Studios
• Thomas Gewecke, Chief Digital Officer and Executive Vice President, Strategy and Business Development, Warner Bros. Entertainment
• Ted Sarandos, Chief Content Officer, Netflix
• Felicia D. Henderson, Writer-Producer, Soul Food, Gossip Girl
• Dick Wolf, Executive Producer and Creator, Law & Order


Diffusion-weighted magnetic resonance imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of six directions, second-order tensors (represented by three-by-three positive definite matrices) can be computed to model dominant diffusion processes. However, conventional diffusion tensor imaging (DTI) is not sufficient to resolve more complicated white matter configurations, e.g., crossing fiber tracts. Recently, a number of high-angular-resolution schemes with more than six gradient directions have been employed to address this issue. In this article, we introduce the tensor distribution function (TDF), a probability function defined on the space of symmetric positive definite matrices. Using the calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function. Moreover, a tensor orientation distribution function (TOD) may also be derived from the TDF, allowing for the estimation of principal fiber directions and their corresponding eigenvalues.
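The ensemble-of-Gaussians idea can be made concrete in the discrete case: with a finite set of candidate tensors and TDF weights, the predicted signal attenuation along each gradient direction is a weighted sum of single-tensor attenuations. A minimal sketch (a discrete stand-in for the continuous TDF; the variational fitting itself is not shown):

```python
import numpy as np

def multi_tensor_signal(gradients, b, tensors, weights):
    """Predicted attenuation S/S0 = sum_i w_i * exp(-b * g^T D_i g)
    for unit gradient directions g (rows of `gradients`)."""
    g = np.asarray(gradients, float)
    S = np.zeros(len(g))
    for w, D in zip(weights, tensors):
        # g^T D g evaluated per gradient row
        S += w * np.exp(-b * np.einsum('ij,jk,ik->i', g, D, g))
    return S
```

Fitting the weights to the measured attenuations (in the article, via the calculus of variations) recovers the TDF, from which the ODF then follows by analytic integration.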