989 results for STATISTICAL COMPLEXITY


Relevance: 20.00%

Abstract:

In this paper we investigate the first and second order characteristics of the received signal at the output of hypothetical selection, equal gain and maximal ratio combiners which utilize spatially separated antennas at the base station. Considering a range of human body movements, we model the small-scale fading characteristics of the signal using diversity-specific analytical equations which take into account the number of available signal branches at the receiver. It is shown that these equations provide an excellent fit to the measured channel data. Furthermore, for many hypothetical diversity receiver configurations, the Nakagami-m parameter was found to be close to 1.
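
As a rough illustration of the three combining schemes compared above, the sketch below simulates i.i.d. Nakagami-m branches and estimates the mean output SNR of each hypothetical combiner. The two-branch, unit-branch-SNR setup with m = 1 (i.e., Rayleigh, matching the reported finding) is an assumption for the example; this is not the paper's analytical model.

```python
# Monte Carlo sketch of SC, EGC and MRC output SNR over Nakagami-m branches.
import numpy as np

rng = np.random.default_rng(1)
m, omega = 1.0, 1.0            # m = 1 reduces to Rayleigh fading
n_branches, n_trials = 2, 100_000

# Branch powers under Nakagami-m fading are Gamma(m, omega/m) distributed.
power = rng.gamma(shape=m, scale=omega / m, size=(n_trials, n_branches))
amplitude = np.sqrt(power)

snr_sc = power.max(axis=1)                          # selection combining
snr_egc = amplitude.sum(axis=1) ** 2 / n_branches   # equal gain combining
snr_mrc = power.sum(axis=1)                         # maximal ratio combining

for name, snr in [("SC", snr_sc), ("EGC", snr_egc), ("MRC", snr_mrc)]:
    print(f"{name}: mean output SNR = {snr.mean():.3f}")
```

With two Rayleigh branches this reproduces the familiar ordering SC < EGC < MRC (mean gains of roughly 1.5, 1.79 and 2.0 over a single branch).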

Relevance: 20.00%

Abstract:

Polar codes are one of the most recent advancements in coding theory and they have attracted significant interest. While they are provably capacity-achieving over various channels, they have seen limited practical application. Unfortunately, the successive nature of successive cancellation-based decoders hinders fine-grained adaptation of the decoding complexity to design constraints and operating conditions. In this paper, we propose a systematic method for enabling complexity-performance trade-offs by constructing polar codes based on an optimization problem which minimizes the complexity under a suitably defined mutual-information-based performance constraint. Moreover, a low-complexity greedy algorithm is proposed in order to solve the optimization problem efficiently for very large code lengths.
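
For background on the construction problem being optimized, the sketch below shows the textbook Bhattacharyya-parameter construction of a polar code over a binary erasure channel; the erasure probability, length and rate are assumptions, and this reliability-only selection is what the paper replaces with a complexity-aware optimization.

```python
# Textbook polar code construction for BEC(eps): polarize the Bhattacharyya
# parameter Z, then pick the K most reliable synthetic channels.
import numpy as np

def bec_bhattacharyya(n: int, eps: float) -> np.ndarray:
    """Polarize BEC(eps) into 2**n synthetic channels.

    For the BEC the recursion is exact:
    Z(W-) = 2Z - Z**2 (degraded copy), Z(W+) = Z**2 (upgraded copy).
    """
    z = np.array([eps])
    for _ in range(n):
        z = np.concatenate([2 * z - z**2, z**2])
    return z

n, K, eps = 10, 512, 0.3        # N = 2**n = 1024, rate 1/2 (assumed values)
z = bec_bhattacharyya(n, eps)
info_set = np.sort(np.argsort(z)[:K])   # indices of the K best channels
# The sum of Z over the info set upper-bounds the SC block error rate.
print(f"sum of Z over info set: {z[info_set].sum():.4f}")
```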

Relevance: 20.00%

Abstract:

In this paper, a low complexity system for spectral analysis of heart rate variability (HRV) is presented. The main idea of the proposed approach is the implementation of the Fast-Lomb periodogram, a ubiquitous tool in spectral analysis, using a wavelet-based Fast Fourier transform. Interestingly, we show that the proposed approach enables the classification of processed data into more and less significant, based on their contribution to output quality. Based on such a classification, a percentage of the less significant data is pruned, leading to a significant reduction in algorithmic complexity with minimal quality degradation. Indeed, our results indicate that the proposed system can achieve up to a 45% reduction in the number of computations with only a 4.9% average error in output quality compared to a conventional FFT-based HRV system.
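
The wavelet-based Fast-Lomb with significance-based pruning is specific to the paper; as a conventional baseline, the sketch below computes a plain Lomb periodogram of a synthetic, unevenly sampled RR-interval series with SciPy (all signal parameters are invented for the example).

```python
# Baseline Lomb periodogram of an RR-interval series (inherently unevenly
# sampled, since samples occur at beat times).
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
n_beats = 300
k = np.arange(n_beats)
# Synthetic RR intervals (s): ~60 bpm plus a respiratory modulation that
# lands near 0.25 Hz, plus noise.
rr = 1.0 + 0.05 * np.sin(2 * np.pi * 0.25 * k) \
         + 0.02 * rng.standard_normal(n_beats)
t = np.cumsum(rr)                      # beat times: uneven sampling grid

freqs_hz = np.linspace(0.01, 0.5, 500)  # standard HRV analysis band
pgram = lombscargle(t, rr - rr.mean(), 2 * np.pi * freqs_hz, normalize=True)
print(f"spectral peak at {freqs_hz[np.argmax(pgram)]:.3f} Hz")  # ~0.25 Hz
```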

Relevance: 20.00%

Abstract:

This presentation will explore the role that social acceptance of onshore wind can play in understanding and progressing the low carbon transition in Europe. Although this is commonly perceived as arising simply from the overall level of renewable energy generated (and ‘dirty’ energy displaced), its significance goes well beyond this, as it helps us understand some of the key issues facing the electricity sector as a socio-technical system. As such, it is not only a matter of delivering the necessary infrastructure, but requires the long-term mediation of complex multi-governmental arrangements involving a very wide range of actors. The interests of these actors engage hugely different timescales, geographic scales of concern and rationalities that make the arena of social acceptance a cauldron of complexity, mediating between overlapping and incompatible concerns. The presentation will briefly review the nature of some of these relationships and discuss what this means for how we conceive and act on the social acceptance of wind, and what this means for the long-term low carbon transition.

Relevance: 20.00%

Abstract:

Objectives

A P-value <0.05 is one metric used to evaluate the results of a randomized controlled trial (RCT). We wondered how often statistically significant results in RCTs may be lost with small changes in the numbers of outcomes.

Study Design and Setting

A review of RCTs in high-impact medical journals that reported a statistically significant result for at least one dichotomous or time-to-event outcome in the abstract. In the group with the smallest number of events, we changed the status of patients without an event to an event until the P-value exceeded 0.05. We labeled this number the Fragility Index; smaller numbers indicated a more fragile result.

Results

The 399 eligible trials had a median sample size of 682 patients (range: 15-112,604) and a median of 112 events (range: 8-5,142); 53% reported a P-value <0.01. The median Fragility Index was 8 (range: 0-109); 25% had a Fragility Index of 3 or less. In 53% of trials, the Fragility Index was less than the number of patients lost to follow-up.

Conclusion

The statistically significant results of many RCTs hinge on small numbers of events. The Fragility Index complements the P-value and helps identify less robust results. 
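
A minimal sketch of the Fragility Index procedure described above, assuming a two-sided Fisher's exact test on the 2x2 table of event counts (the abstract does not name the test) and illustrative trial numbers.

```python
# Flip non-events to events in the arm with fewer events until p > 0.05;
# the number of flips needed is the Fragility Index.
from scipy.stats import fisher_exact

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    # Work on the arm with the smaller number of events, per the abstract.
    if events_b < events_a:
        events_a, n_a, events_b, n_b = events_b, n_b, events_a, n_a
    flips = 0
    while events_a < n_a:
        _, p = fisher_exact([[events_a, n_a - events_a],
                             [events_b, n_b - events_b]])
        if p > alpha:
            return flips
        events_a += 1          # one patient changes from non-event to event
        flips += 1
    return flips

# Hypothetical trial: 20/200 events vs 40/200 events.
print(fragility_index(20, 200, 40, 200))
```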

Relevance: 20.00%

Abstract:

In this paper, we first provide a theoretical validation of a low-complexity transmit diversity algorithm which employs only one RF chain and a low-complexity switch for transmission. Our theoretical analysis is compared to the simulation results and shown to be accurate. We then apply the transmit diversity scheme to multiple-input multiple-output (MIMO) systems with bit-interleaved coded modulation (BICM). © 2012 IEEE.
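
The paper's algorithm and its analysis are not reproduced here; as a generic illustration of single-RF-chain transmit diversity, the sketch below simulates transmit antenna selection (switching to the antenna with the strongest instantaneous channel) over assumed i.i.d. Rayleigh fading.

```python
# Transmit antenna selection with one RF chain: the switch routes the single
# RF output to whichever antenna currently has the best channel.
import numpy as np

rng = np.random.default_rng(7)
n_tx, n_trials = 2, 100_000

# i.i.d. unit-power complex Rayleigh gains from each transmit antenna.
h = (rng.standard_normal((n_trials, n_tx)) +
     1j * rng.standard_normal((n_trials, n_tx))) / np.sqrt(2)
gain = np.abs(h) ** 2

print(f"fixed antenna:    mean SNR gain = {gain[:, 0].mean():.3f}")
print(f"switched to best: mean SNR gain = {gain.max(axis=1).mean():.3f}")
```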

Relevance: 20.00%

Abstract:

The adulteration of extra virgin olive oil with other vegetable oils is a real problem with economic and health consequences. Current official methods have proved insufficient to detect such adulterations. One of the most concerning and hardest to detect adulterations is the addition of hazelnut oil. The main objective of this work was to develop a novel dimensionality reduction technique able to model oil mixtures as part of an integrated pattern recognition solution. This final solution attempts to identify hazelnut oil adulterants in extra virgin olive oil at low percentages based on spectroscopic chemical fingerprints. The proposed Continuous Locality Preserving Projections (CLPP) technique allows the continuous nature of the in-house-produced admixtures to be modelled as data series instead of discrete points. This methodology has the potential to be extended to other mixtures and adulterations of food products. Maintaining the continuous structure of the data manifold allows better visualization of the examined classification problem and facilitates more accurate use of the manifold for detecting the adulterants.
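
CLPP itself is not available as code here; the sketch below implements standard Locality Preserving Projections (LPP), the method CLPP extends, on synthetic stand-in "spectra" (the data, neighbourhood size and kernel width are all assumptions).

```python
# Standard LPP: build a heat-kernel kNN graph, then solve the generalized
# eigenproblem X^T L X a = lam X^T D X a and keep the smallest eigenvectors.
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, n_components=2, k=5, sigma=1.0, reg=1e-6):
    """Project X (n_samples x n_features) onto its LPP directions."""
    d2 = cdist(X, X, "sqeuclidean")
    W = np.exp(-d2 / (2 * sigma**2))
    # Keep only each point's k nearest neighbours (plus itself), symmetrized.
    far = np.argsort(d2, axis=1)[:, k + 1:]
    np.put_along_axis(W, far, 0.0, axis=1)
    W = np.maximum(W, W.T)
    D = np.diag(W.sum(axis=1))
    L = D - W                                  # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + reg * np.eye(X.shape[1])  # regularized for stability
    _, vecs = eigh(A, B)                        # ascending eigenvalues
    return X @ vecs[:, :n_components]

X = np.random.default_rng(3).standard_normal((100, 20))  # fake spectra
print(lpp(X).shape)  # (100, 2)
```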

Relevance: 20.00%

Abstract:

The motivation for this study was to reduce physics workload relating to patient-specific quality assurance (QA). VMAT plan delivery accuracy was determined from analysis of pre- and on-treatment trajectory log files and phantom-based ionization chamber array measurements. The correlation in this combination of measurements for patient-specific QA was investigated. The relationship between delivery errors and plan complexity was investigated as a potential method to further reduce patient-specific QA workload. Thirty VMAT plans from three treatment sites - prostate only, prostate and pelvic node (PPN), and head and neck (H&N) - were retrospectively analyzed in this work. The 2D fluence delivery reconstructed from pretreatment and on-treatment trajectory log files was compared with the planned fluence using gamma analysis. Pretreatment dose delivery verification was also carried out using gamma analysis of ionization chamber array measurements compared with calculated doses. Pearson correlations were used to explore any relationship between trajectory log file (pretreatment and on-treatment) and ionization chamber array gamma results (pretreatment). Plan complexity was assessed using the MU/arc and the modulation complexity score (MCS), with Pearson correlations used to examine any relationships between complexity metrics and plan delivery accuracy. Trajectory log files were also used to further explore the accuracy of MLC and gantry positions. Pretreatment 1%/1 mm gamma passing rates for trajectory log file analysis were 99.1% (98.7%-99.2%), 99.3% (99.1%-99.5%), and 98.4% (97.3%-98.8%) (median (IQR)) for prostate, PPN, and H&N, respectively, and were significantly correlated to on-treatment trajectory log file gamma results (R = 0.989, p < 0.001). Pretreatment ionization chamber array (2%/2 mm) gamma results were also significantly correlated with on-treatment trajectory log file gamma results (R = 0.623, p < 0.001). Furthermore, all gamma results displayed a significant correlation with MCS (R > 0.57, p < 0.001), but not with MU/arc. Average MLC position and gantry angle errors were 0.001 ± 0.002 mm and 0.025° ± 0.008° over all treatment sites and were not found to affect delivery accuracy. However, variability in MLC speed was found to be directly related to MLC position accuracy. The accuracy of VMAT plan delivery assessed using pretreatment trajectory log file fluence delivery and ionization chamber array measurements was strongly correlated with on-treatment trajectory log file fluence delivery. The strong correlation between trajectory log file and phantom-based gamma results demonstrates potential to reduce our current patient-specific QA. Additionally, insight into MLC and gantry position accuracy through trajectory log file analysis and the strong correlation between gamma analysis results and the MCS could also provide further methodologies to optimize both the VMAT planning and QA processes.
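
For reference, the sketch below is a brute-force global 2D gamma index of the kind used throughout the abstract; the dose grids, 2%/2 mm criteria and uniform pixel spacing are illustrative assumptions, and clinical implementations use faster interpolated searches.

```python
# Global gamma index: for each reference point, minimize the combined
# distance-to-agreement and dose-difference metric over all evaluated points.
import numpy as np

def gamma_pass_rate(ref, evl, spacing_mm, dose_pct=2.0, dta_mm=2.0):
    """Fraction of reference points with gamma <= 1 (global normalization)."""
    ny, nx = ref.shape
    y, x = np.mgrid[0:ny, 0:nx]
    dose_tol = dose_pct / 100.0 * ref.max()
    gamma = np.empty(ref.shape)
    for j in range(ny):
        for i in range(nx):
            dist2 = ((y - j) ** 2 + (x - i) ** 2) * spacing_mm**2
            ddose2 = (evl - ref[j, i]) ** 2
            gamma[j, i] = np.sqrt(
                (dist2 / dta_mm**2 + ddose2 / dose_tol**2).min())
    return (gamma <= 1.0).mean()

ref = np.random.default_rng(2).random((40, 40))        # fake planned dose
evl = ref + 0.01 * np.random.default_rng(4).standard_normal(ref.shape)
print(f"pass rate: {gamma_pass_rate(ref, evl, spacing_mm=1.0):.3f}")
```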

Relevance: 20.00%

Abstract:

In recent years, wide-field sky surveys providing deep multi-band imaging have presented a new path for indirectly characterizing the progenitor populations of core-collapse supernovae (SNe): systematic light curve studies. We assemble a set of 76 grizy-band Type IIP SN light curves from Pan-STARRS1, obtained over a constant survey program of 4 years and classified using both spectroscopy and machine learning-based photometric techniques. We develop and apply a new Bayesian model for the full multi-band evolution of each light curve in the sample. We find no evidence of a sub-population of fast-declining explosions (historically referred to as "Type IIL" SNe). However, we identify a highly significant relation between the plateau phase decay rate and peak luminosity among our SNe IIP. These results argue in favor of a single parameter, likely determined by initial stellar mass, predominantly controlling the explosions of red supergiants. This relation could also be applied to supernova cosmology, offering a standardizable candle good to an intrinsic scatter of 0.2 mag. We compare each light curve to physical models from hydrodynamic simulations to estimate progenitor initial masses and other properties of the Pan-STARRS1 Type IIP SN sample. We show that correction of systematic discrepancies between modeled and observed SN IIP light curve properties, and an expanded grid of progenitor properties, are needed to enable robust progenitor inferences from multi-band light curve samples of this kind. This work will serve as a pathfinder for photometric studies of core-collapse SNe to be conducted through future wide-field transient searches.
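
The paper's full Bayesian multi-band model is not reproduced here; as a toy single-band illustration of measuring a plateau decline rate, the sketch below fits an invented linear-plateau-plus-drop magnitude model to synthetic photometry (all parameter values are made up for the example).

```python
# Toy Type IIP light curve fit: linear plateau followed by a smooth drop.
import numpy as np
from scipy.optimize import curve_fit

def iip_model(t, m0, slope, t_end, drop, tau):
    """Magnitude vs time: linear plateau, then a sigmoid drop near t_end."""
    return m0 + slope * t + drop / (1 + np.exp(-(t - t_end) / tau))

rng = np.random.default_rng(11)
t = np.sort(rng.uniform(0, 150, 60))                 # survey epochs (days)
truth = (17.0, 0.005, 100.0, 2.5, 5.0)               # invented parameters
mag = iip_model(t, *truth) + 0.05 * rng.standard_normal(t.size)

popt, _ = curve_fit(iip_model, t, mag, p0=(17.0, 0.0, 90.0, 2.0, 5.0))
print(f"fitted plateau decline rate: {popt[1] * 100:.2f} mag / 100 d")
```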

Relevance: 20.00%

Abstract:

Influence diagrams are intuitive and concise representations of structured decision problems. When the problem is non-Markovian, an optimal strategy can be exponentially large in the size of the diagram. We can avoid the inherent intractability by constraining the size of admissible strategies, giving rise to limited-memory influence diagrams. A valuable question is then how small strategies need to be to enable efficient optimal planning. Arguably, the smallest strategies one can conceive simply prescribe an action for each time step, without considering past decisions or observations. Previous work has shown that finding such optimal strategies even for polytree-shaped diagrams with ternary variables and a single value node is NP-hard, but the case of binary variables was left open. In this paper we address this case, by first noting that optimal strategies can be obtained in polynomial time for polytree-shaped diagrams with binary variables and a single value node. We then show that the same problem is NP-hard if the diagram has multiple value nodes. These two results close the fixed-parameter complexity analysis of optimal strategy selection in influence diagrams parametrized by the shape of the diagram, the number of value nodes and the maximum variable cardinality.
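
To make the strategy class concrete, the sketch below brute-forces the best fixed action-per-time-step plan in a tiny invented two-step decision problem; it only illustrates what such strategies are (one action per step, no memory of observations), not the paper's complexity results.

```python
# Enumerate all |A|^T memoryless plans of a two-step problem and pick the
# one with the highest exact expected utility.
from itertools import product

ACTIONS = (0, 1)
STATES = (0, 1)

# Invented problem: prior P(s1), transition P(s2 | s1, a1), additive utility.
p0 = {0: 0.6, 1: 0.4}
def p_next(s2, s1, a):
    return 0.8 if s2 == (s1 ^ a) else 0.2
def utility(s, a):
    return 1.0 if s == a else 0.0       # reward for matching the state

def expected_utility(plan):
    """Exact expected utility of a fixed (a1, a2) plan, by enumeration."""
    total = 0.0
    for s1 in STATES:
        for s2 in STATES:
            pr = p0[s1] * p_next(s2, s1, plan[0])
            total += pr * (utility(s1, plan[0]) + utility(s2, plan[1]))
    return total

best = max(product(ACTIONS, repeat=2), key=expected_utility)
print(best, expected_utility(best))
```

The enumeration is exponential in the horizon, which is exactly why the paper asks when this optimization admits polynomial-time algorithms.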