130 results for average complexity
Abstract:
The development of high-performance, low-computational-complexity detection algorithms is a key challenge for real-time Multiple-Input Multiple-Output (MIMO) communication system design. The Fixed-Complexity Sphere Decoder (FSD) algorithm is one of the most promising approaches, enabling quasi-ML decoding accuracy and high-performance implementation due to its deterministic, highly parallel structure. However, it suffers from exponential growth in computational complexity as the number of MIMO transmit antennas increases, critically limiting its scalability to larger MIMO system topologies. In this paper, we present a solution to this problem by applying a novel cutting protocol to the decoding tree of a real-valued FSD algorithm. The resulting Real-valued Fixed-Complexity Sphere Decoder (RFSD) algorithm achieves quasi-ML decoding performance similar to FSD, but with an average 70% reduction in computational complexity, as we demonstrate from both theoretical and implementation perspectives for Quadrature Amplitude Modulation (QAM)-MIMO systems.
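As an illustration of the kind of tree search involved, the sketch below implements a toy fixed-complexity detector on a real-valued MIMO model: full expansion of the candidate tree at the first detected layer and single best-child expansion below it. The expansion policy, alphabet, and dimensions are illustrative assumptions, not the paper's RFSD tree-cutting protocol.

```python
# Minimal fixed-complexity sphere decoder sketch (assumed policy: full
# expansion on the top layer, single expansion elsewhere -- not the RFSD).
import numpy as np

def fsd_detect(y, H, symbols):
    """Breadth-limited tree search after QR decomposition of H."""
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    n = H.shape[1]
    paths = []
    # Full expansion at the top layer (detected first).
    for s in symbols:
        x = np.zeros(n)
        x[n - 1] = s
        paths.append(((z[n - 1] - R[n - 1, n - 1] * s) ** 2, x))
    # Single (best-child) expansion for every remaining layer.
    for d in range(n - 2, -1, -1):
        for i, (metric, x) in enumerate(paths):
            resid = z[d] - R[d, d + 1:] @ x[d + 1:]   # cancel detected layers
            best = min(symbols, key=lambda s: (resid - R[d, d] * s) ** 2)
            x[d] = best
            paths[i] = (metric + (resid - R[d, d] * best) ** 2, x)
    return min(paths, key=lambda p: p[0])[1]          # lowest-metric leaf

# 4x4 real-valued model with a BPSK-like alphabet {-1, +1}.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))
x_true = rng.choice([-1.0, 1.0], size=4)
y = H @ x_true + 0.05 * rng.standard_normal(4)
print(fsd_detect(y, H, [-1.0, 1.0]), x_true)
```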
Abstract:
Orthogonal frequency division multiplexing (OFDM) requires an expensive linear amplifier at the transmitter due to its high peak-to-average power ratio (PAPR). Single carrier with cyclic prefix (SC-CP) is a closely related transmission scheme that possesses most of the benefits of OFDM but does not have the PAPR problem. Although SC-CP is very robust to frequency-selective fading in a multipath environment, it is sensitive to the time-selective fading characteristics of the wireless channel, which disturb the orthogonality of the channel matrix (CM) and increase the computational complexity of the receiver. In this paper, we propose a low-complexity iterative time-domain algorithm that compensates for the effects of channel time selectivity by exploiting the sparsity present in the channel convolution matrix. Simulation results show the superior performance of the proposed algorithm over the standard linear minimum mean-square error (L-MMSE) equalizer for SC-CP.
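For reference, the baseline the authors compare against, the L-MMSE frequency-domain equalizer for SC-CP, can be sketched in a few lines under a time-invariant channel (circulant channel matrix) assumption; the paper's iterative compensation for time selectivity is not reproduced here, and all names and parameters are illustrative.

```python
# L-MMSE frequency-domain equalization of one SC-CP block, assuming the
# channel is time-invariant so the CM is circulant (diagonal after FFT).
import numpy as np

def sc_cp_lmmse(y, h, noise_var, N):
    """Equalize one length-N SC-CP block given channel taps h."""
    Hf = np.fft.fft(h, N)                              # channel frequency response
    Wf = np.conj(Hf) / (np.abs(Hf) ** 2 + noise_var)   # per-bin MMSE weight
    return np.fft.ifft(Wf * np.fft.fft(y))             # back to time domain

rng = np.random.default_rng(1)
N, h, noise_var = 64, np.array([1.0, 0.5, 0.2]), 0.01  # 3-tap multipath channel
x = rng.choice([-1.0, 1.0], size=N)                    # BPSK block
y = np.fft.ifft(np.fft.fft(h, N) * np.fft.fft(x))      # circular convolution (CP removed)
y += np.sqrt(noise_var / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
x_hat = np.real(sc_cp_lmmse(y, h, noise_var, N))
print(np.mean(np.sign(x_hat) != x))                    # symbol error rate
```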
Abstract:
How best to predict the effects of perturbations to ecological communities has been a long-standing goal for both applied and basic ecology. This quest has recently been revived by new empirical data, new analysis methods, and increased computing speed, with the promise that ecologically important insights may be obtainable from a limited knowledge of community interactions. We use empirically based and simulated networks of varying size and connectance to assess two limitations to predicting perturbation responses in multispecies communities: (1) the inaccuracy with which species interaction strengths are empirically quantified, and (2) the indeterminacy of species responses due to indirect effects associated with network size and structure. We find that even modest levels of species richness and connectance (~25 pairwise interactions) impose high requirements for interaction strength estimates because system indeterminacy rapidly overwhelms predictive insights. Nevertheless, even poorly estimated interaction strengths provide greater average predictive certainty than an approach that uses only the sign of each interaction. Our simulations provide guidance in dealing with the trade-offs involved in maximizing the utility of network approaches for predicting dynamics in multispecies communities.
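The underlying calculation is the standard press-perturbation analysis: at equilibrium, the net (direct plus indirect) responses are given by the negative inverse of the community matrix. The sketch below, with an assumed random network and a simple multiplicative error model standing in for imperfectly estimated interaction strengths, illustrates how sign-direction predictions degrade with estimation error.

```python
# Press-perturbation responses from a community (Jacobian) matrix A: the
# net-effects matrix is -inv(A). The network and error model are assumptions.
import numpy as np

rng = np.random.default_rng(2)
S, connectance = 8, 0.4
A = rng.standard_normal((S, S)) * (rng.random((S, S)) < connectance)
np.fill_diagonal(A, -1.0)                  # self-regulation keeps A invertible

net_true = -np.linalg.inv(A)               # true net (direct + indirect) effects

# "Estimated" matrix: strengths corrupted by 50% multiplicative noise.
A_est = A * (1 + 0.5 * rng.standard_normal((S, S)))
np.fill_diagonal(A_est, -1.0)
net_est = -np.linalg.inv(A_est)

# Fraction of response *directions* still predicted correctly despite noise.
mask = net_true != 0
print(np.mean(np.sign(net_est[mask]) == np.sign(net_true[mask])))
```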
Abstract:
In this paper, a low-complexity system for spectral analysis of heart rate variability (HRV) is presented. The main idea of the proposed approach is the implementation of the Fast-Lomb periodogram, a ubiquitous tool in spectral analysis, using a wavelet-based Fast Fourier Transform. Interestingly, we show that the proposed approach enables the classification of processed data into more and less significant, based on their contribution to output quality. Based on such a classification, a percentage of the less significant data is pruned, leading to a significant reduction in algorithmic complexity with minimal quality degradation. Indeed, our results indicate that the proposed system can achieve up to a 45% reduction in the number of computations with only 4.9% average error in output quality compared to a conventional FFT-based HRV system.
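A minimal sketch of the Lomb periodogram step on unevenly sampled RR intervals is shown below, using SciPy's lombscargle; the paper's wavelet-based FFT implementation and significance-based pruning are not reproduced, and the synthetic data are an assumption.

```python
# Lomb periodogram of an unevenly sampled RR-interval series, the core
# spectral step in HRV analysis (the wavelet/pruning stages are omitted).
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)
# Synthetic RR intervals: ~0.8 s beats modulated by a 0.1 Hz (LF) oscillation.
n_beats = 300
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * np.cumsum(np.full(n_beats, 0.8)))
rr += 0.01 * rng.standard_normal(n_beats)
t = np.cumsum(rr)                       # beat times: inherently uneven sampling

freqs = np.linspace(0.01, 0.5, 500)     # HRV band of interest, Hz
pgram = lombscargle(t, rr - rr.mean(), 2 * np.pi * freqs)  # angular freqs
print(freqs[np.argmax(pgram)])          # should sit near the 0.1 Hz LF peak
```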
Abstract:
The motivation for this study was to reduce physics workload relating to patient-specific quality assurance (QA). VMAT plan delivery accuracy was determined from analysis of pre- and on-treatment trajectory log files and phantom-based ionization chamber array measurements. The correlation in this combination of measurements for patient-specific QA was investigated. The relationship between delivery errors and plan complexity was investigated as a potential method to further reduce patient-specific QA workload. Thirty VMAT plans from three treatment sites - prostate only, prostate and pelvic node (PPN), and head and neck (H&N) - were retrospectively analyzed in this work. The 2D fluence delivery reconstructed from pretreatment and on-treatment trajectory log files was compared with the planned fluence using gamma analysis. Pretreatment dose delivery verification was also carried out using gamma analysis of ionization chamber array measurements compared with calculated doses. Pearson correlations were used to explore any relationship between trajectory log file (pretreatment and on-treatment) and ionization chamber array gamma results (pretreatment). Plan complexity was assessed using the MU/arc and the modulation complexity score (MCS), with Pearson correlations used to examine any relationships between complexity metrics and plan delivery accuracy. Trajectory log files were also used to further explore the accuracy of MLC and gantry positions. Pretreatment 1%/1 mm gamma passing rates for trajectory log file analysis were 99.1% (98.7%-99.2%), 99.3% (99.1%-99.5%), and 98.4% (97.3%-98.8%) (median (IQR)) for prostate, PPN, and H&N, respectively, and were significantly correlated to on-treatment trajectory log file gamma results (R = 0.989, p < 0.001). Pretreatment ionization chamber array (2%/2 mm) gamma results were also significantly correlated with on-treatment trajectory log file gamma results (R = 0.623, p < 0.001). Furthermore, all gamma results displayed a significant correlation with MCS (R > 0.57, p < 0.001), but not with MU/arc. Average MLC position and gantry angle errors were 0.001 ± 0.002 mm and 0.025° ± 0.008° over all treatment sites and were not found to affect delivery accuracy. However, variability in MLC speed was found to be directly related to MLC position accuracy. The accuracy of VMAT plan delivery assessed using pretreatment trajectory log file fluence delivery and ionization chamber array measurements was strongly correlated with on-treatment trajectory log file fluence delivery. The strong correlation between trajectory log file and phantom-based gamma results demonstrates potential to reduce our current patient-specific QA. Additionally, insight into MLC and gantry position accuracy through trajectory log file analysis, and the strong correlation between gamma analysis results and the MCS, could also provide further methodologies to optimize both the VMAT planning and QA processes.
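For readers unfamiliar with the gamma metric used throughout, the following simplified sketch computes a 1D global gamma index between a planned and a delivered dose profile; clinical gamma analysis is 2D/3D and includes dose thresholds, but the core distance-plus-dose-difference minimization is the same. The profiles and criteria here are illustrative assumptions.

```python
# Simplified 1D global gamma index (e.g., a 2%/2 mm criterion): for each
# reference point, take the minimum combined dose/distance discrepancy.
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dd=0.02, dta=2.0):
    """dd = dose criterion (fraction of max dose), dta = distance in mm."""
    d_norm = dd * dose_ref.max()
    g = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        cap = np.sqrt(((x - xi) / dta) ** 2 + ((dose_eval - di) / d_norm) ** 2)
        g[i] = cap.min()                 # minimum capital-Gamma over all points
    return g

x = np.linspace(-30, 30, 121)                      # positions in mm
planned = np.exp(-x**2 / (2 * 15**2))              # toy Gaussian dose profile
delivered = np.exp(-(x - 0.5)**2 / (2 * 15**2))    # 0.5 mm delivery shift
g = gamma_1d(x, planned, delivered)
print(f"passing rate: {100 * np.mean(g <= 1):.1f}%")
```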
Abstract:
Multicarrier Index Keying (MCIK) is a recently developed technique that modulates not only subcarriers but also the indices of those subcarriers. In this paper, a novel low-complexity scheme for detecting subcarrier indices is proposed for an MCIK system, offering a substantial reduction in complexity over optimal maximum likelihood (ML) detection. For the performance evaluation, a closed-form expression for the pairwise error probability (PEP) of an active subcarrier index and a tight closed-form approximation of the average PEP of multiple subcarrier indices are derived. The theoretical outcomes are validated using simulations, with a difference of less than 0.1 dB. Compared to optimal ML detection, the proposed scheme achieves a substantial reduction in complexity with a small loss in error performance (<= 0.6 dB).
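One common low-complexity approach to index detection in MCIK-style systems, shown below as an assumption rather than the paper's exact scheme, is energy detection: declare active the K subcarriers with the largest received magnitudes, avoiding the exhaustive search over index combinations that ML requires.

```python
# Energy-based detection of active subcarrier indices in a toy MCIK setup:
# pick the K strongest received subcarriers. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(4)
N, K, snr_db = 8, 2, 15
active = rng.choice(N, size=K, replace=False)      # true active indices
x = np.zeros(N, dtype=complex)
x[active] = 1.0                                    # unit-energy symbols
noise_var = 10 ** (-snr_db / 10)
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
y = h * x + np.sqrt(noise_var / 2) * (rng.standard_normal(N)
                                      + 1j * rng.standard_normal(N))

detected = np.argsort(np.abs(y))[-K:]              # K strongest subcarriers
print(sorted(active), sorted(detected))
```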
Abstract:
This letter analyzes the performance of a low-complexity detection scheme for multi-carrier index keying (MCIK) with orthogonal frequency division multiplexing (OFDM) over two-wave with diffuse power (TWDP) fading channels. A closed-form expression for the average pairwise error probability (PEP) over TWDP fading channels is derived. This expression is used to analyze the performance of MCIK-OFDM in moderate, severe, and extreme fading conditions. The presented results provide insight into the performance of MCIK-OFDM for wireless communication systems that operate in enclosed metallic structures, such as in-vehicular device-to-device (D2D) wireless networks.
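TWDP fading itself is straightforward to sample: two specular components with independent random phases plus a diffuse (Rayleigh) part, parameterized by the usual K (specular-to-diffuse power ratio) and Delta (balance between the two specular waves). The sketch below, with arbitrary parameter values, generates such samples for Monte Carlo use alongside closed-form PEP expressions.

```python
# Monte Carlo TWDP fading samples: two random-phase specular waves plus a
# diffuse component, normalized to unit average power. Values are assumptions.
import numpy as np

def twdp_samples(n, K=10.0, delta=0.9, seed=5):
    rng = np.random.default_rng(seed)
    p_spec = K / (K + 1)                   # total specular power
    sigma2 = 1.0 / (2 * (K + 1))           # diffuse power per dimension
    # Solve V1, V2 from V1^2 + V2^2 = p_spec and delta = 2*V1*V2 / p_spec.
    s = np.sqrt(1 - delta ** 2)
    v1 = np.sqrt(p_spec * (1 + s) / 2)
    v2 = np.sqrt(p_spec * (1 - s) / 2)
    ph1, ph2 = rng.uniform(0, 2 * np.pi, (2, n))
    diffuse = np.sqrt(sigma2) * (rng.standard_normal(n)
                                 + 1j * rng.standard_normal(n))
    return v1 * np.exp(1j * ph1) + v2 * np.exp(1j * ph2) + diffuse

h = twdp_samples(100_000)
print(np.mean(np.abs(h) ** 2))             # average power should be close to 1
```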
Abstract:
Potential explanatory variables often co-vary in studies of species richness. Where topography varies within a survey, it is difficult to separate area and habitat-diversity effects: topographically complex surfaces may contain more species due to increased habitat diversity or as a result of increased area per se. Fractal geometry can be used to adjust species richness estimates to control for increases in area on complex surfaces. Application of fractal techniques to a survey of rocky shores demonstrated an unambiguous area-independent effect of topography on species richness in the Isle of Man. In contrast, variation in species richness in south-west England reflected surface availability alone. Multivariate tests and variation in limpet abundances also demonstrated regional variation in the area-independent effects of topography. Community composition did not vary with increasing surface complexity in south-west England. These results suggest large-scale gradients in the effects of heterogeneity on community processes or demography.
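A crude version of the fractal measurement involved, estimating the box-counting dimension of a shore transect profile so that surface availability can be separated from habitat diversity, is sketched below; the synthetic profile and the chosen scales are assumptions for demonstration only.

```python
# Box-counting dimension of a rough 1D height profile: count occupied boxes
# at several scales and fit the slope of log N(eps) vs log(1/eps).
import numpy as np

def box_count_dimension(z, scales):
    counts = []
    x = np.linspace(0, 1, z.size)
    for eps in scales:
        occupied = {(int(xi / eps), int(zi / eps)) for xi, zi in zip(x, z)}
        counts.append(len(occupied))
    slope, _ = np.polyfit(np.log(1 / np.asarray(scales)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(6)
# Rough synthetic profile: a random walk rescaled into [0, 1).
z = np.cumsum(rng.standard_normal(4096))
z = (z - z.min()) / (z.max() - z.min() + 1e-9) * 0.999
print(box_count_dimension(z, scales=[1/8, 1/16, 1/32, 1/64, 1/128]))
```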
Abstract:
Purpose - The aim of this paper is to explore the issues involved in developing and applying performance management approaches within a large UK public sector department, using a multiple stakeholder perspective and an accompanying theoretical framework.
Design/methodology/approach - An initial short questionnaire was used to determine perceptions about the implementation and effectiveness of the new performance management system across the organisation. In total, 700 questionnaires were distributed. Running concurrently with an ethnographic approach, and informed by the questionnaire responses, was a series of semi-structured interviews and focus groups.
Findings - Staff at all levels had an understanding of the new system and perceived it as being beneficial. However, there were concerns that the approach was not continuously managed throughout the year and was in danger of becoming an annual event rather than an ongoing process. Furthermore, the change process seemed to have advanced without corresponding changes to appraisal, reward and recognition systems. Thus, the business objectives were not aligned with motivating factors within the organisation.
Research limitations/implications - Additional research to test the validity and usefulness of the theoretical model discussed in this paper would be beneficial.
Practical implications - The strategic integration of the stakeholder performance measures and scorecards was found to be essential to producing an overall stakeholder-driven strategy within the case study organisation.
Originality/value - This paper discusses in detail the approach adopted and the progress made by one large UK public sector organisation as it attempts to develop better relationships with all of its stakeholders and hence improve its performance. It provides a concerted attempt to link theory with practice.
Abstract:
The work presented is concerned with the estimation of manufacturing cost at the concept design stage, when little technical information is readily available. The work focuses on the nose cowl sections of a wide range of engine nacelles built at Bombardier Aerospace Shorts of Belfast. A core methodology is presented that: defines the manufacturing cost elements that are prominent; utilises technical parameters that are highly influential in generating those costs; establishes the linkage between the two; and builds the associated cost estimating relations into models. The methodology is readily adapted to deal with both the early and more mature conceptual design phases, which highlights the generic, flexible and fundamental nature of the method. The early concept cost model treats cost as a cumulative element that can be estimated using higher-level complexity ratings, while the mature concept cost model breaks manufacturing cost down into a number of constituents that are each driven by their own specific drivers. Both methodologies have an average error of less than ten percent when correlated with actual findings, thus achieving an acceptable level of accuracy. By way of validity and application, the research is firmly based on industrial case studies and practice, and addresses the integration of design and manufacture through cost. The main contribution of the paper is the cost modelling methodology. The elemental modelling of the cost breakdown structure through materials, part fabrication, assembly and their associated drivers is relevant to the analytical design procedure, as it utilises design definition and complexity that is understood by engineers.
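The general shape of such a cost-estimating relationship (CER), elemental costs regressed on technical drivers, can be sketched as follows; the drivers, coefficients, and data are invented purely for illustration and are not the models developed in the paper.

```python
# Toy linear CER: fit cost against hypothetical technical drivers by least
# squares. All drivers, weights, and data below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical drivers per nose cowl: diameter (m), part count, and a
# 1-5 complexity rating of the kind used in the early concept model.
drivers = rng.uniform([1.5, 50, 1], [3.0, 200, 5], size=(20, 3))
true_w = np.array([40.0, 0.8, 25.0])               # "true" cost weights
cost = drivers @ true_w + 30 + 5 * rng.standard_normal(20)

# Fit the CER: cost ~ w . drivers + intercept.
X = np.column_stack([drivers, np.ones(20)])
w, *_ = np.linalg.lstsq(X, cost, rcond=None)
pred = X @ w
print(f"mean relative error: {np.mean(np.abs(pred - cost) / cost):.1%}")
```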
Abstract:
We use a simple average-atom model (NIMP) to calculate the distribution of ionization in a photoionization-dominated plasma, for comparison with recent experimental measurements undertaken on the Z-machine at Sandia National Laboratories. The agreement between theory and experiment is found to be as good for calculations with an average-atom model as for those generated by more detailed models.