986 results for sample complexity
Abstract:
We probe the systematic uncertainties from the 113 Type Ia supernovae (SN Ia) in the Pan-STARRS1 (PS1) sample along with 197 SN Ia from a combination of low-redshift surveys. The companion paper by Rest et al. describes the photometric measurements and cosmological inferences from the PS1 sample. The largest systematic uncertainty stems from the photometric calibration of the PS1 and low-z samples. We increase the number of observed Calspec standards used to define the PS1 calibration system from 7 to 10. The PS1 and SDSS-II calibration systems are compared and discrepancies of up to ∼0.02 mag are recovered. We find that uncertainties in the proper treatment of intrinsic colors and reddening produce differences in the recovered value of w of up to 3%. We estimate masses of host galaxies of PS1 supernovae and detect an insignificant difference of 0.037 ± 0.031 mag in the distance residuals of the full sample between host galaxies with high and low masses. Assuming flatness and including systematic uncertainties, our analysis of the SN measurements alone gives w = -1.120 +0.360/-0.206 (stat) +0.269/-0.291 (sys). With additional constraints from baryon acoustic oscillation (BAO), cosmic microwave background (CMB; Planck) and H0 measurements, we find w = -1.166 +0.072/-0.069 and Ωm = 0.280 +0.013/-0.012 (statistical and systematic errors added in quadrature). The significance of the inconsistency with w = -1 depends on whether we use Planck or Wilkinson Microwave Anisotropy Probe (WMAP) measurements of the CMB: w(BAO+H0+SN+WMAP) = -1.124 +0.083/-0.065.
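The combined uncertainties quoted above add statistical and systematic errors in quadrature; for reference, that standard combination is simply (the paper's individual error components are not reconstructed here):

```latex
\sigma_{\mathrm{tot}} = \sqrt{\sigma_{\mathrm{stat}}^{2} + \sigma_{\mathrm{sys}}^{2}}
```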
Abstract:
Quantile normalization (QN) is a technique for microarray data processing and is the default normalization method in the Robust Multi-array Average (RMA) procedure, which was primarily designed for analysing gene expression data from Affymetrix arrays. Given the abundance of Affymetrix microarrays and the popularity of the RMA method, it is crucially important that the normalization procedure is applied appropriately. In this study we carried out simulation experiments and also analysed real microarray data to investigate the suitability of RMA when it is applied to datasets with different groups of biological samples. From our experiments, we showed that RMA with QN does not preserve the biological signal included in each group, but rather mixes the signals between the groups. We also showed that the Median Polish method in the summarization step of RMA has a similar mixing effect. RMA is one of the most widely used methods in microarray data processing and has been applied to a vast volume of data in biomedical research. The problematic behaviour of this method suggests that previous studies employing RMA could have been misled or adversely affected. Therefore we think it is crucially important that the research community recognizes the issue and starts to address it. The two core elements of the RMA method, quantile normalization and Median Polish, both have the undesirable effect of mixing biological signals between different sample groups, which can be detrimental to drawing valid biological conclusions and to any subsequent analyses. Based on the evidence presented here and in the literature, we recommend exercising caution when using RMA as a method of processing microarray gene expression data, particularly in situations where there are likely to be unknown subgroups of samples.
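For readers unfamiliar with the method discussed above, here is a minimal sketch of quantile normalization applied to a (probes × samples) matrix; it is a generic illustration written for this listing, not the RMA implementation, and the toy matrix `expr` is hypothetical.

```python
import numpy as np

def quantile_normalize(expr):
    """Quantile-normalize a (probes x samples) matrix.

    Every column is forced onto the same empirical distribution:
    the mean of the sorted columns. This is the step that can mix
    group-specific signal when sample groups differ.
    """
    order = np.argsort(expr, axis=0)             # per-sample ranking of probes
    sorted_vals = np.sort(expr, axis=0)
    mean_dist = sorted_vals.mean(axis=1)          # shared reference distribution
    normalized = np.empty_like(expr, dtype=float)
    for j in range(expr.shape[1]):
        normalized[order[:, j], j] = mean_dist    # map each rank back to the reference
    return normalized

# hypothetical toy data: 5 probes, 4 samples (two groups of two samples each)
expr = np.array([[5.0, 4.8, 9.1, 9.3],
                 [3.2, 3.1, 3.0, 3.3],
                 [7.5, 7.7, 7.4, 7.6],
                 [2.1, 2.0, 2.2, 1.9],
                 [6.0, 6.2, 6.1, 5.9]])
print(quantile_normalize(expr))
```

Running this on the toy matrix illustrates the issue the abstract raises: probe 2, which is essentially constant across all samples, acquires an apparent between-group difference after normalization, while the genuine group difference in probe 0 is attenuated.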
Abstract:
Multicarrier Index Keying (MCIK) is a recently developed technique that modulates not only the subcarriers but also the indices of the subcarriers. In this paper, a novel low-complexity detection scheme for subcarrier indices is proposed for an MCIK system, offering a substantial reduction in complexity over optimal maximum likelihood (ML) detection. For the performance evaluation, a closed-form expression for the pairwise error probability (PEP) of an active subcarrier index and a tight closed-form approximation of the average PEP of multiple subcarrier indices are derived. The theoretical results are validated using simulations, with a difference of less than 0.1 dB. Compared to optimal ML detection, the proposed detector achieves a substantial reduction in complexity with a small loss in error performance (≤ 0.6 dB).
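The proposed detector itself is not reproduced here; as a hedged illustration of the class of low-complexity index detectors the abstract refers to, the sketch below simulates a simple energy-based active-index detector over Rayleigh fading and estimates the index error rate (the group size N, number of active subcarriers K, and SNR are assumed values for illustration only).

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 4, 2            # subcarriers per group, active subcarriers (assumed)
snr_db = 15.0
noise_var = 10 ** (-snr_db / 10)
trials = 20000
errors = 0

for _ in range(trials):
    active = rng.choice(N, size=K, replace=False)    # transmitted index set
    symbols = (rng.choice([1, -1], K) + 1j * rng.choice([1, -1], K)) / np.sqrt(2)  # QPSK
    x = np.zeros(N, dtype=complex)
    x[active] = symbols
    h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # Rayleigh
    n = np.sqrt(noise_var / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    y = h * x + n
    # low-complexity detection: pick the K subcarriers with the largest received energy
    detected = np.argsort(np.abs(y) ** 2)[-K:]
    if set(detected) != set(active):
        errors += 1

print(f"index error rate ≈ {errors / trials:.4f}")
```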
Abstract:
Most models of riverine eco-hydrology and biogeochemistry rely upon bulk parameterization of fluxes. However, the transport and retention of carbon and nutrients in headwater streams are strongly influenced by biofilms (surface-attached microbial communities), resulting in strong feedbacks between stream hydrodynamics and biogeochemistry. Mechanistic understanding of the interactions between streambed biofilms and nutrient dynamics is lacking. Here we present experimental results linking microscale observations of biofilm community structure to the deposition and resuspension of clay-sized mineral particles in streams. Biofilms were grown in identical 3 m recirculating flumes over periods of 14-50 days. Fluorescent particles were introduced to each flume, and their deposition was traced over 30 minutes. Particle resuspension from the biofilms was then observed under an increased stream flow, mimicking a flood event. We quantified particle fluxes using flow cytometry and epifluorescence microscopy. We directly observed particle adhesion to the biofilm using a confocal laser scanning microscope. 3-D Optical Coherence Tomography was used to determine biofilm roughness, areal coverage and void space in each flume. These measurements allow us to link biofilm complexity to particle retention during both baseflow and floodflow. The results suggest that increased biofilm complexity favors deposition and retention of fine particles in streams.
Abstract:
Prescribing tasks, which involve pharmacological knowledge, clinical decision-making and practical skill, take place within unpredictable social environments and involve interactions within and between endlessly changing health care teams. Despite this, curriculum designers commonly assume them to be simple to learn and perform. This research used mixed methods to explore how undergraduate medical students learn to prescribe in the 'real world'. It was informed by cognitive psychology, sociocultural theory, and systems thinking. We found that learning to prescribe occurs as a dynamic series of socially negotiated interactions within and between individuals, communities and environments. In addition to a thematic analysis, we developed a framework of three conceptual spaces in which learning opportunities for prescribing occur. This illustrates a complex systems view of prescribing education and defines three major system components: the "social space", where environmental conditions influence or bring about a learning experience; the "process space", describing what happens during the learning experience; and the intra-personal "cognitive space", where the learner may develop aspects of prescribing expertise. This conceptualisation broadens the scope of inquiry of prescribing education research by highlighting the complex interplay between individual and social dimensions of learning. This perspective is also likely to be relevant to students' learning of other clinical competencies.
Abstract:
OBJECTIVE: To demonstrate the benefit of complexity metrics such as the modulation complexity score (MCS) and monitor units (MUs) in multi-institutional audits of volumetric-modulated arc therapy (VMAT) delivery.
METHODS: 39 VMAT treatment plans were analysed using MCS and MU. A virtual phantom planning exercise was planned and independently measured using the PTW Octavius® phantom and seven29® 2D array (PTW-Freiburg GmbH, Freiburg, Germany). MCS and MU were compared with the median gamma index pass rates (2%/2 mm and 3%/3 mm) and with plan quality. The treatment planning systems (TPS) were grouped according to whether the VMAT modelling was specifically designed for the linear accelerator manufacturer's own treatment delivery system (Type 1) or was vendor-independent for VMAT delivery (Type 2). Differences in plan complexity (MCS and MU) between TPS types were compared.
RESULTS: For Varian® linear accelerators (Varian Medical Systems, Inc., Palo Alto, CA), MCS and MU were significantly correlated with gamma pass rates. Type 2 TPS created poorer quality, more complex plans with significantly higher MUs and MCS than Type 1 TPS. Plan quality was significantly correlated with MU for Type 2 plans. A statistically significant correlation was observed between MU and MCS for all plans (R = -0.84, p < 0.01).
CONCLUSION: MU and MCS have a role in assessing plan complexity in audits, along with plan quality metrics. Plan complexity metrics give some indication of plan deliverability but should be analysed alongside plan quality.
ADVANCES IN KNOWLEDGE: Complexity metrics were investigated for a national rotational audit involving 34 institutions, and they showed value. The metrics showed that more complex plans were created by planning systems that were vendor-independent for VMAT delivery.
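For context on the gamma pass rates quoted in the METHODS section (2%/2 mm and 3%/3 mm), the following is a simplified 1-D global gamma index sketch; it illustrates the standard metric only, is not the software used in the audit, and the dose profiles are hypothetical.

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, positions, dose_tol, dist_tol):
    """Simplified 1-D global gamma analysis.

    dose_tol: dose-difference criterion as a fraction of the reference maximum
              (e.g. 0.03 for 3%); dist_tol: distance-to-agreement in mm.
    Returns the fraction of evaluated points with gamma <= 1.
    """
    dose_norm = dose_tol * ref_dose.max()
    gammas = []
    for xe, de in zip(positions, eval_dose):
        dd = (de - ref_dose) / dose_norm              # normalised dose difference
        dx = (xe - positions) / dist_tol              # normalised distance
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return np.mean(np.array(gammas) <= 1.0)

# hypothetical measured vs planned dose profiles on a 1 mm grid
x = np.arange(0.0, 50.0, 1.0)                        # positions in mm
planned = 2.0 * np.exp(-((x - 25.0) / 10.0) ** 2)    # Gy
measured = planned * 1.02 + 0.01                     # small systematic offset
print(f"3%/3mm pass rate: {gamma_pass_rate(planned, measured, x, 0.03, 3.0):.3f}")
print(f"2%/2mm pass rate: {gamma_pass_rate(planned, measured, x, 0.02, 2.0):.3f}")
```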
Abstract:
Pre-processing (PP) of the received symbol vector and channel matrices is an essential prerequisite for Sphere Decoder (SD)-based detection in Multiple-Input Multiple-Output (MIMO) wireless systems. PP is a highly complex operation, yet relative to the total SD workload it represents only a small fraction of the overall computational cost of detecting an OFDM MIMO frame in standards such as 802.11n. Despite this, real-time PP architectures are highly inefficient, dominating the resource cost of real-time SD architectures. This paper addresses this issue. By reorganising the ordering and QR decomposition sub-operations of PP, we describe a Field Programmable Gate Array (FPGA)-based PP architecture for the Fixed Complexity Sphere Decoder (FSD) applied to 4 × 4 802.11n MIMO which reduces resource cost by 50% compared with state-of-the-art solutions whilst maintaining real-time performance.
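The FPGA architecture itself is not reconstructed here; as a rough software sketch of what the ordering and QR decomposition sub-operations of PP produce, the code below performs a generic norm-based column ordering followed by QR decomposition on a hypothetical 4 × 4 channel (a simplified illustration, not the specific FSD ordering or the hardware reorganisation described in the abstract).

```python
import numpy as np

def preprocess(H, y):
    """Generic sphere-decoder preprocessing: column ordering + QR.

    Orders channel columns by increasing norm (a common heuristic; the FSD
    uses its own ordering rule), computes H[:, perm] = Q R, and rotates the
    received vector so detection can work on the upper-triangular system
    R s = Q^H y.
    """
    perm = np.argsort(np.linalg.norm(H, axis=0))   # weakest stream first (heuristic)
    Q, R = np.linalg.qr(H[:, perm])
    z = Q.conj().T @ y
    return perm, R, z

# hypothetical 4x4 MIMO channel and received vector
rng = np.random.default_rng(1)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
s = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]) / np.sqrt(2)   # QPSK symbols
y = H @ s
perm, R, z = preprocess(H, y)
print("column order:", perm)
print("R is upper triangular:", np.allclose(np.tril(R, -1), 0))
```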
Abstract:
This work proposes an extended version of the well-known tree-augmented naive Bayes (TAN) classifier, named ETAN, in which the structure learning step is performed without requiring features to be connected to the class. Based on a modification of Edmonds' algorithm, our structure learning procedure explores a superset of the structures considered by TAN, yet achieves global optimality of the learning score function in a very efficient way (quadratic in the number of features, the same complexity as learning TANs). We enhance our procedure with a new score function that only takes into account arcs that are relevant to predict the class, as well as an optimization over the equivalent sample size during learning. These ideas may be useful for structure learning of Bayesian networks in general. A range of experiments shows that we obtain models with better prediction accuracy than naive Bayes and TAN, and with accuracy comparable to that of the state-of-the-art averaged one-dependence estimator (AODE) classifier. We release our implementation of ETAN so that it can be easily installed and run within Weka.
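ETAN itself is not reconstructed here; as a hedged sketch of the classical TAN structure-learning step that it extends (a Chow-Liu style maximum-weight spanning tree over empirical conditional mutual information, with every feature additionally receiving the class as a parent), the code below illustrates the idea on a small hypothetical dataset.

```python
import numpy as np
from itertools import combinations

def cond_mutual_info(xi, xj, c):
    """Empirical conditional mutual information I(Xi; Xj | C) in nats."""
    cmi = 0.0
    for cv in np.unique(c):
        mask = c == cv
        pc = mask.mean()
        xi_c, xj_c = xi[mask], xj[mask]
        for a in np.unique(xi_c):
            for b in np.unique(xj_c):
                p_ab = np.mean((xi_c == a) & (xj_c == b))
                p_a = np.mean(xi_c == a)
                p_b = np.mean(xj_c == b)
                if p_ab > 0:
                    cmi += pc * p_ab * np.log(p_ab / (p_a * p_b))
    return cmi

def tan_structure(X, y):
    """Return feature-to-feature arcs of a TAN (Prim-style maximum spanning tree)."""
    d = X.shape[1]
    w = np.zeros((d, d))
    for i, j in combinations(range(d), 2):
        w[i, j] = w[j, i] = cond_mutual_info(X[:, i], X[:, j], y)
    in_tree, arcs = {0}, []
    while len(in_tree) < d:
        i, j = max(((i, j) for i in in_tree for j in range(d) if j not in in_tree),
                   key=lambda e: w[e])
        arcs.append((i, j))        # orient away from the root (feature 0)
        in_tree.add(j)
    return arcs                    # every feature additionally has the class as a parent

# hypothetical binary dataset: 3 features, class y; x1 is strongly coupled to x0
rng = np.random.default_rng(2)
y = rng.integers(0, 2, 500)
x0 = (y + rng.integers(0, 2, 500)) % 2
x1 = (x0 + (rng.random(500) < 0.1)) % 2
x2 = rng.integers(0, 2, 500)
X = np.column_stack([x0, x1, x2])
print("feature tree arcs:", tan_structure(X, y))
```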
Abstract:
We study the computational complexity of finding maximum a posteriori configurations in Bayesian networks whose probabilities are specified by logical formulas. This approach leads to a fine-grained study in which local information such as context-sensitive independence and determinism can be considered. It also allows us to characterize more precisely the jump from tractability to NP-hardness and beyond, and to consider the complexity introduced by evidence alone.
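As a hedged illustration of what "probabilities specified by logical formulas" can mean (a hypothetical example constructed for this listing, not one taken from the paper), a conditional distribution can be written as a rule whose structure makes determinism and context-sensitive independence explicit:

```python
def p_alarm(burglary: bool, earthquake: bool, sensor_ok: bool) -> float:
    """P(Alarm = true | parents), specified by logical rules rather than a full table.

    If the sensor is broken the alarm never fires (determinism), and in that
    context Alarm is independent of Burglary and Earthquake
    (context-sensitive independence).
    """
    if not sensor_ok:
        return 0.0
    if burglary or earthquake:
        return 0.95
    return 0.01
```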
Abstract:
To value something, you first have to know what it is. Bartkowski et al. (2015) reveal a critical weakness: that biodiversity has rarely, if ever, been defined in economic valuations of putative biodiversity. Here we argue that a precise definition is available and could help focus valuation studies, but that in using this scientific definition (a three-dimensional measure of total difference), valuation by stated-preference methods becomes, at best, very difficult. We reclassify the valuation studies reviewed by Bartkowski et al. (2015) to better reflect the biological definition of biodiversity and its potential indirect use value as the support for provisioning and regulating services. Our analysis shows that almost all of the studies reviewed by Bartkowski et al. (2015) were not about biodiversity, but rather about the 'vague notion' of naturalness, or sometimes about a specific biological component of diversity. Alternative economic methods should be found to value biodiversity as it is defined in natural science. We suggest options based on a production function analogy or on cost-based methods. The first of these, in particular, provides a strong link between economic theory and ecological research and is empirically practical. Since applied science emphasizes a scientific definition of biodiversity in the design and justification of conservation plans, the need for economic valuation of this quantitative meaning of biodiversity is considerable and as yet unfulfilled.
Abstract:
Coastal and estuarine landforms provide a physical template that not only accommodates diverse ecosystem functions and human activities, but also mediates flood and erosion risks that are expected to increase with climate change. In this paper, we explore some of the issues associated with the conceptualisation and modelling of coastal morphological change at time and space scales relevant to managers and policy makers. Firstly, we revisit the question of how to define the most appropriate scales at which to seek quantitative predictions of landform change within an age defined by human interference with natural sediment systems and by the prospect of significant changes in climate and ocean forcing. Secondly, we consider the theoretical bases and conceptual frameworks for determining which processes are most important at a given scale of interest and the related problem of how to translate this understanding into models that are computationally feasible, retain a sound physical basis and demonstrate useful predictive skill. In particular, we explore the limitations of a primary scale approach and the extent to which these can be resolved with reference to the concept of the coastal tract and application of systems theory. Thirdly, we consider the importance of different styles of landform change and the need to resolve not only incremental evolution of morphology but also changes in the qualitative dynamics of a system and/or its gross morphological configuration. The extreme complexity and spatially distributed nature of landform systems means that quantitative prediction of future changes must necessarily be approached through mechanistic modelling of some form or another. Geomorphology has increasingly embraced so-called ‘reduced complexity’ models as a means of moving from an essentially reductionist focus on the mechanics of sediment transport towards a more synthesist view of landform evolution. However, there is little consensus on exactly what constitutes a reduced complexity model and the term itself is both misleading and, arguably, unhelpful. Accordingly, we synthesise a set of requirements for what might be termed ‘appropriate complexity modelling’ of quantitative coastal morphological change at scales commensurate with contemporary management and policy-making requirements: 1) The system being studied must be bounded with reference to the time and space scales at which behaviours of interest emerge and/or scientific or management problems arise; 2) model complexity and comprehensiveness must be appropriate to the problem at hand; 3) modellers should seek a priori insights into what kind of behaviours are likely to be evident at the scale of interest and the extent to which the behavioural validity of a model may be constrained by its underlying assumptions and its comprehensiveness; 4) informed by qualitative insights into likely dynamic behaviour, models should then be formulated with a view to resolving critical state changes; and 5) meso-scale modelling of coastal morphological change should reflect critically on the role of modelling and its relation to the observable world.
Abstract:
This letter analyzes the performance of a low-complexity detection scheme for a multi-carrier index keying (MCIK) with orthogonal frequency division multiplexing (OFDM) system over two-wave with diffuse power (TWDP) fading channels. A closed-form expression for the average pairwise error probability (PEP) over TWDP fading channels is derived. This expression is used to analyze the performance of MCIK-OFDM in moderate, severe and extreme fading conditions. The presented results provide insight into the performance of MCIK-OFDM for wireless communication systems that operate in enclosed metallic structures, such as in-vehicular device-to-device (D2D) wireless networks.
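The derived PEP expressions are not reproduced here; as a hedged illustration of the TWDP fading model referenced above (two specular waves plus a diffuse component, parameterised by K and Δ), the sketch below generates channel samples; the parameter values and normalisation convention are assumptions for illustration.

```python
import numpy as np

def twdp_samples(n, K, delta, rng):
    """Draw complex TWDP fading samples.

    K: ratio of total specular power to total diffuse power.
    delta: relative strength of the two specular waves (0..1),
           delta = 2*V1*V2 / (V1^2 + V2^2).
    Samples are normalised to unit average power.
    """
    sigma2 = 1.0 / (1.0 + K)                 # total diffuse power
    p_spec = K * sigma2                      # total specular power V1^2 + V2^2
    # recover V1, V2 from the total specular power and delta
    v1 = np.sqrt(p_spec * (1 + np.sqrt(1 - delta ** 2)) / 2)
    v2 = np.sqrt(p_spec * (1 - np.sqrt(1 - delta ** 2)) / 2)
    phi1 = rng.uniform(0, 2 * np.pi, n)
    phi2 = rng.uniform(0, 2 * np.pi, n)
    diffuse = np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return v1 * np.exp(1j * phi1) + v2 * np.exp(1j * phi2) + diffuse

rng = np.random.default_rng(3)
h = twdp_samples(100000, K=10.0, delta=0.9, rng=rng)   # assumed example parameters
print("mean power ≈", np.mean(np.abs(h) ** 2))
```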