Abstract:
The motivation for this study was to reduce physics workload relating to patient-specific quality assurance (QA). VMAT plan delivery accuracy was determined from analysis of pre- and on-treatment trajectory log files and phantom-based ionization chamber array measurements. The correlation between these two sets of measurements for patient-specific QA was investigated. The relationship between delivery errors and plan complexity was investigated as a potential method to further reduce patient-specific QA workload. Thirty VMAT plans from three treatment sites - prostate only, prostate and pelvic node (PPN), and head and neck (H&N) - were retrospectively analyzed in this work. The 2D fluence delivery reconstructed from pretreatment and on-treatment trajectory log files was compared with the planned fluence using gamma analysis. Pretreatment dose delivery verification was also carried out using gamma analysis of ionization chamber array measurements compared with calculated doses. Pearson correlations were used to explore any relationship between trajectory log file (pretreatment and on-treatment) and ionization chamber array gamma results (pretreatment). Plan complexity was assessed using the MU/arc and the modulation complexity score (MCS), with Pearson correlations used to examine any relationships between complexity metrics and plan delivery accuracy. Trajectory log files were also used to further explore the accuracy of MLC and gantry positions. Pretreatment 1%/1 mm gamma passing rates for trajectory log file analysis were 99.1% (98.7%-99.2%), 99.3% (99.1%-99.5%), and 98.4% (97.3%-98.8%) (median (IQR)) for prostate, PPN, and H&N, respectively, and were significantly correlated to on-treatment trajectory log file gamma results (R = 0.989, p < 0.001). Pretreatment ionization chamber array (2%/2 mm) gamma results were also significantly correlated with on-treatment trajectory log file gamma results (R = 0.623, p < 0.001).
Furthermore, all gamma results displayed a significant correlation with MCS (R > 0.57, p < 0.001), but not with MU/arc. Average MLC position and gantry angle errors were 0.001 ± 0.002 mm and 0.025° ± 0.008° over all treatment sites and were not found to affect delivery accuracy. However, variability in MLC speed was found to be directly related to MLC position accuracy. The accuracy of VMAT plan delivery assessed using pretreatment trajectory log file fluence delivery and ionization chamber array measurements was strongly correlated with on-treatment trajectory log file fluence delivery. The strong correlation between trajectory log file and phantom-based gamma results demonstrates potential to reduce our current patient-specific QA. Additionally, insight into MLC and gantry position accuracy through trajectory log file analysis and the strong correlation between gamma analysis results and the MCS could also provide further methodologies to optimize both the VMAT planning and QA processes.
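The gamma analysis used above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A minimal brute-force sketch of a global 2D gamma passing-rate calculation (not the clinical software used in the study; the grid spacing, criteria, and toy dose distribution are illustrative assumptions):

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm, dose_crit=0.01, dist_crit_mm=1.0):
    """Brute-force global 2D gamma analysis.

    ref, meas  : 2D dose arrays on the same grid.
    spacing_mm : grid spacing in mm.
    dose_crit  : dose-difference criterion as a fraction of the reference
                 maximum (0.01 -> 1%); dist_crit_mm is the DTA in mm.
    Returns the percentage of reference points with gamma <= 1.
    """
    norm = dose_crit * ref.max()
    ny, nx = ref.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    gamma = np.full(ref.shape, np.inf)
    # Limit the search to a few DTA radii around each reference point.
    search = int(np.ceil(3 * dist_crit_mm / spacing_mm))
    for i in range(ny):
        for j in range(nx):
            y0, y1 = max(0, i - search), min(ny, i + search + 1)
            x0, x1 = max(0, j - search), min(nx, j + search + 1)
            dist2 = ((yy[y0:y1, x0:x1] - i) ** 2
                     + (xx[y0:y1, x0:x1] - j) ** 2) * spacing_mm ** 2
            dose2 = (meas[y0:y1, x0:x1] - ref[i, j]) ** 2
            gamma[i, j] = np.sqrt(np.min(dist2 / dist_crit_mm ** 2
                                         + dose2 / norm ** 2))
    return 100.0 * np.mean(gamma <= 1.0)

# Sanity check: identical distributions pass everywhere.
dose = np.outer(np.hanning(20), np.hanning(20))
print(gamma_pass_rate(dose, dose, spacing_mm=1.0))  # 100.0
```

Production implementations interpolate the evaluated distribution rather than searching grid points, but the criterion combination is the same.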
Abstract:
Sparse representation based visual tracking approaches have attracted increasing interest in the community in recent years. The main idea is to linearly represent each target candidate using a set of target and trivial templates while imposing a sparsity constraint on the representation coefficients. After the coefficients are obtained using L1-norm minimization methods, the candidate with the lowest error, when reconstructed using only the target templates and the associated coefficients, is taken as the tracking result. Despite the promising performance widely reported, it is unclear whether the performance of these trackers can be maximised. In addition, the computational complexity caused by the dimensionality of the feature space limits these algorithms in real-time applications. In this paper, we propose a real-time visual tracking method based on structurally random projection and weighted least squares techniques. In particular, to enhance the discriminative capability of the tracker, we introduce background templates into the linear representation framework. To handle appearance variations over time, we relax the sparsity constraint using a weighted least squares (WLS) method to obtain the representation coefficients. To further reduce the computational complexity, structurally random projection is used to reduce the dimensionality of the feature space while preserving the pairwise distances between data points. Experimental results show that the proposed approach outperforms several state-of-the-art tracking methods.
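The relaxed WLS representation described above has a closed-form solution, which is what makes it cheaper than L1 minimization. A minimal sketch, assuming uniform pixel weights, random toy templates, and an l2 penalty standing in for the relaxed sparsity term (all illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def wls_coefficients(y, templates, weights, lam=0.01):
    """Weighted least squares with an l2 penalty replacing the l1 term:
    minimize (y - T c)^T W (y - T c) + lam * ||c||^2, solved in closed form."""
    T = templates
    W = np.diag(weights)
    A = T.T @ W @ T + lam * np.eye(T.shape[1])
    b = T.T @ W @ y
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
d, n_t, n_b = 64, 8, 8
targets = rng.normal(size=(d, n_t))      # target templates
background = rng.normal(size=(d, n_b))   # background templates
T = np.hstack([targets, background])
w = np.ones(d)                           # uniform pixel weights for the sketch

# Two candidates: one generated from the target templates, one pure noise.
good = targets @ rng.normal(size=n_t)
bad = rng.normal(size=d) * 3.0

def target_error(y):
    c = wls_coefficients(y, T, w)
    recon = targets @ c[:n_t]            # reconstruct with the target part only
    return np.linalg.norm(y - recon)

print(target_error(good) < target_error(bad))  # True
```

The candidate scoring mirrors the abstract: the representation is computed over target plus background templates, but the reconstruction error is measured against the target templates alone.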
Abstract:
This paper presents experimental and numerical studies into the hydrodynamic loading of a bottom-hinged large buoyant flap held rigidly upright in waves. Possible applications and limitations of physical experiments, a linear potential analytical method, a linear potential numerical method, a weakly non-linear tool and RANS CFD simulations are discussed. Different domains of applicability of these research techniques are highlighted considering the validity of underlying assumptions, complexity of application and feasibility in terms of resources like time and computing power needed to obtain results. Conclusions are drawn regarding the future extension of the numerical methods to the case of a moving flap.
Abstract:
Credal nets are probabilistic graphical models which extend Bayesian nets to cope with sets of distributions. An algorithm for approximate credal network updating is presented. The problem in its general formulation is a multilinear optimization task, which can be linearized by an appropriate rule for fixing all the local models apart from those of a single variable. This simple idea can be iterated and quickly leads to accurate inferences. A transformation is also derived to reduce decision making in credal networks based on the maximality criterion to updating. The decision task is proved to have the same complexity as standard inference, being NP^PP-complete for general credal nets and NP-complete for polytrees. Similar results are derived for the E-admissibility criterion. Numerical experiments confirm the good performance of the method.
Abstract:
Semi-qualitative probabilistic networks (SQPNs) merge two important graphical model formalisms: Bayesian networks and qualitative probabilistic networks. They provide a very general modeling framework by allowing the combination of numeric and qualitative assessments over a discrete domain, and can be compactly encoded by exploiting the same factorization of the joint probability distribution that underlies Bayesian networks. This paper explores the computational complexity of semi-qualitative probabilistic networks, taking polytree-shaped networks as its main target. We show that the inference problem is coNP-complete for binary polytrees with multiple observed nodes. We also show that inferences can be performed in linear time if there is a single observed node, which is a relevant practical case. Because our proof is constructive, we obtain an efficient linear-time algorithm for SQPNs under such assumptions. To the best of our knowledge, this is the first exact polynomial-time algorithm for SQPNs. Together these results provide a clear picture of the inferential complexity in polytree-shaped SQPNs.
Abstract:
To estimate the prevalence of refractive error in adults across Europe. Refractive data (mean spherical equivalent) collected between 1990 and 2013 from fifteen population-based cohort and cross-sectional studies of the European Eye Epidemiology (E3) Consortium were combined in a random effects meta-analysis stratified by 5-year age intervals and gender. Participants were excluded if they were identified as having had cataract surgery, retinal detachment, refractive surgery or other factors that might influence refraction. Estimates of refractive error prevalence were obtained using the following classifications: myopia ≤−0.75 diopters (D), high myopia ≤−6D, hyperopia ≥1D and astigmatism ≥1D. Meta-analysis of refractive error was performed for 61,946 individuals from fifteen studies with median age ranging from 44 to 81 and minimal ethnic variation (98% European ancestry). The age-standardised prevalences (using the 2010 European Standard Population, limited to those ≥25 and <90 years old) were: myopia 30.6% [95% confidence interval (CI) 30.4–30.9], high myopia 2.7% (95% CI 2.69–2.73), hyperopia 25.2% (95% CI 25.0–25.4) and astigmatism 23.9% (95% CI 23.7–24.1). Age-specific estimates revealed a high prevalence of myopia in younger participants [47.2% (CI 41.8–52.5) in 25–29 year-olds]. Refractive error affects just over half of European adults. The greatest burden of refractive error is due to myopia, with high prevalence rates in young adults. Using the 2010 European population estimates, we estimate there are 227.2 million people with myopia across Europe.
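Age standardisation as used above is simply a weighted average of age-specific prevalences using standard-population weights, which makes estimates comparable across cohorts with different age structures. A toy sketch with made-up age bands, prevalences, and weights (the real analysis uses 5-year bands from the 2010 European Standard Population):

```python
# Hypothetical age-specific myopia prevalences and standard-population
# weights; all numbers below are illustrative, not the E3 estimates.
age_bands = ["25-44", "45-64", "65-89"]
prevalence = [0.45, 0.28, 0.18]   # proportion myopic in each band (made up)
std_pop = [0.40, 0.35, 0.25]      # standard-population weights, sum to 1

# Age-standardised prevalence = sum of band prevalences times band weights.
standardised = sum(p * w for p, w in zip(prevalence, std_pop))
print(round(100 * standardised, 1))  # 32.3
```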
Abstract:
We propose a mixed cost-function adaptive initialization algorithm for the time domain equalizer in a discrete multitone (DMT)-based asymmetric digital subscriber line. Using our approach, a higher convergence rate than that of the commonly used least-mean square algorithm is obtained, whilst attaining bit rates close to the optimum maximum shortening SNR and the upper bound SNR. Furthermore, our proposed method outperforms the minimum mean-squared error design for a range of time domain equalizer (TEQ) filter lengths. The improved performance outweighs the small increase in computational complexity required. A block variant of our proposed algorithm is also presented to overcome the increased latency imposed on the feedback path of the adaptive system.
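For reference, the least-mean-square (LMS) baseline that the proposed initialization is compared against is a plain stochastic-gradient adaptive filter. A toy system-identification sketch (the channel, step size, and filter length are illustrative assumptions, not the paper's TEQ design):

```python
import numpy as np

def lms_identify(x, d, n_taps, mu):
    """Plain LMS: adapt w so that the filter output tracks the desired d."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        xn = x[n - n_taps + 1:n + 1][::-1]  # most recent sample first
        e = d[n] - w @ xn                   # a-priori error
        w += mu * e * xn                    # stochastic-gradient update
    return w

rng = np.random.default_rng(1)
h = np.array([0.8, -0.4, 0.2])   # toy "unknown" channel to identify
x = rng.normal(size=5000)
d = np.convolve(x, h)[:len(x)]   # desired signal = channel output
w = lms_identify(x, d, n_taps=3, mu=0.01)
print(np.allclose(w, h, atol=0.05))  # True
```

The convergence rate of this update depends on the eigenvalue spread of the input autocorrelation, which is what motivates faster initialization schemes like the one proposed.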
Abstract:
As the complexity of computing systems grows, reliability and energy are two crucial challenges that call for holistic solutions. In this paper, we investigate the interplay among concurrency, power dissipation, energy consumption and voltage-frequency scaling for a key numerical kernel for the solution of sparse linear systems. Concretely, we leverage a task-parallel implementation of the Conjugate Gradient method, equipped with a state-of-the-art preconditioner embedded in the ILUPACK software, and target a low-power multicore processor from ARM. In addition, we perform a theoretical analysis of the impact of a technique like Near Threshold Voltage Computing (NTVC) from the points of view of increased hardware concurrency and error rate.
Abstract:
Multicarrier Index Keying (MCIK) is a recently developed technique that conveys information not only through modulated subcarriers but also through the indices of the subcarriers. In this paper, a novel low-complexity detection scheme for subcarrier indices is proposed for an MCIK system, achieving a substantial reduction in complexity over the optimal maximum likelihood (ML) detection. For the performance evaluation, a closed-form expression for the pairwise error probability (PEP) of an active subcarrier index and a tight approximation of the average PEP of multiple subcarrier indices are derived. The theoretical outcomes are validated using simulations, with a difference of less than 0.1 dB. Compared to the optimal ML, the proposed detection achieves a substantial reduction in complexity with a small loss in error performance (<= 0.6 dB).
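A common low-complexity alternative to ML index detection in MCIK-style systems is energy detection: select the K subcarriers with the largest received energy. A toy sketch under idealized assumptions (unit-energy symbols, perfect synchronization, AWGN; not necessarily the exact detector proposed in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, snr_db = 8, 2, 15            # N subcarriers, K active (toy setting)
noise_std = 10 ** (-snr_db / 20)   # noise amplitude for unit signal energy

trials, correct = 2000, 0
for _ in range(trials):
    active = rng.choice(N, size=K, replace=False)
    x = np.zeros(N, dtype=complex)
    x[active] = 1.0                # unit-energy symbols on active subcarriers
    noise = noise_std * (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
    y = x + noise
    # Low-complexity detection: take the K largest received energies,
    # avoiding the combinatorial search over index patterns that ML requires.
    detected = np.argsort(np.abs(y) ** 2)[-K:]
    correct += set(detected) == set(active)
print(correct / trials > 0.95)     # True
```

The complexity saving comes from replacing the search over all C(N, K) index patterns with a single sort of N energies.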
Abstract:
Over 1 million km² of seafloor experience permanent low-oxygen conditions within oxygen minimum zones (OMZs). OMZs are predicted to grow as a consequence of climate change, potentially affecting oceanic biogeochemical cycles. The Arabian Sea OMZ impinges upon the western Indian continental margin at bathyal depths (150 - 1500 m), producing a strong depth-dependent oxygen gradient at the sea floor. The influence of the OMZ upon the short-term processing of organic matter by sediment ecosystems was investigated using in situ stable isotope pulse-chase experiments. These deployed doses of 13C:15N-labeled organic matter onto the sediment surface at four stations from across the OMZ (water depth 540 - 1100 m; [O2] = 0.35 - 15 μM). To prevent experimentally induced anoxia, the mesocosms were not sealed. The 13C and 15N labels were traced into sediment, bacteria and fauna, and 13C into sediment porewater DIC and DOC. However, the DIC and DOC flux to the water column could not be measured, limiting our capacity to obtain a mass balance for C in each experimental mesocosm. Linear Inverse Modeling (LIM) provides a method to obtain a mass-balanced model of carbon flow that integrates stable-isotope tracer data with community biomass and biogeochemical flux data from a range of sources. Here we present an adaptation of the LIM methodology used to investigate how ecosystem structure influenced carbon flow across the Indian margin OMZ. We demonstrate how oxygen conditions affect food-web complexity, affecting the linkages between the bacteria, foraminifera and metazoan fauna, and their contributions to benthic respiration. The food-web models demonstrate how changes in ecosystem complexity are associated with oxygen availability across the OMZ and allow us to obtain a complete carbon budget for the stations where stable-isotope labelling experiments were conducted.
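At its core, LIM assembles the mass-balance and measurement constraints into a linear system and solves for the unknown flows. A minimal sketch of the equality-constrained part on a hypothetical three-compartment food web (real LIM additionally imposes inequality constraints such as non-negative flows, which require a dedicated solver):

```python
import numpy as np

# Toy food web with unknown carbon flows
# f = [DOC->bacteria, bacteria->fauna, bacteria->respiration, fauna->respiration].
# Mass balance (inputs = outputs per compartment) plus one measured constraint:
#   bacteria: f0 - f1 - f2 = 0
#   fauna:    f1 - f3      = 0
#   measured bacterial uptake (e.g. from 13C tracer data): f0 = 10
A = np.array([
    [1.0, -1.0, -1.0,  0.0],
    [0.0,  1.0,  0.0, -1.0],
    [1.0,  0.0,  0.0,  0.0],
])
b = np.array([0.0, 0.0, 10.0])

# Underdetermined (3 equations, 4 flows): lstsq returns the minimum-norm
# solution, a simple stand-in for LIM's constrained inversion.
f, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(A @ f, b))  # True
```

All compartment names and flux values here are illustrative, not data from the study; the point is only that tracer measurements enter as extra rows of the same linear system.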
Abstract:
Most models of riverine eco-hydrology and biogeochemistry rely upon bulk parameterization of fluxes. However, the transport and retention of carbon and nutrients in headwater streams is strongly influenced by biofilms (surface-attached microbial communities), which results in strong feedbacks between stream hydrodynamics and biogeochemistry. Mechanistic understanding of the interactions between streambed biofilms and nutrient dynamics is lacking. Here we present experimental results linking microscale observations of biofilm community structure to the deposition and resuspension of clay-sized mineral particles in streams. Biofilms were grown in identical 3 m recirculating flumes over periods of 14-50 days. Fluorescent particles were introduced to each flume, and their deposition was traced over 30 minutes. Particle resuspension from the biofilms was then observed under an increased stream flow, mimicking a flood event. We quantified particle fluxes using flow cytometry and epifluorescence microscopy. We directly observed particle adhesion to the biofilm using a confocal laser scanning microscope. 3-D Optical Coherence Tomography was used to determine biofilm roughness, areal coverage and void space in each flume. These measurements allow us to link biofilm complexity to particle retention during both baseflow and floodflow. The results suggest that increased biofilm complexity favors deposition and retention of fine particles in streams.
Abstract:
PURPOSE: To investigate the variations in induction and repair of DNA damage along the proton path, after a previous report on the increasing biological effectiveness along clinically modulated 60-MeV proton beams.
METHODS AND MATERIALS: Human skin fibroblast (AG01522) cells were irradiated along a monoenergetic and a modulated spread-out Bragg peak (SOBP) proton beam used for treating ocular melanoma at the Douglas Cyclotron, Clatterbridge Centre for Oncology, Wirral, Liverpool, United Kingdom. The DNA damage response was studied using the 53BP1 foci formation assay. The linear energy transfer (LET) dependence was studied by irradiating the cells at depths corresponding to entrance, proximal, middle, and distal positions of SOBP and the entrance and peak position for the pristine beam.
RESULTS: A significant amount of persistent foci was observed at the distal end of the SOBP, suggesting complex residual DNA double-strand break damage induction corresponding to the highest LET values achievable by modulated proton beams. Unlike the directly irradiated cells, medium-sharing bystander cells did not show any significant increase in residual foci.
CONCLUSIONS: The DNA damage response along the proton beam path was similar to the response to X-rays, confirming the low-LET quality of the proton exposure. However, at the distal end of the SOBP our data indicate an increased complexity of DNA lesions and slower repair kinetics. The lack of significant induction of 53BP1 foci in the bystander cells suggests a minor role of cell signaling for DNA damage under these conditions.
Abstract:
A new battery modelling method is presented based on the simulation error minimization criterion rather than the conventional prediction error criterion. A new integrated optimization method to optimize the model parameters is proposed. This new method is validated on a set of Li-ion battery test data, and the results confirm the advantages of the proposed method in terms of model generalization performance and long-term prediction accuracy.
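The distinction drawn above is between one-step-ahead prediction error, where each prediction restarts from the measured output, and simulation error, where the model free-runs on its own previous output. A toy first-order sketch of the two criteria (the model structure and all parameters are illustrative, not the paper's battery model):

```python
import numpy as np

def prediction_error(a, b, u, y):
    """One-step-ahead error: each prediction starts from the MEASURED output."""
    pred = a * y[:-1] + b * u[:-1]
    return np.mean((y[1:] - pred) ** 2)

def simulation_error(a, b, u, y):
    """Free-run error: the model propagates its OWN previous output."""
    sim = np.empty_like(y)
    sim[0] = y[0]
    for n in range(len(y) - 1):
        sim[n + 1] = a * sim[n] + b * u[n]
    return np.mean((y - sim) ** 2)

# Toy first-order system y[n+1] = 0.95 y[n] + 0.5 u[n], measured with noise.
rng = np.random.default_rng(3)
u = rng.normal(size=400)
y = np.zeros(401)
for n in range(400):
    y[n + 1] = 0.95 * y[n] + 0.5 * u[n]
y = y[:-1] + 0.05 * rng.normal(size=400)   # noisy measurements

# The simulation-error criterion heavily penalizes a wrong pole, because the
# free-run trajectory drifts away instead of being re-anchored each step.
print(simulation_error(0.95, 0.5, u, y) < simulation_error(0.80, 0.5, u, y))  # True
```

This re-anchoring is why models fit by prediction error can look accurate one step ahead yet drift in long-term simulation, the behaviour the simulation-error criterion targets.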
Abstract:
An algorithm for approximate credal network updating is presented. The problem in its general formulation is a multilinear optimization task, which can be linearized by an appropriate rule for fixing all the local models apart from those of a single variable. This simple idea can be iterated and quickly leads to very accurate inferences. The approach can also be specialized to classification with credal networks based on the maximality criterion. A complexity analysis for both the problem and the algorithm is reported together with numerical experiments, which confirm the good performance of the method. While the inner approximation produced by the algorithm gives rise to a classifier which might return a subset of the optimal class set, preliminary empirical results suggest that the accuracy of the optimal class set is seldom affected by the approximate probabilities.