974 results for weighted method
Abstract:
Incorporating further information into the ordered weighted averaging (OWA) operator weights is investigated in this paper. We first prove that, for a constant orness, the minimax disparity model [13] has a unique optimal solution, while the modified minimax disparity model [16] admits alternative optimal OWA weights. The multiple optimal solutions of the modified minimax disparity model give us the opportunity to define a parametric OWA aggregation, which offers decision makers flexibility in the process of aggregating and selecting the best alternative. Finally, the usefulness of the proposed parametric aggregation method is illustrated with an application to a metasearch engine. © 2011 Elsevier Inc. All rights reserved.
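The OWA aggregation and the orness measure that the models constrain can be sketched as follows; this is a generic illustration with made-up weights, not the optimal weights produced by either disparity model.

```python
def owa(values, weights):
    """Ordered weighted averaging: weights apply to the values sorted descending."""
    assert abs(sum(weights) - 1.0) < 1e-9, "OWA weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

def orness(weights):
    """Degree of 'or-ness' of an OWA weight vector, in [0, 1]."""
    n = len(weights)
    return sum((n - i) * w for i, w in enumerate(weights, start=1)) / (n - 1)

weights = [0.4, 0.3, 0.2, 0.1]       # illustrative, not model-derived
print(owa([3, 9, 1, 7], weights))    # aggregates the sorted values 9, 7, 3, 1
print(orness(weights))               # 2/3: leans toward the 'or' (max) end
```

With weights concentrated on the first position, orness approaches 1 (pure max); uniform weights give orness 0.5 (plain average).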
Abstract:
A significant change of scene in a gradually changing scene is detected with the aid of at least one camera means for capturing digital images of the scene. A current image of the scene is formed together with a present weighted reference image, which is formed from a plurality of previous images of the scene. Cell data is established based on the current image and the present weighted reference image. The cell data is statistically analysed so as to identify at least one difference corresponding to a significant change of scene. When identified, an indication of such significant change of scene is provided.
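The mechanism described can be sketched, under assumptions, as an exponential moving average reference image compared cell by cell against the current frame; the cell size, blending factor alpha and threshold below are illustrative choices, not values from the patent.

```python
def update_reference(reference, current, alpha=0.05):
    """Blend the current frame into the weighted reference image (moving average)."""
    return [[(1 - alpha) * r + alpha * c for r, c in zip(rrow, crow)]
            for rrow, crow in zip(reference, current)]

def cell_changes(reference, current, cell=2, threshold=20.0):
    """Return (row, col) of cells whose mean absolute difference is significant."""
    h, w = len(current), len(current[0])
    flagged = []
    for i in range(0, h, cell):
        for j in range(0, w, cell):
            diffs = [abs(current[y][x] - reference[y][x])
                     for y in range(i, min(i + cell, h))
                     for x in range(j, min(j + cell, w))]
            if sum(diffs) / len(diffs) > threshold:
                flagged.append((i // cell, j // cell))
    return flagged

ref = [[10.0] * 4 for _ in range(4)]
cur = [row[:] for row in ref]
cur[0][0] = cur[0][1] = cur[1][0] = cur[1][1] = 200.0  # a bright new object
print(cell_changes(ref, cur))  # only the top-left cell is flagged
```

A real detector would use a statistical test per cell rather than a fixed threshold; the structure (reference update, per-cell statistics, significance decision) follows the abstract.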
Abstract:
An iterative method for reconstruction of the solution to a parabolic initial boundary value problem of second order from Cauchy data is presented. The data are given on a part of the boundary. At each iteration step, a series of well-posed mixed boundary value problems are solved for the parabolic operator and its adjoint. The convergence proof of this method in a weighted L2-space is included.
Abstract:
An iterative method for the reconstruction of a stationary three-dimensional temperature field, from Cauchy data given on a part of the boundary, is presented. At each iteration step, a series of well-posed mixed boundary value problems is solved for the heat operator and its adjoint. A convergence proof of this method in a weighted L2-space is included.
Abstract:
This PhD thesis analyses networks of knowledge flows, focusing on the role of indirect ties in the knowledge transfer, knowledge accumulation and knowledge creation process. It extends and improves existing methods for mapping networks of knowledge flows in two different applications and contributes to two streams of research. To support the underlying idea of this thesis, which is to find an alternative method for ranking indirect network ties and thereby shed new light on the dynamics of knowledge transfer, we apply Ordered Weighted Averaging (OWA) to two different network contexts. Knowledge flows in patent citation networks and in a company supply chain network are analysed using Social Network Analysis (SNA) and the OWA operator. The OWA operator is used here for the first time (i) to rank indirect citations in patent networks, providing new insight into their role in transferring knowledge among network nodes, and to analyse a long chain of patent generations over 13 years; and (ii) to rank indirect relations in a company supply chain network, to shed light on the role of indirectly connected individuals in the knowledge transfer and creation processes and to contribute to the literature on knowledge management in supply chains. In doing so, indirect ties are measured and their role as a means of knowledge transfer is shown. Thus, this thesis represents a first attempt to bridge the OWA and SNA fields and to show that the two methods can be used together to enrich the understanding of the role of indirectly connected nodes in a network. More specifically, the OWA scores enrich our understanding of knowledge evolution over time within complex networks. Future research could demonstrate the usefulness of the OWA operator in other complex networks, such as on-line social networks consisting of thousands of nodes.
Abstract:
In this work a new pattern recognition method based on the unification of algebraic and statistical approaches is described. The core of the method is a voting procedure over statistically weighted regularities, which are linear separators in two-dimensional projections of the feature space. The report contains a brief description of the theoretical foundations of the method, a description of its software implementation, and the results of a series of experiments demonstrating its usefulness in practical tasks.
Abstract:
ACM Computing Classification System (1998): I.2.8, I.2.10, I.5.1, J.2.
Abstract:
Data fluctuation in multiple measurements of Laser Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on the Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance the analysis accuracy is to improve the quality and consistency of the emission signal, such as by averaging the spectral signals or spectrum standardization over a number of laser shots. The proposed method focuses more on how to enhance the robustness of the quantitative analysis regression model. The proposed RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation according to the statistical distribution of measured spectral data. Through the improved segmented weighting function, the information on the spectral data in the normal distribution will be retained in the regression model while the information on the outliers will be restrained or removed. Copper elemental concentration analysis experiments of 16 certified standard brass samples were carried out. The average value of relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness compared with the quantitative analysis methods based on Partial Least Squares (PLS) regression, standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better comprehensive performance in model robustness and convergence speed, compared with the four known weighting functions.
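As a rough illustration of segmented weighting (not the authors' improved function), a conventional WLS-SVM-style weight assigns full weight to small standardized residuals, decays linearly in a transition band, and nearly discards outliers; the thresholds c1 = 2.5 and c2 = 3.0 are commonly used defaults, assumed here.

```python
def robust_weight(residual, s, c1=2.5, c2=3.0, floor=1e-4):
    """Segmented weight for a residual standardized by a robust scale estimate s.

    Full weight for in-distribution residuals, a linear ramp in the
    transition band, and a near-zero floor for outliers.
    """
    r = abs(residual) / s
    if r <= c1:
        return 1.0          # data consistent with the normal distribution: keep
    if r <= c2:
        return (c2 - r) / (c2 - c1)  # transition band: downweight linearly
    return floor            # outlier: effectively removed from the regression

# Illustrative weights for residuals of 1, 2.75 and 4 robust standard deviations
print([robust_weight(r, 1.0) for r in (1.0, 2.75, 4.0)])
```

In a weighted LS-SVM these weights rescale each sample's error term before the model is re-solved, which is what makes the regression robust to spectral outliers.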
Abstract:
2000 Mathematics Subject Classification: 46B70, 41A25, 41A17, 26D10. ∗Part of the results were reported at the Conference “Pioneers of Bulgarian Mathematics”, Sofia, 2006.
Abstract:
The Analytic Hierarchy Process (AHP) is one of the most popular methods used in Multi-Attribute Decision Making. It provides ratio-scale measurements of the priorities of elements on the various levels of a hierarchy. These priorities are obtained through pairwise comparisons of elements on one level with reference to each element on the immediately higher level. The Eigenvector Method (EM) and distance-minimizing methods such as the Least Squares Method (LSM), the Logarithmic Least Squares Method (LLSM), the Weighted Least Squares Method (WLSM) and the Chi Squares Method (χ²M) are among the tools for computing the priorities of the alternatives. This paper studies a method for generating all the solutions of the LSM problem for 3 × 3 matrices. We observe non-uniqueness and rank reversals by presenting numerical results.
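The Eigenvector Method amounts to taking the principal right eigenvector of the pairwise comparison matrix; a minimal power-iteration sketch follows (the 3 × 3 comparison matrix is illustrative, not taken from the paper).

```python
def em_priorities(A, iters=100):
    """Principal eigenvector of a positive pairwise comparison matrix,
    normalized to sum to 1, via power iteration."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]  # renormalize each step
    return w

# Slightly inconsistent comparisons: alternative 1 moderately-to-strongly
# preferred over 2 and 3 (reciprocal entries below the diagonal).
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(em_priorities(A))  # priorities, largest for the first alternative
```

For a perfectly consistent matrix all of EM, LSM, LLSM and WLSM recover the same priority vector; the paper's interest is precisely the inconsistent case, where LSM can have multiple solutions.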
Abstract:
Purpose: This study has two goals. The first is to investigate the feasibility of using classic textural feature extraction for radiotherapy response assessment in a unique cohort of early stage breast cancer patients who received single-dose preoperative radiotherapy. The second is to investigate the clinical feasibility of using classic texture features as biomarkers, supplementary to the regional apparent diffusion coefficient, for radiotherapy response assessment in gynecological cancer.
Methods and Materials: For the breast cancer study, 15 patients with early stage breast cancer were enrolled in this retrospective study. Each patient received a single-fraction radiation treatment, and DWI and DCE-MRI scans were conducted before and after the radiotherapy. DWI scans were acquired using a spin-echo EPI sequence with diffusion weighting factors of b = 0 and b = 500 s/mm2, and the apparent diffusion coefficient (ADC) maps were calculated. DCE-MRI scans were acquired using a T1-weighted 3D SPGR sequence with a temporal resolution of about 1 minute. The contrast agent (CA) was intravenously injected at a dose of 0.1 mmol/kg bodyweight at 2 ml/s. Two parameters, the volume transfer constant (Ktrans) and kep, were analyzed using the two-compartment Tofts pharmacokinetic model. For the pharmacokinetic parametric maps and ADC maps, 33 textural features were generated from the clinical target volume (CTV) in a 3D fashion using the classic gray level co-occurrence matrix (GLCOM) and gray level run length matrix (GLRLM). A Wilcoxon signed-rank test was used to determine the significance of each texture feature’s change after the radiotherapy. The significance level was set to 0.05 with Bonferroni correction.
For the gynecological cancer study, 12 female patients with gynecologic cancer treated with fractionated external beam radiotherapy (EBRT) combined with high dose rate (HDR) intracavitary brachytherapy were studied. Each patient first received EBRT followed by five fractions of HDR treatment. Before EBRT and before each fraction of brachytherapy, diffusion weighted MRI (DWI-MRI) and CT scans were acquired. DWI scans were acquired in the sagittal plane utilizing a spin-echo echo-planar imaging sequence with weighting factors of b = 500 s/mm2 and b = 1000 s/mm2; a set of b = 0 s/mm2 images was also acquired. ADC maps were calculated using a linear least-squares fitting method. Distributed diffusion coefficient (DDC) maps and the stretching parameter α were also calculated. For the ADC and DDC maps, 33 classic texture features were generated utilizing the classic gray level run length matrix (GLRLM) and gray level co-occurrence matrix (GLCOM) from the high-risk clinical target volume (HR-CTV). A Wilcoxon signed-rank test was applied to determine the significance of each feature’s numerical change after radiotherapy. The significance level was set to 0.05 with multi-comparison correction where applicable.
Results: For the breast cancer study, regarding ADC maps calculated from DWI-MRI, 24 out of 33 CTV features changed significantly after the radiotherapy. For DCE-MRI pharmacokinetic parameters, all 33 CTV features of Ktrans and 33 features of kep changed significantly.
For the gynecological cancer study, regarding ADC maps, 28 out of 33 HR-CTV texture features showed significant changes after the EBRT treatment. 28 out of 33 HR-CTV texture features indicated significant changes after HDR treatments. The texture features that indicated significant changes after HDR treatments are the same as those after EBRT treatment. 28 out of 33 HR-CTV texture features showed significant changes after whole radiotherapy treatment process. The texture features that indicated significant changes for the whole treatment process are the same as those after HDR treatments.
Conclusion: Initial results indicate that certain classic texture features are sensitive to radiation-induced changes. Classic texture features with significant numerical changes can be used in monitoring radiotherapy effect. This might suggest that certain texture features might be used as biomarkers which are supplementary to ADC and DDC for assessment of radiotherapy response in breast cancer and gynecological cancer.
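The linear least-squares ADC fit mentioned in the methods can be sketched as fitting ln S(b) = ln S0 − b·ADC over the acquired b-values; the synthetic voxel below is illustrative, not patient data.

```python
import math

def adc_fit(b_values, signals):
    """Linear least-squares fit of ln(S) = ln(S0) - b*ADC; returns ADC."""
    xs, ys = b_values, [math.log(s) for s in signals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # ADC is the negative slope of the log-signal decay

# Synthetic voxel: S0 = 1000, true ADC = 1.0e-3 mm^2/s, b-values as in the text
b = [0, 500, 1000]
s = [1000 * math.exp(-bv * 1.0e-3) for bv in b]
print(adc_fit(b, s))  # recovers ~1.0e-3
```

Repeating this fit voxel by voxel yields the ADC map from which the texture features are then computed.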
Abstract:
Modelling of massive stars and supernovae (SNe) plays a crucial role in understanding galaxies. From this modelling we can derive fundamental constraints on stellar evolution, mass-loss processes, mixing, and the products of nucleosynthesis. Proper account must be taken of all important processes that populate and depopulate the levels (collisional excitation, de-excitation, ionization, recombination, photoionization, bound–bound processes). For the analysis of Type Ia SNe and core-collapse SNe (Types Ib, Ic and II), Fe-group elements are particularly important. Unfortunately, little data is currently available, and most noticeably absent are the photoionization cross-sections for the Fe peaks, which have high abundances in SNe. Important interactions for both photoionization and electron-impact excitation are calculated using the relativistic Dirac atomic R-matrix codes (DARC) for low-ionization stages of cobalt. All results are calculated up to photon energies of 45 eV and electron energies up to 20 eV. The wavefunction representation of Co III has been generated using GRASP0 by including the dominant 3d⁷, 3d⁶[4s, 4p], 3p⁴3d⁹ and 3p⁶3d⁹ configurations, resulting in 292 fine-structure levels. Electron-impact collision strengths and Maxwellian-averaged effective collision strengths across a wide range of astrophysically relevant temperatures are computed for Co III. In addition, statistically weighted level-resolved ground and metastable photoionization cross-sections are presented for Co II and compared directly with existing work.
Abstract:
Over recent decades, remote sensing has emerged as an effective tool for improving agriculture productivity. In particular, many works have dealt with the problem of identifying characteristics or phenomena of crops and orchards on different scales using remotely sensed images. Since natural processes are scale dependent and most of them are hierarchically structured, the determination of optimal study scales is mandatory for understanding these processes and their interactions. The concept of multi-scale/multi-resolution inherent to OBIA methodologies allows the scale problem to be dealt with, but for that, multi-scale and hierarchical segmentation algorithms are required. The question that remains unsolved is how to determine the suitable segmentation scale that allows different objects and phenomena to be characterized in a single image. In this work, an adaptation of the Simple Linear Iterative Clustering (SLIC) algorithm to perform a multi-scale hierarchical segmentation of satellite images is proposed. The selection of the optimal multi-scale segmentation for different regions of the image is carried out by evaluating the intra-variability and inter-heterogeneity of the regions obtained on each scale with respect to the parent regions defined by the coarsest scale. To achieve this goal, an objective function that combines weighted variance and the global Moran index has been used. Two different kinds of experiment have been carried out, generating the number of regions on each scale through linear and dyadic approaches. This methodology has allowed, on the one hand, the detection of objects on different scales and, on the other, their representation in a single image. Altogether, the procedure provides the user with a better comprehension of the land cover, the objects on it and the phenomena occurring.
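The two ingredients of such an objective function, area-weighted variance for intra-segment homogeneity and the global Moran index for inter-segment heterogeneity, can be sketched on toy segment data; how the abstract's method normalizes and combines the two terms is not specified there, so only the components are shown.

```python
def weighted_variance(segments):
    """Area-weighted mean of per-segment variance (intra-segment homogeneity).
    segments: list of lists of pixel values, one list per segment."""
    total = sum(len(s) for s in segments)
    def var(vals):
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals) / len(vals)
    return sum(len(s) * var(s) for s in segments) / total

def morans_i(means, adjacency):
    """Global Moran's I over segment mean values with a binary adjacency matrix.
    Values near -1 indicate contrasting neighbours (good separation);
    values near +1 indicate similar neighbours (over-segmentation)."""
    n = len(means)
    m = sum(means) / n
    w_sum = sum(sum(row) for row in adjacency)
    num = sum(adjacency[i][j] * (means[i] - m) * (means[j] - m)
              for i in range(n) for j in range(n))
    den = sum((x - m) ** 2 for x in means)
    return (n / w_sum) * (num / den)

segs = [[10, 12, 11], [50, 48, 52]]          # two homogeneous segments
print(weighted_variance(segs))                # low intra-segment variance
print(morans_i([11, 50], [[0, 1], [1, 0]]))  # -1.0: contrasting neighbours
```

A segmentation scoring well on both components has internally uniform segments whose neighbours differ, which is the property the multi-scale selection seeks.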
Abstract:
This report discusses analytic second-order bias-correction techniques for the maximum likelihood estimates (MLEs, for short) of the unknown parameters of distributions used in quality and reliability analysis. It is well known that MLEs are widely used to estimate the unknown parameters of probability distributions because of their various desirable properties; for example, MLEs are asymptotically unbiased, consistent, and asymptotically normal. However, many of these properties hold only for extremely large sample sizes, and some, such as unbiasedness, may not be valid for small or even moderate sample sizes, which are more common in real data applications. Therefore, bias-correction techniques for MLEs are desired in practice, especially when the sample size is small. Two commonly used techniques to reduce the bias of MLEs are the ‘preventive’ and ‘corrective’ approaches. Both can reduce the bias of the MLEs to order O(n⁻²), but the ‘preventive’ approach does not have an explicit closed-form expression. Consequently, we mainly focus on the ‘corrective’ approach in this report. To illustrate the importance of bias correction in practice, we apply the bias-corrected method to two popular lifetime distributions: the inverse Lindley distribution and the weighted Lindley distribution. Numerical studies based on the two distributions show that the considered bias-correction technique is highly recommended over commonly used estimators without bias correction. Therefore, special attention should be paid when estimating the unknown parameters of probability distributions when the sample size is small or moderate.
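The effect of a first-order ('corrective') bias correction can be demonstrated on a simpler case than the Lindley distributions: for the exponential rate λ, the MLE 1/x̄ has first-order bias λ/n, and subtracting the estimated bias term removes it. A Monte Carlo sketch (the exponential example is ours, not the report's):

```python
import random

def mle_rate(sample):
    """MLE of the exponential rate: 1 / sample mean."""
    return len(sample) / sum(sample)

def corrected_rate(sample):
    """Corrective approach: subtract the estimated first-order bias term
    lambda_hat / n, i.e. multiply the MLE by (n - 1) / n."""
    n = len(sample)
    return (n - 1) / n * mle_rate(sample)

random.seed(0)
lam, n, reps = 2.0, 10, 20000
mle_mean = corr_mean = 0.0
for _ in range(reps):
    s = [random.expovariate(lam) for _ in range(n)]
    mle_mean += mle_rate(s) / reps
    corr_mean += corrected_rate(s) / reps

# The uncorrected MLE overestimates lambda noticeably at n = 10;
# the corrected estimator is close to the true value.
print(abs(mle_mean - lam), abs(corr_mean - lam))
```

For the exponential case the correction is exact (E[1/x̄] = nλ/(n − 1)); for the Lindley-type distributions in the report the correction is only to first order, but the qualitative small-sample behaviour is the same.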
Abstract:
Fleck and Johnson (Int. J. Mech. Sci. 29 (1987) 507) and Fleck et al. (Proc. Inst. Mech. Eng. 206 (1992) 119) have developed foil rolling models which allow for large deformations in the roll profile, including the possibility that the rolls flatten completely. However, these models require computationally expensive iterative solution techniques. A new approach to the approximate solution of the Fleck et al. (1992) Influence Function Model has been developed using both analytic and approximation techniques. The numerical difficulties arising from solving an integral equation in the flattened region have been reduced by applying an Inverse Hilbert Transform to obtain an analytic expression for the pressure. The method described in this paper is applicable to cases both with and without a flat region.