43 results for fuzzy based evaluation method
Abstract:
PLFC is a first-order possibilistic logic dealing with fuzzy constants and fuzzily restricted quantifiers. The refutation proof method in PLFC is mainly based on a generalized resolution rule which allows an implicit graded unification among fuzzy constants. However, unification for precise object constants is classical. In order to use PLFC for similarity-based reasoning, in this paper we extend a Horn-rule sublogic of PLFC with similarity-based unification of object constants. The Horn-rule sublogic of PLFC we consider deals only with disjunctive fuzzy constants and it is equipped with a simple and efficient version of PLFC proof method. At the semantic level, it is extended by equipping each sort with a fuzzy similarity relation, and at the syntactic level, by fuzzily “enlarging” each non-fuzzy object constant in the antecedent of a Horn-rule by means of a fuzzy similarity relation.
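The "enlarging" step described in the last sentence can be sketched in a few lines: a crisp object constant is turned into a fuzzy set whose membership for each other constant is given by the sort's fuzzy similarity relation, and unification against that set becomes graded. The similarity values, constants, and helper names below are illustrative assumptions, not PLFC syntax.

```python
# Hypothetical sort of colour constants with a fuzzy similarity relation S
# (reflexive and symmetric; values are made up for illustration).
S = {
    ("red", "red"): 1.0, ("red", "crimson"): 0.8, ("red", "orange"): 0.4,
    ("crimson", "crimson"): 1.0, ("orange", "orange"): 1.0,
    ("crimson", "orange"): 0.3,
}

def sim(a, b):
    """Look the pair up in either order; unrelated constants get 0."""
    return S.get((a, b)) or S.get((b, a)) or 0.0

def enlarge(constant):
    """Fuzzily 'enlarge' a crisp object constant: the fuzzy set whose
    membership for each constant b is b's similarity to the original."""
    domain = {x for pair in S for x in pair}
    return {b: sim(constant, b) for b in domain}

def unification_degree(fuzzy_set, constant):
    """Graded unification of a crisp constant against an enlarged one."""
    return fuzzy_set.get(constant, 0.0)

enlarged_red = enlarge("red")
print(unification_degree(enlarged_red, "crimson"))  # 0.8
print(unification_degree(enlarged_red, "red"))      # 1.0
```

A rule antecedent mentioning "red" would then also fire, to degree 0.8, on a fact about "crimson", which is the similarity-based reasoning the extension is after.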
Abstract:
After a historical survey of temperament, an analysis of Johann Sebastian Bach's Well-Tempered Clavier has been made by applying a number of historical good temperaments as well as some recent proposals. The results obtained show that the global dissonance for all preludes and fugues in major keys can be minimized using the Kirnberger II temperament. The method of analysis used for this research is based on the mathematical theories of sensory dissonance developed by authors such as Hermann Ludwig Ferdinand von Helmholtz, Harry Partch, Reinier Plomp, Willem J. M. Levelt and William A. Sethares.
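The sensory-dissonance theories cited above are commonly operationalized with Sethares' parameterization of the Plomp-Levelt roughness curve, which scores a sonority by summing a roughness contribution over all pairs of partials. The constants below are from Sethares' published fit; whether the study used exactly this parameterization is an assumption.

```python
import math

def pair_dissonance(f1, f2, a1=1.0, a2=1.0):
    """Sethares' fit of the Plomp-Levelt dissonance curve for a pair of
    pure tones (frequencies in Hz, amplitudes 0..1)."""
    b1, b2 = 3.5, 5.75               # curve-shape constants
    x_star, s1, s2 = 0.24, 0.021, 19.0
    s = x_star / (s1 * min(f1, f2) + s2)  # scales the curve with register
    d = abs(f2 - f1)
    return a1 * a2 * (math.exp(-b1 * s * d) - math.exp(-b2 * s * d))

def total_dissonance(partials):
    """Sum pairwise dissonance over all partials (freq, amp) of a sonority."""
    return sum(pair_dissonance(f1, f2, a1, a2)
               for i, (f1, a1) in enumerate(partials)
               for (f2, a2) in partials[i + 1:])

# A unison scores zero; two tones ~5% apart in the mid register are rough.
print(total_dissonance([(440.0, 1.0), (440.0, 1.0)]))   # ~0
print(total_dissonance([(440.0, 1.0), (465.0, 1.0)]))   # clearly positive
```

Minimizing the "global dissonance" of a temperament then amounts to summing such scores over the sonorities of the score under each tuning and comparing the totals.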
Abstract:
This paper proposes a multicast implementation based on adaptive routing with anticipated calculation. Three different cost measures can be considered for a point-to-multipoint connection: bandwidth cost, connection establishment cost and switching cost. The application of the method based on pre-evaluated routing tables makes it possible to reduce bandwidth cost and connection establishment cost individually.
Abstract:
In CoDaWork'05, we presented an application of discriminant function analysis (DFA) to 4 different compositional datasets and modelled the first canonical variable using a segmented regression model solely based on an observation about the scatter plots. In this paper, multiple linear regressions are applied to different datasets to confirm the validity of our proposed model. In addition to dating the unknown tephras by calibration as discussed previously, another method of mapping the unknown tephras into samples of the reference set or missing samples in between consecutive reference samples is proposed. The application of these methodologies is demonstrated with both simulated and real datasets. This new proposed methodology provides an alternative, more acceptable approach for geologists as their focus is on mapping the unknown tephra with relevant eruptive events rather than estimating the age of unknown tephra.
Keywords: Tephrochronology; Segmented regression
Abstract:
Background: Choosing an adequate measurement instrument depends on the proposed use of the instrument, the concept to be measured, the measurement properties (e.g. internal consistency, reproducibility, content and construct validity, responsiveness, and interpretability), the requirements, the burden for subjects, and costs of the available instruments. As far as measurement properties are concerned, there are no sufficiently specific standards for the evaluation of measurement properties of instruments to measure health status, and also no explicit criteria for what constitutes good measurement properties. In this paper we describe the protocol for the COSMIN study, the objective of which is to develop a checklist that contains COnsensus-based Standards for the selection of health Measurement INstruments, including explicit criteria for satisfying these standards. We will focus on evaluative health related patient-reported outcomes (HR-PROs), i.e. patient-reported health measurement instruments used in a longitudinal design as an outcome measure, excluding health care related PROs, such as satisfaction with care or adherence. The COSMIN standards will be made available in the form of an easily applicable checklist. Method: An international Delphi study will be performed to reach consensus on which measurement properties should be assessed and how, and on criteria for good measurement properties. Two sources of input will be used for the Delphi study: (1) a systematic review of properties, standards and criteria of measurement properties found in systematic reviews of measurement instruments, and (2) an additional literature search of methodological articles presenting a comprehensive checklist of standards and criteria. The Delphi study will consist of four (written) Delphi rounds, with approximately 30 expert panel members with different backgrounds in clinical medicine, biostatistics, psychology, and epidemiology.
The final checklist will subsequently be field-tested by assessing the inter-rater reproducibility of the checklist. Discussion: Since the study will mainly be anonymous, problems that are commonly encountered in face-to-face group meetings, such as the dominance of certain persons in the communication process, will be avoided. By performing a Delphi study and involving many experts, the likelihood that the checklist will have sufficient credibility to be accepted and implemented will increase.
Abstract:
Purpose: To evaluate the suitability of an improved version of an automatic segmentation method based on geodesic active regions (GAR) for segmenting cerebral vasculature with aneurysms from 3D X-ray reconstruction angiography (3DRA) and time-of-flight magnetic resonance angiography (TOF-MRA) images available in the clinical routine. Methods: Three aspects of the GAR method have been improved: execution time, robustness to variability in imaging protocols and robustness to variability in image spatial resolutions. The improved GAR was retrospectively evaluated on images from patients containing intracranial aneurysms in the area of the Circle of Willis and imaged with two modalities: 3DRA and TOF-MRA. Images were obtained from two clinical centers, each using different imaging equipment. Evaluation included qualitative and quantitative analyses of the segmentation results on 20 images from 10 patients. The gold standard was built from 660 cross-sections (33 per image) of vessels and aneurysms, manually measured by interventional neuroradiologists. GAR has also been compared to an interactive segmentation method: iso-intensity surface extraction (ISE). In addition, since patients had been imaged with the two modalities, we performed an inter-modality agreement analysis with respect to both the manual measurements and each of the two segmentation methods. Results: Both GAR and ISE differed from the gold standard within acceptable limits compared to the imaging resolution. GAR (ISE, respectively) had an average accuracy of 0.20 (0.24) mm for 3DRA and 0.27 (0.30) mm for TOF-MRA, and had a repeatability of 0.05 (0.20) mm. Compared to ISE, GAR had a lower qualitative error in the vessel region and a lower quantitative error in the aneurysm region. The repeatability of GAR was superior to manual measurements and ISE. The inter-modality agreement was similar between GAR and the manual measurements.
Conclusions: The improved GAR method outperformed ISE qualitatively as well as quantitatively and is suitable for segmenting 3DRA and TOF-MRA images from clinical routine.
Abstract:
Collage is a pattern-based visual design authoring tool for the creation of collaborative learning scripts computationally modelled with IMS Learning Design (LD). The pattern-based visual approach aims to provide teachers with design ideas that are based on broadly accepted practices. In addition, it seeks to hide the LD notation so that teachers can easily create their own designs. The use of visual representations supports both the understanding of the design ideas and the usability of the authoring tool. This paper presents a multicase study comprising three different cases that evaluate the approach from different perspectives. The first case includes workshops where teachers use Collage. A second case involves the design of a scenario proposed by a third party using related approaches. The third case analyzes a situation where students follow a design created with Collage. The cross-case analysis provides a global understanding of the possibilities and limitations of the pattern-based visual design approach.
Abstract:
The paper presents a competence-based instructional design system and a way to provide a personalization of navigation in the course content. The navigation aid tool builds on the competence graph and the student model, which includes the elements of uncertainty in the assessment of students. An individualized navigation graph is constructed for each student, suggesting the competences the student is more prepared to study. We use fuzzy set theory for dealing with uncertainty. The marks of the assessment tests are transformed into linguistic terms and used for assigning values to linguistic variables. For each competence, the level of difficulty and the level of knowing its prerequisites are calculated based on the assessment marks. Using these linguistic variables and approximate reasoning (fuzzy IF-THEN rules), a crisp category is assigned to each competence regarding its level of recommendation.
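The mark-to-linguistic-term pipeline described above can be illustrated with a minimal Mamdani-style sketch: assessment marks are fuzzified through triangular membership functions, IF-THEN rules fire to the minimum of their antecedent degrees, and the crisp recommendation category is the consequent of the strongest rule. The membership functions, the 0-10 mark scale, and the rule base are hypothetical stand-ins for the paper's actual model.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical linguistic terms over a 0..10 assessment-mark scale.
LOW  = lambda x: tri(x, -0.1, 0.0, 5.0)
MED  = lambda x: tri(x, 2.5, 5.0, 7.5)
HIGH = lambda x: tri(x, 5.0, 10.0, 10.1)

def recommend(difficulty_mark, prereq_mark):
    """Each rule fires with the min of its antecedent degrees; the crisp
    category assigned is the consequent of the strongest rule."""
    rules = [
        (min(LOW(difficulty_mark),  HIGH(prereq_mark)), "recommended"),
        (min(MED(difficulty_mark),  MED(prereq_mark)),  "neutral"),
        (min(HIGH(difficulty_mark), LOW(prereq_mark)),  "not yet"),
    ]
    strength, label = max(rules)
    return label

print(recommend(1.0, 9.0))  # easy competence, prerequisites known well
print(recommend(9.0, 1.0))  # hard competence, prerequisites weak
```

An individualized navigation graph would then rank competences by these categories for each student.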
Abstract:
We continue the development of a method for the selection of a bandwidth or a number of design parameters in density estimation. We provide explicit non-asymptotic density-free inequalities that relate the $L_1$ error of the selected estimate with that of the best possible estimate, and study in particular the connection between the richness of the class of density estimates and the performance bound. For example, our method allows one to pick the bandwidth and kernel order in the kernel estimate simultaneously and still assure that for all densities, the $L_1$ error of the corresponding kernel estimate is not larger than about three times the error of the estimate with the optimal smoothing factor and kernel plus a constant times $\sqrt{\log n/n}$, where $n$ is the sample size, and the constant only depends on the complexity of the family of kernels used in the estimate. Further applications include multivariate kernel estimates, transformed kernel estimates, and variable kernel estimates.
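The quantity the inequalities control, the $L_1$ error of a kernel estimate as a function of the bandwidth, can be made concrete numerically. The sketch below samples from a known N(0,1) density so the error is directly computable on a grid; this brute-force comparison is only a didactic stand-in for the paper's data-driven selection method, and the sample size and bandwidth grid are arbitrary.

```python
import math, random

random.seed(0)
n = 500
sample = [random.gauss(0.0, 1.0) for _ in range(n)]

def kde(x, data, h):
    """Gaussian kernel density estimate with bandwidth h."""
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2)
               for xi in data) / (len(data) * h * math.sqrt(2 * math.pi))

def l1_error(h, lo=-5.0, hi=5.0, steps=200):
    """Midpoint-rule approximation of the L1 distance between the
    estimate and the true N(0,1) density."""
    dx, err = (hi - lo) / steps, 0.0
    for k in range(steps):
        x = lo + (k + 0.5) * dx
        true = math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
        err += abs(kde(x, sample, h) - true) * dx
    return err

bandwidths = [0.05, 0.2, 0.5, 1.0]
errors = {h: l1_error(h) for h in bandwidths}
best = min(errors, key=errors.get)
print({h: round(e, 3) for h, e in errors.items()}, "-> best h:", best)
```

Undersmoothing (h = 0.05) and oversmoothing (h = 1.0) both inflate the $L_1$ error relative to intermediate bandwidths, which is exactly the trade-off a data-driven selector must navigate without knowing the true density.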
Abstract:
Monitoring thunderstorm activity is an essential part of operational weather surveillance given their potential hazards, including lightning, hail, heavy rainfall, strong winds or even tornadoes. This study has two main objectives: firstly, the description of a methodology, based on radar and total lightning data, to characterise thunderstorms in real time; secondly, the application of this methodology to 66 thunderstorms that affected Catalonia (NE Spain) in the summer of 2006. An object-oriented tracking procedure is employed, where different observation data types generate four different types of objects (radar 1-km CAPPI reflectivity composites, radar reflectivity volumetric data, cloud-to-ground lightning data and intra-cloud lightning data). In the framework proposed, these objects are the building blocks of a higher-level object, the thunderstorm. The methodology is demonstrated with a dataset of thunderstorms whose main characteristics, along the complete life cycle of the convective structures (development, maturity and dissipation), are described statistically. The development and dissipation stages present similar durations in most cases examined. In contrast, the duration of the maturity phase is much more variable and related to the thunderstorm intensity, defined here in terms of lightning flash rate. Most of the activity of IC and CG flashes is registered in the maturity stage. In the development stage few CG flashes are observed (2% to 5%), while in the dissipation phase slightly more CG flashes can be observed (10% to 15%). Additionally, a selection of thunderstorms is used to examine general life cycle patterns, obtained from the analysis of normalized (with respect to thunderstorm total duration and maximum value of the variables considered) thunderstorm parameters.
Among other findings, the study indicates that the normalized duration of the three stages of thunderstorm life cycle is similar in most thunderstorms, with the longest duration corresponding to the maturity stage (approximately 80% of the total time).
Abstract:
Spatial resolution is a key parameter of all remote sensing satellites and platforms. The nominal spatial resolution of satellites is a well-known characteristic because it is directly related to the ground area represented by each pixel of the detector. Nevertheless, in practice, the actual resolution of a specific image obtained from a satellite is difficult to know precisely because it depends on many other factors such as atmospheric conditions. However, if one has two or more images of the same region, it is possible to compare their relative resolutions. In this paper, a wavelet-decomposition-based method for the determination of the relative resolution between two remotely sensed images of the same area is proposed. The method can be applied to panchromatic, multispectral, and mixed (one panchromatic and one multispectral) images. As an example, the method was applied to compute the relative resolution between SPOT-3, Landsat-5, and Landsat-7 panchromatic and multispectral images taken under similar as well as under very different conditions. Moreover, if the true absolute resolution of one of the images of the pair is known, the resolution of the other can be computed. Thus, in the last part of this paper, a spatial calibrator that is designed and constructed to help compute the absolute resolution of a single remotely sensed image is described, and an example of its use is presented.
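The principle behind a wavelet-based resolution comparison is that fine-scale detail energy drops as effective resolution degrades. A one-level 2D Haar decomposition makes this visible: comparing the detail-subband energy of two co-registered images of the same scene orders them by sharpness. This sketch shows only the underlying idea, not the paper's full relative-resolution method; the test images are synthetic.

```python
def haar_detail_energy(img):
    """One-level 2D Haar transform on a list-of-lists image: return the
    total energy in the three detail subbands (horizontal, vertical,
    diagonal), which captures the fine-scale content."""
    energy = 0.0
    for i in range(0, len(img) - 1, 2):
        for j in range(0, len(img[0]) - 1, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            lh = (a - b + c - d) / 4.0   # horizontal detail
            hl = (a + b - c - d) / 4.0   # vertical detail
            hh = (a - b - c + d) / 4.0   # diagonal detail
            energy += lh * lh + hl * hl + hh * hh
    return energy

# A sharp checkerboard holds far more detail energy than its 3x3
# box-blurred version, so the energy comparison orders the two images
# by effective resolution.
sharp = [[(i + j) % 2 * 255.0 for j in range(64)] for i in range(64)]
blurred = [[sum(sharp[(i + di) % 64][(j + dj) % 64]
                for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
            for j in range(64)] for i in range(64)]
print(haar_detail_energy(sharp) > haar_detail_energy(blurred))  # True
```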
Abstract:
We present a method to detect patterns in defocused scenes by means of a joint transform correlator. We describe analytically the correlation plane, and we also introduce an original procedure to recognize the target by postprocessing the correlation plane. The performance of the methodology when the defocused images are corrupted by additive noise is also considered.
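The joint-transform-correlator geometry itself can be sketched in 1D: reference and scene are placed side by side in one input, the joint power spectrum is formed, and a second transform yields correlation peaks at the reference-to-target separation. The array sizes and positions below are arbitrary, and the defocus and noise treated in the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
ref = rng.standard_normal(64)          # reference pattern
scene = np.zeros(256)
scene[40:104] = ref                    # target: a copy of the reference

# Joint input: reference at the origin, scene displaced by 300 samples,
# so the target sits 340 samples from the reference.
joint = np.zeros(1024)
joint[0:64] = ref
joint[300:556] = scene

jps = np.abs(np.fft.fft(joint)) ** 2   # joint power spectrum
corr = np.abs(np.fft.fft(jps))         # "correlation plane"

# Cross-correlation peaks appear at +/- the reference-target separation;
# the zero-order term stays confined near lag 0, so we search away from it.
peak = int(np.argmax(corr[150:512])) + 150
print(peak)  # at (or within a sample of) 340
```

Postprocessing the correlation plane, as the abstract describes, amounts to locating and validating such peaks against the zero-order term and noise floor.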
Abstract:
EEG recordings are usually corrupted by spurious extra-cerebral artifacts, which should be rejected or cleaned up by the practitioner. Since manual screening of human EEGs is inherently error prone and might induce experimental bias, automatic artifact detection is an issue of importance. Automatic artifact detection is the best guarantee for objective and clean results. We present a new approach, based on the time–frequency shape of muscular artifacts, to achieve reliable and automatic scoring. The methodology evaluates the impact of muscular activity on the signal while placing emphasis on the analysis of EEG activity. The method is used to discriminate evoked potentials from several types of recorded muscular artifacts, with a sensitivity of 98.8% and a specificity of 92.2%. Automatic cleaning of EEG data is then successfully realized using this method, combined with independent component analysis. The outcome of the automatic cleaning is then compared with the Slepian multitaper spectrum based technique introduced by Delorme et al (2007 Neuroimage 34 1443–9).
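The reported sensitivity and specificity follow the standard confusion-matrix definitions. The counts below are hypothetical, chosen only to reproduce rates of the same magnitude as those in the abstract.

```python
def sensitivity(tp, fn):
    """Fraction of true artifact epochs correctly flagged (recall)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of clean epochs correctly left untouched."""
    return tn / (tn + fp)

# Hypothetical counts: 1000 artifact epochs, 1000 clean epochs.
print(round(sensitivity(988, 12), 3))   # 0.988
print(round(specificity(922, 78), 3))   # 0.922
```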
Abstract:
A new method for decision making that uses the ordered weighted averaging (OWA) operator in the aggregation of the information is presented. It uses a concept known in the literature as the index of maximum and minimum level (IMAM). This index is based on distance measures and other techniques that are useful for decision making. By using the OWA operator in the IMAM, we form a new aggregation operator that we call the ordered weighted averaging index of maximum and minimum level (OWAIMAM) operator. Its main advantage is that it provides a parameterized family of aggregation operators between the minimum and the maximum and a wide range of special cases. The decision maker may then take decisions according to his degree of optimism while considering ideals in the decision process. A further extension of this approach is presented by using hybrid averages and Choquet integrals. We also develop an application of the new approach to a multi-person decision-making problem regarding the selection of strategies.
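The OWA building block is standard and easy to sketch: weights attach to the ranked positions of the arguments rather than to the arguments themselves, so degenerate weight vectors recover the minimum, the maximum, and the mean. The score vector and weight choices below are illustrative; the IMAM combination itself is not reproduced here.

```python
def owa(values, weights):
    """Ordered weighted averaging: sort the arguments in descending
    order, then take the weighted sum by rank position."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

scores = [0.7, 0.2, 0.9, 0.4]
print(owa(scores, [1.0, 0.0, 0.0, 0.0]))  # maximum (fully optimistic): 0.9
print(owa(scores, [0.0, 0.0, 0.0, 1.0]))  # minimum (fully pessimistic): 0.2
print(owa(scores, [0.25] * 4))            # arithmetic mean: 0.55
```

Moving weight mass toward the top-ranked positions models a more optimistic decision maker, which is the "degree of optimism" parameterization the abstract refers to.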
Abstract:
Purpose: To assess the feasibility of a method based on microwave spectrometry to detect structural distortions of metallic stents in open-air conditions and to envisage the prospects of this approach toward possible medical applicability for the evaluation of implanted stents. Methods: Microwave absorbance spectra between 2.0 and 18.0 GHz were acquired in open air for the characterization of a set of commercial stents using a specifically designed setup. Rotating each sample over 360°, 2D absorbance diagrams were generated as a function of frequency and rotation angle. To check our approach for detecting changes in stent length (fracture) and diameter (recoil), two specific tests were performed in open air. Finally, with a few adjustments, this same system provides 2D absorbance diagrams of stents immersed in a water-based phantom, this time over a bandwidth ranging from 0.2 to 1.8 GHz. Results: The authors show that metallic stents exhibit characteristic resonant frequencies in their microwave absorbance spectra in open air which depend on their length and, as a result, may reflect the occurrence of structural distortions. These resonances can be understood by considering that such devices behave like dipole antennas in terms of microwave scattering. From fracture tests, the authors infer that microwave spectrometry provides signs of the presence of Type I to Type IV stent fractures and, in particular, allows a quantitative evaluation of Type III and Type IV fractures. Recoil tests show that microwave spectrometry seems able to provide some quantitative assessment of diametrical shrinkage, but only if it involves longitudinal shortening. Finally, the authors observe that the resonant frequencies of stents placed inside the phantom shift down with respect to the corresponding open-air frequencies, as should be expected considering the increase of dielectric permittivity from air to water.
Conclusions: The evaluation of stent resonant frequencies provided by microwave spectrometry allows detection and some quantitative assessment of stent fracture and recoil in open-air conditions. Resonances of stents immersed in water can also be detected, and their characteristic frequencies are in good agreement with theoretical estimates. Although these are promising results, further verification in a more relevant phantom is required in order to foresee the real potential of this approach.
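The dipole-antenna analogy invoked in the Results can be checked against the textbook half-wave resonance formula, f = c / (2 L sqrt(eps_r)). The 20 mm stent length and the relative permittivity of water used below are illustrative assumptions, not values from the study.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def dipole_resonance_ghz(length_m, eps_r=1.0):
    """Fundamental resonance of a thin half-wave dipole of the given
    length embedded in a medium of relative permittivity eps_r."""
    return C / (2.0 * length_m * math.sqrt(eps_r)) / 1e9

stent_length = 0.020                                    # 20 mm, hypothetical
f_air = dipole_resonance_ghz(stent_length)              # open air
f_water = dipole_resonance_ghz(stent_length, eps_r=78)  # water at ~1 GHz
print(round(f_air, 2), "GHz in air,", round(f_water, 2), "GHz in water")
```

With eps_r around 78 for water, resonances shift down by roughly 1/sqrt(eps_r), about a factor of 9, which is consistent with the move from the 2.0-18.0 GHz open-air band to the 0.2-1.8 GHz band used for the phantom. A fractured (shortened) stent would resonate higher, which is why length-sensitive resonances can flag fractures.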