962 results for Direct digital detector images


Abstract:

We present a method to automatically segment red blood cells (RBCs) visualized by digital holographic microscopy (DHM), based on the marker-controlled watershed algorithm. Quantitative phase images of RBCs obtained by off-axis DHM provide important information about each RBC, including size, shape, volume and hemoglobin content. The most important step in marker-controlled watershed segmentation is the accurate localization of internal and external markers. Here, we first obtain a binary image via the Otsu algorithm. Then, we apply morphological operations to the binary image to obtain the internal markers. We then apply the distance transform algorithm combined with the watershed algorithm to generate external markers based on the internal markers. Finally, combining the internal and external markers, we modify the original gradient image and apply the watershed algorithm. By appropriately identifying the internal and external markers, the problems of oversegmentation and undersegmentation are avoided. Furthermore, the internal and external parts of the RBC phase image can also be segmented using the marker-controlled watershed combined with our marker-identification procedure. Our experimental results show that the proposed method achieves good performance in segmenting RBCs and could thus be helpful when combined with an automated classification of RBCs.
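A minimal sketch of the pipeline described above (Otsu threshold, morphological clean-up, distance transform, watershed), assuming scikit-image and SciPy are available; for simplicity it applies the watershed to the inverted distance map rather than to the modified gradient image used in the paper, and the structuring-element size and peak spacing are illustrative choices.

```python
# Marker-controlled watershed sketch: Otsu -> morphology -> distance transform -> watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, morphology, segmentation, feature

def segment_cells(phase_image):
    # 1. Binary image via Otsu thresholding
    binary = phase_image > filters.threshold_otsu(phase_image)

    # 2. Morphological operations to clean the mask (internal-marker candidates)
    binary = morphology.remove_small_objects(binary, min_size=64)
    binary = morphology.binary_opening(binary, morphology.disk(3))

    # 3. Distance transform; its peaks act as internal markers
    distance = ndi.distance_transform_edt(binary)
    peaks = feature.peak_local_max(distance, min_distance=10, labels=binary)
    markers = np.zeros_like(binary, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

    # 4. Watershed on the inverted distance map (ridge lines between basins
    #    play the role of external markers separating touching cells)
    labels = segmentation.watershed(-distance, markers, mask=binary)
    return labels
```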

Abstract:

In this paper, an advanced technique for the generation of deformation maps using synthetic aperture radar (SAR) data is presented. The algorithm estimates the linear and nonlinear components of the displacement, the error of the digital elevation model (DEM) used to cancel the topographic terms, and the atmospheric artifacts, from a reduced set of low spatial resolution interferograms. The pixel candidates are selected from those presenting a good coherence level in the whole set of interferograms, and the resulting nonuniform mesh is tessellated with a Delaunay triangulation to establish connections among them. The linear component of movement and the DEM error are estimated by adjusting a linear model to the data, only on the connections. Later on, this information, once unwrapped to retrieve the absolute values, is used to calculate the nonlinear component of movement and the atmospheric artifacts with alternate filtering techniques in both the temporal and spatial domains. The method presents high flexibility with respect to the required number of images and the baseline lengths; however, better results are obtained with large datasets of short-baseline interferograms. The technique has been tested with European Remote Sensing SAR data from an area of Catalonia (Spain) and validated with precise on-field leveling measurements.
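The pixel-selection and connection step can be sketched as follows, assuming SciPy; the coherence threshold and array layout are illustrative assumptions, not values from the paper.

```python
# Select coherent pixel candidates and link them with a Delaunay triangulation.
import numpy as np
from scipy.spatial import Delaunay

def build_connections(mean_coherence, threshold=0.7):
    # Pixels with good coherence over the whole interferogram stack
    rows, cols = np.nonzero(mean_coherence > threshold)
    points = np.column_stack([cols, rows])  # (x, y) coordinates

    # Tessellate the nonuniform mesh with a Delaunay triangulation
    tri = Delaunay(points)

    # Extract the unique edges ("connections"); the linear deformation model
    # and the DEM error are later adjusted on these connections only.
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    return points, sorted(edges)
```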

Abstract:

The goal of this work was to develop a method to objectively compare the performance of a digital and a screen-film mammography system in terms of image quality and patient dose. We propose a method that takes into account the dynamic range of the image detector and the detection of high contrast (for microcalcifications) and low contrast (for masses or tumoral nodules) structures. The method also addresses the problems of image visualization and the observer response. A test object, designed to represent a compressed breast, was constructed from various tissue-equivalent materials ranging from purely adipose to purely glandular composition. Different areas within the test object permitted the evaluation of low and high contrast detection, spatial resolution, and image noise. All the images (digital and conventional) were captured using a CCD camera to include the visualization process in the image quality assessment. In this way the luminance reaching the viewer's eyes can be controlled for both kinds of images. A global quantity describing image contrast, spatial resolution and noise, and expressed in terms of luminance at the camera, can then be used to compare the two technologies objectively. The quantity used was a mathematical model observer that calculates the detectability of high and low contrast structures as a function of the background tissue. Our results show that, for a given patient dose, the detection of high and low contrast structures is significantly better for the digital system than for the conventional screen-film system studied. This is mainly because the image noise is lower for the digital system than for the screen-film detector. The method of using a test object with a large dynamic range combined with a camera to compare conventional and digital imaging modalities can be applied to other radiological imaging techniques. In particular, it could be used to optimize the process of radiographic film reading.
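One widely used detectability metric of this general type, combining signal contrast, system resolution and noise, is the non-prewhitening model observer with an eye filter; the display below is a standard textbook form, shown for illustration only and not necessarily the exact model used in this work.

```latex
% Non-prewhitening model observer with eye filter (NPWE), standard form.
% S(u,v): Fourier spectrum of the signal to detect; MTF: system modulation
% transfer function; E(u,v): eye filter; NPS: image noise power spectrum.
\[
{d'}^2_{\mathrm{NPWE}} \;=\;
\frac{\left[\displaystyle\iint |S(u,v)|^2\,\mathrm{MTF}^2(u,v)\,E^2(u,v)\,du\,dv\right]^2}
     {\displaystyle\iint |S(u,v)|^2\,\mathrm{MTF}^2(u,v)\,E^4(u,v)\,\mathrm{NPS}(u,v)\,du\,dv}
\]
```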

Abstract:

In this paper, the sensor of an optical mouse is presented as a counterfeit coin detector, applied to the case of the two-euro coin. The detection process is based on the short-distance image acquisition capabilities of the optical mouse sensor, whereby partial images of the coin under analysis are compared with a set of partial reference coin images for matching. Results show that, using visual information alone, the counterfeit acceptance and rejection rates are very similar to those of a trained user and better than those of an untrained user.
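The matching step can be sketched as a normalised cross-correlation of the small sensor patch against larger reference patches, assuming scikit-image; the acceptance threshold is an illustrative assumption, not a figure from the paper.

```python
# Partial-image matching by normalised cross-correlation.
import numpy as np
from skimage.feature import match_template

def is_genuine(sensor_patch, reference_patches, threshold=0.8):
    """sensor_patch: small grey-level image from the optical mouse sensor.
    reference_patches: larger grey-level patches taken from genuine coins."""
    best = 0.0
    for ref in reference_patches:
        # Correlation map of the sensor patch slid over the reference patch
        corr = match_template(ref, sensor_patch)
        best = max(best, float(corr.max()))
    return best >= threshold, best
```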

Abstract:

In order to improve the management of copyright on the Internet, known as Digital Rights Management, there is a need for a shared language for copyright representation. Current approaches are based on purely syntactic solutions, i.e. a grammar that defines a rights expression language. These languages are difficult to put into practice due to the lack of explicit semantics that would facilitate their implementation. Moreover, they are simple from the legal point of view because they are intended just to model the usage licenses granted by content providers to end-users. Thus, they ignore the copyright framework that lies behind them and the whole value chain from creators to end-users. Our proposal is to use a semantic approach based on semantic web ontologies. We detail the development of a copyright ontology in order to put this approach into practice. It models the copyright core concepts of creation, rights and the basic kinds of actions that operate on content. Altogether, it allows building a copyright framework for the complete value chain. The set of actions operating on content are our smallest building blocks for coping with the complexity of copyright value chains and statements while, at the same time, guaranteeing a high level of interoperability and evolvability. The resulting copyright modelling framework is flexible and complete enough to model many copyright scenarios, not just those related to the economic exploitation of content. The ontology also includes moral rights, so it is possible to model such situations, as shown in the included example model of a withdrawal scenario. Finally, the ontology design and the selection of tools result in a straightforward implementation. Description Logic reasoners are used for license checking and retrieval. Rights are modelled as classes of actions, action patterns are also modelled as classes, and the same is done for concrete actions. Checking whether some right or license grants an action is then reduced to checking for class subsumption, which is a direct functionality of these reasoners.
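A minimal sketch of the licence-checking-as-subsumption idea, using the owlready2 library and its bundled HermiT reasoner (which requires Java); the ontology IRI, class names and the theme property are illustrative stand-ins and not the authors' Copyright Ontology.

```python
# Licence checking reduced to Description Logic class subsumption.
from owlready2 import get_ontology, Thing, sync_reasoner

onto = get_ontology("http://example.org/copyright-sketch.owl")

with onto:
    class Work(Thing): pass
    class Action(Thing): pass
    class theme(Action >> Work): pass       # what an action operates on
    class Copy(Action): pass                # a basic kind of action on content
    class MyNovel(Work): pass

    class GrantedCopy(Copy): pass           # actions covered by the licence
    class RequestedCopy(Copy): pass         # the action pattern to be checked

    # Licence: all Copy actions on any Work; request: Copy actions on MyNovel
    GrantedCopy.equivalent_to.append(Copy & theme.some(Work))
    RequestedCopy.equivalent_to.append(Copy & theme.some(MyNovel))

# Run a Description Logic reasoner (HermiT ships with owlready2; needs Java)
sync_reasoner()

# Subsumption check: every RequestedCopy action should be classified under
# GrantedCopy, so the requested action is covered by the licence.
print(onto.GrantedCopy in onto.RequestedCopy.ancestors())
```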

Abstract:

Geophysical tomography captures the spatial distribution of the underlying geophysical property at a relatively high resolution, but the tomographic images tend to be blurred representations of reality and generally fail to reproduce sharp interfaces. Such models may cause significant bias when taken as a basis for predictive flow and transport modeling and are unsuitable for uncertainty assessment. We present a methodology in which tomograms are used to condition multiple-point statistics (MPS) simulations. A large set of geologically reasonable facies realizations and their corresponding synthetically calculated cross-hole radar tomograms are used as a training image. The training image is scanned with a direct sampling algorithm for patterns in the conditioning tomogram, while accounting for the spatially varying resolution of the tomograms. In a post-processing step, only those conditional simulations that predicted the radar traveltimes within the expected data error levels are accepted. The methodology is demonstrated on a two-facies example featuring channels and an aquifer analog of alluvial sedimentary structures with five facies. For both cases, MPS simulations exhibit the sharp interfaces and the geological patterns found in the training image. Compared to unconditioned MPS simulations, the uncertainty in transport predictions is markedly decreased for simulations conditioned to tomograms. As an improvement to other approaches relying on classical smoothness-constrained geophysical tomography, the proposed method allows for: (1) reproduction of sharp interfaces, (2) incorporation of realistic geological constraints and (3) generation of multiple realizations that enables uncertainty assessment.
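The pattern-matching core of a direct sampling MPS simulation can be sketched as follows; this is a generic, simplified variant in which the neighbourhood size, scan fraction and mismatch threshold are illustrative assumptions, and it omits the paper's handling of the spatially varying tomogram resolution and the traveltime-based post-processing.

```python
# Minimal direct sampling MPS on a 2-D facies grid.
import numpy as np

def direct_sampling(training_image, sim_shape, hard_data=None,
                    n_neighbors=20, scan_fraction=0.2, threshold=0.05, seed=0):
    rng = np.random.default_rng(seed)
    ti = np.asarray(training_image)
    sim = np.full(sim_shape, np.nan)
    if hard_data is not None:                      # e.g. {(row, col): facies}
        for (i, j), v in hard_data.items():
            sim[i, j] = v

    # Random simulation path over the uninformed nodes
    path = [(i, j) for i in range(sim_shape[0]) for j in range(sim_shape[1])
            if np.isnan(sim[i, j])]
    rng.shuffle(path)

    for (i, j) in path:
        known = np.argwhere(~np.isnan(sim))
        if len(known) == 0:                        # first node: draw at random
            sim[i, j] = ti[rng.integers(ti.shape[0]), rng.integers(ti.shape[1])]
            continue
        # Data event: the closest already-informed nodes (offsets and values)
        d2 = ((known - np.array([i, j])) ** 2).sum(axis=1)
        nearest = known[np.argsort(d2)[:n_neighbors]]
        offsets = nearest - np.array([i, j])
        values = sim[nearest[:, 0], nearest[:, 1]]

        # Scan random training-image locations; accept the first pattern whose
        # mismatch fraction is below the threshold, else keep the best found.
        best_val, best_dist = None, np.inf
        for _ in range(max(1, int(scan_fraction * ti.size))):
            ci, cj = int(rng.integers(ti.shape[0])), int(rng.integers(ti.shape[1]))
            pi, pj = ci + offsets[:, 0], cj + offsets[:, 1]
            inside = (pi >= 0) & (pi < ti.shape[0]) & (pj >= 0) & (pj < ti.shape[1])
            if not inside.all():
                continue
            mismatch = float(np.mean(ti[pi, pj] != values))
            if mismatch < best_dist:
                best_val, best_dist = ti[ci, cj], mismatch
            if mismatch <= threshold:
                break
        if best_val is None:                       # all scanned patterns fell outside
            best_val = ti[int(rng.integers(ti.shape[0])), int(rng.integers(ti.shape[1]))]
        sim[i, j] = best_val
    return sim
```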

Abstract:

The dosimetric parameters of a small-diameter X-ray beam from a commercial radiosurgery system were measured in water with a p-type Si diode detector. Tissue-maximum ratios, the total scatter factor and the beam profiles at depths of 5 and 10 cm were measured for 17 beams with circular diameters from 5 mm to 50 mm, in increments of 2.5 mm. The total scatter factors fell slowly, from 0.947 to 0.888 between the 50 mm and 12.5 mm diameter cones (a 7% variation); for the cones between 10 mm and 5 mm in diameter, this drop was much larger, from 0.854 to 0.666 (a 28% variation). The values obtained for the tissue-maximum ratio are consistent with published data. The beam profiles were measured in the x and y directions and agree within 0.2 mm between the two directions for all cones. The measured full width at half maximum agrees within 1 mm with the nominal cone diameters.
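For reference, the two quantities reported above are commonly defined as follows (standard definitions assumed here, since the abstract does not state them), with D the on-axis absorbed dose, d the depth, s the cone diameter and s_ref the reference field size.

```latex
% Standard definitions assumed for the reported quantities:
% tissue-maximum ratio (TMR) and total scatter factor S_{c,p}.
\[
\mathrm{TMR}(d, s) = \frac{D(d, s)}{D(d_{\max}, s)},
\qquad
S_{c,p}(s) = \frac{D(d_{\mathrm{ref}}, s)}{D(d_{\mathrm{ref}}, s_{\mathrm{ref}})}
\]
```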

Abstract:

In digital mammography, the processes of image acquisition, display and storage are separated, which allows each of these steps to be optimized. The radiation transmitted through the breast is absorbed by an electronic detector that responds faithfully over a wide range of intensities. Once this information is stored, it can be displayed using computerized imaging techniques, allowing adjustments of brightness, contrast and magnification without the need for additional radiological exposures of the patient. In this article, the current state of digital mammography technology and data from clinical trials supporting the use of this technology are reviewed. In addition, some potentially useful applications under development for digital mammography are described.

Abstract:

1. Digital elevation models (DEMs) are often used in landscape ecology to retrieve elevation or first-derivative terrain attributes such as slope or aspect in the context of species distribution modelling. However, DEM-derived variables are scale-dependent and, given the increasing availability of very high-resolution (VHR) DEMs, their ecological relevance must be assessed for different spatial resolutions. 2. In a study area located in the Swiss Western Alps, we computed VHR DEM-derived variables related to morphometry, hydrology and solar radiation. Based on an original spatial resolution of 0.5 m, we generated DEM-derived variables at 1, 2 and 4 m spatial resolutions, applying a Gaussian pyramid. Their associations with local climatic factors, measured by sensors (direct and ambient air temperature, air humidity and soil moisture), as well as ecological indicators derived from species composition, were assessed with multivariate generalized linear models (GLM) and mixed models (GLMM). 3. Specific VHR DEM-derived variables showed significant associations with climatic factors. In addition to slope, aspect and curvature, the underused wetness and ruggedness indices modelled measured ambient humidity and soil moisture, respectively. Remarkably, the spatial resolution of VHR DEM-derived variables had a significant influence on the models' strength, with coefficients of determination decreasing with coarser resolutions or showing a local optimum at the 2 m resolution, depending on the variable considered. 4. These results support the relevance of using multi-scale DEM variables to provide surrogates for important climatic variables such as humidity, moisture and temperature, offering suitable alternatives to direct measurements for evolutionary ecology studies at a local scale.
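The resolution-degradation step can be sketched as a Gaussian pyramid, assuming SciPy and NumPy: each level is low-pass filtered and subsampled by two (0.5 m to 1, 2 and 4 m); the smoothing width is an illustrative choice, not the one used in the study.

```python
# Gaussian pyramid of a DEM plus one example terrain attribute (slope).
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(dem, levels=3, sigma=1.0):
    """Return [dem_0.5m, dem_1m, dem_2m, dem_4m] for levels=3."""
    pyramid = [dem]
    current = dem
    for _ in range(levels):
        smoothed = gaussian_filter(current, sigma=sigma)  # low-pass filter
        current = smoothed[::2, ::2]                      # subsample by 2
        pyramid.append(current)
    return pyramid

def slope_degrees(dem, cell_size):
    # First-derivative terrain attribute computed at a given resolution
    dzdy, dzdx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
```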

Abstract:

Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, highlight links between documents produced by the same modus operandi or the same source, and thus support forensic intelligence efforts. Inspired by previous research on digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from the images. Acquisition conditions have been fine-tuned in order to optimise the reproducibility and comparability of images. Different filters and comparison metrics have been evaluated, and the performance of the method has been assessed using two calibration and validation sets of documents, made up of 101 Italian driving licenses and 96 Portuguese passports seized in Switzerland, some of which were known to come from common sources. Results indicate that the use of Hue and Edge filters, or their combination, to extract profiles from images, followed by the comparison of profiles with a Canberra distance-based metric, provides the most accurate classification of documents. The method also appears to be quick, efficient and inexpensive. It can easily be operated from remote locations and shared amongst different organisations, which makes it very convenient for future operational applications. The method could serve as a fast first triage step that may help target more resource-intensive profiling methods (based, for instance, on a visual, physical or chemical examination of documents). Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be developed in a forthcoming article (part II).
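A minimal sketch of the profile-extraction and comparison idea, assuming scikit-image and SciPy: a hue histogram over a region of interest serves as the document profile and two profiles are compared with the Canberra distance. The bin count and the use of a plain histogram are illustrative simplifications of the prototype's Hue and Edge filters.

```python
# Hue-based document profile and Canberra distance comparison.
import numpy as np
from skimage.color import rgb2hsv
from scipy.spatial.distance import canberra

def hue_profile(rgb_roi, bins=64):
    """Normalised hue histogram of an RGB region of interest."""
    hue = rgb2hsv(rgb_roi)[..., 0].ravel()
    hist, _ = np.histogram(hue, bins=bins, range=(0.0, 1.0), density=True)
    return hist

def profile_distance(roi_a, roi_b):
    """Canberra distance between two document profiles; smaller = more similar."""
    return canberra(hue_profile(roi_a), hue_profile(roi_b))
```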

Abstract:

Computed tomography (CT) is an imaging technique in which interest has been growing quickly since it began to be used in the 1970s. Today, it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular in follow-up studies which require several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation due to their faster metabolism. In addition, harmful consequences have a higher probability of occurring because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, which were designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in the quality assessment of the images produced using those algorithms. The goal of the present work was to propose a strategy to investigate the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty lies in having a clinically relevant way to estimate image quality. To ensure the choice of pertinent image quality criteria, this work was performed in close collaboration with radiologists. The work began by tackling the way to characterise image quality in musculoskeletal examinations. We focused, in particular, on image noise and spatial resolution behaviour when iterative image reconstruction was used. The analyses of these physical parameters allowed radiologists to adapt their image acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, which is a major concern in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we focused on the use of mathematical model observers. Our experimental parameters determined the type of model to use. Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with the eye of a radiologist, thus taking advantage of their incorporation of human visual system elements. This work confirmed that the use of model observers makes it possible to assess image quality using a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced using this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has also clarified the role of medical physicists in CT imaging: the standard metrics remain important for assessing unit compliance with legal requirements, but the use of model observers is the way to go when optimising imaging protocols.
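As an illustration of the kind of mathematical model observer referred to above, the sketch below computes a channelized Hotelling observer detectability index from signal-present and signal-absent image samples; the difference-of-Gaussians channels and their widths are illustrative assumptions rather than the channels used in this work.

```python
# Channelized Hotelling observer (CHO) detectability from image samples.
import numpy as np

def dog_channels(size, sigmas=(2, 4, 8, 16)):
    """Radially symmetric difference-of-Gaussians channels, one column per channel."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x ** 2 + y ** 2
    chans = []
    for s1, s2 in zip(sigmas[:-1], sigmas[1:]):
        c = np.exp(-r2 / (2 * s1 ** 2)) - np.exp(-r2 / (2 * s2 ** 2))
        chans.append((c / np.linalg.norm(c)).ravel())
    return np.column_stack(chans)                      # shape: (size*size, n_channels)

def cho_detectability(signal_present, signal_absent):
    """signal_present / signal_absent: arrays of shape (n_images, size, size)."""
    size = signal_present.shape[-1]
    U = dog_channels(size)
    vs = signal_present.reshape(len(signal_present), -1) @ U   # channel outputs
    vn = signal_absent.reshape(len(signal_absent), -1) @ U
    dv = vs.mean(axis=0) - vn.mean(axis=0)                     # mean signal in channel space
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    d2 = dv @ np.linalg.solve(S, dv)                           # Hotelling detectability
    return np.sqrt(d2)
```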

Abstract:

Objective: To develop procedures to ensure the consistency of the printing quality of digital images, by means of hardcopy quantitative analysis based on a standard image. Materials and Methods: Characteristics of mammography DI-ML and general purpose DI-HL films were studied through the QC-Test, utilizing different processing techniques in a FujiFilm®-DryPix4000 printer. Software was developed for sensitometric evaluation, generating a digital image that includes a gray scale and a bar pattern to evaluate contrast and spatial resolution. Results: Mammography films showed a maximum optical density of 4.11 and general purpose films, 3.22. The digital image was developed with a 33-step wedge scale and a high-contrast bar pattern (1 to 30 lp/cm) for spatial resolution evaluation. Conclusion: Mammographic films presented higher values for maximum optical density and contrast resolution as compared with general purpose films. The digital processing technique used could only change the image pixel matrix values and did not affect the printing standard. The proposed standard digital image allows greater control of the relationship between pixel values and optical density in the analysis of film quality and printing systems.
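A minimal sketch of how such a standard test image can be generated with NumPy: a 33-step grey-scale wedge stacked with a high-contrast bar pattern. Image dimensions and bar layout are illustrative assumptions; the actual spatial frequencies in lp/cm depend on the printer's pixel pitch.

```python
# Synthetic test image: 33-step wedge plus high-contrast bar pattern.
import numpy as np

def step_wedge(width=990, height=200, steps=33, max_value=4095):
    """Horizontal grey-scale wedge with `steps` equal-width steps (12-bit values)."""
    step_width = width // steps
    levels = np.linspace(0, max_value, steps).astype(np.uint16)
    row = np.repeat(levels, step_width)
    return np.tile(row, (height, 1))

def bar_pattern(width=990, height=200, line_pairs=30, max_value=4095):
    """Vertical black/white bar pattern with `line_pairs` cycles across the width."""
    x = np.arange(width)
    period = width / line_pairs
    bars = ((x % period) < (period / 2)).astype(np.uint16) * max_value
    return np.tile(bars, (height, 1))

test_image = np.vstack([step_wedge(), bar_pattern()])   # stack the two test zones
```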

Abstract:

OBJECTIVE: The aim of this article was to apply psychometric theory to develop and validate a visual grading scale for assessing the visual perception of digital image quality in anteroposterior (AP) pelvis radiography. METHODS: Psychometric theory was used to guide scale development. Seven phantom and seven cadaver images of visually and objectively predetermined quality were used to help assess scale reliability and validity. 151 volunteers scored the phantom images, and 184 volunteers scored the cadaver images. Factor analysis and Cronbach's alpha were used to assess scale validity and reliability. RESULTS: A 24-item scale was produced. Aggregated mean volunteer scores for each image correlated with the rank order of the visually and objectively predetermined image qualities. Scale items had good inter-item correlation (≥0.2) and high factor loadings (≥0.3). Cronbach's alpha (reliability) revealed that the scale has acceptable levels of internal reliability for both phantom and cadaver images (α = 0.8 and 0.9, respectively). Factor analysis suggested that the scale is multidimensional (assessing multiple quality themes). CONCLUSION: This study represents the first full development and validation of a visual image quality scale using psychometric theory. It is likely that this scale will have clinical, training and research applications. ADVANCES IN KNOWLEDGE: This article presents data to create and validate visual grading scales for radiographic examinations. The visual grading scale for AP pelvis examinations can act as a validated tool for future research, teaching and clinical evaluations of image quality.
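For reference, the reliability statistic used above can be computed directly from an observers-by-items score matrix; the sketch below implements the standard Cronbach's alpha formula, and the example data are purely illustrative, not scores from the study.

```python
# Cronbach's alpha from an (observers x items) score matrix.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = observers, columns = scale items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                                  # number of items
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example: 5 observers scoring a 4-item scale (hypothetical data)
demo = np.array([[4, 5, 4, 4],
                 [3, 3, 4, 3],
                 [5, 5, 5, 4],
                 [2, 3, 2, 2],
                 [4, 4, 5, 4]])
print(round(cronbach_alpha(demo), 2))
```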

Abstract:

As opposed to the standard detective quantum efficiency (DQE), the effective DQE (eDQE) is a figure of merit that allows comparing the performance of imaging systems in the presence of scatter rejection devices. The geometry of the EOS™ slot-scanning system is such that the detector is self-collimated and rejects scattered radiation. In this study, the EOS system was characterised using the eDQE in imaging conditions similar to those used in clinical practice: with phantoms of different widths placed in the X-ray beam, for various incident air kerma values and tube voltages corresponding to the phantom thickness. Scatter fractions in EOS images were extremely low, around 2% for all configurations. Maximum eDQE values spanned 9-14.8% for a large range of air kerma at the detector plane, from 0.01 to 1.34 µGy. These figures were obtained with non-optimised EOS settings but still outperformed most of the maximum eDQEs recently assessed for various computed radiography and digital radiography systems with antiscatter grids.
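For context, the standard frequency-dependent DQE is defined as shown below; the eDQE used in this study additionally folds in the phantom transmission, the scatter fraction and the geometric magnification measured at the phantom entrance. This display is the generic definition, not the study's eDQE formula itself.

```latex
% Standard detective quantum efficiency (generic definition, for context only):
% MTF: presampled modulation transfer function; NNPS: normalised noise power
% spectrum; \bar{q}: photon fluence (photons per unit area) incident on the detector.
\[
\mathrm{DQE}(f) \;=\; \frac{\mathrm{MTF}^2(f)}{\bar{q}\;\mathrm{NNPS}(f)}
\]
```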

Abstract:

In order to develop applications for visual interpretation of medical images, the early detection and evaluation of microcalcifications in digital mammograms is very important, since their presence is often associated with a high incidence of breast cancers. Accurate classification into benign and malignant groups would help improve diagnostic sensitivity as well as reduce the number of unnecessary biopsies. The challenge here is the selection of useful features to distinguish benign from malignant microcalcifications. Our purpose in this work is to analyse a microcalcification evaluation method based on a set of shape-based features extracted from the digitised mammogram. The segmentation of the microcalcifications is performed using a fixed-tolerance region growing method to extract the boundaries of calcifications from manually selected seed pixels. Taking into account that the shapes and sizes of clustered microcalcifications have been associated with a high risk of carcinoma based on different subjective measures, such as whether or not the calcifications are irregular, linear, vermiform, branched, rounded or ring-like, our efforts were directed at obtaining a feature set related to shape. The identification of the parameters concerning the malignant character of the microcalcifications was performed on a set of 146 mammograms with their real diagnoses known in advance from biopsies. This allowed identifying the following shape-based parameters as the relevant ones: number of clusters, number of holes, area, Feret elongation, roughness, and elongation. Further experiments on a set of 70 new mammograms showed that the performance of the classification scheme is close to the mean performance of three expert radiologists, which makes it possible to consider the proposed method for assisting the diagnosis and encourages continuing the investigation by adding new features not only related to shape.
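A minimal sketch of extracting shape-based features from a segmented microcalcification mask, assuming scikit-image; "roughness" and "Feret elongation" are approximated with generic surrogates (compactness and the major/minor axis ratio), and the paper's region-growing segmentation and exact feature definitions are not reproduced.

```python
# Shape-feature extraction from a binary microcalcification mask.
import numpy as np
from skimage import measure

def shape_features(binary_mask):
    """binary_mask: 2-D boolean array of segmented microcalcifications."""
    labels = measure.label(binary_mask)
    features = []
    for region in measure.regionprops(labels):
        elongation = (region.major_axis_length / region.minor_axis_length
                      if region.minor_axis_length > 0 else np.inf)
        compactness = region.perimeter ** 2 / (4 * np.pi * region.area)
        features.append({
            "area": region.area,
            "elongation": elongation,             # surrogate for Feret elongation
            "compactness": compactness,           # surrogate for roughness
            "euler_number": region.euler_number,  # related to the number of holes
        })
    return features
```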