Abstract:
Atherosclerosis is a chronic cardiovascular disease that involves the thickening of the artery walls as well as the formation of plaques (lesions) causing the narrowing of the lumens, in vessels such as the aorta, the coronary and the carotid arteries. Magnetic resonance imaging (MRI) is a promising modality for the assessment of atherosclerosis, as it is a non-invasive and patient-friendly procedure that does not use ionizing radiation. MRI offers high soft tissue contrast without the need for intravenous contrast media, while modification of the MR pulse sequences allows further adjustment of the contrast for specific diagnostic needs. As such, MRI can create angiographic images of the vessel lumens to assess stenoses at the late stage of the disease, as well as blood flow-suppressed images for the early investigation of the vessel wall and the characterization of atherosclerotic plaques. However, despite the great technical progress of the past two decades, MRI is an intrinsically low-sensitivity technique, and limitations remain in terms of accuracy and performance. A major challenge for coronary artery imaging is respiratory motion. State-of-the-art diaphragmatic navigators rely on an indirect measure of motion, perform a 1D correction, and have long and unpredictable scan times. In response, self-navigation (SN) strategies have recently been introduced that offer 100% scan efficiency and increased ease of use. SN detects respiratory motion directly from the image data obtained at the level of the heart, and retrospectively corrects the same data before final image reconstruction. Thus, SN holds potential for multi-dimensional motion compensation. In this regard, this thesis presents novel SN methods that estimate 2D and 3D motion parameters from aliased sub-images that are obtained from the same raw data composing the final image.
Combination of all corrected sub-images produces a final image with reduced motion artifacts for the visualization of the coronaries. The first study (section 2.2, 2D Self-Navigation with Compressed Sensing) consists of a method for 2D translational motion compensation. Here, the use of compressed sensing (CS) reconstruction is proposed and investigated to support motion detection by reducing aliasing artifacts. In healthy human subjects, CS demonstrated an improvement in motion detection accuracy in simulations on in vivo data, while improved coronary artery visualization was demonstrated on in vivo free-breathing acquisitions. However, the respiration-induced motion of the heart has been shown to occur in three dimensions and to be more complex than a simple translation. Therefore, the second study (section 2.3, 3D Self-Navigation) consists of a method for 3D affine motion correction rather than 2D only. Here, different techniques were adopted to reduce the background signal contribution in respiratory motion tracking, which can be adversely affected by the static tissue that surrounds the heart. The proposed method was shown to improve conspicuity and visualization of the coronary arteries in healthy subjects and in cardiovascular disease patients in comparison to a conventional 1D SN method. In the third study (section 2.4, 3D Self-Navigation with Compressed Sensing), the same tracking methods were used to obtain sub-images sorted according to respiratory position. Then, instead of motion correction, a compressed sensing reconstruction was performed on all sorted sub-image data. This process exploits the consistency of the sorted data to reduce aliasing artifacts, such that the sub-image corresponding to the end-expiratory phase can directly be used to visualize the coronaries. In a healthy volunteer cohort, this strategy improved conspicuity and visualization of the coronary arteries when compared to a conventional 1D SN method.
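The core self-navigation idea, estimating a respiratory displacement from a projection of the imaged volume and undoing it in k-space via the Fourier shift theorem, can be illustrated with a minimal numpy sketch. This is not the thesis implementation: the 1D Gaussian "tissue profile", the cyclic shift, and all sizes are illustrative assumptions.

```python
import numpy as np

def estimate_shift(reference, projection):
    """Estimate the cyclic displacement (in samples) between a reference
    SI projection and a later one via FFT-based circular cross-correlation."""
    corr = np.fft.ifft(np.fft.fft(projection) * np.conj(np.fft.fft(reference)))
    lag = int(np.argmax(np.abs(corr)))
    n = len(reference)
    return lag if lag <= n // 2 else lag - n  # map to a signed shift

def correct_kspace_line(kline, shift):
    """Undo a translational shift using the Fourier shift theorem: a spatial
    shift corresponds to a linear phase ramp across k-space."""
    nu = np.fft.fftfreq(len(kline))
    return kline * np.exp(2j * np.pi * nu * shift)

# Synthetic demo: a smooth profile displaced by 5 samples, standing in for
# the superior-inferior projection of the heart between two heartbeats.
x = np.arange(128)
ref = np.exp(-((x - 64.0) ** 2) / 50.0)
moved = np.roll(ref, 5)

shift = estimate_shift(ref, moved)
corrected = np.real(np.fft.ifft(correct_kspace_line(np.fft.fft(moved), shift)))
```

Because the correction is applied as a phase ramp, it can be performed directly on the raw k-space data before the final reconstruction, which is what makes retrospective correction possible.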
For the visualization of the vessel wall and atherosclerotic plaques, the state-of-the-art dual inversion recovery (DIR) technique is able to suppress the signal coming from flowing blood and provide positive wall-lumen contrast. However, optimal contrast may be difficult to obtain and is subject to RR-interval variability. Furthermore, DIR imaging is time-inefficient, and multislice acquisitions may lead to prolonged scan times. In response, and as the fourth study of this thesis (chapter 3, Vessel Wall MRI of the Carotid Arteries), a phase-sensitive DIR method has been implemented and tested in the carotid arteries of a healthy volunteer cohort. By exploiting the phase information of images acquired after DIR, the proposed phase-sensitive method enhances wall-lumen contrast while widening the window of opportunity for image acquisition. As a result, a 3-fold increase in volumetric coverage is obtained at no extra cost in scan time, while image quality is improved. In conclusion, this thesis presented novel methods to address some of the main challenges for MRI of atherosclerosis: the suppression of motion and flow artifacts for improved visualization of vessel lumens, walls and plaques. These methods were shown to significantly improve image quality in healthy human subjects, as well as the scan efficiency and ease of use of MRI. Extensive validation is now warranted in patient populations to ascertain their diagnostic performance. Eventually, these methods may bring atherosclerosis MRI closer to clinical practice.
Abstract:
Almost thirty years ago, as the social sciences underwent their 'discursive turn', Bernardo Secchi (1984) drew planners' attention, in what he called the 'urban planning narrative', to the production of myths, turning an activity often seen as primarily technical into one centred around the production of images and ideas. This conception of planning practice gave rise to a powerful current of research in English-speaking countries. Efforts were made both to combine the urban planning narrative with storytelling and to establish storytelling as a prescriptive or descriptive model for planning practice. Thus, just as storytelling is supposed to have led democratic communication off track through a pronounced concern for a good story, storytelling applied to the field of urban production may have led to an increasing preoccupation with the staging and showmanship of projects to the detriment of their real inclusion in political debate. It is this possible transformation of territorial action that will be the focus of the articles collected in this special issue of Articulo - Journal of urban research.
Abstract:
The aim of the study was to determine the quality of the operations of two business processes in a case company. The objects of study were the functionality of the current quality management system and the maturity of process thinking from the perspectives of company management and personnel. Business processes were chosen as the object of examination because they constitute the natural flow of operations in all forms of organization, and because process thinking also underlies the ISO 9000 standard. The theoretical part of the study defines the concepts of quality, the foundations of operational development, and quality management systems. In addition, two studies on quality companies and quality development are reviewed. The empirical part of the study was carried out as a qualitative case study. The quality of operations based on business processes was examined from the perspectives of company management and personnel. In addition, for the sales and production processes, the differences between intended and actual operations were identified. According to the study, the sales and production processes corresponded to actual operations, with minor changes and refinements. The data were collected through a questionnaire and interviews.
Abstract:
This thesis studies the problems a software architect faces in their work, and the reasons behind them. The purpose of the study is to identify potential factors causing problems in system integration and software engineering. Of special interest are the non-technical factors behind different kinds of problems. The thesis was carried out by interviewing professionals who took part in an e-commerce project at a corporation. The interviewed professionals consisted of architects from the technical implementation projects, the leader of the corporation's architect team, various kinds of project managers, and a CRM manager. A specific list of themes was used to guide the interviews. The recorded interviews were transcribed and then classified using the ATLAS.ti software. The basics of e-commerce, software engineering and system integration are also described. The differences between e-commerce, e-business and traditional business are presented, as are the basic types of e-commerce. Regarding software engineering, the software life cycle and the general problems of software engineering and software design are covered. In addition, the general problems of system integration and the special requirements set by e-commerce are described. The thesis ends by describing the problems found in the study, along with areas of software engineering where development could help avoid similar problems in the future.
Abstract:
Monte Carlo simulations were used to generate data for ABAB designs of different lengths. The points of change in phase are randomly determined before gathering behaviour measurements, which allows the use of a randomization test as an analytic technique. Data simulation and analysis can be based either on data-division-specific or on common distributions. Following one method or another affects the results obtained after the randomization test has been applied. Therefore, the goal of the study was to examine these effects in more detail. The discrepancies in these approaches are obvious when data with zero treatment effect are considered and such approaches have implications for statistical power studies. Data-division-specific distributions provide more detailed information about the performance of the statistical technique.
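A Monte Carlo randomization test of the kind described, with random admissible phase-change points, a mean-difference statistic, and a p-value taken from the resampling distribution, can be sketched as follows. This is a generic illustration, not the study's code: the series length, effect size, and minimum phase length are assumed values.

```python
import numpy as np

rng = np.random.default_rng(42)

def abab_labels(n, change_points):
    """A/B phase labels for an ABAB design given three ordered change points."""
    c1, c2, c3 = change_points
    labels = np.zeros(n, dtype=int)
    labels[c1:c2] = 1  # first B phase
    labels[c3:] = 1    # second B phase
    return labels

def random_change_points(n, min_len):
    """Draw admissible change points so every phase keeps >= min_len points."""
    while True:
        cps = np.sort(rng.choice(np.arange(min_len, n - min_len + 1),
                                 size=3, replace=False))
        if np.all(np.diff(cps) >= min_len):
            return cps

def randomization_test(data, change_points, n_resamples=999, min_len=3):
    """Compare the observed |mean(B) - mean(A)| with its distribution under
    randomly drawn admissible change points."""
    n = len(data)
    def stat(cps):
        lab = abab_labels(n, cps)
        return abs(data[lab == 1].mean() - data[lab == 0].mean())
    observed = stat(change_points)
    hits = sum(stat(random_change_points(n, min_len)) >= observed
               for _ in range(n_resamples))
    return (hits + 1) / (n_resamples + 1)

# Demo: 40 observations with a simulated treatment effect in the B phases.
true_cps = (10, 20, 30)
data = rng.normal(0.0, 1.0, 40)
data[abab_labels(40, true_cps) == 1] += 2.0
p_value = randomization_test(data, true_cps)
```

The distinction the abstract draws (data-division-specific versus common distributions) corresponds to whether each phase is simulated from its own distribution or all phases from one shared distribution before this test is applied.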
Abstract:
An important aspect of immune monitoring for vaccine development, clinical trials, and research is the detection, measurement, and comparison of antigen-specific T-cells from subject samples under different conditions. Antigen-specific T-cells compose a very small fraction of total T-cells. Developments in cytometry technology over the past five years have enabled the measurement of single cells in a multivariate and high-throughput manner. This growth in both dimensionality and quantity of data continues to pose a challenge for effective identification and visualization of rare cell subsets, such as antigen-specific T-cells. Dimension reduction and feature extraction play a pivotal role in both identifying and visualizing cell populations of interest in large, multi-dimensional cytometry datasets. However, the automated identification and visualization of rare, high-dimensional cell subsets remains challenging. Here we demonstrate how a systematic and integrated approach combining targeted feature extraction with dimension reduction can be used to identify and visualize biological differences in rare, antigen-specific cell populations. By using OpenCyto to perform semi-automated gating and feature extraction of flow cytometry data, followed by dimensionality reduction with t-SNE, we are able to identify polyfunctional subpopulations of antigen-specific T-cells and visualize treatment-specific differences between them.
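The downstream step, embedding extracted per-cell features with t-SNE so that a rare subset becomes visible, can be sketched with scikit-learn on synthetic data. OpenCyto itself is an R/Bioconductor package, so this Python sketch stands in only for the dimension-reduction stage; the marker counts, population sizes, and offsets are invented.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(7)

# Synthetic stand-in for per-cell features extracted after gating: a large
# bulk T-cell population plus a small "antigen-specific" subset that is
# shifted in three of eight marker dimensions.
bulk = rng.normal(0.0, 1.0, size=(480, 8))
rare = rng.normal(0.0, 1.0, size=(20, 8))
rare[:, :3] += 4.0

features = np.vstack([bulk, rare])
labels = np.array([0] * 480 + [1] * 20)  # ground truth, for plotting only

# Embed into 2D; the rare subset should appear as a separate island.
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(features)
```

In practice the embedding would be coloured by the gating labels or marker intensities to check whether the rare population separates from the bulk.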
Abstract:
The creation of the European Higher Education Area has meant a number of significant changes to the educational structures of the university community. In particular, the new system of European credits has generated the need for innovation in the design of curricula and teaching methods. In this paper, we propose debating as a classroom tool that can help fulfill these objectives by promoting an active student role in learning. To demonstrate the potential of this tool, a classroom experiment was conducted in a bachelor's degree course in Industrial Economics (Regulation and Competition), involving a case study in competition policy and incorporating the techniques of a conventional debate (presentation of standpoints, turns, right to reply and summing up). The experiment yielded gains in student attainment and positive assessments of the subject. In conclusion, the incorporation of debating activities helps students to acquire the skills, be they general or specific, required to graduate successfully in Economics.
Abstract:
This study extends the standard econometric treatment of appellate court outcomes by 1) considering the role of decision-maker effort and case complexity, and 2) adopting a multi-categorical selection process of appealed cases. We find evidence of appellate courts being affected by both the effort made by first-stage decision makers and case complexity. This illustrates the value of widening the narrowly defined focus on heterogeneity in individual-specific preferences that characterises many applied studies on legal decision-making. Further, the majority of appealed cases represent non-random sub-samples and the multi-categorical selection process appears to offer advantages over the more commonly used dichotomous selection models.
Abstract:
In recent years, technological advances have allowed manufacturers to implement dual-energy computed tomography (DECT) on clinical scanners. With its unique ability to differentiate basis materials by their atomic number, DECT has opened new perspectives in imaging. DECT has been used successfully in musculoskeletal imaging with applications ranging from detection, characterization, and quantification of crystal and iron deposits, to simulation of noncalcium (improving the visualization of bone marrow lesions) or noniodine images. Furthermore, the data acquired with DECT can be postprocessed to generate monoenergetic images of varying kiloelectron volts, providing new methods for image contrast optimization as well as metal artifact reduction. The first part of this article reviews the basic principles and technical aspects of DECT including radiation dose considerations. The second part focuses on applications of DECT to musculoskeletal imaging including gout and other crystal-induced arthropathies, virtual noncalcium images for the study of bone marrow lesions, the study of collagenous structures, applications in computed tomography arthrography, as well as the detection of hemosiderin and metal particles.
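The material decomposition and virtual monoenergetic synthesis mentioned above reduce, in the image domain, to a per-pixel linear problem. The sketch below only shows the algebra: the attenuation values are invented placeholders, not physical constants, and clinical implementations work with calibrated spectra and more elaborate models.

```python
import numpy as np

# Hypothetical effective attenuation values (arbitrary units) for two basis
# materials under the low- and high-kVp spectra and at a 70 keV target energy.
MU = {
    "water":  {"low": 0.28, "high": 0.20, "70keV": 0.19},
    "iodine": {"low": 4.90, "high": 2.10, "70keV": 1.80},
}

def decompose(img_low, img_high):
    """Image-domain two-material decomposition: per pixel, solve the 2x2
    linear system mapping (water, iodine) densities to measured attenuations."""
    A = np.array([[MU["water"]["low"],  MU["iodine"]["low"]],
                  [MU["water"]["high"], MU["iodine"]["high"]]])
    measured = np.stack([img_low.ravel(), img_high.ravel()])  # shape (2, N)
    densities = np.linalg.solve(A, measured)
    return densities.reshape(2, *img_low.shape)

def monoenergetic(densities, energy="70keV"):
    """Synthesize a virtual monoenergetic image from the material maps."""
    return (densities[0] * MU["water"][energy]
            + densities[1] * MU["iodine"][energy])

# Round trip on a toy 2x2 image: water everywhere, iodine in two pixels.
d_true = np.array([[[1.0, 1.0], [1.0, 1.0]],    # water density map
                   [[0.0, 0.1], [0.0, 0.2]]])   # iodine density map
img_low = monoenergetic(d_true, "low")
img_high = monoenergetic(d_true, "high")
d_est = decompose(img_low, img_high)
vmi_70 = monoenergetic(d_est, "70keV")
```

Varying the target energy in the last step is what produces the "monoenergetic images of varying kiloelectron volts" used for contrast optimization and metal artifact reduction.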
Abstract:
Craving is considered the main variable associated with relapse after smoking cessation. Cue Exposure Therapy (CET) consists of controlled and repeated exposure to drug-related cues with the aim of extinguishing craving responses. Some virtual reality (VR) environments, such as virtual bars or parties, have previously shown their efficacy as tools for eliciting smoking craving. However, in order to adapt this technology to smoking cessation interventions, there is a need for more diverse environments that enhance the probability of generalization of extinction in real life. The main objective of this study was to identify frequent situations that produce smoking craving, as well as to detect specific craving cues in those contexts. Participants were 154 smokers who responded to an ad hoc self-administered inventory assessing craving level in 12 different situations. Results showed that having a drink in a bar/pub at night, after lunch/dinner in a restaurant, and having a coffee in a café or after lunch/dinner at home were reported as the most craving-inducing scenarios. Some differences were found with regard to participants' gender, age, and number of cigarettes smoked per day. Females, younger people, and heavier smokers reported higher levels of craving in most situations. In general, the most widely cited specific cues across the contexts were people smoking, having a coffee, being with friends, and having finished eating. These results are discussed with a view to their consideration in the design of valid and reliable VR environments that could be used in the treatment of nicotine addicts who wish to give up smoking.
Abstract:
OBJECTIVE: To evaluate lung fissure completeness, post-treatment radiological response, and quantitative CT analysis (QCTA) in a population of severe emphysema patients submitted to endobronchial valve (EBV) implantation. MATERIALS AND METHODS: Multidetector CT exams of 29 patients were studied, using a thin-section, low-dose protocol without contrast. Two radiologists retrospectively reviewed all images in consensus; fissure completeness was estimated in 5% increments, and the post-EBV radiological response (target lobe atelectasis/volume loss) was evaluated. QCTA was performed on pre- and post-treatment scans using fully automated software. RESULTS: A CT response was present in 16/29 patients. In the negative CT response group, all 13 patients presented incomplete fissures, and mean oblique fissure completeness was 72.8%, against 88.3% in the other group. The most significant QCTA results showed a reduced post-treatment total lung volume (mean 542 ml), reduced EBV-treated lung volume (700 ml), and reduced emphysema volume (331.4 ml) in the positive response group, which also showed improved functional tests. CONCLUSION: EBV benefit is most likely in patients who have complete interlobar fissures and develop lobar atelectasis. In patients with no radiological response, we observed a higher prevalence of incomplete fissures and a greater degree of incompleteness. The fully automated QCTA detected the post-treatment alterations, especially in the analysis of the treated lung.
Abstract:
Background: Pulseless electrical activity (PEA) cardiac arrest is defined as a cardiac arrest (CA) presenting with residual organized electrical activity on the electrocardiogram. In the last decades, the incidence of PEA has steadily increased compared to other types of CA such as ventricular fibrillation or pulseless ventricular tachycardia. PEA is frequently induced by reversible conditions. The "4 (or 5) H" and "4 (or 5) T" mnemonic is proposed to assess for Hypoxia, Hypovolemia, Hypo-/Hyperkalaemia, Hypothermia, Thrombosis (cardiac or pulmonary), cardiac Tamponade, Toxins, and Tension pneumothorax. Other pathologies (intracranial haemorrhage, severe sepsis, myocardial contraction dysfunction) have been identified as potential causes of PEA, but their respective probabilities and frequencies are unclear and they are not yet included in the resuscitation guidelines. The aim of this study was to analyse the aetiologies of out-of-hospital PEA cardiac arrest, in order to evaluate the relative frequency of each cause and thereby improve the management of patients suffering a PEA cardiac arrest. Method: This retrospective study was based on data routinely and prospectively collected for each PEMS intervention. All adult patients treated from January 1st 2002 to December 31st 2012 by the PEMS for out-of-hospital cardiac arrest (OHCA), with PEA as the first recorded rhythm, and admitted to the emergency department (ED) of the Lausanne University Hospital were included. The aetiologies of PEA cardiac arrest were classified into subgroups based on the classical H&T classification, supplemented by four additional subgroups: trauma, intracranial haemorrhage (ICH), non-ischaemic cardiomyopathy (NIC), and undetermined cause. Results: 1866 OHCA were treated by the PEMS. PEA was the first recorded rhythm in 240 adult patients (13.8%). After exclusion of 96 patients, 144 patients with a PEA cardiac arrest admitted to the ED were included in the analysis.
The mean age was 63.8 ± 20.0 years, 58.3% were men, and the survival rate at 48 hours was 29%. 32 different causes of OHCA with PEA were established for 119 patients. For 25 patients (17.4%), we were unable to attribute a specific cause to the PEA cardiac arrest. Hypoxia (23.6%), acute coronary syndrome (12.5%) and trauma (12.5%) were the three most frequent causes. Pulmonary embolism, hypovolemia, intoxication and hyperkalaemia occurred in less than 10% of cases (7.6%, 5.6%, 3.5% and 2.1%, respectively). Non-ischaemic cardiomyopathy and intracranial haemorrhage occurred in 8.3% and 6.9%, respectively. Conclusions: According to our results, intracranial haemorrhage and non-ischaemic cardiomyopathy represent noticeable causes of PEA in OHCA, with a prevalence equalling or exceeding the frequency of the classical 4 H's and 4 T's aetiologies. These two pathologies are potentially accessible to simple diagnostic procedures (native CT scan or echocardiography) and should be included in the 4 H's and 4 T's mnemonic.
Abstract:
Landslide processes can have direct and indirect consequences affecting human lives and activities. In order to improve landslide risk management procedures, this PhD thesis investigates the capabilities of active LiDAR and RaDAR sensors for landslide detection and characterization at regional scales, spatial risk assessment over large areas, and slope instability monitoring and modelling at site-specific scales. At regional scales, we first demonstrated the capabilities of recent boat-based mobile LiDAR to model the topography of the Normandy coastal cliffs. By comparing annual acquisitions, we also validated our approach to detect surface changes and thus map the rock collapses, landslides and toe erosion affecting the shoreline at a county scale. Then, we applied a spaceborne InSAR approach to detect large slope instabilities in Argentina. Based on both the phase and amplitude of RaDAR signals, we extracted decisive information to detect, characterize and monitor two previously unknown, extremely slow landslides, and to quantify the water level variations of a nearby dam reservoir. Finally, advanced investigations on fragmental rockfall risk assessment were conducted along roads of the Val de Bagnes, by improving the Slope Angle Distribution approach and the FlowR software. Both rock-mass-failure susceptibilities and relative frequencies of block propagation were assessed, and rockfall hazard and risk maps could be established at the valley scale. At site-specific scales, in the Swiss Alps, we first integrated ground-based InSAR and terrestrial LiDAR acquisitions to map, monitor and model the Perraire rock slope deformation. By interpreting the two methods both individually and in a novel integrated way, we delimited the rockslide borders, computed volumes, and highlighted non-uniform translational displacements along a wedge failure surface.
Finally, we studied the specific requirements and practical issues encountered in the early warning systems of some of the most studied landslides worldwide. As a result, we highlighted valuable key recommendations for designing new, reliable systems; in addition, we underlined conceptual issues that must be solved to improve current procedures. To sum up, the diversity of situations investigated provided extensive experience that revealed the potential and limitations of both methods, and also highlighted the necessity of their complementary and integrated use.
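Surface-change detection from repeated LiDAR acquisitions commonly reduces to a DEM of difference: subtract co-registered elevation grids and keep only changes above the noise level. The following is a minimal sketch of that idea, not the thesis code; the grid values, noise threshold, and cell size are assumed.

```python
import numpy as np

def dem_of_difference(dem_before, dem_after, noise_threshold=0.5):
    """Subtract two co-registered elevation grids and zero out differences
    below the assumed sensor noise level (in metres)."""
    dod = dem_after - dem_before
    dod[np.abs(dod) < noise_threshold] = 0.0
    return dod

# Toy example: a 5x5 cliff-face grid (1 m cells) where one cell loses
# 1.9 m of material; a uniform 0.1 m bias mimics sub-threshold noise.
before = np.full((5, 5), 100.0)
after = before + 0.1
after[2, 2] -= 2.0

cell_area = 1.0  # m^2 per grid cell (assumed)
dod = dem_of_difference(before, after)
eroded_volume = -dod[dod < 0.0].sum() * cell_area  # m^3 lost to collapse
```

Negative cells of the difference grid map erosion (rock collapses, toe erosion), positive cells map deposition; summing them per cluster yields per-event volume estimates.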
Abstract:
Laser scanning is becoming an increasingly popular method for measuring 3D objects in industrial design. Laser scanners produce a cloud of 3D points. For CAD software to be able to use such data, however, this point cloud needs to be turned into a vector format. A popular way to do this is to triangulate the assumed surface of the point cloud using alpha shapes. Alpha shapes start from the convex hull of the point cloud and gradually refine it towards the true surface of the object. Often it is nontrivial to decide when to stop this refinement. One criterion for this is to do so when the homology of the object stops changing. This is known as the persistent homology of the object. The goal of this thesis is to develop a way to compute the homology of a given point cloud when processed with alpha shapes, and to infer from it when the persistent homology has been achieved. Practically, the computation of such a characteristic of the target might be applied to power line tower span analysis.
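A minimal 2D sketch of this pipeline: build a Delaunay triangulation, apply an alpha-shape-style filter (keep a triangle when its circumradius is at most alpha), and track the number of connected components (Betti number b0) as alpha grows, which is the quantity that "stops changing" once the relevant homology persists. This uses scipy and a simplified alpha complex (triangles only, not the full filtration), so it is an illustration of the idea rather than a production implementation.

```python
import numpy as np
from scipy.spatial import Delaunay

def circumradius(p):
    """Circumradius of a 2D triangle given as a 3x2 array of vertices."""
    a = np.linalg.norm(p[1] - p[0])
    b = np.linalg.norm(p[2] - p[1])
    c = np.linalg.norm(p[0] - p[2])
    area = 0.5 * abs((p[1, 0] - p[0, 0]) * (p[2, 1] - p[0, 1])
                     - (p[1, 1] - p[0, 1]) * (p[2, 0] - p[0, 0]))
    return a * b * c / (4.0 * area) if area > 1e-12 else np.inf

def betti0(points, alpha):
    """Connected components (b0) of a simplified alpha complex: keep the
    Delaunay triangles with circumradius <= alpha and merge their vertices
    with union-find."""
    tri = Delaunay(points)
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for simplex in tri.simplices:
        if circumradius(points[simplex]) <= alpha:
            root = find(simplex[0])
            for v in simplex[1:]:
                parent[find(v)] = root
    return len({find(i) for i in range(len(points))})

# Two well-separated 5-point clusters (a unit square plus its centre).
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])
points = np.vstack([square, square + [10.0, 0.0]])

# b0 as alpha grows: isolated points -> two clusters -> one component.
b_small = betti0(points, 0.3)
b_mid = betti0(points, 0.6)
b_large = betti0(points, 100.0)
```

Sweeping alpha and recording where b0 (and, in a fuller treatment, b1 for holes) stabilizes gives the persistence-based stopping criterion described above.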