907 results for computer-based diagnostics
Abstract:
In order to develop applications for visual interpretation of medical images, the early detection and evaluation of microcalcifications in digital mammograms is very important, since their presence is often associated with a high incidence of breast cancer. Accurate classification into benign and malignant groups would help improve diagnostic sensitivity as well as reduce the number of unnecessary biopsies. The challenge here is the selection of features useful for distinguishing benign from malignant microcalcifications. Our purpose in this work is to analyse a microcalcification evaluation method based on a set of shape-based features extracted from the digitised mammogram. The segmentation of the microcalcifications is performed using a fixed-tolerance region growing method that extracts the boundaries of calcifications from manually selected seed pixels. Taking into account that the shapes and sizes of clustered microcalcifications have been associated with a high risk of carcinoma based on different subjective measures, such as whether or not the calcifications are irregular, linear, vermiform, branched, rounded or ring-like, our efforts were addressed to obtaining a feature set related to shape. The identification of the parameters concerning the malignant character of the microcalcifications was performed on a set of 146 mammograms whose real diagnoses were known in advance from biopsies. This allowed the following shape-based parameters to be identified as the relevant ones: Number of clusters, Number of holes, Area, Feret elongation, Roughness, and Elongation. Further experiments on a set of 70 new mammograms showed that the performance of the classification scheme is close to the mean performance of three expert radiologists, which allows the proposed method to be considered for assisting diagnosis and encourages continuing the investigation by adding new features not only related to shape.
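The abstract leaves the fixed-tolerance region growing step unspecified; the following is a minimal sketch of one common formulation, assuming the tolerance is measured against the seed intensity and 4-connectivity is used (the function name and both choices are illustrative, not from the source):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tolerance):
    """Fixed-tolerance region growing: starting from a manually selected
    seed pixel, absorb 4-connected neighbours whose intensity stays within
    `tolerance` of the seed intensity."""
    h, w = image.shape
    seed_val = float(image[seed])
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                if abs(float(image[ny, nx]) - seed_val) <= tolerance:
                    region[ny, nx] = True
                    queue.append((ny, nx))
    return region  # boolean mask; its boundary outlines the calcification
```

Each extracted region mask would then feed the computation of the shape-based features (Area, Feret elongation, Roughness, and so on).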
A new approach to segmentation based on fusing circumscribed contours, region growing and clustering
Abstract:
One of the major problems in machine vision is the segmentation of images of natural scenes. This paper presents a new proposal for the image segmentation problem based on the integration of edge and region information. The main contours of the scene are detected and used to guide the subsequent region growing process. The algorithm places a number of seeds at both sides of a contour, allowing a set of concurrent growing processes to be started. A prior analysis of the seeds permits the homogeneity criterion to be adjusted to each region's characteristics. A new homogeneity criterion based on clustering analysis and convex hull construction is proposed.
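As a rough illustration of the seed-placement idea, the sketch below drops one seed on each side of a detected contour along the local normal; the offset distance and the normal estimation are assumptions for illustration, not details from the paper:

```python
import numpy as np

def place_seed_pairs(contour, offset=2.0):
    """For each interior contour point, place one seed on each side of the
    contour along the local normal; each seed later starts its own
    region-growing process."""
    seeds = []
    for i in range(1, len(contour) - 1):
        # tangent from neighbouring contour points, normal by 90-degree rotation
        t = contour[i + 1] - contour[i - 1]
        n = np.array([-t[1], t[0]], dtype=float)
        norm = np.linalg.norm(n)
        if norm == 0:
            continue
        n /= norm
        seeds.append(contour[i] + offset * n)   # one side of the contour
        seeds.append(contour[i] - offset * n)   # the other side
    return np.array(seeds)
```

Each seed would then start a concurrent growing process, with its homogeneity criterion tuned from a prior analysis of the pixels around that seed.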
Abstract:
Aging is associated with common conditions, including cancer, diabetes, cardiovascular disease, and Alzheimer's disease. The type of multi-targeted pharmacological approach necessary to address a complex multifaceted disease such as aging might take advantage of pleiotropic natural polyphenols affecting a wide variety of biological processes. We have recently postulated that the secoiridoids oleuropein aglycone (OA) and decarboxymethyl oleuropein aglycone (DOA), two complex polyphenols present in health-promoting extra virgin olive oil (EVOO), might constitute a new family of plant-produced gerosuppressant agents. This paper describes an analysis of the biological activity spectra (BAS) of OA and DOA using PASS (Prediction of Activity Spectra for Substances) software. PASS can predict thousands of biological activities, as the BAS of a compound is an intrinsic property that is largely dependent on the compound's structure and reflects pharmacological effects, physiological and biochemical mechanisms of action, and specific toxicities. Using Pharmaexpert, a tool that analyzes the PASS-predicted BAS of substances based on thousands of "mechanism-effect" and "effect-mechanism" relationships, we illuminate hypothesis-generating pharmacological effects, mechanisms of action, and targets that might underlie the anti-aging/anti-cancer activities of the gerosuppressant EVOO oleuropeins.
Abstract:
This thesis concentrates on developing a practical local approach methodology, based on micromechanical models, for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (α = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion, and Thomason's plastic limit load failure criterion. Significant differences in the ductility predicted by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. By using void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted with the present methodology.
This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems involving non-homogeneous materials. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
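For reference, the dilational Gurson-Tvergaard yield function that the integration algorithms operate on is commonly written in the literature as

$$\Phi = \frac{q^2}{\bar{\sigma}^2} + 2 q_1 f \cosh\!\left(\frac{3 q_2 \sigma_m}{2 \bar{\sigma}}\right) - \left(1 + q_3 f^2\right) = 0,$$

where $q$ is the macroscopic von Mises stress, $\sigma_m$ the mean (hydrostatic) stress, $\bar{\sigma}$ the matrix flow stress, $f$ the void volume fraction, and $q_1$, $q_2$, $q_3$ Tvergaard's fitting parameters (often with $q_3 = q_1^2$). The hydrostatic/deviatoric stress split used in the consistent tangent moduli mirrors the two stress measures entering this function.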
Abstract:
We've developed a new ambient occlusion technique based on an information-theoretic framework. Essentially, our method computes a weighted visibility from each object polygon to all viewpoints; we then use these visibility values to obtain the information associated with each polygon. So, just as a viewpoint has information about the model's polygons, the polygons gather information on the viewpoints. We therefore have two measures associated with an information channel defined by the set of viewpoints as input and the object's polygons as output, or vice versa. From this polygonal information, we obtain an occlusion map that serves the same role as a classic ambient occlusion map. Our approach also offers additional applications, including an importance-based viewpoint-selection guide and a means of enhancing object features and producing nonphotorealistic object visualizations.
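A minimal sketch of the viewpoint-polygon channel follows, assuming visibility is supplied as a matrix of projected-area values per viewpoint; the uniform viewpoint prior and the normalizations are assumptions for illustration, not the paper's exact definitions:

```python
import numpy as np

def polygonal_information(vis):
    """vis[v, p]: visibility (e.g. projected area) of polygon p from viewpoint v;
    assumes every viewpoint sees at least one polygon. Builds the channel
    viewpoints -> polygons and returns each polygon's contribution to the
    mutual information, usable as a per-polygon occlusion value."""
    p_v = np.full(vis.shape[0], 1.0 / vis.shape[0])      # uniform viewpoint prior
    p_z_given_v = vis / vis.sum(axis=1, keepdims=True)   # channel p(z|v)
    p_z = p_v @ p_z_given_v                              # output marginal p(z)
    # per-polygon information: sum over v of p(v) p(z|v) log2(p(z|v) / p(z))
    ratio = np.where(p_z_given_v > 0, p_z_given_v / p_z, 1.0)
    contrib = p_v[:, None] * p_z_given_v * np.log2(ratio)
    return contrib.sum(axis=0)
```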
Abstract:
Across Latin America, 420 indigenous languages are spoken. Spanish is considered a second language in indigenous communities and is progressively introduced in education. However, most of the tools that support the teaching of a second language have been developed for the most common languages, such as English, French, German, Italian, etc. As a result, only a small number of learning objects and authoring tools have been developed for indigenous people, considering the specific needs of their population. This paper introduces Multilingual–Tiny, a web authoring tool to support the virtual experience of indigenous students and teachers when they create learning objects in indigenous languages or in Spanish, in particular when they have to deal with the grammatical structures of Spanish. Multilingual–Tiny has a module based on the Case-based Reasoning technique that provides recommendations in real time when teachers and students write texts in Spanish. An experiment was performed to compare several local similarity functions for retrieving cases from the case library, taking the grammatical structures into account. As a result, we identified the similarity function with the best performance.
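A minimal sketch of the retrieval step, with a hypothetical local similarity function over grammatical feature tags (the real functions in the paper operate on Spanish grammatical structures; the tag format here is invented for illustration):

```python
def local_similarity(a, b):
    """Local similarity on one grammatical feature tag: 1.0 for an exact
    match, 0.5 for a match within the same coarse category, else 0.0."""
    if a == b:
        return 1.0
    if a.split(":")[0] == b.split(":")[0]:   # e.g. "VERB:past" vs "VERB:present"
        return 0.5
    return 0.0

def retrieve(query, case_library, k=3):
    """Global similarity as the average of local similarities over aligned
    grammatical features; returns the k most similar cases."""
    def global_sim(case):
        pairs = list(zip(query, case["features"]))
        return sum(local_similarity(a, b) for a, b in pairs) / max(len(pairs), 1)
    return sorted(case_library, key=global_sim, reverse=True)[:k]
```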
Abstract:
Since the introduction of antibiotic agents, the amount and prevalence of beta-lactam-resistant enterobacteria have become an increasing problem. Many enterobacteria are opportunistic pathogens that easily acquire resistance mechanisms and genes, which makes the situation menacing. These bacteria have acquired resistance and can hydrolyse extended-spectrum cephalosporins and penicillins by producing enzymes called extended-spectrum beta-lactamases (ESBLs). ESBL-producing bacteria are most commonly found in the gastrointestinal tract of colonised patients. These resistant strains can be found in both health-care-associated and community-acquired isolates. The detection and treatment of infections caused by ESBL-producing bacteria are problematic. This study investigated the genetic basis of extended-spectrum beta-lactamases in Enterobacteriaceae, especially in Escherichia coli and Klebsiella pneumoniae isolates. A total of 994 Finnish Enterobacteriaceae strains, collected at 26 hospital laboratories in 2000 and 2007, were analysed. For the genetic studies, PCR, sequencing and pyrosequencing methods were optimised. In addition, the international standard methods, agar dilution and disk diffusion, were performed for the resistance studies, and the susceptibility of the strains was tested against antimicrobial agents used for treating patients. The genetic analysis showed that blaCTX-M was the most prevalent gene among the E. coli isolates, while blaSHV-12 was the most common beta-lactamase gene in K. pneumoniae. The susceptibility testing showed that about 60% of the strains were multidrug-resistant. The prevalence of ESBL-producing isolates in Finland has been increasing since 2000. However, the situation in Finland is still much better than in many other European countries.
Abstract:
International energy and climate strategies also set Finland's commitments for increasing the use of renewable energy sources and reducing greenhouse gas emissions. The target can be achieved, for example, by increasing the use of energy wood. Finland's forest biomass potential is significant compared with current use; increased use will, however, change forest management and wood harvesting methods. The thesis examined the potential for integrated pulp and paper mills to increase bioenergy production. The effects of two bioenergy production technologies on the carbon footprint of an integrated LWC mill were studied at mill level and with a cradle-to-customer approach. The LignoBoost process and FT diesel production were chosen as the bioenergy cases. The data for the LignoBoost process were obtained from Metso and for the FT diesel process from Neste Oil. The rest of the information is based on the literature and on the databases of the KCL-ECO life-cycle computer program and Ecoinvent. In both case studies, the carbon footprint was reduced. From the results, it can be concluded that a fossil-fuel-free pulp mill is achievable with the LignoBoost process. By using steam from the FT diesel process, the amount of auxiliary fuel can be reduced considerably and the bark boiler can be replaced. Through the choice of auxiliary fuels used for heat production at the paper mill and of the production methods for purchased electricity, the carbon footprints can be reduced even further in both cases.
Abstract:
Studies on 68Ga-Based Agents for PET Imaging of Cancer and Inflammation
Positron emission tomography (PET) is based on the use of radiolabeled agents and facilitates in vivo imaging of biological processes, such as cancer. Because the detection of cancer is demanding and is often obscured by inflammation, there is a demand for better PET imaging agents. The aim was to preliminarily evaluate new PET agents for imaging cancer and inflammation using experimental models. 68Ga-chloride and peptides targeting matrix metalloproteinase-9 (MMP-9), labeled with 68Ga through 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA), were tested for tumor imaging. In addition, a 68Ga-DOTA-conjugated peptide targeting vascular adhesion protein-1 (VAP-1) was tested for inflammation imaging. The 68Ga-based imaging agents described here showed potential by passing the essential in vitro tests, proceeding to preclinical in vivo evaluation, and being able to visualize the target. The target uptake and target-to-background ratios of the 68Ga-based agents were, however, not optimal. 68Ga-chloride showed slow clearance caused by its binding to blood transferrin. In the case of the 68Ga-DOTA-peptides, low in vivo stability and/or low lipophilicity led to overly rapid blood clearance and urinary excretion. The properties of 68Ga-labeled peptides are modifiable, as shown with the matrix metalloproteinase-9 targeting ligands. In conclusion, 68Ga-based agents for PET imaging of cancer and inflammation could be applied in drug development, earlier diagnostics, and follow-up of the efficacy of therapies.
Abstract:
In this thesis, computer software for defining the geometry of a centrifugal compressor impeller is designed and implemented. The project was done under the supervision of the Laboratory of Fluid Dynamics at Lappeenranta University of Technology. This thesis is similar to the thesis written by Tomi Putus (2009), in which a centrifugal compressor impeller flow channel is researched and commonly used design practices are reviewed. Putus wrote computer software that can be used to define the impeller's three-dimensional geometry based on the basic geometrical dimensions given by a preliminary design. The software designed in this thesis is broadly similar, but it uses a different programming language (C++) and a different way of defining the shape of the impeller's meridional projection.
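The abstract does not say how the meridional projection is defined; one common parameterization, sketched here purely as an assumption, describes the hub and shroud contours in the axial-radial (z, r) plane with quadratic Bezier curves built from the preliminary-design dimensions:

```python
import numpy as np

def bezier(p0, p1, p2, n=50):
    """Quadratic Bezier curve through control points in the (z, r) plane."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def meridional_projection(r1_hub, r1_shroud, r2, b2, axial_len):
    """Hub and shroud contours from basic dimensions: inlet hub/shroud radii,
    outlet radius r2, outlet width b2, and axial length. The control-point
    placement is an illustrative assumption, not the thesis's method."""
    hub = bezier(np.array([0.0, r1_hub]),
                 np.array([axial_len, r1_hub]),
                 np.array([axial_len, r2]))
    shroud = bezier(np.array([0.0, r1_shroud]),
                    np.array([axial_len - b2, r1_shroud]),
                    np.array([axial_len - b2, r2]))
    return hub, shroud
```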
Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure and then incrementally extends the program in steps of adding code and proving, after each addition, that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover. Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be detected efficiently. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that were not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem, with the aid of the tool, into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula.
Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early and practically oriented programming courses. Our hypothesis is that verification could be introduced early in the CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
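To illustrate the style of reasoning the tool mechanizes, here is a hypothetical, minimal example of the invariant-first workflow in executable form: the invariant is stated before the loop body is written, and every extension of the code is checked against it (runtime asserts stand in for the verification conditions that Socos discharges with PVS/Yices):

```python
def sum_upto(n: int) -> int:
    """Computes 0 + 1 + ... + n, keeping the invariant
    total == i * (i - 1) // 2 at the top of every iteration."""
    assert n >= 0                              # precondition
    i, total = 0, 0
    while i <= n:
        assert total == i * (i - 1) // 2       # invariant holds on entry
        total += i
        i += 1
        assert total == i * (i - 1) // 2       # ...and is re-established
    assert total == n * (n + 1) // 2           # postcondition
    return total
```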
Abstract:
Prostate-specific antigen (PSA) is a marker that is commonly used in estimating prostate cancer risk. Prostate cancer is usually a slowly progressing disease, which might not cause any symptoms whatsoever. Nevertheless, some cases of cancer are aggressive and need to be treated before they become life-threatening. However, the blood PSA concentration may also rise in benign prostate diseases, and using a single total PSA (tPSA) measurement to guide the decision on further examinations leads to many unnecessary biopsies, over-detection, and overtreatment of indolent cancers that would not require treatment. Therefore, there is a need for markers that would better separate cancer from benign disorders and would also predict cancer aggressiveness. The aim of this study was to evaluate whether intact and nicked forms of free PSA (fPSA-I and fPSA-N) or human kallikrein-related peptidase 2 (hK2) could serve as new tools in estimating prostate cancer risk. First, the immunoassays for fPSA-I and for free and total hK2 were optimized so that they would be less prone to assay interference caused by interfering factors present in some blood samples. The optimized assays were shown to work well and were used to study the marker concentrations in the clinical sample panels. The marker levels were measured from preoperative blood samples of prostate cancer patients scheduled for radical prostatectomy. The association of the markers with cancer stage and grade was studied. It was found that, among all tested markers and their combinations, especially the ratio of fPSA-N to tPSA and the ratio of free PSA (fPSA) to tPSA were associated with both cancer stage and grade. They might be useful in predicting cancer aggressiveness, but further follow-up studies are necessary to fully evaluate the significance of the markers in this clinical setting. The markers tPSA, fPSA, fPSA-I and hK2 were combined in a statistical model which was previously shown to be able to reduce unnecessary biopsies when applied to large screening cohorts of men with elevated tPSA. The discriminative accuracy of this model was compared to models based on established clinical predictors in reference to biopsy outcome. The kallikrein model and the calculated fPSA-N concentrations (fPSA minus fPSA-I) correlated with the prostate volume, and the model, when compared to the clinical models, predicted prostate cancer in biopsy equally well. Hence, the measurement of kallikreins in a blood sample could be used to replace the volume measurement, which is time-consuming, requires instrumentation and skilled personnel, and is an uncomfortable procedure. Overall, the model could simplify the estimation of prostate cancer risk. Finally, as fPSA-N seems to be an interesting new marker, a direct immunoassay for measuring fPSA-N concentrations was developed. The analytical performance was acceptable, but the rather complicated assay protocol needs to be improved before it can be used for measuring large sample panels.
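The abstract does not specify the form of the statistical model combining the four kallikrein markers; a hedged sketch of one standard formulation, a logistic regression over the marker panel using scikit-learn, with fPSA-N derived as fPSA minus fPSA-I as the text describes, might look like:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_kallikrein_model(tPSA, fPSA, fPSA_I, hK2, biopsy_positive):
    """Fits a logistic model predicting biopsy outcome from the four
    kallikrein markers (tPSA, fPSA, fPSA-I, hK2); an illustrative stand-in
    for the thesis's model, not its actual specification."""
    X = np.column_stack([tPSA, fPSA, fPSA_I, hK2])
    model = LogisticRegression(max_iter=1000).fit(X, biopsy_positive)
    return model   # model.predict_proba(X)[:, 1] gives risk estimates

def fPSA_N(fPSA, fPSA_I):
    """Derived marker described in the text: nicked free PSA = fPSA - fPSA-I."""
    return fPSA - fPSA_I
```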
Abstract:
CHARGE syndrome, Sotos syndrome and 3p deletion syndrome are examples of rare inherited syndromes that have been recognized for decades but for which molecular diagnostics have only been made possible by recent advances in genomic research. Despite these advances, the development of diagnostic tests for rare syndromes has been hindered by diagnostic laboratories having limited funds for test development and by their prioritization of tests for which a (relatively) high demand can be expected. In this study, molecular diagnostic tests for CHARGE syndrome and Sotos syndrome were developed, resulting in their successful translation into routine diagnostic testing in the laboratory of Medical Genetics (UTUlab). In the CHARGE syndrome group, a mutation was identified in 40.5% of the patients, and in the Sotos syndrome group in 34%, reflecting the use of the tests in routine differential diagnostics. In CHARGE syndrome, the low prevalence of structural aberrations was also confirmed. In 3p deletion syndrome, it was shown that small terminal deletions are not causative for the syndrome, and that testing with array-based analysis provides a reliable estimate of the deletion size, although benign copy number variants complicate result interpretation. During the development of the tests, it was discovered that finding an optimal molecular diagnostic strategy for a given syndrome is always a compromise between the sensitivity, specificity and feasibility of applying a new method. In addition, the clinical utility of the test should be considered prior to test development: sometimes a test performing well in a laboratory has limited utility for the patient, whereas a test performing poorly in the laboratory may have a great impact on the patient and their family. At present, the development of next-generation sequencing methods is changing the concept of molecular diagnostics of rare diseases from single tests towards whole-genome analysis.
Abstract:
The maintenance of the electric distribution network is a topical question for distribution system operators because of the increasing significance of failure costs. In this dissertation, the maintenance practices of distribution system operators are analyzed and a theory for scheduling the maintenance activities and reinvestments of distribution components is created. The scheduling is based on the deterioration of components and the increasing failure rates due to aging. Dynamic programming is used as the solution method for the maintenance problem caused by the increasing failure rates of the network. Other impacts of network maintenance, such as environmental and regulatory considerations, are outside the scope of this thesis; likewise, tree trimming of the line corridors and major disturbances of the network are not included in the problem optimized here. For the optimization, four dynamic programming models are presented and tested. The models are implemented in VBA. Two different kinds of test networks are used for testing. Because electric distribution system operators want to operate with bigger component groups, the optimal timing for component groups is also analyzed. A maintenance software package was created to apply the presented theories in practice, and an overview of the program is presented.
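As a hypothetical sketch of the kind of dynamic program involved (the cost figures, horizon, rejuvenation effect and age-dependent failure rate below are illustrative assumptions, not the dissertation's data or models):

```python
from functools import lru_cache

HORIZON = 20            # planning years (illustrative)
REPLACE_COST = 100.0    # cost of a reinvestment
MAINTAIN_COST = 15.0    # cost of one maintenance action
FAILURE_COST = 60.0     # expected cost of a failure during one year

def failure_rate(age):
    """Illustrative aging curve: failure probability grows with component age."""
    return min(0.02 * age, 0.9)

@lru_cache(maxsize=None)
def best_cost(year, age):
    """Minimum expected cost from `year` onwards for a component of given age.
    Actions each year: do nothing, maintain (partially rejuvenates), replace."""
    if year == HORIZON:
        return 0.0
    do_nothing = failure_rate(age) * FAILURE_COST + best_cost(year + 1, age + 1)
    eff = max(age - 4, 0)   # assumed rejuvenation from maintenance
    maintain = MAINTAIN_COST + failure_rate(eff) * FAILURE_COST \
               + best_cost(year + 1, eff + 1)
    replace = REPLACE_COST + failure_rate(0) * FAILURE_COST + best_cost(year + 1, 1)
    return min(do_nothing, maintain, replace)

print(best_cost(0, 10))   # expected cost for a ten-year-old component
```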
Abstract:
Formal methods provide a means of reasoning about computer programs in order to prove correctness criteria. One subtype of formal methods is based on the weakest-precondition predicate transformer semantics and uses guarded commands as the basic modelling construct. Examples of such formalisms are Action Systems and Event-B. Guarded commands can intuitively be understood as actions that may be triggered when an associated guard condition holds. Guarded commands whose guards hold are nondeterministically chosen for execution, but no further control flow is present by default. Such a modelling approach is convenient for proving correctness, and the Refinement Calculus allows for a stepwise development method. It also has a parallel interpretation facilitating the development of concurrent software, and it is suitable for describing event-driven scenarios. However, for many application areas, the execution paradigm traditionally used comprises more explicit control flow, which constitutes an obstacle to using the above-mentioned formal methods. In this thesis, we study how guarded-command-based modelling approaches can be conveniently and efficiently scheduled in different scenarios. We first focus on the modelling of trust for transactions in a social networking setting. Due to the event-based nature of the scenario, the use of guarded commands turns out to be relatively straightforward. We continue by studying the modelling of concurrent software, with particular focus on compute-intensive scenarios. We go from theoretical considerations to the feasibility of implementation by evaluating the performance and scalability of executing a case study model in parallel, using automatic scheduling performed by a dedicated scheduler. Finally, we propose a more explicit and non-centralised approach in which the flow of each task is controlled by a schedule of its own. The schedules are expressed in a dedicated scheduling language, and patterns assist the developer in proving correctness of the scheduled model with respect to the original one.
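As a minimal executable sketch of the guarded-command execution model described above (the representation of guards and actions as Python functions, and the gcd example, are illustrative assumptions, not the thesis's notation):

```python
import random

def run(state, commands, max_steps=100):
    """Repeatedly picks, at random, one command whose guard holds and runs
    its action; stops when no guard is enabled (no further control flow
    is present by default) or after max_steps."""
    for _ in range(max_steps):
        enabled = [action for guard, action in commands if guard(state)]
        if not enabled:
            break
        random.choice(enabled)(state)   # nondeterministic choice
    return state

# Example: compute gcd(a, b) in the guarded-command style.
commands = [
    (lambda s: s["a"] > s["b"], lambda s: s.update(a=s["a"] - s["b"])),
    (lambda s: s["b"] > s["a"], lambda s: s.update(b=s["b"] - s["a"])),
]
print(run({"a": 12, "b": 18}, commands))   # -> {'a': 6, 'b': 6}
```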