963 results for Images - Computational methods


Relevance:

90.00%

Publisher:

Abstract:

Underground constructions in soft ground may lead to settlement damage to existing buildings. In The Netherlands the situation is particularly complex because of the combination of soft soil, fragile pile foundations and brittle, unreinforced masonry façades. The tunnelling design process in urban areas requires a reliable damage risk assessment. In engineering practice, the current preliminary damage assessment is based on the limiting tensile strain method (LTSM). Essentially this is an uncoupled analysis, in which the building is modelled as an elastic beam subjected to imposed greenfield settlements and the induced tensile strains are compared with a limit value for the material. The soil-structure interaction is included only as a ratio between the soil and the building stiffness. In this paper, a coupled approach is evaluated: the soil-structure interaction in terms of normal and shear behaviour is represented by interface elements, and a cracking model for masonry is included. This project aims to improve the existing damage classification system for masonry buildings subjected to tunnel-induced settlement, in order to evaluate the necessity of strengthening techniques or mitigation measures.
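As a rough, uncoupled illustration of the LTSM-style check described above (not the coupled interface/cracking model evaluated in the paper), the following Python sketch imposes a Gaussian greenfield settlement trough on an elastic-beam proxy of a façade and compares an approximate tensile strain with illustrative limiting-strain categories; the trough parameters, façade dimensions and category thresholds are assumed values for illustration only.

```python
# Minimal, illustrative LTSM-style check (not the paper's coupled model):
# impose a Gaussian greenfield settlement trough on an elastic-beam proxy of a
# facade and compare an approximate tensile strain with illustrative limits.
import numpy as np

def greenfield_settlement(x, s_max=0.02, i=10.0):
    """Peck-type Gaussian trough: settlement (m) at transverse offset x (m)."""
    return s_max * np.exp(-x**2 / (2.0 * i**2))

def horizontal_strain(x, s_max=0.02, i=10.0, z0=20.0):
    """Approximate horizontal ground strain, assuming displacement vectors
    pointing towards the tunnel axis at depth z0 (a common simplification)."""
    u_x = greenfield_settlement(x, s_max, i) * x / z0   # horizontal movement
    return np.gradient(u_x, x)

# Facade footprint from x = 0 m to 20 m, sampled every 0.5 m (assumed values).
x = np.arange(0.0, 20.0, 0.5)
w = greenfield_settlement(x)
eps_h = horizontal_strain(x)

# Deflection ratio of the beam proxy and a crude combined tensile strain.
L = x[-1] - x[0]
chord = np.linspace(w[0], w[-1], len(x))
delta = np.max(np.abs(w - chord))          # max deviation from the chord
eps_b = delta / L                          # bending-type strain proxy
eps_t = eps_b + np.max(eps_h)              # shear term ignored for brevity

# Illustrative damage categories keyed to limiting tensile strain (percent).
limits = [(0.05, "negligible"), (0.075, "very slight"), (0.15, "slight"),
          (0.3, "moderate"), (np.inf, "severe")]
category = next(label for lim, label in limits if eps_t * 100 <= lim)
print(f"approx. tensile strain = {eps_t:.4%} -> category: {category}")
```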

Relevance:

90.00%

Publisher:

Abstract:

This report explores the relation between image intensity and object shape. It is shown that image intensity is related to surface orientation and that a variation in image intensity is related to surface curvature. Computational methods are developed which use the measured intensity variation across surfaces of smooth objects to determine surface orientation. In general, surface orientation is not determined locally by the intensity value recorded at each image point. Tools are needed to explore the problem of determining surface orientation from image intensity. The notion of gradient space, popularized by Huffman and Mackworth, is used to represent surface orientation. The notion of a reflectance map, originated by Horn, is used to represent the relation between surface orientation and image intensity. The image Hessian is defined and used to represent surface curvature. Properties of surface curvature are expressed as constraints on possible surface orientations corresponding to a given image point. Methods are presented which embed assumptions about surface curvature in algorithms for determining surface orientation from the intensities recorded in a single view. If additional images of the same object are obtained by varying the direction of incident illumination, then surface orientation is determined locally by the intensity values recorded at each image point. This fact is exploited in a new technique called photometric stereo. The visual inspection of surface defects in metal castings is considered. Two casting applications are discussed. The first is the precision investment casting of turbine blades and vanes for aircraft jet engines. In this application, grain size is an important process variable. The existing industry standard for estimating the average grain size of metals is implemented and demonstrated on a sample turbine vane. Grain size can be computed from the measurements obtained in an image, once the foreshortening effects of surface curvature are accounted for. The second is the green sand mold casting of shuttle eyes for textile looms. Here, physical constraints inherent to the casting process translate into constraints on object shape; to apply these constraints, it is necessary to interpret features of intensity as features of object shape. Both applications demonstrate that successful visual inspection requires the ability to interpret observed changes in intensity in the context of surface topography. The theoretical tools developed in this report provide a framework for this interpretation.
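A minimal sketch of Lambertian photometric stereo, in the spirit of the technique described above: with three or more images taken under known, distinct illumination directions, the surface normal (and albedo) is recovered locally at each pixel by least squares. The light directions and the synthetic test patch below are assumptions for illustration.

```python
# Minimal Lambertian photometric stereo: recover per-pixel surface normals
# from >= 3 images taken under known, distinct illumination directions.
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (k, h, w) intensities; light_dirs: (k, 3) unit vectors.
    Returns unit normals (h, w, 3) and albedo (h, w), assuming I = albedo * n.L."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                               # (k, h*w)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)      # G = albedo * n, (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-12)).T.reshape(h, w, 3)
    return normals, albedo.reshape(h, w)

# Tiny synthetic check: a flat patch tilted towards +x, three light directions.
true_n = np.array([0.5, 0.0, np.sqrt(1 - 0.25)])
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)
imgs = (L @ true_n).reshape(3, 1, 1) * np.ones((3, 4, 4))   # albedo = 1 everywhere
n_est, rho = photometric_stereo(imgs, L)
print(n_est[0, 0], rho[0, 0])   # ~ [0.5, 0.0, 0.866], ~1.0
```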

Relevance:

90.00%

Publisher:

Abstract:

Determining how information flows along anatomical brain pathways is a fundamental requirement for understanding how animals perceive their environments, learn, and behave. Attempts to reveal such neural information flow have been made using linear computational methods, but neural interactions are known to be nonlinear. Here, we demonstrate that a dynamic Bayesian network (DBN) inference algorithm we originally developed to infer nonlinear transcriptional regulatory networks from gene expression data collected with microarrays is also successful at inferring nonlinear neural information flow networks from electrophysiology data collected with microelectrode arrays. The inferred networks we recover from the songbird auditory pathway are correctly restricted to a subset of known anatomical paths, are consistent with timing of the system, and reveal both the importance of reciprocal feedback in auditory processing and greater information flow to higher-order auditory areas when birds hear natural as opposed to synthetic sounds. A linear method applied to the same data incorrectly produces networks with information flow to non-neural tissue and over paths known not to exist. To our knowledge, this study represents the first biologically validated demonstration of an algorithm to successfully infer neural information flow networks.
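The abstract does not spell out the DBN inference algorithm itself; as a deliberately simplified stand-in (not the authors' method), the sketch below scores directed "information flow" from one recording site to another by the mutual information between the first site's discretised activity at time t-1 and the second's at time t, which preserves the idea of detecting lagged, possibly nonlinear dependence. Bin counts, the threshold and the toy data are assumptions.

```python
# Simplified stand-in for directed information-flow scoring (NOT the DBN
# inference algorithm from the study): score i -> j by the mutual information
# between site i's binned activity at t-1 and site j's activity at t.
import numpy as np
from sklearn.metrics import mutual_info_score

def lagged_mi_network(rates, n_bins=4, threshold=0.05):
    """rates: (n_sites, n_timebins) activity array. Returns directed edges."""
    n_sites, _ = rates.shape
    # Discretise each site's activity into quantile bins (keeps nonlinear relations).
    binned = np.array([np.digitize(r, np.quantile(r, np.linspace(0, 1, n_bins + 1)[1:-1]))
                       for r in rates])
    edges = []
    for i in range(n_sites):
        for j in range(n_sites):
            if i == j:
                continue
            score = mutual_info_score(binned[i, :-1], binned[j, 1:])  # lag-1 dependence
            if score > threshold:
                edges.append((i, j, score))
    return edges

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.tanh(np.roll(x, 1)) + 0.1 * rng.normal(size=500)   # y driven nonlinearly by past x
print(lagged_mi_network(np.vstack([x, y])))                # expect a single edge 0 -> 1
```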

Relevance:

90.00%

Publisher:

Abstract:

This paper provides an overview of the developing needs for simulation software technologies for the computational modelling of problems that involve combinations of interactions amongst varying physical phenomena over a variety of time and space scales. Computational modelling of such problems requires software technologies that enable the mathematical description of the interacting physical phenomena together with the solution of the resulting suites of equations in a numerically consistent and compatible manner. This functionality requires the structuring of simulation modules for specific physical phenomena so that the coupling can be effectively represented. These multi-physics and multi-scale computations are very compute intensive and the simulation software must operate effectively in parallel if it is to be used in this context. An approach to these classes of multi-disciplinary simulation in parallel is described, with some key examples of application to challenging engineering problems.
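A minimal sketch of the modular, partitioned coupling structure the paragraph describes: two single-physics "modules" advance over the same time step and exchange interface data at each iteration. The module contents and exchanged fields are placeholders, not any particular code's API.

```python
# Minimal partitioned multi-physics coupling loop: two single-physics modules
# advance in turn and exchange interface data each step (placeholder physics).
class FlowModule:
    def __init__(self):
        self.interface_heat_flux = 0.0
    def advance(self, dt, wall_temperature):
        # Placeholder: flux proportional to the wall/fluid temperature difference.
        self.interface_heat_flux = 10.0 * (wall_temperature - 300.0)

class ThermalModule:
    def __init__(self):
        self.wall_temperature = 400.0
    def advance(self, dt, heat_flux_in):
        # Placeholder: cool the wall according to the flux removed by the fluid.
        self.wall_temperature -= dt * heat_flux_in / 500.0

flow, thermal = FlowModule(), ThermalModule()
dt = 0.1
for step in range(50):
    # Exchange interface fields, then advance each module over the same dt.
    flow.advance(dt, thermal.wall_temperature)
    thermal.advance(dt, flow.interface_heat_flux)
print(f"wall temperature after coupling loop: {thermal.wall_temperature:.1f} K")
```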

Relevance:

90.00%

Publisher:

Abstract:

Accurate representation of the coupled effects between turbulent fluid flow with a free surface, heat transfer, solidification, and mold deformation has been shown to be necessary for the realistic prediction of several defects in castings and also for determining the final crystalline structure. A core component of the computational modeling of casting processes involves mold filling, which is the most computationally intensive aspect of casting simulation at the continuum level. Considering the complex geometries involved in shape casting, the evolution of the free surface, gas entrapment, and the entrainment of oxide layers into the casting make this a very challenging task in every respect. Despite well over 30 years of effort in developing algorithms, this is by no means a closed subject. In this article, we will review the full range of computational methods used, from unstructured finite-element (FE) and finite-volume (FV) methods through fully structured and block-structured approaches utilizing the cut-cell family of techniques to capture the geometric complexity inherent in shape casting. This discussion will include the challenges of generating rapid solutions on high-performance parallel cluster technology and how mold filling links in with the full spectrum of physics involved in shape casting. Finally, some indications as to novel techniques emerging now that can address genuinely arbitrarily complex geometries are briefly outlined and their advantages and disadvantages are discussed.
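As a drastically simplified illustration of free-surface (fill-fraction) capture during mold filling (not any of the specific FE/FV or cut-cell schemes reviewed in the article), the sketch below advects a one-dimensional fill fraction with a simple upwind update; the grid size, time step and inflow velocity are assumed values.

```python
# 1-D illustration of free-surface (fill-fraction) transport during filling:
# each cell carries a volume fraction in [0, 1]; a simple upwind update moves
# the metal front with the local velocity. Real mold-filling codes use 3-D
# VOF-type schemes with interface reconstruction; this is only a sketch.
import numpy as np

n_cells, dx, dt, u = 50, 0.01, 0.0005, 1.0      # assumed grid and inflow speed
frac = np.zeros(n_cells)                         # 0 = empty, 1 = filled
frac[0] = 1.0                                    # ingate kept full

for _ in range(400):
    flux = u * dt / dx * frac[:-1]               # upwind flux leaving each cell
    flux = np.minimum(flux, 1.0 - frac[1:])      # do not overfill a cell
    frac[1:] += flux
    frac[:-1] -= flux
    frac[0] = 1.0                                # inflow boundary condition

front = np.argmax(frac < 0.5)                    # first half-empty cell
print(f"metal front near x = {front * dx:.3f} m after 0.2 s of filling")
```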

Relevance:

90.00%

Publisher:

Abstract:

Background: Identifying new and more robust assessments of proficiency/expertise (finding new "biomarkers of expertise") in histopathology is desirable for many reasons. Advances in digital pathology permit new and innovative tests such as flash viewing tests and eye tracking and slide navigation analyses that would not be possible with a traditional microscope. The main purpose of this study was to examine the usefulness of time-restricted testing of expertise in histopathology using digital images.
Methods: 19 novices (undergraduate medical students), 18 intermediates (trainees), and 19 experts (consultants) were invited to give their opinion on 20 general histopathology cases after 1 s and 10 s viewing times. Differences in performance between groups were measured and the internal reliability of the test was calculated.
Results: There were highly significant differences in performance between the groups using the Fisher's least significant difference method for multiple comparisons. Differences between groups were consistently greater in the 10-s than the 1-s test. The Kuder-Richardson 20 internal reliability coefficients were very high for both tests: 0.905 for the 1-s test and 0.926 for the 10-s test. Consultants had levels of diagnostic accuracy of 72% at 1 s and 83% at 10 s.
Conclusions: Time-restricted tests using digital images have the potential to be extremely reliable tests of diagnostic proficiency in histopathology. A 10-s viewing test may be more reliable than a 1-s test. Over-reliance on "at a glance" diagnoses in histopathology is a potential source of medical error due to over-confidence bias and premature closure.
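For reference, the Kuder-Richardson 20 coefficient quoted above can be computed directly from the matrix of dichotomous item scores (1 = correct, 0 = incorrect); the sketch below assumes a participants-by-cases layout, and the toy data are made up.

```python
# Kuder-Richardson 20 (KR-20) internal reliability for dichotomous item scores.
import numpy as np

def kr20(scores):
    """scores: (n_participants, n_items) array of 0/1 responses."""
    k = scores.shape[1]                          # number of items (cases)
    p = scores.mean(axis=0)                      # proportion correct per item
    q = 1.0 - p
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of participants' totals
    return (k / (k - 1)) * (1.0 - np.sum(p * q) / total_var)

# Toy example with 5 participants and 4 items (values are made up).
toy = np.array([[1, 1, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0],
                [1, 1, 1, 1],
                [0, 0, 0, 0]])
print(f"KR-20 = {kr20(toy):.3f}")
```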

Relevance:

90.00%

Publisher:

Abstract:

Dissertation presented to obtain the Doutoramento (Ph.D.) degree in Biochemistry at the Instituto de Tecnologia Química e Biológica da Universidade Nova de Lisboa

Relevance:

90.00%

Publisher:

Abstract:

Cerebral gliomas are the most prevalent primary brain tumours and are classified broadly into low and high grades according to the degree of malignancy. High-grade gliomas are highly malignant, carry a poor prognosis, and patients survive less than eighteen months after diagnosis. Low-grade gliomas are slow growing, the least malignant, and respond better to therapy. To date, histological grading is used as the standard technique for diagnosis, treatment planning and survival prediction. The main objective of this thesis is to propose novel methods for the automatic extraction of low- and high-grade glioma and other brain tissues, grade-detection techniques for glioma using conventional magnetic resonance imaging (MRI) modalities, and 3D modelling of glioma from segmented tumour slices in order to assess the growth rate of tumours. Two new methods are developed for extracting tumour regions, of which the second, named the Adaptive Gray level Algebraic set Segmentation Algorithm (AGASA), can also extract white matter and grey matter from T1 FLAIR and T2-weighted images. The methods were validated against manual ground-truth images and showed promising results. The developed methods were compared with the widely used fuzzy c-means clustering technique, and the robustness of the algorithm with respect to noise was also checked for different noise levels. Image texture can provide significant information on the (ab)normality of tissue, and this thesis extends this idea to tumour texture grading and detection. Based on thresholds of discriminant first-order and gray-level co-occurrence matrix based second-order statistical features, three feature sets were formulated and a decision system was developed for grade detection of glioma from the conventional T2-weighted MRI modality. Quantitative performance analysis using the ROC curve showed 99.03% accuracy for distinguishing between advanced (aggressive) and early-stage (non-aggressive) malignant glioma. The developed brain texture analysis techniques can improve the physician's ability to detect and analyse pathologies, leading to a more reliable diagnosis and treatment of disease. The segmented tumours were also used for volumetric modelling, which can provide an indication of the tumour growth rate; this can be used for assessing response to therapy and patient prognosis.
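As a generic illustration of first-order and GLCM-based second-order texture features of the kind used for grade detection (not the thesis's AGASA segmentation or its exact feature set), the sketch below extracts a few such statistics from a masked tumour region with scikit-image; the input image, mask and feature choice are assumptions.

```python
# Generic first-order and GLCM (second-order) texture features for a masked
# tumour region, using scikit-image; illustrative only, not the thesis's
# exact feature set or its AGASA segmentation step.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(image, mask, levels=32):
    """image: 2-D uint8 slice; mask: boolean tumour mask of the same shape."""
    roi = image[mask]
    first_order = {
        "mean": roi.mean(),
        "variance": roi.var(),
        "skew_proxy": ((roi - roi.mean()) ** 3).mean() / (roi.std() ** 3 + 1e-9),
    }
    # Quantise to a small number of grey levels before building the GLCM.
    quant = (image.astype(float) / 256.0 * levels).astype(np.uint8)
    quant[~mask] = 0
    glcm = graycomatrix(quant, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    second_order = {prop: graycoprops(glcm, prop).mean()
                    for prop in ("contrast", "homogeneity", "energy", "correlation")}
    return {**first_order, **second_order}

# Toy input: random "slice" with a square "tumour" mask (synthetic data only).
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
msk = np.zeros((64, 64), dtype=bool)
msk[20:40, 20:40] = True
print(texture_features(img, msk))
```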

Relevance:

90.00%

Publisher:

Abstract:

DNA G-quadruplexes are one of the targets being actively explored for anti-cancer therapy by inhibiting them with small molecules. This computational study was conducted to predict the binding strengths and orientations of a set of novel dimethyl-amino-ethyl-acridine (DACA) analogues that were designed and synthesized in our laboratory but did not diffract in synchrotron light. The crystal structure of the DNA G-quadruplex (TGGGGT)4 (PDB: 1O0K) was used as the target for their binding properties in our studies. We used both force field (FF) and QM/MM derived atomic charge schemes simultaneously to compare the predictions of drug binding modes and their energetics. This study evaluates the comparative performance of fixed point charge based Glide XP docking and the quantum polarized ligand docking schemes. These results will provide insights into the effects of including or ignoring drug-receptor interfacial polarization events in molecular docking simulations, which in turn will aid the rational selection of computational methods at different levels of theory in future drug design programs. Plenty of molecular modelling tools and methods currently exist for modelling drug-receptor, protein-protein, or DNA-protein interactions at different levels of complexity. Yet the capacity of such tools to describe various physico-chemical properties more accurately is the next step ahead in current research. In particular, the usage of the most accurate quantum mechanics (QM) methods is severely restricted by their tedious nature. Although the usage of massively parallel supercomputing environments has resulted in a tremendous improvement in molecular mechanics (MM) calculations such as molecular dynamics, QM methods are still capable of dealing with only a couple of tens to hundreds of atoms. One efficient strategy that utilizes the powers of both MM and QM is the QM/MM hybrid method. Lately, attempts have been directed towards the goal of deploying several different QM methods for the betterment of force field based simulations, but with practical restrictions in place. One such method utilizes the inclusion of charge polarization events at the drug-receptor interface, which is not explicitly present in the MM FF.
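To make "fixed point charge based" scoring concrete, the sketch below evaluates a classical Coulomb plus Lennard-Jones interaction energy between ligand and receptor atoms with static charges; it is a generic molecular-mechanics illustration, not Glide XP or the quantum-polarized ligand docking workflow, and all coordinates, charges and parameters are placeholders.

```python
# Generic fixed point-charge interaction energy (Coulomb + Lennard-Jones)
# between ligand and receptor atoms. Illustrates what "fixed charge" scoring
# means; it is NOT Glide XP or the QM-polarized ligand docking workflow.
import numpy as np

COULOMB_KCAL = 332.0636   # kcal*Angstrom/(mol*e^2), standard MM conversion factor

def interaction_energy(lig_xyz, lig_q, rec_xyz, rec_q, epsilon=0.15, sigma=3.4):
    """Single pairwise epsilon/sigma (placeholder) instead of per-atom types."""
    diff = lig_xyz[:, None, :] - rec_xyz[None, :, :]
    r = np.linalg.norm(diff, axis=-1)                        # (n_lig, n_rec)
    coulomb = COULOMB_KCAL * np.outer(lig_q, rec_q) / r
    lj = 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return coulomb.sum() + lj.sum()

# Placeholder coordinates (Angstrom) and charges (e) for a 2-atom "ligand"
# near a 3-atom "receptor" patch; values are made up for illustration.
lig_xyz = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
lig_q = np.array([0.3, -0.3])
rec_xyz = np.array([[0.0, 3.5, 0.0], [1.0, 4.0, 0.5], [-1.0, 4.2, -0.3]])
rec_q = np.array([-0.4, 0.2, 0.2])
e = interaction_energy(lig_xyz, lig_q, rec_xyz, rec_q)
print(f"fixed-charge interaction energy: {e:.2f} kcal/mol")
```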

Relevance:

90.00%

Publisher:

Abstract:

Vegetation growing on railway trackbeds and embankments presents potential problems. The presence of vegetation threatens the safety of personnel inspecting the railway infrastructure. In addition, vegetation growth clogs the ballast and results in inadequate track drainage, which in turn could lead to the collapse of the railway embankment. Assessing vegetation within the realm of railway maintenance is mainly carried out manually by making visual inspections along the track. This is done either on-site or by watching videos recorded by maintenance vehicles mainly operated by the national railway administrative body. A need for the automated detection and characterisation of vegetation on railways (a subset of vegetation control/management) has been identified in collaboration with local railway maintenance subcontractors and Trafikverket, the Swedish Transport Administration (STA). The latter is responsible for long-term planning of the transport system for all types of traffic, as well as for the building, operation and maintenance of public roads and railways. The purpose of this research project was to investigate how vegetation can be measured and quantified by human raters and how machine vision can automate the same process. Data were acquired at railway trackbeds and embankments during field measurement experiments. All field data (such as images) in this thesis work were acquired on operational, lightly trafficked railway tracks, mostly used by goods trains. Data were also generated by letting (human) raters conduct visual estimates of plant cover and/or count the number of plants, either on-site or in-house by making visual estimates of the images acquired from the field experiments. The reliability of the raters' visual estimates was then investigated and compared against machine vision algorithms. The overall results of the investigations involving human raters showed that their estimates were inconsistent and therefore unreliable. As a result of the exploration of machine vision, computational methods and algorithms enabling automatic detection and characterisation of vegetation along railways were developed. The results achieved in the current work have shown that the use of image data for detecting vegetation is indeed possible and that such results could form the basis for decisions regarding vegetation control. The machine vision algorithm which quantifies the vegetation cover was able to process 98% of the image data. Investigations of classifying plants from images were conducted in order to recognise the species; the classification accuracy was 95%. Objective measurements such as the ones proposed in this thesis offer easy access to the measurements for all the involved parties and make the subcontracting process easier, i.e., both the subcontractors and the national railway administration are given the same reference framework concerning vegetation before signing a contract, which can then be cross-checked post maintenance. A very important issue which comes with an increasing ability to recognise species is the maintenance of biological diversity. Biological diversity along the trackbeds and embankments can be mapped, and maintained, through better and more robust monitoring procedures. Continuously monitoring the state of vegetation along railways is highly recommended in order to identify the need for maintenance actions and, in addition, to keep track of biodiversity.
The computational methods and algorithms developed form the foundation of an automatic inspection system capable of objectively supporting, or replacing, manual inspections.
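One common way to quantify green plant cover from RGB images is to threshold an excess-green (ExG) index; the sketch below is such a generic approach, offered only as an illustration of automated cover quantification, not the specific algorithms developed in the thesis. The threshold value and the file name are assumptions.

```python
# Generic vegetation-cover estimate from an RGB image using an excess-green
# (ExG) index threshold; an illustration of automated cover quantification,
# not the specific algorithm developed in the thesis.
import numpy as np
from PIL import Image

def vegetation_cover(path, exg_threshold=0.05):
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float) / 255.0
    total = rgb.sum(axis=-1) + 1e-9
    r, g, b = (rgb[..., i] / total for i in range(3))   # chromatic coordinates
    exg = 2.0 * g - r - b                                # excess-green index
    return (exg > exg_threshold).mean()                  # fraction of "green" pixels

# Example call (hypothetical file name):
# print(f"estimated vegetation cover: {vegetation_cover('trackbed.jpg'):.1%}")
```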

Relevance:

90.00%

Publisher:

Abstract:

Recovering the control or implicit geometry underlying temple architecture requires bringing together fragments of evidence from field measurements, relating these to mathematical and geometric descriptions in canonical texts and proposing "best-fit" constructive models. While scholars in the field have traditionally used manual methods, the innovative application of niche computational techniques can help extend the study of artefact geometry. This paper demonstrates the application of a hybrid computational approach to the problem of recovering the surface geometry of early temple superstructures. The approach combines field measurements of temples, close-range architectural photogrammetry, rule-based generation and parametric modelling. The computing of surface geometry comprises a rule-based global model governing the overall form of the superstructure, several local models for individual motifs using photogrammetry and an intermediate geometry model that combines the two. To explain the technique and the different models, the paper examines an illustrative example of surface geometry reconstruction based on studies undertaken on a tenth-century stone superstructure from western India. The example demonstrates that a combination of computational methods yields sophisticated models of the constructive geometry underlying temple form and that these digital artefacts can form the basis for in-depth comparative analysis of temples arising out of similar techniques but spread across geography, culture and time.
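As a generic illustration of what a rule-based, parametric global model can look like (not the paper's actual rules or the measured temple geometry), the sketch below generates a tapered superstructure profile from a handful of parameters; all values are placeholders.

```python
# Generic illustration of a rule-based, parametric global model: generate a
# tapered, curved superstructure profile from a few parameters. This is not
# the paper's actual rule set or the measured temple geometry.
import numpy as np

def superstructure_profile(base_width, height, n_levels, curvature=2.0):
    """Return (height, half-width) pairs for a convex-tapering profile."""
    z = np.linspace(0.0, height, n_levels)
    # Power-law taper: full width at the base, shrinking towards the apex.
    half_width = 0.5 * base_width * (1.0 - z / height) ** (1.0 / curvature)
    return list(zip(z, half_width))

# Assumed parameters for illustration only.
for level, (z, hw) in enumerate(superstructure_profile(6.0, 12.0, 7)):
    print(f"level {level}: height {z:5.2f} m, half-width {hw:4.2f} m")
```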

Relevance:

90.00%

Publisher:

Abstract:

This paper presents an overview of modeling light propagation through biological media by solving the photon transport equation. Different variants of the photon transport equation (PTE) are discussed. Several methods for modeling static distributions and the transient response are presented. A discussion on how to mix and match electromagnetic problems with the PTE is also provided.
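For reference, the time-domain photon transport equation (the radiative transfer equation) for the radiance L(r, ŝ, t), with speed v in the medium, absorption coefficient μ_a, scattering coefficient μ_s, scattering phase function p and source term S, takes the standard form:

```latex
\frac{1}{v}\frac{\partial L(\mathbf{r},\hat{\mathbf{s}},t)}{\partial t}
  + \hat{\mathbf{s}}\cdot\nabla L(\mathbf{r},\hat{\mathbf{s}},t)
  + (\mu_a + \mu_s)\, L(\mathbf{r},\hat{\mathbf{s}},t)
  = \mu_s \int_{4\pi} p(\hat{\mathbf{s}},\hat{\mathbf{s}}')\,
      L(\mathbf{r},\hat{\mathbf{s}}',t)\, d\Omega'
  + S(\mathbf{r},\hat{\mathbf{s}},t)
```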