1000 results for Tissue embedding
Abstract:
One of the major problems facing aquaculture is the inadequate supply of fish oil, most of which is used for fish feed manufacturing. The continued growth of aquaculture production cannot depend on this finite feed resource; it is therefore imperative to find cheap and readily available substitutes that do not compromise fish growth and fillet quality. To this end, a 12-week feeding trial was conducted with Heterobranchus longifilis fed diets differing in lipid source. Diets were supplemented with 6% lipid as fish oil, soybean oil, palm oil, coconut oil, groundnut oil or melon seed oil. Triplicate groups of 20 H. longifilis were fed the experimental diets twice a day to apparent satiation over 84 days. Growth, digestibility, and muscle fatty acid profile were measured to assess diet effects. At the end of the study, survival, feed intake and hepatosomatic index were similar for fish fed the experimental diets. However, weight gain, specific growth rate (SGR) and feed conversion ratio (FCR) of fish fed the soybean oil-based diet were significantly reduced. Apparent nutrient digestibility coefficients were significantly lower in fish fed the soybean, coconut and groundnut oil-based diets. Fillet and hepatic fatty acid compositions differed and reflected the fatty acid compositions of the diets. Docosahexaenoic acid (22:6n-3), 20:5n-3 and 20:4n-6 were conserved in fish fed the vegetable oil-based diets, possibly due to synthesis of HUFA from 18:3n-3 and 18:4n-6. The palm oil diet was the least expensive and had the best economic conversion ratio. The use of vegetable oils in the diets had a positive effect on growth and fillet composition of H. longifilis.
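The growth metrics reported in this trial, SGR and FCR, have standard definitions; a minimal Python sketch (the weight and feed figures below are hypothetical, not values from the study):

```python
import math

def specific_growth_rate(w_initial, w_final, days):
    """SGR (% body weight per day) = 100 * (ln W_final - ln W_initial) / days."""
    return 100.0 * (math.log(w_final) - math.log(w_initial)) / days

def feed_conversion_ratio(feed_intake, weight_gain):
    """FCR = dry feed fed / wet weight gained; lower values mean better conversion."""
    return feed_intake / weight_gain

# Hypothetical figures for one tank over the 84-day trial
sgr = specific_growth_rate(w_initial=12.0, w_final=95.0, days=84)
fcr = feed_conversion_ratio(feed_intake=150.0, weight_gain=83.0)
```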
Abstract:
Segmentation of medical imagery is a challenging problem, owing to the complexity of the images as well as to the absence of anatomical models that fully capture the possible deformations in each structure. Brain tissue is a particularly complex structure, and its segmentation is an important step both for studies of temporal change in morphology and for 3D visualization in surgical planning. In this paper, we present a method for segmentation of brain tissue from magnetic resonance images that combines three existing techniques from the computer vision literature: EM segmentation, binary morphology, and active contour models. Each of these techniques has been customized for the problem of brain tissue segmentation so that the resulting method is more robust than its components. Finally, we present the results of a parallel implementation of this method on IBM's Power Visualization System supercomputer for a database of 20 brain scans, each with 256x256x124 voxels, and validate them against segmentations generated by neuroanatomy experts.
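The EM component of such a pipeline can be illustrated with a minimal two-component 1-D Gaussian-mixture EM over voxel intensities. This is a generic textbook sketch, not the authors' customized implementation (which additionally uses binary morphology and active contours):

```python
import numpy as np

def em_two_class(intensities, n_iter=50):
    """Minimal EM for a two-component 1-D Gaussian mixture over voxel
    intensities; returns a hard label (0 or 1) per voxel."""
    x = np.asarray(intensities, dtype=float)
    mu = np.array([x.min(), x.max()])                 # crude initialisation
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each class for each voxel
        lik = (pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
               / (sigma * np.sqrt(2.0 * np.pi)))
        resp = lik / (lik.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: re-estimate mixture weights, means and standard deviations
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        pi = nk / len(x)
    return resp.argmax(axis=1)

# Toy demo: two well-separated intensity clusters
rng = np.random.default_rng(1)
demo = np.concatenate([rng.normal(0.2, 0.03, 300), rng.normal(0.8, 0.03, 300)])
demo_labels = em_two_class(demo)
```

In a full pipeline the resulting label map would then be cleaned with binary morphology and refined with active contours, as the abstract describes.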
Abstract:
When triangulating a belief network, we aim to obtain a junction tree of minimum state space. Searching for the optimal triangulation can be cast as a search over all permutations of the network's variables. Our approach is to embed the discrete set of permutations in a convex continuous domain D. By suitably extending the cost function over D and solving the continuous nonlinear optimization task, we hope to obtain a good triangulation with respect to the aforementioned cost. In this paper we introduce an upper bound on the total junction tree weight as the cost function. The appropriateness of this choice is discussed and explored by simulations. We then present two ways of embedding the new objective function into continuous domains and show that they perform well compared to the best known heuristic.
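The quantity being bounded, the total junction tree weight induced by an elimination order, can be computed directly by simulating vertex elimination. A sketch of that baseline computation under one common definition (sum of elimination-clique state spaces; the toy chain network is illustrative):

```python
from itertools import combinations

def triangulation_weight(adjacency, cardinality, order):
    """Total junction tree weight induced by eliminating variables in `order`:
    the sum, over elimination steps, of the state space of the clique formed
    by the eliminated variable and its current neighbours."""
    adj = {v: set(ns) for v, ns in adjacency.items()}
    total = 0
    for v in order:
        clique = adj[v] | {v}
        weight = 1
        for u in clique:
            weight *= cardinality[u]
        total += weight
        # add fill-in edges among v's neighbours, then remove v from the graph
        for a, b in combinations(adj[v], 2):
            adj[a].add(b)
            adj[b].add(a)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return total

# Toy chain A - B - C with binary variables
chain = {'A': {'B'}, 'B': {'A', 'C'}, 'C': {'B'}}
card = {'A': 2, 'B': 2, 'C': 2}
w = triangulation_weight(chain, card, ['A', 'B', 'C'])   # cliques {A,B}, {B,C}, {C}
```

Eliminating B first instead would create the fill-in edge A-C and a larger clique, which is exactly the kind of cost difference the continuous relaxation tries to optimize over.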
Abstract:
The release of growth factors from tissue engineering scaffolds provides signals that influence the migration, differentiation, and proliferation of cells. The incorporation of a drug delivery platform that is capable of tunable release will give tissue engineers greater versatility in the direction of tissue regeneration. We have prepared a novel composite of two biomaterials with proven track records, apatite and poly(lactic-co-glycolic acid) (PLGA), as a drug delivery platform with promising controlled release properties. These composites have been tested in the delivery of a model protein, bovine serum albumin (BSA), as well as therapeutic proteins, recombinant human bone morphogenetic protein-2 (rhBMP-2) and rhBMP-6. The controlled release strategy is based on the use of a polymer with acidic degradation products to control the dissolution of the basic apatitic component, resulting in protein release. Therefore, any parameter that affects either polymer degradation or apatite dissolution can be used to control protein release. We have modified the protein release profile systematically by varying the polymer molecular weight, polymer hydrophobicity, apatite loading, apatite particle size, and other material and processing parameters. Biologically active rhBMP-2 was released from these composite microparticles over 100 days, in contrast to conventional collagen sponge carriers, which were depleted in approximately 2 weeks. The released rhBMP-2 was able to induce elevated alkaline phosphatase and osteocalcin expression in pluripotent murine embryonic fibroblasts. To augment tissue engineering scaffolds with tunable and sustained protein release capabilities, these composite microparticles can be dispersed in the scaffolds in different combinations to obtain a superposition of the release profiles. We have loaded rhBMP-2 into composite microparticles with a fast release profile, and rhBMP-6 into slow-releasing composite microparticles.
An equi-mixture of these two sets of composite particles was then injected into a collagen sponge, allowing for dual release of the proteins from the collagenous scaffold. The ability of these BMP-loaded scaffolds to induce osteoblastic differentiation in vitro and ectopic bone formation in a rat model is being investigated. We anticipate that these apatite-polymer composite microparticles can be extended to the delivery of other signalling molecules, and can be incorporated into other types of tissue engineering scaffolds.
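The superposition idea can be sketched with a simple mixing model. First-order release kinetics and the rate constants below are assumptions for illustration only, not the measured profiles of the apatite-PLGA particles:

```python
import math

def first_order_release(t, k):
    """Cumulative fraction released by time t under first-order kinetics."""
    return 1.0 - math.exp(-k * t)

def blended_release(t, fractions_and_rates):
    """Superposition: sum_i f_i * (1 - e^(-k_i * t)) for a particle mixture,
    where f_i is the mass fraction of particle population i."""
    return sum(f * first_order_release(t, k) for f, k in fractions_and_rates)

# Hypothetical equi-mixture: fast (k = 0.20/day) and slow (k = 0.02/day) particles
mixture = [(0.5, 0.20), (0.5, 0.02)]
day_30 = blended_release(30.0, mixture)
```

Varying the fractions and the per-population rates is the "different combinations" knob the abstract describes: the blended curve is simply the weighted sum of the individual profiles.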
Abstract:
Emergent molecular measurement methods, such as DNA microarray, qRT-PCR, and many others, offer tremendous promise for the personalized treatment of cancer. These technologies measure the amount of specific proteins, RNA, DNA or other molecular targets from tumor specimens with the goal of "fingerprinting" individual cancers. Tumor specimens are heterogeneous; an individual specimen typically contains unknown amounts of multiple tissue types. Thus, the measured molecular concentrations result from an unknown mixture of tissue types, and must be normalized to account for the composition of the mixture. For example, a breast tumor biopsy may contain normal, dysplastic and cancerous epithelial cells, as well as stromal components (fatty and connective tissue) and blood and lymphatic vessels. Our diagnostic interest focuses solely on the dysplastic and cancerous epithelial cells. The remaining tissue components serve to "contaminate" the signal of interest. The proportion of each of the tissue components changes as a function of patient characteristics (e.g., age), and varies spatially across the tumor region. Because each of the tissue components produces a different molecular signature, and the amount of each tissue type is specimen dependent, we must estimate the tissue composition of the specimen and adjust the molecular signal for this composition. Using the idea of a chemical mass balance, we consider the total measured concentrations to be a weighted sum of the individual tissue signatures, where the weights are determined by the relative amounts of the different tissue types. We develop a compositional source apportionment model to estimate the relative amounts of tissue components in a tumor specimen. We then use these estimates to infer the tissue-specific concentrations of key molecular targets for sub-typing individual tumors. We anticipate these specific measurements will greatly improve our ability to discriminate between different classes of tumors, and allow more precise matching of each patient to the appropriate treatment.
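The chemical mass balance described above, total measured concentrations as a weighted sum of tissue signatures, can be sketched as a linear deconvolution. The clip-and-renormalise step below is a simplification of the paper's compositional source apportionment model:

```python
import numpy as np

def tissue_fractions(signatures, measured):
    """Estimate tissue proportions in a mixed specimen by least squares.

    signatures: (n_markers, n_tissues) reference profile per pure tissue type.
    measured:   (n_markers,) concentrations observed in the mixed specimen.
    Returns fractions clipped to be non-negative and renormalised to sum to 1.
    """
    w, *_ = np.linalg.lstsq(np.asarray(signatures, dtype=float),
                            np.asarray(measured, dtype=float), rcond=None)
    w = np.clip(w, 0.0, None)
    return w / w.sum()

def expected_mixture_signal(fractions, tissue_means):
    """Mass balance: the mixed signal is the fraction-weighted sum of the
    per-tissue signals for a given molecular target."""
    return float(np.dot(fractions, tissue_means))
```

Given the estimated fractions, a target's per-tissue concentrations can then be adjusted for specimen composition, which is the normalization step the abstract motivates.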
Abstract:
We present a new approach to modeling and classifying breast parenchymal tissue. Given a mammogram, we first discover the distribution of the different tissue densities in an unsupervised manner, and second, we use this tissue distribution to perform the classification. We achieve this using a classifier based on local descriptors and probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature. We studied the influence of different descriptors, such as texture and SIFT features, at the classification stage, showing that textons outperform SIFT in all cases. Moreover, we demonstrate that pLSA automatically extracts meaningful latent aspects, generating a compact tissue representation based on their densities that is useful for mammogram classification. We show the results of tissue classification over the MIAS and DDSM datasets, and we compare our method with approaches that classified these same datasets, showing the better performance of our proposal.
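A minimal pLSA EM over a document-by-word count matrix, where a "document" is a mammogram and a "word" is a quantised local descriptor such as a texton label, might look like the following. This is a generic textbook sketch, not the authors' pipeline:

```python
import numpy as np

def plsa(counts, n_topics, n_iter=100, seed=0):
    """Minimal pLSA via EM: factorise a document-word count matrix into
    P(topic | doc) and P(word | topic)."""
    counts = np.asarray(counts, dtype=float)
    n_docs, n_words = counts.shape
    rng = np.random.default_rng(seed)
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: posterior P(z | d, w) for every document-word pair
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]       # (doc, topic, word)
        post = joint / (joint.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: re-estimate both distributions from expected counts
        exp_counts = counts[:, None, :] * post
        p_w_z = exp_counts.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = exp_counts.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z

# Toy demo: two cleanly separated "tissue" topics
demo_counts = [[10, 10, 0, 0], [9, 11, 0, 0], [0, 0, 10, 10], [0, 0, 11, 9]]
demo_topics = plsa(demo_counts, n_topics=2)[0].argmax(axis=1)
```

The per-mammogram topic mixture P(z | d) is the compact representation a downstream classifier would consume.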
Abstract:
It has been shown that the accuracy of mammographic abnormality detection methods is strongly dependent on breast tissue characteristics, with a dense breast drastically reducing detection sensitivity. In addition, breast tissue density is widely accepted to be an important risk indicator for the development of breast cancer. Here, we describe the development of an automatic breast tissue classification methodology, which can be summarized in a number of distinct steps: 1) the segmentation of the breast area into fatty versus dense mammographic tissue; 2) the extraction of morphological and texture features from the segmented breast areas; and 3) the use of a Bayesian combination of a number of classifiers. The evaluation, based on a large number of cases from two different mammographic data sets, shows a strong correlation ( and 0.67 for the two data sets) between automatic and expert-based Breast Imaging Reporting and Data System mammographic density assessment.
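Step 3, the Bayesian combination of classifiers, is commonly realised by multiplying per-classifier class posteriors and dividing by the prior repeated once per extra classifier. A generic sketch under that conditional-independence assumption (not necessarily the exact combination rule used in the paper):

```python
import numpy as np

def bayesian_combine(posteriors, priors=None):
    """Combine per-classifier class posteriors under a conditional-independence
    assumption: multiply the posteriors, divide by the prior raised to
    (n_classifiers - 1), and renormalise."""
    posteriors = np.asarray(posteriors, dtype=float)   # (n_classifiers, n_classes)
    n_clf, n_classes = posteriors.shape
    if priors is None:
        priors = np.full(n_classes, 1.0 / n_classes)   # uniform prior by default
    combined = np.prod(posteriors, axis=0) / priors ** (n_clf - 1)
    return combined / combined.sum()

# Two classifiers both leaning towards class 0 (say, "fatty" vs "dense")
fused = bayesian_combine([[0.6, 0.4], [0.7, 0.3]])
```

With uniform priors this reduces to the familiar product rule, which rewards classes that all base classifiers agree on.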
Abstract:
A new approach to mammographic mass detection is presented in this paper. Although different algorithms have been proposed for this task, most of them are application dependent. In contrast, our approach adapts a kindred topic in computer vision to our particular problem: we translate the eigenfaces approach for face detection/classification to mass detection. Two different databases were used to show the robustness of the approach. The first consisted of a set of 160 regions of interest (RoIs) extracted from the MIAS database, 40 of them with confirmed masses and the rest normal tissue. The second set of RoIs was extracted from the DDSM database and contained 196 RoIs with masses and 392 with normal but suspicious regions. Initial results demonstrate the feasibility of using such an approach, with performance comparable to other algorithms and the advantage of being more general, simple and cost-effective.
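An eigenfaces-style approach amounts to PCA on flattened RoI patches plus a subspace-distance test. A generic sketch (the decision rule actually used in the paper may differ):

```python
import numpy as np

def fit_eigenbasis(rois, n_components):
    """Learn an eigen-basis ("eigenmasses") from flattened RoI patches,
    one patch per row, via SVD of the mean-centred data."""
    X = np.asarray(rois, dtype=float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]          # top principal directions

def reconstruction_error(patch, mean, basis):
    """Distance from the learned subspace; a low error suggests the patch
    resembles the class (e.g. masses) the basis was trained on."""
    centred = np.asarray(patch, dtype=float) - mean
    projection = basis.T @ (basis @ centred)
    return float(np.linalg.norm(centred - projection))
```

A new RoI can then be scored by its reconstruction error against bases trained on mass and normal-tissue patches, directly mirroring the face detection/classification use of eigenfaces.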
Abstract:
Emotions are crucial to users' decision making in recommendation processes. We first introduce ambient recommender systems, which arise from the analysis of new trends in the exploitation of emotional context in the next generation of recommender systems. We then present results of these new trends in real-world applications through the smart prediction assistant (SPA) platform, used in an intelligent learning guide with more than three million users. While most approaches to recommendation have focused on algorithm performance, SPA makes recommendations to users on the basis of emotional information acquired in an incremental way. This article provides a cross-disciplinary perspective on achieving this goal in such recommender systems through the SPA platform. The methodology applied in SPA is the result of a series of technology transfer projects for large real-world recommender systems.
Abstract:
This paper describes how artificial intelligence technologies can be integrated into a well-known computer-aided control system design (CACSD) framework, Matlab/Simulink, using an object-oriented approach. The aim is to build a framework to aid supervisory system analysis, design and implementation. The idea is to take advantage of an existing CACSD framework, Matlab/Simulink, so that engineers can first design a control system and then design a straightforward supervisory system for it within the same framework. Thus, expert systems and qualitative reasoning tools are incorporated into this popular CACSD framework to develop a computer-aided supervisory system design (CASSD) framework. Object-variables are introduced into Matlab/Simulink for sharing information between tools.
Abstract:
The Human Leukocyte Antigen (HLA) has been described in many cases as a prognostic factor for cancer. The main characteristic of the HLA genes, located on chromosome 6 (6p21.3), is their numerous polymorphisms. Nucleotide sequence analyses show that variation is restricted predominantly to the exons encoding the peptide-binding domains of the protein. Thus, HLA polymorphism defines the repertoire of peptides that bind to HLA allotypes, and this in turn defines an individual's ability to respond to exposure to the many infectious agents encountered over a lifetime. HLA typing has become an important clinical assay. Formalin-fixed, paraffin-embedded (FFPE) tissue samples are routinely collected in oncology. This material could serve as a good source of DNA, given that in past studies DNA collection assays were not normally carried out on most tissues or samples during regular clinical procedures. Considering that the most important problem with DNA from FFPE samples is fragmentation, we proposed a new method for typing the HLA-A allele from FFPE samples based on the sequences of exons 2, 3 and 4. We designed a set of 12 primers: four for HLA-A exon 2, three for HLA-A exon 3 and five for HLA-A exon 4, each according to the flanking sequences of its respective exon and the sequence variation among different alleles. Seventeen FFPE samples collected at the Karolinska University Hospital in Stockholm, Sweden, were subjected to PCR and the products were sequenced. Finally, all the sequences obtained were analysed and compared against the IMGT-HLA database. The FFPE samples had previously been HLA-typed, and those results were compared with the results of this method.

According to our results, the samples could be correctly sequenced. With this procedure, we can conclude that our study is the first sequence-based typing method that makes it possible to analyse old DNA samples for which no other source is available. This study opens the possibility of developing analyses to establish new relationships between HLA and different diseases, including cancer.
Abstract:
Experimental and epidemiological studies demonstrate that fetal growth restriction and low birth weight enhance the risk of chronic diseases in adulthood. Derangements in tissue-specific epigenetic programming of fetal and placental tissues are a suggested mechanism, of which DNA methylation is the best understood. DNA methylation profiling in human tissue is mostly performed on DNA from white blood cells. The objective of this study was to assess DNA methylation profiles of IGF2 DMR and H19 in DNA derived from four tissues of the newborn. From 6 newborns we obtained DNA from fetal placental tissue (n = 5), umbilical cord CD34+ hematopoietic stem cells (HSC) and CD34- mononuclear cells (MNC) (n = 6), and umbilical cord Wharton jelly (n = 5). HSC were isolated using magnetic-activated cell separation. DNA methylation of the imprinted fetal growth genes IGF2 DMR and H19 was measured in all tissues using quantitative mass spectrometry. ANOVA testing showed tissue-specific differences in DNA methylation of IGF2 DMR (p value 0.002) and H19 (p value 0.001), mainly due to higher methylation of IGF2 DMR in Wharton jelly (mean 0.65, sd 0.14) and lower methylation of H19 in placental tissue (mean 0.25, sd 0.02) compared with the other tissues. This study demonstrates the feasibility of assessing differential tissue-specific DNA methylation. Although the results have to be confirmed in larger sample sizes, our approach provides opportunities to investigate epigenetic profiles as an underlying mechanism of associations between pregnancy exposures, outcomes, and disease risks in later life.
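The tissue comparison above is a one-way ANOVA; its F statistic can be computed directly as the ratio of between-group to within-group mean squares. A sketch (the methylation fractions below are synthetic, not the study's data):

```python
import numpy as np

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square, for a list of per-group samples."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Synthetic methylation fractions for two tissues (not the study's data)
f_stat = one_way_anova_f([[0.10, 0.20, 0.15], [0.80, 0.90, 0.85]])
```

A p value would then come from the F distribution with (df_between, df_within) degrees of freedom, e.g. via scipy.stats.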
Abstract:
We compared the properties of roots from different lines (genotypes) of tobacco either raised in tissue culture or grown from seed. The lines included unmodified plants and plants modified to express reduced activity of the enzyme cinnamoyl-CoA reductase, which has a pivotal role in lignin biosynthesis. The size and structure of the rhizosphere microbial community, characterized by adenosine triphosphate and phospholipid fatty acid analyses, were related to root chemistry (specifically the soluble carbohydrate concentration) and to the decomposition rate of the roots. Root material from unmodified plants decomposed faster following tissue culture than following seed culture, and the faster-decomposing material had significantly higher soluble carbohydrate concentrations. These observations are linked to the larger microbial biomass and greater diversity of the rhizosphere communities of tissue-culture-propagated plants.