905 results for CENTERLINE EXTRACTION
Abstract:
Nanotechnology is a recently established research area that deals with the manipulation and control of matter at dimensions ranging from 1 to 100 nanometers. At the nanoscale, materials exhibit singular physical, chemical and biological phenomena, very different from those manifested at the conventional scale. In medicine, nanosized compounds and nanostructured materials offer improved drug targeting and efficacy with respect to traditional formulations, revealing novel diagnostic and therapeutic properties. At the same time, the complexity of information at the nano level is much higher than at the conventional biological levels (from populations down to the cell), so any nanomedical research workflow inherently demands advanced information management strategies. Unfortunately, Biomedical Informatics (BMI) has not yet provided the framework needed to deal with these information challenges at the nano level, nor adapted its methods and tools to the new research field.
In this context, the novel area of nanoinformatics aims to build bridges between medicine, nanotechnology and informatics, fostering the application of computational methods to solve the informational issues arising at the wide intersection between biomedicine and nanotechnology. These observations determine the context of this doctoral dissertation, which focuses on analyzing the nanomedical domain in depth and on developing nanoinformatics strategies and tools to map across disciplines, data sources, computational resources, and information extraction and text mining techniques, with the ultimate goal of leveraging the available nanomedical data. Through real-life case studies, the author analyzes research tasks in nanomedicine that require, or could benefit from, the use of nanoinformatics methods and tools, illustrating the present drawbacks and limitations of BMI approaches when dealing with data belonging to the nanomedical domain. Three scenarios, comparing the biomedical and nanomedical contexts, are discussed as examples of activities that researchers perform while conducting their research: i) searching the Web for data sources and computational resources supporting their research; ii) searching the scientific literature for experimental results and publications related to their research; and iii) searching clinical trial registries for clinical results related to their research. These activities depend on informatics tools and services such as web browsers, databases of citations and abstracts indexing the biomedical literature, and web-based clinical trial registries, respectively.
For each scenario, this document provides a detailed analysis of the potential information barriers that could hamper the successful development of the research tasks in both fields (biomedicine and nanomedicine), with special emphasis on the challenges facing nanomedical research, where the major barriers have been found. The author illustrates how the application of BMI methodologies to these scenarios proves successful in the biomedical domain, whereas the same methodologies show severe limitations when applied to the nanomedical context. To address these limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics of information at the nano level. The approach consists of an in-depth analysis of the scientific literature and of the available clinical trial registries to extract relevant information about experiments and results in nanomedicine (textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.), followed by the development of mechanisms to structure and analyze this information automatically. The analysis concluded with the generation of a gold standard (a manually annotated training and test set), which was applied to the automatic classification of clinical trial summaries, distinguishing studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work aims to provide the methods needed to organize, curate, filter and validate part of the currently available nanomedical data on a scale suitable for decision-making. Similar analyses of other nanomedical research tasks would help identify which nanoinformatics resources are required to meet current goals in the field, and would help generate densely populated, machine-interpretable reference datasets from the literature and other unstructured sources, on which novel algorithms could be tested to infer valuable new information for nanomedical research.
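The classification step described above lends itself to a brief illustration. The following is a minimal sketch, not the dissertation's actual pipeline: it assumes a TF-IDF bag-of-words representation and a logistic regression classifier (both assumptions), with toy stand-in texts and labels in place of the annotated gold standard, to separate nano-related trial summaries from traditional pharmaceutical ones.

```python
# Minimal sketch, assuming scikit-learn; the texts and labels below are
# illustrative stand-ins, not the dissertation's gold-standard data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Phase I study of a liposomal doxorubicin nanoparticle in solid tumors",
    "Randomized trial of oral metformin in patients with type 2 diabetes",
    "Safety study of a polymeric nanoparticle vaccine adjuvant",
    "Open-label study of ibuprofen for chronic postoperative pain",
]
train_labels = [1, 0, 1, 0]  # 1 = nanodrug/nanodevice trial, 0 = traditional

# TF-IDF unigrams and bigrams feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

print(clf.predict(["Pilot study of a dendrimer nanoparticle contrast agent"]))
```

Any such classifier would, of course, be trained and evaluated on the full annotated gold standard rather than on a handful of examples.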
Abstract:
A numerical method for analysing the stability of transverse galloping from experimental measurements, as an alternative to polynomial fitting of the transverse force coefficient Cz, is proposed in this paper. The Glauert–Den Hartog criterion is used to determine the region of angles of attack (pitch angles) prone to galloping. An analytic solution (based on a polynomial curve of Cz) is used to validate the method and to evaluate the discretization errors. Several bodies (of biconvex, D-shape and rhomboidal cross sections) have been tested in a wind tunnel, and the stability of the galloping region has been analysed with the new method. Finally, an algorithm is presented to determine the pitch angle of the body at which the maximum kinetic energy can be extracted from the flow.
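A brief sketch of how such a stability check can be computed from discrete measurements may be useful here. It assumes the usual form of the Glauert–Den Hartog criterion, H(alpha) = dCz/dalpha + Cx < 0, with Cx the drag coefficient, and approximates the derivative by central finite differences on the measured grid; the paper's own discretization and error analysis may differ.

```python
import numpy as np

def galloping_prone_angles(alpha_deg, cz, cx):
    """Pitch angles at which the Glauert-Den Hartog criterion flags galloping."""
    alpha = np.radians(alpha_deg)
    dcz_dalpha = np.gradient(cz, alpha)  # central differences on measured data
    h = dcz_dalpha + cx                  # Den Hartog function H(alpha)
    return alpha_deg[h < 0.0]            # instability region: H < 0

# Illustrative synthetic data on a coarse grid of pitch angles (assumptions).
alpha_deg = np.linspace(0.0, 90.0, 19)
cz = -np.sin(np.radians(2.0 * alpha_deg))  # assumed transverse force curve
cx = np.full_like(alpha_deg, 0.1)          # assumed drag coefficient
print(galloping_prone_angles(alpha_deg, cz, cx))
```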
Abstract:
In this work, an analytical method was developed for the determination of pharmaceutical drugs in biosolids. Samples were extracted with an acidic mixture of water and acetone (1:2, v/v), and supported liquid extraction was used for the clean-up of extracts, eluting with ethyl acetate:methanol (90:10, v/v). The compounds were determined by gas chromatography–tandem mass spectrometry using matrix-matched calibration after silylation to form their t-butyldimethylsilyl derivatives. The method offers several advantages, such as fairly simple operation for the analysis of complex matrices, the use of inexpensive glassware and low solvent volumes. Satisfactory mean recoveries were obtained with the developed method, ranging from 70 to 120% with relative standard deviations (RSDs) ≤ 13%, and limits of detection between 0.5 and 3.6 ng g−1. The method was then successfully applied to biosolids samples collected in Madrid and Catalonia (Spain). Eleven of the sixteen target compounds were detected in the studied samples, at levels up to 1.1 µg g−1 (salicylic acid). Ibuprofen, caffeine, paracetamol and fenofibrate were detected in all of the samples analyzed.
Abstract:
The importance of cholesterol for endocytosis has been investigated in HEp-2 and other cell lines by using methyl-β-cyclodextrin (MβCD) to selectively extract cholesterol from the plasma membrane. MβCD treatment strongly inhibited endocytosis of transferrin and EGF, whereas endocytosis of ricin was less affected. The inhibition of transferrin endocytosis was completely reversible. On removal of MβCD it was restored by continued incubation of the cells even in serum-free medium. The recovery in serum-free medium was inhibited by addition of lovastatin, which prevents cholesterol synthesis, but endocytosis recovered when a water-soluble form of cholesterol was added together with lovastatin. Electron microscopical studies of MβCD-treated HEp-2 cells revealed that typical invaginated caveolae were no longer present. Moreover, the invagination of clathrin-coated pits was strongly inhibited, resulting in accumulation of shallow coated pits. Quantitative immunogold labeling showed that transferrin receptors were concentrated in coated pits to the same degree (approximately sevenfold) after MβCD treatment as in control cells. Our results therefore indicate that although clathrin-independent (and caveolae-independent) endocytosis still operates after removal of cholesterol, cholesterol is essential for the formation of clathrin-coated endocytic vesicles.
Abstract:
Matrix-assisted laser desorption/ionization (MALDI) time-of-flight mass spectrometry was used to detect and order DNA fragments generated by Sanger dideoxy cycle sequencing. This was accomplished by improving the sensitivity and resolution of the MALDI method using a delayed ion extraction technique (DE-MALDI). The cycle sequencing chemistry was optimized to produce as much as 100 fmol of each specific dideoxy-terminated fragment, generated from extension of a 13-base primer annealed on 40- and 50-base templates. Analysis of the resultant sequencing mixture by DE-MALDI identified the appropriate termination products. The technique provides a new non-gel-based method to sequence DNA which may ultimately have considerable speed advantages over traditional methodologies.
Abstract:
Soil vapor extraction (SVE) systems can be used to remediate environmental sites that have been contaminated with petroleum products. However, SVE systems rely on applying a vacuum to draw the vapors through the pore space in the soil, so they are less effective in low-permeability soils. This study aims to determine whether SVE systems can be used on low-permeability soils in conjunction with companion technologies. The results indicate that SVE systems can be utilized in low-permeability soils if used together with companion technologies that increase soil permeability and contaminant volatilization. Based on contaminant removal rates and cost estimates, the most promising companion technology is six-phase soil heating.
Abstract:
In this paper we present an automatic system for the extraction of syntactic-semantic patterns, applied to the development of multilingual processing tools. In order to achieve optimal methods for the automatic treatment of more than one language, we propose the use of syntactic-semantic patterns. These patterns are formed by a verbal head and its main arguments, and they are aligned across languages. The system extracts and aligns such patterns from two manually annotated corpora, and we evaluate the main linguistic problems that must be dealt with in the alignment process.
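As a rough illustration of what a verb-headed pattern looks like in practice, the sketch below extracts (subject, verbal head, object) triples with spaCy; the choice of spaCy and of these dependency labels is an assumption made for illustration, not the system described in the paper, which works from manually annotated corpora.

```python
# Minimal sketch, assuming spaCy and its English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_patterns(text):
    """Crude (subjects, verbal head, objects) patterns headed by a verb."""
    patterns = []
    for sent in nlp(text).sents:
        for tok in sent:
            if tok.pos_ == "VERB":
                subj = [c.lemma_ for c in tok.children if c.dep_ in ("nsubj", "nsubjpass")]
                obj = [c.lemma_ for c in tok.children if c.dep_ in ("dobj", "obj")]
                if subj or obj:
                    patterns.append((subj, tok.lemma_, obj))
    return patterns

print(extract_patterns("The committee approved the proposal after a long debate."))
```

Aligning such patterns across languages would then amount to matching triples extracted from parallel sentences, which is where alignment problems such as cross-language argument reordering or dropped subjects typically surface.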
Abstract:
Communication presented at the VIII Simposium Nacional de Reconocimiento de Formas y Análisis de Imágenes (National Symposium on Pattern Recognition and Image Analysis), Bilbao, May 1999.
Abstract:
Paper submitted to the 39th International Symposium on Robotics ISR 2008, Seoul, South Korea, October 15-17, 2008.
Abstract:
Feature vectors can be anything from simple surface normals to more complex feature descriptors. Feature extraction is important for solving various computer vision problems, e.g. registration, object recognition and scene understanding. Most of these techniques are too complex to be computed online in the contexts where they are applied, so computing these features in real time for many points in a scene is infeasible. In this work, a hardware-based implementation of 3D feature extraction and 3D object recognition is proposed to accelerate these methods and, therefore, the entire pipeline of RGBD-based computer vision systems in which such features are typically used. Using a GPU as a general-purpose processor can achieve considerable speed-ups compared with a CPU implementation. In this work, favourable results are obtained using the GPU to accelerate the computation of a 3D descriptor based on the calculation of 3D semi-local surface patches of partial views, allowing the descriptor to be computed at several points of a scene in real time. The benefits of the accelerated descriptor have been demonstrated in object recognition tasks. The source code will be made publicly available as a contribution to the open-source Point Cloud Library.
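As a CPU-only illustration of the kind of computation being accelerated, the sketch below estimates per-point surface normals by PCA over k-nearest-neighbour patches in a batched, vectorized form. This is an assumed stand-in for the first stage of a descriptor pipeline, not the paper's semi-local surface-patch descriptor itself; the batched eigendecomposition is exactly the pattern that maps well onto a GPU (e.g., by swapping numpy for cupy).

```python
# Minimal sketch: batched PCA normals over k-NN neighbourhoods (numpy/scipy).
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    """Per-point normal: eigenvector of the smallest covariance eigenvalue."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)                  # (N, k) neighbour indices
    nbrs = points[idx]                                # (N, k, 3) neighbourhoods
    centered = nbrs - nbrs.mean(axis=1, keepdims=True)
    cov = np.einsum("nki,nkj->nij", centered, centered) / k  # (N, 3, 3)
    eigvals, eigvecs = np.linalg.eigh(cov)            # batched, ascending order
    return eigvecs[:, :, 0]                           # (N, 3) unit normals

points = np.random.rand(10000, 3)
print(estimate_normals(points).shape)                 # -> (10000, 3)
```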
Abstract:
This article summarizes research on the application of a conductive cement paste as an anode in the now-classical technique of electrochemical chloride extraction, applied to a concrete structural element (a pillar) by spraying the paste on its surface. The sprayed cement paste, made conductive by the addition of graphite powder, is particularly useful for treating sizable vertical surfaces such as structural supports. The outcomes indicate that this kind of anode not only provides electrochemical chloride removal with comparable efficiency, but is also able to retain moisture even without the use of a continuous dampening system.
Abstract:
Quaternary ammonium-functionalized silica materials were synthesized and applied to the solid-phase extraction (SPE) of aromatic amines, which are classified as priority pollutants by the US Environmental Protection Agency. Hexamethylenetetramine, used here for silica surface modification for the first time, was employed as an SPE sorbent under normal-phase conditions. The hexaminium-functionalized silica demonstrated excellent extraction efficiencies for o-toluidine, 4-ethylaniline and quinoline (recoveries 101–107%), while recoveries for N,N-dimethylaniline and N-isopropylaniline were low to moderate (14–46%). In addition, the suitability of 1-alkyl-3-(propyl-3-sulfonate) imidazolium-functionalized silica as an SPE sorbent was tested under normal-phase conditions; the recoveries achieved for the five aromatic amines ranged from 89 to 99%. The stability of the sorbent was evaluated during and after 150 extractions, and coefficients of variation between 4.5 and 10.2% proved the high stability of the synthesized sorbent. Elution was carried out using acetonitrile in the case of the hexaminium-functionalized silica and water for the 1-alkyl-3-(propyl-3-sulfonate) imidazolium-functionalized silica sorbent. After extraction, the analytes were separated and detected by liquid chromatography with ultraviolet detection (LC-UV). The retention mechanism of the materials was based primarily on polar hydrogen bonding and π–π interactions. A comparison with activated silica showed that the quaternary ammonium-functionalized materials offer different selectivity and better extraction efficiencies for aromatic amines. Finally, the 1-alkyl-3-(propyl-3-sulfonate) imidazolium-functionalized silica sorbent was successfully tested for the extraction of wastewater and soil samples.
Abstract:
Complete EIT material
Abstract:
A novel approach is presented whereby gold nanostructured screen-printed carbon electrodes (SPCnAuEs) are combined with in-situ ionic liquid formation dispersive liquid–liquid microextraction (in-situ IL-DLLME) and microvolume back-extraction for the determination of mercury in water samples. In-situ IL-DLLME is based on a simple metathesis reaction between a water-miscible ionic liquid (IL) and a salt, forming a water-immiscible IL within the sample solution. The mercury complex with ammonium pyrrolidinedithiocarbamate is extracted from the sample solution into the water-immiscible IL formed in-situ. An ultrasound-assisted procedure is then employed to back-extract the mercury into 10 µL of a 4 M HCl aqueous solution, which is finally analyzed using the SPCnAuEs. The sample preparation methodology was optimized using a multivariate optimization strategy. Under optimized conditions, a linear range between 0.5 and 10 µg L−1 was obtained, with a correlation coefficient of 0.997 for six calibration points. The limit of detection was 0.2 µg L−1, below the threshold values established by the Environmental Protection Agency and the European Union (2 µg L−1 and 1 µg L−1, respectively). The repeatability of the proposed method was evaluated at two spiking levels (3 and 10 µg L−1), and a coefficient of variation of 13% was obtained in both cases. The performance of the proposed methodology was evaluated in real-world water samples, including tap water, bottled water, river water and industrial wastewater; relative recoveries between 95% and 108% were obtained.
Abstract:
This article describes research on the application of cathodic protection (CP) and cathodic prevention (CPrev), in some cases with a pre-treatment of electrochemical chloride extraction (ECE), to representative specimens of reinforced concrete structures, using an anodic system consisting of a graphite-cement paste applied as a coating on the surface. The aim of this research is to assess the suitability of this anode for the aforementioned electrochemical treatments. The efficiency of the anode has been clearly demonstrated, as well as its capability to support a combined process of ECE followed by CP.