983 results for Extraction techniques
Abstract:
The work presented in this thesis focuses on the open-ended coaxial-probe frequency-domain reflectometry technique for measuring the complex permittivity of dispersive dielectric multilayer materials at microwave frequencies. An effective dielectric model is introduced and validated to extend the applicability of this technique to multilayer materials in an on-line system context. In addition, the thesis presents: 1) a numerical study of the imperfect contact at the probe-material interface, 2) a review of the available models and techniques, and 3) a new classification of the extraction schemes, with guidelines on how they can be used to improve the overall performance of the probe according to the problem requirements.
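For context, a generic lumped-capacitance relation is often used as the starting point for permittivity extraction with open-ended coaxial probes; the form below is a textbook sketch, not the effective multilayer model developed in the thesis, and the aperture capacitance C_0, fringing capacitance C_f and line impedance Z_0 are assumed to come from calibration.

% Generic lumped-capacitance model of an open-ended coaxial probe;
% Gamma is the measured reflection coefficient at the aperture plane.
Y(\omega) = Y_0\,\frac{1-\Gamma}{1+\Gamma} = j\omega C_f + j\omega C_0\,\varepsilon_r^{*}(\omega)
\quad\Longrightarrow\quad
\varepsilon_r^{*}(\omega) = \frac{1-\Gamma}{j\omega Z_0 C_0\,(1+\Gamma)} - \frac{C_f}{C_0}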
Abstract:
Over the past ten years, the cross-correlation of long time series of ambient seismic noise (ASN) has been widely adopted to extract the surface-wave part of the Green's functions (GF). This stochastic procedure relies on the assumption that the ASN wavefield is diffuse and stationary. At frequencies below 1 Hz, the ASN is mainly composed of surface waves, whose origin is attributed to the sea-wave climate. Consequently, marked directional properties may be observed, which call for a careful investigation of the location and temporal evolution of the ASN sources before attempting any GF retrieval. Within this general context, this thesis is aimed at a thorough investigation of the feasibility and robustness of noise-based methods for imaging complex geological structures at the local (about 10-50 km) scale. The study focused on the analysis of an extended (11-month) seismological data set collected at the Larderello-Travale geothermal field (Italy), an area for which the underground geological structures are well constrained thanks to decades of geothermal exploration. Focusing on the secondary microseism band (SM; f > 0.1 Hz), I first investigated the spectral features and the kinematic properties of the noise wavefield using beamforming analysis, highlighting a marked variability with time and frequency. For the 0.1-0.3 Hz frequency band and during spring and summer, the SM waves propagate with high apparent velocities and from well-defined directions, likely associated with ocean storms in the southern hemisphere. Conversely, at frequencies above 0.3 Hz the distribution of back-azimuths is more scattered, indicating that this frequency band is the most appropriate for the application of stochastic techniques. For this latter frequency interval, I tested two correlation-based methods, acting in the time (NCF) and frequency (modified SPAC) domains, which respectively yield estimates of the group- and phase-velocity dispersion. The velocity data provided by the two methods are markedly discordant; comparison with independent geological and geophysical constraints suggests that the NCF results are more robust and reliable.
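As a rough, hedged illustration of the time-domain NCF procedure (not the exact processing chain used in the thesis), the sketch below one-bit normalizes noise windows from two hypothetical stations, cross-correlates them and stacks the results; the window length, lag range and sampling rate are arbitrary assumptions.

import numpy as np

def ncf_stack(trace_a, trace_b, fs, win_s=600.0, max_lag_s=120.0):
    """Stack normalized cross-correlations of noise windows from two stations.

    trace_a, trace_b: 1-D arrays sampled at fs (Hz), assumed already band-passed
    (e.g. to the 0.3-1 Hz band mentioned in the abstract).
    """
    n_win = int(win_s * fs)
    n_lag = int(max_lag_s * fs)
    stack = np.zeros(2 * n_lag + 1)
    n_used = 0
    for start in range(0, min(len(trace_a), len(trace_b)) - n_win, n_win):
        a = np.sign(trace_a[start:start + n_win])        # one-bit normalization
        b = np.sign(trace_b[start:start + n_win])
        cc = np.correlate(a, b, mode="full")             # full cross-correlation
        mid = len(cc) // 2                               # zero-lag sample
        seg = cc[mid - n_lag:mid + n_lag + 1]
        stack += seg / (np.abs(seg).max() + 1e-12)       # per-window normalization
        n_used += 1
    return stack / max(n_used, 1)

# Usage with synthetic data standing in for two real records (2 hours at 10 Hz);
# after many windows the stack approximates the inter-station Green's function.
fs = 10.0
rng = np.random.default_rng(0)
ncf = ncf_stack(rng.standard_normal(int(fs * 7200)), rng.standard_normal(int(fs * 7200)), fs)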
Abstract:
We propose a novel methodology for generating realistic network flow traces to enable the systematic evaluation of network monitoring systems under various traffic conditions. Our technique uses a graph-based approach to model the communication structure observed in real-world traces and to extract traffic templates. By combining extracted and user-defined traffic templates, realistic network flow traces that comprise normal traffic and customized conditions are generated in a scalable manner. A proof-of-concept implementation demonstrates the utility and simplicity of our method in producing a variety of evaluation scenarios. We show that the extraction of templates from real-world traffic leads to a manageable number of templates that still enable an accurate re-creation of the original communication properties at the network flow level.
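A minimal sketch of the graph-based idea, under assumed flow-record fields and a deliberately simple notion of "template" (one per destination service); the template model in the paper is richer and is not reproduced here.

from collections import defaultdict

# Toy flow records: (src, dst, dst_port, bytes); the field names are assumptions.
flows = [
    ("10.0.0.1", "10.0.0.9", 443, 5200),
    ("10.0.0.2", "10.0.0.9", 443, 4800),
    ("10.0.0.3", "10.0.0.7", 53, 120),
]

# Directed communication graph: each node maps to the services it talks to.
graph = defaultdict(set)
for src, dst, port, _ in flows:
    graph[src].add((dst, port))

# Simple traffic templates: per destination service, record the client
# population and the observed flow sizes.
templates = defaultdict(lambda: {"clients": set(), "bytes": []})
for src, dst, port, size in flows:
    templates[(dst, port)]["clients"].add(src)
    templates[(dst, port)]["bytes"].append(size)

for (dst, port), t in templates.items():
    print(dst, port, len(t["clients"]), sum(t["bytes"]) / len(t["bytes"]))

# A trace generator would then replay such templates, possibly mixed with
# user-defined ones, to synthesize flows with a similar communication structure.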
Abstract:
Supercritical carbon dioxide is used to exfoliate graphite, producing small, few-layer graphitic flakes. Supercritical pressures of 2000, 2500, and 3000 psi and temperatures of 40, 50, and 60 °C were used to study the effect of critical density on the sizes and zeta potentials of the treated flakes. Photon correlation spectroscopy (PCS), Brunauer-Emmett-Teller (BET) surface area measurement, field-emission scanning electron microscopy (FE-SEM), and atomic force microscopy (AFM) are used to observe the features of the flakes. N-methyl-2-pyrrolidinone (NMP), dimethylformamide (DMF), and isopropanol are used as co-solvents to enhance the supercritical carbon dioxide treatment. The PCS results show that the flakes obtained from high-critical-density treatment (low temperature and high pressure) are more stable, owing to more negative zeta potentials, but are smaller than those from low-critical-density treatment (high temperature and low pressure). However, when an additional 1-hour sonication is applied, the flakes from low-critical-density treatment become smaller than those from high-critical-density treatment, probably because more CO2 molecules are lodged between the layers of the graphitic flakes. The zeta potentials of the sonicated samples were slightly more negative than those of the non-sonicated samples. The NMP and DMF co-solvents maintain stability and prevent reaggregation of the flakes better than isopropanol. The flakes tend to be larger and more stable as the treatment time increases, since a larger flat area of graphite is exfoliated. In these experiments, temperature has more impact on the flakes than pressure. The BET surface area results show that CO2 penetrates the graphite layers more than N2; moreover, the negative surface area of the treated graphite indicates that CO2 molecules may be adsorbed between the graphite layers during supercritical treatment. The FE-SEM and AFM images show that the flakes have various shapes and sizes. The effect of surfactants can be observed in the FE-SEM images of the samples treated in a one percent by weight solution of sodium dodecylbenzene sulfonate (SDBS) in water, since the SDBS residue covers all of the remaining flakes. The AFM images show that the vertical thickness of the graphitic flakes can range from several nanometers (less than ten layers thick) to more than a hundred nanometers. In conclusion, supercritical carbon dioxide treatment is a promising alternative to mechanical and chemical exfoliation techniques for the large-scale production of thin graphitic flakes, breaking graphite down into flakes only a few graphene layers thick.
Abstract:
In this thesis, I study skin lesion detection and its applications to skin cancer diagnosis. A skin lesion detection algorithm based on color information and thresholding is proposed. Several color spaces are studied for the proposed algorithm and the detection results are compared; experimental results show that the YUV color space achieves the best performance. In addition, I develop a distance-histogram-based threshold selection method, which proves better than other adaptive threshold selection methods for color detection. Beyond the detection algorithms, I also investigate GPU speed-up techniques for skin lesion extraction, and the results show that GPUs have potential for accelerating skin lesion extraction. Based on the proposed skin lesion detection algorithms, I developed a mobile skin cancer diagnosis application. With this application installed, the user can employ an iPhone as a diagnosis tool to find potential skin lesions on a person's skin and compare the lesions detected by the iPhone with those stored in a database on a remote server.
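As a hedged illustration of the color-and-threshold idea (the transform constants are the standard BT.601 weights, and the fixed threshold below stands in for the distance-histogram selection developed in the thesis), the sketch flags pixels whose chroma deviates strongly from the average skin color.

import numpy as np

def rgb_to_yuv(img):
    """Convert an HxWx3 float RGB image (values in 0..1) to YUV (BT.601 weights)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return np.stack([y, u, v], axis=-1)

def lesion_mask(img, thresh=0.08):
    """Flag pixels whose UV distance from the mean skin chroma is large.

    `thresh` is a fixed placeholder; the thesis selects it adaptively from a
    distance histogram, which is not reimplemented here.
    """
    yuv = rgb_to_yuv(img)
    uv = yuv[..., 1:]
    mean_uv = uv.reshape(-1, 2).mean(axis=0)          # assumed skin baseline
    dist = np.linalg.norm(uv - mean_uv, axis=-1)      # per-pixel chroma distance
    return dist > thresh

# Usage with a synthetic image in place of a real photograph:
img = np.random.rand(64, 64, 3)
mask = lesion_mask(img)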
Abstract:
Randomised controlled trials (RCTs) of psychotherapeutic interventions assume that specific techniques are used in the treatments and are responsible for changes in the client's symptoms. This assumption also holds for meta-analyses, where evidence for specific interventions and techniques is compiled. However, it has also been argued that different treatments share important techniques and that an emerging consensus about useful treatment strategies is leading to a greater integration of treatments. This makes assumptions about the effectiveness of specific intervention ingredients questionable if the shared (common) techniques are used in interventions more often than the unique techniques. This study investigated the unique and shared techniques in RCTs of cognitive-behavioural therapy (CBT) and short-term psychodynamic psychotherapy (STPP). Psychotherapeutic techniques were coded from 42 masked treatment descriptions of RCTs in the field of depression (1979-2010). CBT techniques were often used in studies identified as either CBT or STPP, whereas STPP techniques were only used in STPP-identified studies. Empirical clustering of the treatment descriptions did not confirm the original distinction between CBT and STPP, but instead showed substantial heterogeneity within both approaches. Extraction of psychotherapeutic techniques from treatment descriptions is feasible and could be used as a content-based approach to classify treatments in systematic reviews and meta-analyses.
Abstract:
OBJECTIVE: To describe (1) the preoperative findings and surgical technique, (2) intraoperative difficulties, and (3) postoperative complications and long-term outcome of equine cheek tooth extraction using a minimally invasive transbuccal screw extraction (MITSE) technique. STUDY DESIGN: Retrospective case series. ANIMALS: Fifty-four equids: 50 horses, 3 ponies, and 1 mule. METHODS: Fifty-eight MITSE procedures were performed to extract cheek teeth in 54 equids. Peri- and intraoperative difficulties, as well as short-term (<1 month) and long-term (>6 months) postoperative complications, were recorded. Follow-up information was obtained through telephone interviews, with specific inquiries about nasal discharge, facial asymmetry, and findings consistent with surgical site infection. RESULTS: Preoperative findings that prompted exodontia included 50 cheek teeth with apical infections, 48 fractures, 4 neoplasias, 2 displacements, and 1 supernumerary tooth. Previous oral extraction had been attempted but failed in 55/58 (95%) cases, because of cheek tooth fracture in 28 or insufficient clinical crown for extraction with forceps in 27. MITSE succeeded in removing the entire targeted dental structure in 47/58 (81%) procedures. MITSE failed to remove the entire targeted dental structure in 11/58 (19%) procedures and was followed by repulsion in 10/11 (91%). Short-term postoperative complications included bleeding (4/58 procedures, 7%) and transient facial nerve paralysis (4/58 procedures, 7%). Owners were satisfied with the functional and cosmetic outcome for 40/41 (98%) animals with follow-up. CONCLUSION: MITSE offers an alternative for cheek tooth extraction in equids when conventional oral extraction is not possible or has failed. Overall, there was low morbidity, which compares favorably with invasive buccotomy or repulsion techniques.
Abstract:
This article describes work performed on the database of questions belonging to the various opinion polls carried out in Spain during the last 50 years. Approximately half of the questions are provided with a title, while the other half remain untitled. The work and the techniques implemented to automatically generate titles for the untitled questions are described. This process is performed over very short texts, and the generated titles are subject to strong stylistic conventions and should be fully grammatical pieces of Spanish.
Abstract:
Purpose: In this work, we present the analysis, design and optimization of an experimental device recently developed in the UK, called the 'GP' Thrombus Aspiration Device (GPTAD). This device has been designed to remove blood clots without the need to make contact with the clot itself, thereby potentially reducing the risk of problems such as downstream embolisation. Method: To obtain the minimum pressure necessary to extract the clot and to optimize the device, we have simulated the performance of the GPTAD, analysing the effects of resistances, compliances and inertances. We model a range of GPTAD diameters, considering different forces of adhesion of the blood clot to the artery wall and different lengths of blood clot. In each case we determine the optimum pressure required to extract the blood clot from the artery using the GPTAD, which is attached at its proximal end to a suction pump. Results: We then compare the results of our mathematical modelling with measurements made in the laboratory using plastic tube models of arteries of comparable diameter, from which abattoir porcine blood clots are extracted using the GPTAD. The suction pressures required for clot extraction in the plastic tube models compare favourably with those predicted by the mathematical modelling. Discussion & Conclusion: We conclude that the mathematical modelling is a useful technique for predicting the performance of the GPTAD and may potentially be used in optimising the design of the device.
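The abstract describes an electrical-analogue model built from resistances, compliances and inertances; a generic form of such a lumped-parameter relation, together with an assumed clot-detachment condition, is sketched below. All symbols and the exact structure are assumptions, not the model actually fitted in the paper.

% Generic lumped-parameter relation between the applied suction pressure drop
% and the flow Q(t) through the device; R, L and C denote the hydraulic
% resistance, inertance and compliance of the catheter/clot system.
\Delta P(t) = R\,Q(t) + L\,\frac{dQ(t)}{dt} + \frac{1}{C}\int_0^{t} Q(\tau)\,d\tau

% Assumed extraction condition: the force produced by the minimum suction
% pressure over the clot cross-section A_c must exceed the adhesion force.
\Delta P_{\min}\,A_c \ge F_{\mathrm{adh}}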
Abstract:
This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources to combine. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service and agent. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, corresponds to content discovery in the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that introspect the discovered data and services from the web, together with the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific resources of the web. By defining plans, an agent can be configured to target specific resources. The discovery framework has been evaluated in different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories from the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contributions of the thesis are the unified discovery framework, which allows configuring agents to perform automated tasks; a scraping ontology defined for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm for the automated construction and maintenance of these mappings from the visual information in web resources; and a common unified model for the discovery of services, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and the construction of a base of discovery rules.
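A toy rendering of a content-level discovery rule (a mapping from markup patterns to semantic properties, applied to a resource representation); the selectors, property names and regex-based matching are illustrative stand-ins for the scraping ontology and rule language defined in the thesis.

import re

# Illustrative discovery rules: a pattern over the HTML markup mapped to a
# semantic property of the extracted entity (names are assumptions).
RULES = {
    "headline": r'<h1 class="title">(.*?)</h1>',
    "author":   r'<span class="byline">(.*?)</span>',
    "body":     r'<div class="article-body">(.*?)</div>',
}

def discover_content(html, rules=RULES):
    """Apply discovery rules to a resource representation and return the
    semantically labelled pieces of data that were found."""
    entity = {}
    for prop, pattern in rules.items():
        match = re.search(pattern, html, flags=re.S)
        if match:
            entity[prop] = match.group(1).strip()
    return entity

sample = '<h1 class="title">Example</h1><span class="byline">A. Writer</span>'
print(discover_content(sample))  # {'headline': 'Example', 'author': 'A. Writer'}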
Abstract:
The study of the effectiveness of cognitive rehabilitation processes and the identification of cognitive profiles, in order to define comparable populations, is a controversial area, yet it is strongly needed to improve therapies. There is limited evidence about the efficacy of cognitive rehabilitation. Many trials conclude that, in spite of an apparently good clinical response, the differences do not reach statistical significance. The common feature in all these trials is the heterogeneity of the populations. In this situation, observational studies on very well controlled cohorts, together with innovative methods in knowledge extraction, could provide methodological insights for the design of more accurate comparative trials. Some studies have correlated neuropsychological tests with patients' capacities [1, 2], and others have correlated tests with morphological changes in the brain [3]. The efficacy of the procedures depends on three main factors: the affectation profile, the scheduled tasks and the execution results. The relationship between them makes up cognitive rehabilitation as a discipline, but its structure is not properly defined. In this work we present a clustering method used in Neuro Personal Trainer (NPT) to group patients into cognitive profiles using data mining techniques. The system uses these clusters to personalize treatments, using the patient's assigned cluster to select which tasks are most suitable for their specific needs, by comparing the results obtained in the past by patients with the same profile.
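A minimal sketch of grouping patients into cognitive profiles by clustering standardized neuropsychological scores; plain k-means and the three feature names are assumptions, since the abstract does not specify the data mining algorithm used in NPT.

import numpy as np

def kmeans(X, k=3, n_iter=50, seed=0):
    """Plain k-means over standardized test scores (illustrative, not NPT's method)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Rows: patients; columns: assumed scores (attention, memory, executive function).
scores = np.array([[0.2, 0.5, 0.3],
                   [0.8, 0.7, 0.9],
                   [0.1, 0.4, 0.2],
                   [0.9, 0.8, 0.7]])
scores = (scores - scores.mean(0)) / scores.std(0)     # standardize each test
profiles, _ = kmeans(scores, k=2)
# Tasks would then be selected by comparing a new patient's profile with the
# results obtained in the past by patients assigned to the same cluster.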
Abstract:
Particular elements of the voice are imprinted during the speech production process and are related to anatomical and physiological factors of the phonatory system or to psychosocial factors acquired by the speaker. ASR systems attempt to find those peculiar nuances of a voice and associate them with an individual or a group. Age and gender are factors inherent to the speaker which may be represented in the voice. This work attempts to differentiate those characteristics, isolate them and use them to detect the speaker's gender and age. Features based on the glottal pulse and the vocal tract are studied and analysed in order to achieve good results in both tasks. Classical methodologies (such as pitch and its derivatives) are avoided, since the requirements of those techniques may be too restrictive. The final scores reach almost 100% in gender recognition, whereas in age recognition they are around 80%. Factors related to gender and hormones seem to affect the voice even though they are not audible.
Abstract:
Nanotechnology is a recently developed research area that deals with the manipulation and control of matter at dimensions ranging from 1 to 100 nanometers. At the nanoscale, materials exhibit singular physical, chemical and biological phenomena, very different from those manifested at the conventional scale. In medicine, nanosized compounds and nanostructured materials offer improved drug targeting and efficacy with respect to traditional formulations, and reveal novel diagnostic and therapeutic properties. Nevertheless, the complexity of information at the nano level is much higher than at the conventional biological levels (from populations down to the cell). Thus, any nanomedical research workflow inherently demands advanced information management. Unfortunately, Biomedical Informatics (BMI) has not yet provided the framework needed to deal with these information challenges, nor adapted its methods and tools to the new research field. In this context, the novel area of nanoinformatics aims to build new bridges between medicine, nanotechnology and informatics, allowing the application of computational methods to solve informational issues at the wide intersection between biomedicine and nanotechnology. These observations determine the context of this doctoral dissertation, which focuses on analyzing the nanomedical domain in depth and on developing nanoinformatics strategies and tools to map across disciplines, data sources, computational resources, and information extraction and text mining techniques, in order to leverage the available nanomedical data. The author analyzes, through real-life case studies, research tasks in nanomedicine that would require or could benefit from the use of nanoinformatics methods and tools, illustrating the present drawbacks and limitations of BMI approaches when dealing with data belonging to the nanomedical domain. Three different scenarios, comparing the biomedical and nanomedical contexts, are discussed as examples of activities that researchers perform while conducting their research: i) searching the Web for data sources and computational resources supporting their research; ii) searching the literature for experimental results and publications related to their research; and iii) searching clinical trial registries for clinical results related to their research. These activities depend on the use of informatics tools and services, such as web browsers, databases of citations and abstracts indexing the biomedical literature, and web-based clinical trial registries, respectively. For each scenario, this document provides a detailed analysis of the potential information barriers that could hamper the successful development of the different research tasks in both fields (biomedicine and nanomedicine), emphasizing the existing challenges for nanomedical research, where the major barriers have been found. The author illustrates how the application of BMI methodologies to these scenarios proves successful in the biomedical domain, whereas these methodologies present severe limitations when applied to the nanomedical context. To address such limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics of information at the nano level. This approach consists of an in-depth analysis of the scientific literature and of the available clinical trial registries to extract relevant information about experiments and results in nanomedicine (textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.), followed by the development of mechanisms to automatically structure and analyze this information. This analysis resulted in the generation of a gold standard (a manually annotated training and reference set), which was applied to the automatic classification of clinical trial summaries, distinguishing studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work aims to provide the methods necessary for organizing, curating and validating existing nanomedical data on a scale suitable for decision-making. Similar analyses for other nanomedical research tasks would help to detect which nanoinformatics resources are required to meet current goals in the field, as well as to generate densely populated and machine-interpretable reference datasets from the literature and other unstructured sources for testing novel algorithms and inferring new valuable information for nanomedicine.
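A hedged sketch of the kind of classifier such a gold standard enables (bag-of-words classification of trial summaries into nano-related versus traditional studies); scikit-learn and the toy examples are assumptions, not the pipeline actually used in the dissertation.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy annotated summaries standing in for the manually curated gold standard.
summaries = [
    "liposomal nanoparticle formulation of doxorubicin for solid tumors",
    "implantable nanostructured device for targeted drug delivery",
    "randomized trial of oral metformin in type 2 diabetes",
    "phase III study of a conventional statin for hypercholesterolemia",
]
labels = ["nano", "nano", "traditional", "traditional"]

# TF-IDF bag-of-words features plus a linear classifier; a minimal setup.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(summaries, labels)
print(clf.predict(["trial of a polymeric nanoparticle vaccine adjuvant"]))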
Abstract:
The management of the long-lived radioactive waste produced by nuclear reactors is one of the main challenges of nuclear technology today. A possible option for its management is the transmutation of long-lived nuclides into shorter-lived ones. Accelerator Driven Subcritical Systems (ADS) are one of the technologies under development to achieve this goal. An ADS consists of a subcritical nuclear reactor maintained in a steady state by an external neutron source driven by a particle accelerator. The interest of these systems lies in their capacity to be loaded with fuels having larger contents of minor actinides than conventional critical reactors, thereby increasing the transmutation rates of these elements, which are the main contributors to the long-term radiotoxicity of nuclear waste. One of the key points identified for the operation of an industrial-scale ADS is the need to continuously monitor the reactivity of the subcritical system during operation. For this reason, since the 1990s a number of experiments have been conducted in zero-power subcritical assemblies (MUSE, RACE, KUCA, Yalina, GUINEVERE/FREYA) in order to experimentally validate these techniques. In this context, the present thesis is concerned with the validation of reactivity monitoring techniques at the Yalina-Booster subcritical assembly. This assembly belongs to the Joint Institute for Power and Nuclear Research (JIPNR-Sosny) of the National Academy of Sciences of Belarus. Experiments concerning reactivity monitoring were performed in this facility in 2008 under the EUROTRANS project of the 6th EU Framework Programme, under the direction of CIEMAT. Two types of experiments were carried out: experiments with a pulsed neutron source (PNS) and experiments with a continuous source with short interruptions (beam trips). For the first type, the PNS experiments, two fundamental techniques exist to measure the reactivity, known as the prompt-to-delayed neutron area-ratio technique (or Sjöstrand technique) and the prompt neutron decay constant technique. However, previous experiments have shown the need to apply correction techniques to take into account the spatial and energy effects present in a real system and thus obtain accurate values of the reactivity. In this thesis, these corrections have been investigated through simulations of the system with the Monte Carlo code MCNPX. This research has also served to propose a generalized version of these techniques, in which relationships between the reactivity of the system and the measured quantities are obtained through Monte Carlo simulations. The second type of experiments, with a continuous source and beam trips, is more likely to be employed in an industrial ADS. The generalized version of the techniques developed for the PNS experiments has also been applied to the results of these experiments. Furthermore, the work presented in this thesis is, to my knowledge, the first time that the reactivity of a subcritical system has been monitored during operation with three different techniques simultaneously: the current-to-flux, the source-jerk and the prompt neutron decay techniques. The cases analyzed include the fast variation of the system reactivity (insertion and extraction of a control rod) and the fast variation of the neutron source (a long beam interruption and subsequent recovery).
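For reference, the two PNS reactivity estimators named above have the following standard point-kinetics forms; the spatial and energy corrections investigated in the thesis modify these idealized expressions.

% Area-ratio (Sjöstrand) technique: reactivity in dollars from the prompt (A_p)
% and delayed (A_d) areas of the pulsed-source response.
\frac{\rho}{\beta_{\mathrm{eff}}} = -\,\frac{A_p}{A_d}

% Prompt neutron decay constant technique: alpha is the measured prompt decay
% constant, beta_eff the effective delayed neutron fraction and Lambda the
% mean neutron generation time.
\alpha = \frac{\rho - \beta_{\mathrm{eff}}}{\Lambda}
\quad\Longrightarrow\quad
\rho = \alpha\,\Lambda + \beta_{\mathrm{eff}}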
Abstract:
Feature vectors can be anything from simple surface normals to more complex feature descriptors. Feature extraction is important for solving various computer vision problems, e.g. registration, object recognition and scene understanding. Most of these techniques cannot be computed online due to their complexity and the context in which they are applied; therefore, computing these features in real time for many points in the scene is impossible. In this work, a hardware-based implementation of 3D feature extraction and 3D object recognition is proposed to accelerate these methods and therefore the entire pipeline of RGB-D-based computer vision systems where such features are typically used. The use of a GPU as a general-purpose processor can achieve considerable speed-ups compared with a CPU implementation. In this work, advantageous results are obtained by using the GPU to accelerate the computation of a 3D descriptor based on the calculation of 3D semi-local surface patches of partial views, which allows descriptor computation at several points of a scene in real time. The benefits of the accelerated descriptor have been demonstrated in object recognition tasks. The source code will be made publicly available as a contribution to the open-source Point Cloud Library.
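As a small, CPU-side illustration of the simplest feature mentioned above (per-point surface normals, the kind of independent per-point computation that maps well onto a GPU), the sketch below estimates a normal by PCA of a point's local covariance; it is not the semi-local surface-patch descriptor accelerated in the paper.

import numpy as np

def estimate_normal(points, query_idx, k=16):
    """Estimate the surface normal at one point of an Nx3 cloud by PCA of its
    k nearest neighbours (brute-force neighbour search, for clarity)."""
    p = points[query_idx]
    d = np.linalg.norm(points - p, axis=1)
    nn = points[np.argsort(d)[:k]]                 # k nearest neighbours
    cov = np.cov((nn - nn.mean(axis=0)).T)         # 3x3 local covariance
    eigvals, eigvecs = np.linalg.eigh(cov)         # ascending eigenvalues
    return eigvecs[:, 0]                           # normal = smallest-variance axis

# Usage: a noisy planar patch, whose estimated normal should be close to (0, 0, 1).
rng = np.random.default_rng(1)
cloud = np.c_[rng.uniform(-1, 1, (200, 2)), 0.01 * rng.standard_normal(200)]
print(estimate_normal(cloud, query_idx=0))
# Each point's normal is independent of the others, so the same computation
# parallelizes naturally across GPU threads, which is the kind of per-point
# independence that GPU acceleration exploits.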