930 results for extraction methods
Abstract:
The population of space debris has increased drastically in recent years. These objects have become a serious threat to active satellites. Because the relative velocities between space debris and satellites are high, debris objects may destroy active satellites through collisions. Furthermore, collisions involving massive objects produce large numbers of fragments, leading to significant growth of the space debris population. The long-term evolution of the debris population is essentially driven by so-called catastrophic collisions. An effective remediation measure to stabilize the population in Low Earth Orbit (LEO) is therefore the removal of large, massive space debris. To remove these objects, not only precise orbits but also more detailed information about their attitude states will be required. Important properties of an object targeted for removal are its spin period, its spin axis orientation, and their change over time. Rotating objects produce periodic brightness variations with frequencies related to the spin periods. Such a brightness variation over time is called a light curve. Both collecting and processing light curves is challenging for several reasons: light curves may be undersampled, low-frequency components due to phase angle and atmospheric extinction changes may be present, and beat frequencies may occur when the rotation period is close to a multiple of the sampling period. Depending on the method used to extract the frequencies, method-specific properties also have to be taken into account. The Astronomical Institute of the University of Bern (AIUB) light curve database will be introduced, which contains more than 1,300 light curves acquired over more than seven years. We will discuss the properties and reliability of the different time series analysis methods tested and currently used by AIUB for light curve processing. Extracted frequencies and reconstructed phases will be presented for some interesting targets, e.g. GLONASS satellites, for which SLR data were also available to confirm the periods. Finally, we will present the reconstructed phase and its evolution over time of a High-Area-to-Mass-Ratio (HAMR) object, which AIUB observed for several years.
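As an illustration of the frequency-extraction step described above, the following sketch recovers a spin period from a synthetic, unevenly sampled light curve with a Lomb-Scargle periodogram (the sampling, noise level, and 8-second period are invented for the example; this is not the AIUB pipeline):

```python
import numpy as np
from scipy.signal import lombscargle

def dominant_period(t, mag, omegas):
    """Return the trial period (2*pi/omega) with the highest
    Lomb-Scargle power for an unevenly sampled light curve."""
    # lombscargle expects zero-mean input, so centre the magnitudes.
    power = lombscargle(t, mag - mag.mean(), omegas)
    return 2.0 * np.pi / omegas[np.argmax(power)]

# Synthetic light curve: 8 s spin period, irregular sampling, mild noise.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 120.0, 400))
mag = 0.3 * np.sin(2.0 * np.pi / 8.0 * t) + 0.01 * rng.standard_normal(400)

omegas = np.linspace(0.05, 2.0, 4000)     # trial angular frequencies (rad/s)
period = dominant_period(t, mag, omegas)  # close to the true 8 s period
```

Undersampling and beat frequencies show up in such a periodogram as alias peaks, which is one reason method-specific properties matter when interpreting the extracted frequency.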
Abstract:
PURPOSE The aim of this work was to study the peri-implant soft tissue response, by evaluating both the recession and the papilla indexes, of patients treated with implants of two different configurations. In addition, data were stratified by tooth category, smoking habit and thickness of the buccal bone wall. MATERIALS AND METHODS The clinical trial was designed as a prospective, randomized, controlled multicenter study. Adults in need of one or more implants replacing teeth to be removed in the maxilla within the region 15-25 were recruited. Following tooth extraction, the site was randomly allocated to receive either a cylindrical or a conical/cylindrical implant. The following parameters were studied: (i) soft tissue recession (REC), measured by comparing the gingival zenith (GZ) score at baseline (permanent restoration) with that of the yearly follow-up visits over a period of 3 years (V1, V2 and V3); (ii) interdental Papilla Index (PI), with PI measurements performed at baseline and compared with those of the follow-up visits. In addition, data were stratified by different variables: tooth category: anterior (incisors and canine) or posterior (first and second premolar); smoking habit: smoker (habitual or occasional smoker at inclusion) or non-smoker (non-smoker or ex-smoker at inclusion); and thickness of the buccal bone wall (TB): TB ≤ 1 mm (thin buccal wall) or TB > 1 mm (thick buccal wall). RESULTS A total of 93 patients were treated with 93 implants. At the surgical re-entry one implant was mobile and was removed; moreover, one patient was lost to follow-up. Ninety-one patients were restored with 91 implant-supported permanent single crowns. After the 3-year follow-up, a mean gain of 0.23 mm of GZ was measured; moreover, 79% and 72% of mesial and distal papillae, respectively, were classified as >50% or complete.
From the stratification analysis, no significant differences were found between the mean GZ scores of implants with TB ≤ 1 mm (thin buccal wall) and TB > 1 mm (thick buccal wall) (P > 0.05, Mann-Whitney U-test) at baseline or at the V1, V2 and V3 follow-up visits. The other variables likewise did not seem to influence GZ changes over the follow-up period. Moreover, re-growth of the interproximal mesial and distal papillae was the general trend observed, independently of the variables studied. CONCLUSIONS Immediate single implant treatment may be considered a predictable option regarding soft tissue stability over a follow-up period of 3 years. Overall buccal soft tissue stability was observed in the GZ changes from baseline to the 3-year follow-up, with a mean GZ change of 0.23 mm. Nearly full papillary re-growth was detectable over a minimum follow-up period of 2 years for both cylindrical and conical/cylindrical implants. Neither the interproximal papilla filling nor the midfacial mucosa stability was influenced by variables such as fixture configuration, tooth category, smoking habit, or a buccal bone wall thickness of ≤ 1 mm (thin buccal wall).
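The stratified comparison reported above can be illustrated with a Mann-Whitney U-test on hypothetical GZ data (the values below are invented for the example, not the study's measurements):

```python
from scipy.stats import mannwhitneyu

# Hypothetical GZ changes (mm) at follow-up, stratified by buccal wall
# thickness; values invented for illustration.
thin_wall = [0.1, 0.3, 0.2, 0.4, 0.0, 0.3]    # TB <= 1 mm
thick_wall = [0.2, 0.1, 0.3, 0.2, 0.4, 0.1]   # TB > 1 mm

res = mannwhitneyu(thin_wall, thick_wall, alternative="two-sided")
# A p-value above 0.05 mirrors the study's finding of no significant
# difference between the thin- and thick-wall groups.
```

The Mann-Whitney U-test is a reasonable choice here because GZ changes in small stratified subgroups cannot be assumed to be normally distributed.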
Abstract:
OBJECTIVE: To describe (1) preoperative findings and surgical technique, (2) intraoperative difficulties, and (3) postoperative complications and long-term outcome of equine cheek tooth extraction using a minimally invasive transbuccal screw extraction (MITSE) technique. STUDY DESIGN: Retrospective case series. ANIMALS: Fifty-four equids: 50 horses, 3 ponies, and 1 mule. METHODS: Fifty-eight MITSE procedures were performed to extract cheek teeth in 54 equids. Peri- and intraoperative difficulties, as well as short-term (<1 month) and long-term (>6 months) postoperative complications, were recorded. Follow-up information was obtained through telephone interviews, with specific inquiries about nasal discharge, facial asymmetry, and findings consistent with surgical site infection. RESULTS: Preoperative findings that prompted exodontia included 50 cheek teeth with apical infections, 48 fractures, 4 neoplasia, 2 displacements, and 1 supernumerary tooth. Previous oral extraction had been attempted but failed in 55/58 (95%) animals, because of cheek tooth fracture in 28 or insufficient clinical crown for extraction with forceps in 27. MITSE was successful in removing the entire targeted dental structure in 47/58 (81%) procedures. MITSE failed to remove the entire targeted dental structure in 11/58 (19%) procedures and was followed by repulsion in 10/11 (91%). Short-term postoperative complications included bleeding (4/58 procedures, 7%) and transient facial nerve paralysis (4/58 procedures, 7%). Owners were satisfied with the functional and cosmetic outcome for 40/41 (98%) animals with follow-up. CONCLUSION: MITSE offers an alternative for cheek tooth extraction in equids when conventional oral extraction is not possible or has failed. Overall, morbidity was low, which compares favorably with invasive buccotomy or repulsion techniques.
Abstract:
OBJECTIVE To assess maxillary second molar (M2) and third molar (M3) inclination following orthodontic treatment of Class II subdivision malocclusion with unilateral maxillary first molar (M1) extraction. MATERIALS AND METHODS Panoramic radiographs of 21 Class II subdivision adolescents (eight boys, 13 girls; mean age, 12.8 years; standard deviation, 1.7 years) before treatment, after treatment with extraction of one maxillary first molar and Begg appliances, and after at least 1.8 years in retention were retrospectively collected from a private practice. M2 and M3 inclination angles (M2/ITP, M2/IOP, M3/ITP, M3/IOP), constructed from the intertuberosity (ITP) and interorbital (IOP) planes, were calculated for the extracted and nonextracted segments. Random effects regression analysis was performed to evaluate the effect of extraction, time, and gender on molar angulation after adjusting for baseline measurements. RESULTS Time and extraction status were significant predictors of M2 angulation. M2/ITP and M2/IOP decreased by 4.04° (95% confidence interval [CI]: -6.93, -1.16; P = .001) and 3.67° (95% CI: -6.76, -0.58; P = .020) in the extraction group compared to the nonextraction group after adjusting for time and gender. The adjusted analysis showed that extraction was the only predictor of M3 angulation to reach statistical significance. M3 mesial inclination increased by 7.38° (95% CI: -11.2, -3.54; P < .001) and 7.33° (95% CI: -11.48, -3.19; P = .001). CONCLUSIONS M2 and M3 uprighting improved significantly on the extraction side after orthodontic treatment with unilateral maxillary M1 extraction. There was a significant increase in mesial tipping of maxillary second molar crowns over time.
Abstract:
OBJECTIVE To evaluate the long-term effects of asymmetrical maxillary first molar (M1) extraction in Class II subdivision treatment. MATERIALS AND METHODS Records of 20 white Class II subdivision patients (7 boys, 13 girls; mean age, 13.0 years; SD, 1.7 years) consecutively treated with the Begg technique and M1 extraction, and of 15 untreated asymmetrical Class II adolescents (4 boys, 11 girls; mean age, 12.2 years; SD, 1.3 years), were examined in this study. Cephalometric analysis and PAR assessment were carried out before treatment (T1), after treatment (T2), and on average 2.5 years posttreatment (T3) for the treatment group, and at similar time points with an average follow-up of 1.8 years for the controls. RESULTS The adjusted analysis indicated that the maxillary incisors were 2.3 mm more retracted in relation to A-Pog between T1 and T3 (β = 2.31; 95% CI: 0.76, 3.87), whereas the mandibular incisors were 1.3 mm more protracted (β = 1.34; 95% CI: 0.09, 2.59) and 5.9° more proclined to the mandibular plane (β = 5.92; 95% CI: 1.43, 10.41) compared with controls. The lower lip appeared 1.4 mm more protrusive relative to the subnasale-soft tissue Pog line throughout the observation period in the treated adolescents (β = 1.43; 95% CI: 0.18, 2.67). There was a significant PAR score reduction over the entire follow-up period in the molar extraction group (β = -6.73; 95% CI: -10.7, -2.7). At T2, 65% of the subjects had maxillary midlines perfectly aligned with the face. CONCLUSIONS Unilateral M1 extraction in asymmetrical Class II cases may lead to favorable occlusal outcomes in the long term without harming midline esthetics or the soft tissue profile.
Abstract:
AIM To identify the ideal timing of first permanent molar extraction to reduce the future need for orthodontic treatment. MATERIALS AND METHODS A computerised database search and a subsequent manual search were performed using the Medline, Embase and Ovid databases, covering the period from January 1946 to February 2013. Two reviewers (JE and ME) extracted the data independently and evaluated whether the studies matched the inclusion criteria. Inclusion criteria were: specification of the follow-up with clinical examination or analysis of models; specification of the chronological age or dental developmental stage at the time of extraction; no treatment in between; and classification of the treatment result as perfect, good, average or poor. The search was limited to human studies and no language limitations were set. RESULTS The search strategy yielded 18 full-text articles, of which 6 met the inclusion criteria. By pooling the data from maxillary sites, a good to perfect clinical outcome was estimated in 72% (95% confidence interval 63%-82%). Extractions at the age of 8-10.5 years tended to show better spontaneous clinical outcomes than the other age groups. By pooling the data from mandibular sites, extractions performed at the ages of 8-10.5 and 10.5-11.5 years showed significantly superior spontaneous clinical outcomes, with 50% and 59% likelihood, respectively, of achieving a good to perfect clinical result (p<0.05) compared to the other age groups (<8 years of age: 34%; >11.5 years of age: 44%). CONCLUSION Prevention of complications after first permanent molar extraction is an important issue. The overall success rate of spontaneous clinical outcome was superior for maxillary extraction of first permanent molars than for mandibular extraction. Extractions of mandibular first permanent molars should be performed between 8 and 11.5 years of age in order to achieve a good spontaneous clinical outcome. For extractions in the maxilla, no firm conclusions concerning the ideal extraction timing could be drawn.
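As a sketch of the confidence-interval arithmetic behind pooled estimates like the 72% (63%-82%) figure above, a normal-approximation interval for a proportion can be computed as follows (the pooled sample size of 100 is an assumption for illustration, not a figure taken from the review):

```python
import math

def proportion_ci(p, n, z=1.96):
    """Normal-approximation confidence interval for a proportion p
    observed in a sample of size n (z = 1.96 for a 95% interval)."""
    half = z * math.sqrt(p * (1.0 - p) / n)
    return p - half, p + half

# 72% good-to-perfect outcomes in an assumed pooled sample of 100
# maxillary sites (the sample size is invented for illustration).
low, high = proportion_ci(0.72, 100)   # roughly (0.63, 0.81)
```

Meta-analyses typically weight each study before pooling (for example by inverse variance), so this unweighted sketch only shows the final interval step.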
Abstract:
Context: Black women are reported to have a higher prevalence of uterine fibroids, and a threefold higher incidence rate and relative risk for clinical uterine fibroid development, as compared to women of other races. Uterine fibroid research has reported that black women experience greater uterine fibroid morbidity and a disproportionate uterine fibroid disease burden. With increased interest in understanding uterine fibroid development, and race being a critical component of uterine fibroid assessment, it is imperative that the methods used to determine the race of research participants are defined and that the operational definition of race as a variable is reported, both for methodological guidance and to enable the research community to compare statistical data and replicate studies. Objectives: To systematically review and evaluate the methods used to assess race and racial disparities in uterine fibroid research. Data Sources: Databases searched for this review include OVID Medline, NLM PubMed, Ebscohost Cumulative Index to Nursing and Allied Health Plus with Full Text, and Elsevier Scopus. Review Methods: Articles published in English were retrieved from data sources between January 2011 and March 2011. Broad search terms, uterine fibroids and race, were employed to retrieve a comprehensive list of citations for screening. The initial database yield included 947 articles; after removal of duplicates, 485 articles remained. In addition, 771 bibliographic citations were reviewed to identify further articles not found through the primary database search, of which 17 new articles were included. In the first screening, 502 titles and abstracts were screened against eligibility questions to determine exclusions and to retrieve full-text articles for review. In the second screening, 197 full-text articles were screened against eligibility questions to determine whether they met the full inclusion/exclusion criteria.
Results: 100 articles met the inclusion criteria and were used in the results of this systematic review. The evidence suggested that black women have a higher prevalence of uterine fibroids than white women. None of the 14 studies reporting data on prevalence reported an operational definition or conceptual framework for the use of race. A limited number of studies reported on the prevalence of risk factors among racial subgroups. Of these 3 studies, 2 reported a prevalence of risk factors lower for black women than for other races, contrary to hypothesis; none of the three reported a conceptual framework for the use of race. Conclusion: Of the 100 uterine fibroid studies included in this review, over half (66%) reported a specific objective to assess and recruit study participants based upon their race and/or ethnicity, but most (51%) failed to report a method of determining the actual race of the participants, and far fewer, 4% (only four South American studies), reported a conceptual framework and/or operational definition of race as a variable. However, most (95%) of all studies reported race-based health outcomes. The inadequate methodological guidance on the use of race in uterine fibroid studies purporting to assess race and racial disparities may be a primary reason that uterine fibroid research continues to report racial disparities but fails to explain the high prevalence and increased exposures among African-American women. A standardized method of assessing race throughout uterine fibroid research would appear to be helpful in elucidating what race is actually measuring, and the risk of exposures for that measurement.
Abstract:
Sorption of volatile hydrocarbon gases (VHCs) to marine sediments is a recognized phenomenon that has been investigated in the context of petroleum exploration. However, little is known about the biogeochemistry of sorbed methane and higher VHCs in environments that are not influenced by thermogenic processes. This study evaluated two different extraction protocols for sorbed VHCs, used high-pressure equipment to investigate the sorption of methane to pure clay mineral phases, and conducted a geochemical and mineralogical survey of sediment samples from different oceanographic settings and geochemical regimes that are not significantly influenced by thermogenic gas. Extraction of sediments under alkaline conditions yielded higher concentrations of sorbed methane than the established protocol for acidic extraction. Application of alkaline extraction in the environmental survey revealed the presence of substantial amounts of sorbed methane in 374 out of 411 samples (91%). Particularly high amounts, up to 2.1 mmol kg−1 dry sediment, were recovered from methanogenic sediments. Carbon isotopic compositions of sorbed methane suggested substantial contributions from biogenic sources, both in sulfate-depleted and sulfate-reducing sediments. Carbon isotopic relationships between sorbed and dissolved methane indicate a coupling of the two pools. While our sorption experiments and extraction conditions point to an important role for clay minerals as sorbents, mineralogical analyses of marine sediments suggest that variations in mineral composition are not controlling variations in quantities of sorbed methane. We conclude that the distribution of sorbed methane in sediments is strongly influenced by in situ production.
Abstract:
Twelve commercially available edible marine algae from France, Japan and Spain and the certified reference material (CRM) NIES No. 9 Sargassum fulvellum were analyzed for total arsenic and arsenic species. Total arsenic concentrations were determined by inductively coupled plasma atomic emission spectrometry (ICP-AES) after microwave digestion and ranged from 23 to 126 μg g−1. Arsenic species in the alga samples were extracted with deionized water by microwave-assisted extraction, with extraction efficiencies from 49 to 98% in terms of total arsenic. The presence of eleven arsenic species was studied by methods developed for high performance liquid chromatography with ultraviolet photo-oxidation and hydride generation atomic fluorescence spectrometry (HPLC–(UV)–HG–AFS), using both anion and cation exchange chromatography. Glycerol and phosphate sugars were found in all alga samples analyzed, at concentrations between 0.11 and 22 μg g−1, whereas sulfonate and sulfate sugars were detected in only three of them (0.6-7.2 μg g−1). Regarding toxic arsenic species, low concentrations of dimethylarsinic acid (DMA) (<0.9 μg g−1) and generally high arsenate (As(V)) concentrations (up to 77 μg g−1) were found in most of the algae studied. The results highlight the need to perform speciation analysis and to introduce appropriate legislation limiting the content of toxic arsenic species in these food products.
Abstract:
The focus of this chapter is feature extraction and pattern classification methods from two medical areas, Stabilometry and Electroencephalography (EEG). Stabilometry is the branch of medicine responsible for examining balance in human beings. Balance and dizziness disorders are probably two of the most common illnesses that physicians have to deal with. In Stabilometry, the key nuggets of information in a time series signal are concentrated within definite time periods known as events. In this chapter, two feature extraction schemes are developed to identify and characterise events in Stabilometry and EEG signals. Based on these extracted features, an Adaptive Fuzzy Inference Neural Network is applied to the classification of Stabilometry and EEG signals.
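The chapter's actual feature extraction schemes are not reproduced here, but the general idea of characterising events in a time series can be sketched as follows, treating an event as a contiguous run of samples above a threshold (the threshold and signal are invented for the example):

```python
import numpy as np

def event_features(signal, threshold):
    """Describe each 'event' (a contiguous run of samples above the
    threshold) by duration, peak amplitude and energy."""
    above = np.concatenate(([False], signal > threshold, [False]))
    edges = np.flatnonzero(np.diff(above.astype(int)))
    feats = []
    for s, e in zip(edges[::2], edges[1::2]):   # event start/end indices
        seg = signal[s:e]
        feats.append((int(e - s),                 # duration in samples
                      float(seg.max()),           # peak amplitude
                      float(np.sum(seg ** 2))))   # energy
    return feats

sig = np.array([0., 0., 2., 3., 0., 0., 5., 0.])
feats = event_features(sig, threshold=1.0)   # two events detected
```

Feature tuples of this kind (one per detected event) are the sort of input a classifier such as the chapter's fuzzy inference neural network could consume.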
Abstract:
A number of thrombectomy devices using a variety of methods have now been developed to facilitate clot removal. We present research involving one such experimental device recently developed in the UK, the 'GP' Thrombus Aspiration Device (GPTAD). This device has the potential to extract a thrombus. Although the device is at a relatively early stage of development, the results look encouraging. In this work, we present an analysis and modeling of the GPTAD by means of the bond graph technique, which proves to be a highly effective method of simulating the device under a variety of conditions. Such modeling is useful in optimizing the GPTAD and predicting the outcome of clot extraction. The aim of this simulation model is to obtain the minimum pressure necessary to extract the clot and to verify that both the pressure and the time required to complete the extraction are realistic for clinical use and consistent with experimentally obtained data. We therefore consider aspects of rheology and mechanics in our modeling.
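The GPTAD bond graph model itself is not reproduced here, but the kind of lumped-parameter simulation it enables can be sketched with a one-degree-of-freedom clot model integrated by Euler's method (all parameter values below are invented for illustration and do not describe the actual device):

```python
def extraction_time(dp, m=1e-4, area=3e-6, c=5e-3, length=0.05,
                    dt=1e-4, t_max=5.0):
    """Euler-integrate a one-degree-of-freedom clot model,
    m*dv/dt = area*dp - c*v, and return the time (s) at which the
    clot has travelled `length` metres, or None if it never does."""
    x = v = t = 0.0
    while t < t_max:
        v += (area * dp - c * v) / m * dt   # acceleration step
        x += v * dt                          # displacement step
        t += dt
        if x >= length:
            return t
    return None

# With an assumed 5 kPa aspiration pressure, this toy model predicts
# extraction within a fraction of a second.
t_clot = extraction_time(5000.0)
```

Sweeping `dp` downward until the function returns None mimics, in miniature, the search for the minimum extraction pressure described in the abstract.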
Abstract:
In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specifically associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provide relevant information on the functional organization of the neural networks involved in the control of sensory information, and allow the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure: we first remove the noise from the CDPs recorded in each given spinal segment by convolution; we then assign a coefficient to each main local maximum of the signal, using its amplitude and its distance to the most important maximum of the signal. These coefficients are the input for the subsequent classification algorithm, for which we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods.
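A minimal sketch of the described procedure, smoothing by convolution and assigning each local maximum a coefficient from its amplitude and its distance to the main maximum, might look as follows (the kernel width and the coefficient formula are assumptions; the paper's exact definitions may differ):

```python
import numpy as np

def cdp_coefficients(signal, kernel_width=5, n_coeffs=3):
    """Smooth by moving-average convolution, then give each local
    maximum a coefficient from its amplitude and its distance to the
    global maximum; pad to a fixed-length feature vector."""
    kernel = np.ones(kernel_width) / kernel_width
    smooth = np.convolve(signal, kernel, mode="same")
    # Strict interior local maxima of the smoothed trace.
    peaks = np.flatnonzero((smooth[1:-1] > smooth[:-2]) &
                           (smooth[1:-1] > smooth[2:])) + 1
    main = peaks[np.argmax(smooth[peaks])]   # assumes >= 1 maximum
    coeffs = sorted((smooth[p] / (1.0 + abs(p - main)) for p in peaks),
                    reverse=True)[:n_coeffs]
    # Fixed-length vectors suit a downstream classifier such as
    # gradient-boosted trees.
    return coeffs + [0.0] * (n_coeffs - len(coeffs))

t = np.arange(60.0)
sig = (np.maximum(0.0, 1 - np.abs(t - 10) / 5) +
       0.6 * np.maximum(0.0, 1 - np.abs(t - 40) / 5))
vec = cdp_coefficients(sig)   # largest coefficient marks the main CDP
```

One vector per recorded CDP would then form the training matrix for the gradient boosting classifier mentioned in the abstract.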
Abstract:
Most present digital image processing methods are concerned with the objective characterization of external properties such as shape, form or colour. This information captures objective characteristics of different bodies and is used to extract details for several different tasks. On some occasions, however, another type of information is needed: this is the case when the image processing system is to be applied to operations involving living beings. In this case, other types of object information may be useful; as a matter of fact, they may give additional knowledge about subjective properties. Some of these properties are object symmetry, parallelism between lines and the feeling of size. Such properties relate more to the internal sensations of living beings interacting with their environment than to the objective information obtained by artificial systems. This paper presents an elementary system able to detect some of the above-mentioned parameters. A first mathematical model to analyze these situations is reported. This theoretical model makes it possible to implement a simple working system. The basis of this system is the use of optical logic cells, previously employed in optical computing.
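The optical-logic-cell implementation is not reproduced here, but the notion of measuring one of the cited subjective properties, mirror symmetry, can be sketched numerically (the scoring formula is an assumption made for illustration):

```python
import numpy as np

def vertical_symmetry(img):
    """Score in [0, 1]: 1 for perfect mirror symmetry about the
    vertical axis, decreasing as the two halves diverge."""
    img = np.asarray(img, dtype=float)
    span = img.max() - img.min()
    if span == 0.0:
        return 1.0                     # a constant image is symmetric
    diff = np.abs(img - np.fliplr(img)).mean()
    return 1.0 - diff / span

symmetric = np.array([[1., 2., 1.],
                      [3., 0., 3.]])
skewed = np.array([[1., 2., 0.],
                   [3., 0., 0.]])
s1 = vertical_symmetry(symmetric)   # perfectly mirror-symmetric
s2 = vertical_symmetry(skewed)      # penalized for asymmetry
```

Comparing an image against its own reflection is the digital analogue of what a symmetry-sensitive optical cell arrangement could evaluate in parallel.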
Abstract:
Correct modeling of the equivalent circuits of solar cells and panels is today an essential tool for power optimization. However, the parameter extraction for those circuits is still a difficult task that normally requires both experimental data and calculation procedures, generally not available to the typical user. This paper presents a new analytical method that easily calculates the equivalent circuit parameters from the data that manufacturers usually provide. The analytical approximation is based on a new methodology, since the methods developed until now to obtain these equivalent circuit parameters from manufacturer's data have always been numerical or heuristic. Results from the present method are as accurate as those from other, more complex (numerical) existing methods, while requiring a simpler calculation process and fewer resources.
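The paper's analytical extraction formulas are not reproduced here, but the single-diode equation that such parameter-extraction methods target can be sketched as follows (series and shunt resistances are neglected, and all parameter values are invented rather than taken from a datasheet):

```python
import math

K_B, Q = 1.380649e-23, 1.602176634e-19   # Boltzmann constant, e charge

def diode_current(v, i_ph, i_0, n, n_s=36, t_cell=298.15):
    """Ideal single-diode model of a PV panel (series and shunt
    resistances neglected): I = Iph - I0*(exp(V/(n*Ns*Vt)) - 1)."""
    vt = K_B * t_cell / Q                # thermal voltage, ~25.7 mV
    return i_ph - i_0 * math.expm1(v / (n * n_s * vt))

# Invented parameters, roughly panel-like in magnitude.
i_ph, i_0, n = 5.0, 1e-9, 1.3
i_sc = diode_current(0.0, i_ph, i_0, n)   # short-circuit current = Iph
v_oc = n * 36 * (K_B * 298.15 / Q) * math.log(i_ph / i_0 + 1.0)
# At v_oc the modelled current returns to (numerically) zero.
```

Datasheet-based extraction methods work in the reverse direction: given the short-circuit current, open-circuit voltage and maximum-power point that manufacturers publish, they solve for parameters such as `i_0` and `n` in this equation.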
Abstract:
Nanotechnology is a research area of recent development that deals with the manipulation and control of matter with dimensions ranging from 1 to 100 nanometers. At the nanoscale, materials exhibit singular physical, chemical and biological phenomena, very different from those manifested at the conventional scale. In medicine, nanosized compounds and nanostructured materials offer improved drug targeting and efficacy with respect to traditional formulations, and reveal novel diagnostic and therapeutic properties. Nevertheless, the complexity of information at the nano level is much higher than the complexity at the conventional biological levels (from populations to the cell). Thus, any nanomedical research workflow inherently demands advanced information management. Unfortunately, Biomedical Informatics (BMI) has not yet provided the necessary framework to deal with such information challenges, nor adapted its methods and tools to the new research field. In this context, the novel area of nanoinformatics aims to build new bridges between medicine, nanotechnology and informatics, allowing the application of computational methods to solve informational issues at the wide intersection between biomedicine and nanotechnology. 
The above observations determine the context of this doctoral dissertation, which is focused on analyzing the nanomedical domain in-depth, and developing nanoinformatics strategies and tools to map across disciplines, data sources, computational resources, and information extraction and text mining techniques, for leveraging available nanomedical data. The author analyzes, through real-life case studies, some research tasks in nanomedicine that would require or could benefit from the use of nanoinformatics methods and tools, illustrating present drawbacks and limitations of BMI approaches to deal with data belonging to the nanomedical domain. Three different scenarios, comparing both the biomedical and nanomedical contexts, are discussed as examples of activities that researchers would perform while conducting their research: i) searching over the Web for data sources and computational resources supporting their research; ii) searching the literature for experimental results and publications related to their research, and iii) searching clinical trial registries for clinical results related to their research. The development of these activities will depend on the use of informatics tools and services, such as web browsers, databases of citations and abstracts indexing the biomedical literature, and web-based clinical trial registries, respectively. For each scenario, this document provides a detailed analysis of the potential information barriers that could hamper the successful development of the different research tasks in both fields (biomedicine and nanomedicine), emphasizing the existing challenges for nanomedical research —where the major barriers have been found. The author illustrates how the application of BMI methodologies to these scenarios can be proven successful in the biomedical domain, whilst these methodologies present severe limitations when applied to the nanomedical context. 
To address such limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics of information at the nano level. This approach consists of an in-depth analysis of the scientific literature and available clinical trial registries to extract relevant information about experiments and results in nanomedicine —textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.—, followed by the development of mechanisms to automatically structure and analyze this information. This analysis resulted in the generation of a gold standard —a manually annotated training or reference set—, which was applied to the automatic classification of clinical trial summaries, distinguishing studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work aims to provide the necessary methods for organizing, curating and validating existing nanomedical data on a scale suitable for decision-making. Similar analysis for different nanomedical research tasks would help to detect which nanoinformatics resources are required to meet current goals in the field, as well as to generate densely populated and machine-interpretable reference datasets from the literature and other unstructured sources for further testing novel algorithms and inferring new valuable information for nanomedicine.
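The trained classifier described above is not reproduced here; as a toy stand-in, a keyword rule already separates obviously nano-focused trial summaries from traditional ones (the term list and example summaries are invented for illustration):

```python
# Toy stand-in for the trial-summary classifier described above.
NANO_TERMS = ("nanoparticle", "nanodevice", "nanodrug", "liposomal",
              "nanostructured", "nanoscale")

def is_nano_trial(summary):
    """True if a trial summary mentions any nano-related term
    (a keyword rule, not the thesis's trained classifier)."""
    text = summary.lower()
    return any(term in text for term in NANO_TERMS)

nano = is_nano_trial("A phase I trial of nanoparticle-bound paclitaxel")
trad = is_nano_trial("A randomized trial of oral amoxicillin dosing")
```

A gold-standard corpus of manually annotated summaries, as generated in the thesis, is what allows such naive rules to be replaced by, and benchmarked against, properly trained classifiers.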