990 results for EXTRACTION PATTERNS


Relevance: 30.00%

Abstract:

Thirty-six Madeira wine samples from the Boal, Malvazia, Sercial and Verdelho white grape varieties were analyzed to estimate the free fraction of monoterpenols and C13 norisoprenoids (terpenoid compounds), using dynamic headspace solid-phase microextraction (HS-SPME) coupled with gas chromatography–mass spectrometry (GC–MS). Average values from three vintages (1998–2000) show that these wines have characteristic terpenoid profiles. Malvazia wines exhibit the highest totals of free monoterpenols, whereas Verdelho wines have the lowest terpenoid levels but the highest concentration of farnesol. Multivariate analysis was used to relate the compounds to the varieties under investigation: principal component analysis (PCA) and linear discriminant analysis (LDA) applied to the data matrix gave good separation and classification of the four groups according to varietal origin.
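The multivariate step described above (PCA for exploration, LDA for varietal classification) can be sketched with scikit-learn. The data below are synthetic stand-ins; the paper's actual concentration matrix is not reproduced:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# 36 samples x 8 terpenoid concentrations (synthetic stand-in data)
X = rng.normal(size=(36, 8))
y = np.repeat(["Boal", "Malvazia", "Sercial", "Verdelho"], 9)
X += np.arange(4).repeat(9)[:, None] * 0.8  # inject some class structure

X_pca = PCA(n_components=2).fit_transform(X)   # unsupervised 2-D projection
lda = LinearDiscriminantAnalysis().fit(X, y)   # supervised varietal classifier
acc = lda.score(X, y)                          # training accuracy
```

PCA gives an unsupervised view of sample grouping, while LDA explicitly maximizes between-variety separation, mirroring the two-stage analysis in the abstract.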

Relevance: 30.00%

Abstract:

A suitable analytical procedure based on static headspace solid-phase microextraction (SPME) followed by thermal desorption gas chromatography–ion trap mass spectrometry detection (GC–ITDMS) was developed and applied to the qualitative and semi-quantitative analysis of volatile components of Portuguese Terras Madeirenses red wines. The headspace SPME method was optimised in terms of fibre coating, extraction time, and extraction temperature. The performance of three commercially available SPME fibres, viz. 100 μm polydimethylsiloxane (PDMS); 85 μm polyacrylate (PA); and 50/30 μm divinylbenzene/carboxen on polydimethylsiloxane (DVB/CAR/PDMS), was evaluated and compared. The highest extracted amounts, in terms of the maximum signal recorded for the total volatile composition, were obtained with the PA-coated fibre at 30 °C during a 60 min extraction with constant stirring at 750 rpm, after saturation of the sample with NaCl (30%, w/v). More than sixty volatile compounds belonging to different biosynthetic pathways were identified, including fatty acid ethyl esters, higher alcohols, fatty acids, higher alcohol acetates, isoamyl esters, carbonyl compounds, and monoterpenols/C13 norisoprenoids.

Relevance: 30.00%

Abstract:

Allergic asthma represents an important public health issue with significant growth over the years, especially in the paediatric population. Exhaled breath is a non-invasive, easily performed and rapid method for obtaining samples from the lower respiratory tract. In the present manuscript, the volatile metabolic profiles of allergic asthmatic and control children were evaluated by headspace solid-phase microextraction combined with gas chromatography–quadrupole mass spectrometry (HS-SPME/GC–qMS). The lack of HS-SPME studies on the breath of allergic asthmatic children led to the development of an experimental design to optimize the SPME parameters. To this end, three important HS-SPME parameters that influence extraction efficiency, namely fibre coating, extraction temperature and extraction time, were considered. The conditions that gave the highest extraction efficiency, i.e. the highest GC peak areas and number of compounds, were a DVB/CAR/PDMS fibre coating, 22 °C and 60 min as the extraction temperature and time, respectively. The suitability of two containers for breath collection, 1 L Tedlar® bags and BIOVOC®, and intra-individual variability were also investigated. The developed methodology was then applied to the analysis of exhaled breath from children with allergic asthma (35, of whom 13 also had allergic rhinitis) and healthy control children (15), allowing 44 volatiles to be identified, distributed over the chemical families of alkanes (linear and branched), ketones, aromatic hydrocarbons, aldehydes and acids, among others. Multivariate analysis was performed by Partial Least Squares–Discriminant Analysis (PLS–DA) using a set of 28 selected metabolites, and discrimination between allergic asthmatic and control children was attained with a classification rate of 88%. The allergic asthma paediatric population was characterized mainly by compounds linked to oxidative stress, such as alkanes and aldehydes.
Furthermore, more detailed information was obtained by combining the volatile metabolic data selected by the PLS–DA model with clinical data.

Relevance: 30.00%

Abstract:

Most face recognition approaches require prior training, in which a given distribution of faces is assumed in order to predict the identity of test faces. Such approaches may have difficulty identifying faces drawn from distributions different from the one provided during training. A face recognition technique that performs well regardless of training is therefore an interesting basis for more sophisticated methods. In this work, the Census Transform is applied to describe faces. Based on a scanning window that extracts local histograms of census features, we present a method that directly matches face samples. With this simple technique, 97.2% of the faces in the FERET fa/fb test were correctly recognized. Although this is an easy test set, we have found no other approach in the literature that achieves such performance through direct comparison of faces. Room for further improvement is also identified: among other techniques, we demonstrate how the use of SVMs over the census histogram representation can increase recognition performance.
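A standard 3x3 Census Transform, the descriptor named above, takes only a few lines of NumPy: each pixel is replaced by an 8-bit code of "neighbour >= centre" comparisons, and local histograms of the codes form the matching features. The window sizes and histogram settings of the paper are not reproduced here:

```python
import numpy as np

def census_transform(img):
    """3x3 Census Transform: 8-bit code of 'neighbour >= centre' tests."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    bit = 0
    for dy in range(3):
        for dx in range(3):
            if dy == 1 and dx == 1:
                continue  # skip the centre pixel itself
            neigh = img[dy:dy + h - 2, dx:dx + w - 2]
            out |= (neigh >= centre).astype(np.uint8) << bit
            bit += 1
    return out

img = np.arange(25, dtype=np.uint8).reshape(5, 5)   # tiny test "image"
codes = census_transform(img)                        # 3x3 map of census codes
hist = np.bincount(codes.ravel(), minlength=256)     # local census histogram
```

In the paper's setting such histograms would be computed per scanning-window cell and compared directly (or fed to an SVM) rather than over a whole image.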

Relevance: 30.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 30.00%

Abstract:

Objectives: To evaluate the influence of implant size and configuration on osseointegration in implants placed immediately into extraction sockets. Material and methods: Implants were installed immediately into extraction sockets in the mandibles of six Labrador dogs. In the control sites, cylindrical transmucosal implants (3.3 mm diameter) were installed, while in the test sites, larger, conical (root-formed, 5 mm diameter) implants were installed. After 4 months of healing, the resorptive patterns of the alveolar crest were evaluated histomorphometrically. Results: With one exception, all implants were integrated in mineralized bone, mainly composed of mature lamellar bone. The alveolar crest underwent resorption at the control as well as at the test implants. This resorption was more pronounced at the buccal aspects and significantly greater at the test (2.7 ± 0.4 mm) than at the control implants (1.5 ± 0.6 mm). However, the control implants were associated with residual defects that were deeper at the lingual than at the buccal aspects, while these defects were virtually absent at the test implants. Conclusions: The installation of root-formed wide implants immediately into extraction sockets does not prevent resorption of the alveolar crest. On the contrary, this resorption is more marked at both the buccal and lingual aspects of root-formed wide implants than at standard cylindrical implants. To cite this article: Caneva M, Salata LA, de Souza SS, Bressan E, Botticelli D, Lang NP. Hard tissue formation adjacent to implants of various size and configuration immediately placed into extraction sockets: an experimental study in dogs. Clin. Oral Impl. Res. 21, 2010; 885-895. doi: 10.1111/j.1600-0501.2010.01931.x.

Relevance: 30.00%

Abstract:

Aim: To evaluate the influence of implant positioning into extraction sockets on osseointegration. Material and methods: Implants were installed immediately into extraction sockets in the mandibles of six Labrador dogs. In the control sites, the implants were positioned in the center of the alveolus, while in the test sites, the implants were positioned 0.8 mm deeper and more lingually. After 4 months of healing, the resorptive patterns of the alveolar crest were evaluated histomorphometrically. Results: All implants were integrated in mineralized bone, mainly composed of mature lamellar bone. The alveolar crest underwent resorption at the control as well as at the test sites. After 4 months of healing, at the buccal aspects of the control and test sites, the distance from the implant rough/smooth limit to the alveolar crest was 2 ± 0.9 mm and 0.6 ± 0.9 mm, respectively (P < 0.05). At the lingual aspect, the bony crest was located 0.4 mm apically and 0.2 mm coronally to the implant rough/smooth limit at the control and test sites, respectively (NS). Conclusions: From a clinical point of view, implants installed into extraction sockets should be positioned approximately 1 mm deeper than the level of the buccal alveolar crest and in a lingual position relative to the center of the alveolus, in order to reduce or eliminate exposure of the endosseous (rough) portion of the implant above the alveolar crest. © 2009 John Wiley & Sons A/S.

Relevance: 30.00%

Abstract:

Testicular sperm extraction (TESE) associated with intracytoplasmic sperm injection has allowed many men presenting with non-obstructive azoospermia to achieve fatherhood. Microdissection TESE (microTESE) was proposed as a method to improve sperm retrieval rates in these patients; however, failures still occur. Little is known about whether microTESE leads to spermatogenic alterations in the contralateral testis. We assessed the histological outcomes of experimental microTESE in the contralateral testis of adult male rabbits. Nine adult male rabbits were divided into three groups: control (testicular biopsy to establish normal histological and morphometric values), sham (incision of the tunica vaginalis, with a contralateral testicular biopsy 45 days later to assess histological and morphometric patterns), and study (left testicular microTESE, with a right testicular biopsy 45 days later). Sections were assessed by calculating Johnsen-like scores and measuring total tubule diameter, lumen diameter and epithelial height. The results were compared using ANOVA with Bonferroni post-hoc analysis. Morphometric evaluation of the seminiferous tubules did not demonstrate differences between the three groups. However, microTESE caused spermatogenic alterations, leading to maturation arrest in the contralateral testis.
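The statistical comparison described above (one-way ANOVA followed by Bonferroni-corrected pairwise tests) can be sketched with SciPy. The group values below are synthetic illustrations, not the study's measurements:

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(2)
# Tubule diameters (arbitrary units) for the three experimental groups
groups = {
    "control": rng.normal(180, 10, 9),
    "sham": rng.normal(182, 10, 9),
    "study": rng.normal(178, 10, 9),
}

f_stat, p_overall = stats.f_oneway(*groups.values())  # one-way ANOVA
pairs = list(combinations(groups, 2))
alpha_adj = 0.05 / len(pairs)                         # Bonferroni threshold
significant = {(a, b): stats.ttest_ind(groups[a], groups[b]).pvalue < alpha_adj
               for a, b in pairs}
```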

Relevance: 30.00%

Abstract:

Objective: To compare the sequential healing at immediately loaded implants installed in a healed alveolar bony ridge or immediately after tooth extraction. Material and methods: In the mandibles of 12 dogs, the second premolars were extracted. After 3 months, the mesial roots of the third premolars were endodontically treated and the distal roots extracted. Implants were placed immediately into the extraction sockets (test) and in the second premolar region (control). Crowns were applied at the second and third maxillary premolars, and healing abutments of appropriate length were applied at both implants placed in the mandible and adapted to allow occlusal contacts with the crowns in the maxilla. The times of surgery and sacrifice were planned so as to obtain biopsies representing healing after 1 and 2 weeks and 1 and 3 months. Ground sections were prepared for histological analyses. Results: At the control sites, a resorption of the buccal bone of 1 mm was found after 1 week and remained stable thereafter. At the test sites, the resorption was 0.4 mm at the 1-week period, and further loss was observed after 1 month. The height of the peri-implant soft tissue was 3.8 mm at both test and control sites. Higher values of mineralized bone-to-implant contact and bone density were seen at the control than at the test sites; the differences, however, were not statistically significant. Conclusions: Different patterns of sequential early healing were found at implants installed in healed alveolar bone or in alveolar sockets immediately after tooth extraction. However, three months after implant installation, no statistically significant differences were found in the hard- and soft-tissue dimensions.

Relevance: 30.00%

Abstract:

Tribocharged polymers display macroscopically patterned positive and negative domains, verifying the fractal geometry of electrostatic mosaics previously detected by electric probe microscopy. Excess charge on contacting polyethylene (PE) and polytetrafluoroethylene (PTFE) follows the triboelectric series, but with one caveat: net charge is the arithmetic sum of patterned positive and negative charges, as opposed to the usual assumption of uniform charging of opposite sign on each surface. Extraction with n-hexane preferentially removes positive charges from PTFE, while 1,1-difluoroethane and ethanol largely remove both positive and negative charges. Using suitable analytical techniques (electron energy-loss spectral imaging, infrared microspectrophotometry and carbonization/colorimetry) and theoretical calculations, the positive species were identified as hydrocarbocations and the negative species as fluorocarbanions. A comprehensive model is presented for PTFE tribocharging with PE: mechanochemical homolytic chain rupture is followed by electron transfer from hydrocarbon free radicals to the more electronegative fluorocarbon radicals. Polymer ions self-assemble according to Flory-Huggins theory, thus forming the experimentally observed macroscopic patterns. These results show that tribocharging can only be understood by considering the complex chemical events triggered by mechanical action, coupled to well-established physicochemical concepts. Patterned polymers can be cut and mounted to make macroscopic electrets and multipoles.

Relevance: 30.00%

Abstract:

Marine soft bottom systems show a high variability across multiple spatial and temporal scales. Both natural and anthropogenic sources of disturbance act together in affecting benthic sedimentary characteristics and species distribution. The description of such spatial variability is required to understand the ecological processes behind it. However, in order to obtain better estimates of spatial patterns, methods that take into account the complexity of the sedimentary system are required. This PhD thesis aims to make a significant contribution both to improving the methodological approaches to the study of biological variability in soft bottom habitats and to increasing the knowledge of the effects that different processes (both natural and anthropogenic) have on the benthic communities of a large area in the North Adriatic Sea. Beta diversity is a measure of the variability in species composition, and Whittaker's index has become the most widely used measure of beta diversity. However, application of the Whittaker index to soft bottom assemblages of the Adriatic Sea highlighted its sensitivity to rare species (species recorded in a single sample). This over-weighting of rare species induces biased estimates of heterogeneity, making it difficult to compare assemblages containing a high proportion of rare species. In benthic communities, the unusually large number of rare species is frequently attributed to a combination of sampling errors and insufficient sampling effort. In order to reduce the influence of rare species on the measure of beta diversity, I have developed an alternative index based on simple probabilistic considerations. It turns out that this probability index is an ordinary Michaelis-Menten transformation of Whittaker's index but behaves more favourably when species heterogeneity increases. The suggested index therefore seems appropriate when comparing patterns of complexity in marine benthic assemblages.
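Whittaker's index discussed above is simply gamma diversity divided by mean alpha diversity. The sketch below computes it for a toy presence/absence matrix and applies a generic Michaelis-Menten-type rescaling purely for illustration; the thesis's exact probabilistic index is not reproduced here:

```python
import numpy as np

# Presence/absence matrix: 4 samples x 6 species (toy data)
samples = np.array([
    [1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0, 1],
])

gamma = samples.any(axis=0).sum()      # total species richness (gamma)
alpha = samples.sum(axis=1).mean()     # mean per-sample richness (alpha)
beta_w = gamma / alpha                 # Whittaker's beta diversity
beta_mm = (beta_w - 1) / beta_w        # generic MM-type rescaling (assumed form)
```

A saturating (Michaelis-Menten-like) transform of this kind bounds the index as heterogeneity grows, which is the behaviour the thesis attributes to its probabilistic variant.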
Although the new index makes an important contribution to the study of biodiversity in sedimentary environments, it remains to be seen which processes, and at what scales, influence benthic patterns. The ability to predict the effects of ecological phenomena on benthic fauna highly depends on both the spatial and temporal scales of variation. Once defined, implicitly or explicitly, these scales influence the questions asked, the methodological approaches and the interpretation of results. Problems often arise when representative samples are not taken and results are over-generalized, as can happen when results from small-scale experiments are used for resource planning and management. Such issues, although globally recognized, are far from being resolved in the North Adriatic Sea. This area is potentially affected by both natural (e.g. river inflow, eutrophication) and anthropogenic (e.g. gas extraction, fish trawling) sources of disturbance. Although a few studies in this area have aimed at understanding which of these processes mainly affect macrobenthos, these were conducted at a small spatial scale, as they were designed to examine local changes in benthic communities or particular species. However, in order to better describe all the putative processes occurring in the entire area, a high sampling effort performed at a large spatial scale is required. The sedimentary environment of the western part of the Adriatic Sea was extensively studied in this thesis. I have described, in detail, spatial patterns both in terms of sedimentary characteristics and macrobenthic organisms, and have suggested putative processes (natural or of human origin) that might affect the benthic environment of the entire area. In particular, I have examined the effect of offshore gas platforms on benthic diversity and tested their effect against a background of natural spatial variability.
The results obtained suggest that natural processes in the North Adriatic such as river outflow and eutrophication show an inter-annual variability that might have important consequences on benthic assemblages, affecting for example their spatial pattern moving away from the coast and along a north-to-south gradient. Depth-related factors, such as food supply, light, temperature and salinity, play an important role in explaining large-scale benthic spatial variability (i.e., affecting both the abundance patterns and beta diversity). Nonetheless, more local effects, probably related to organic enrichment or pollution from the Po river input, have been observed. All these processes, together with a few human-induced sources of variability (e.g. fishing disturbance), have a greater effect on macrofauna distribution than any effect related to the presence of gas platforms. The main effect of gas platforms is restricted to small spatial scales and related to a change in habitat complexity due to natural dislodgement, or structure cleaning, of the mussels that colonize their legs. The accumulation of mussels on the sediment plausibly affects benthic infauna composition. All the components of the study presented in this thesis highlight the need to carefully consider methodological aspects related to the study of sedimentary habitats. With particular regard to the North Adriatic Sea, a multi-scale analysis along natural and anthropogenic gradients was useful for detecting the influence of all the processes affecting the sedimentary environment. In the future, applying a similar approach may lead to an unambiguous assessment of the state of the benthic community in the North Adriatic Sea. Such an assessment may be useful in understanding whether any anthropogenic source of disturbance has a negative effect on the marine environment and, if so, in planning sustainable strategies for proper management of the affected area.

Relevance: 30.00%

Abstract:

Ontology design and population, core aspects of semantic technologies, have recently become fields of great interest due to the increasing need for domain-specific knowledge bases that can boost the use of the Semantic Web. For building such knowledge resources, the state-of-the-art tools for ontology design require a lot of human work. Producing meaningful schemas and populating them with domain-specific data is in fact a very difficult and time-consuming task, even more so if the task consists in modelling knowledge at web scale. The primary aim of this work is to investigate a novel and flexible methodology for automatically learning ontologies from textual data, lightening the human workload required for conceptualizing domain-specific knowledge and populating an extracted schema with real data, and speeding up the whole ontology production process. Here computational linguistics plays a fundamental role, from automatically identifying facts in natural language and extracting frames of relations among recognized entities, to producing linked data with which to extend existing knowledge bases or create new ones. In the state of the art, automatic ontology learning systems are mainly based on plain pipelined linguistic classifiers performing tasks such as named entity recognition, entity resolution, taxonomy extraction and relation extraction [11]. These approaches present some weaknesses, especially in capturing the structures through which the meaning of complex concepts is expressed [24]. Humans, in fact, tend to organize knowledge in well-defined patterns, which include participant entities and meaningful relations linking entities with each other. In the literature, these structures have been called Semantic Frames by Fillmore [20] or, more recently, Knowledge Patterns [23]. Some NLP studies have recently shown the possibility of performing more accurate deep parsing with the ability to logically understand the structure of discourse [7].
In this work, some of these technologies have been investigated and employed to produce accurate ontology schemas. The long-term goal is to collect large amounts of semantically structured information from the web of crowds, through an automated process, in order to identify and investigate the cognitive patterns used by humans to organize their knowledge.
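As a toy illustration of the frame-like (subject, relation, object) structures such a pipeline extracts (not the system described above), a single hand-written lexico-syntactic pattern can pull "isA" triples from text:

```python
import re

# One hand-written "X is/are a Y" pattern standing in for learned patterns
pattern = re.compile(r"(\w[\w ]*?) (?:is|are) (?:a|an) (\w[\w ]*)")
text = "Rome is a city. Tigers are a species. Ontology learning is a task."
triples = [(m.group(1), "isA", m.group(2)) for m in pattern.finditer(text)]
```

Real ontology-learning systems replace the regex with parsing and frame detection, but the output shape (entity, relation, entity) is the same kind of schema-populating triple.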

Relevance: 30.00%

Abstract:

Over the last decade, a plethora of computer-aided diagnosis (CAD) systems have been proposed to improve physicians' accuracy in the diagnosis of interstitial lung diseases (ILD). In this study, we propose a scheme for the classification of HRCT image patches with ILD abnormalities as a basic component towards the quantification of the various ILD patterns in the lung. The feature extraction method relies on local spectral analysis using a DCT-based filter bank. After convolving the image with the filter bank, q-quantiles are computed to describe the distribution of local frequencies that characterize image texture. The gray-level histogram values of the original image are then appended, forming the final feature vector. The described patches are classified by a random forest (RF) classifier. The experimental results demonstrate the superior performance and efficiency of the proposed approach compared with the state of the art.
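The patch-classification pipeline described above can be sketched as follows. Filter size, quantile choices, histogram bins and the data are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.ensemble import RandomForestClassifier

def dct_filter_bank(n=3):
    """n*n separable DCT-II basis filters (local spectral analysis)."""
    xs = np.arange(n)
    basis = [np.cos(np.pi * u * (2 * xs + 1) / (2 * n)) for u in range(n)]
    return [np.outer(a, b) for a in basis for b in basis]

def patch_features(patch, quantiles=(0.25, 0.5, 0.75)):
    feats = []
    for f in dct_filter_bank():
        resp = convolve2d(patch, f, mode="valid")
        feats.extend(np.quantile(resp, quantiles))  # q-quantiles per band
    hist, _ = np.histogram(patch, bins=8, range=(0, 1))
    return np.concatenate([feats, hist])            # texture + gray-level histogram

rng = np.random.default_rng(3)
patches = rng.random((40, 16, 16))                  # dummy HRCT patches
labels = rng.integers(0, 2, 40)                     # dummy ILD / normal labels
X = np.stack([patch_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
```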

Relevance: 30.00%

Abstract:

Nanotechnology is a research area of recent development that deals with the manipulation and control of matter with dimensions ranging from 1 to 100 nanometers. At the nanoscale, materials exhibit singular physical, chemical and biological phenomena, very different from those manifested at the conventional scale. In medicine, nanosized compounds and nanostructured materials offer improved drug targeting and efficacy with respect to traditional formulations, and reveal novel diagnostic and therapeutic properties. Nevertheless, the complexity of information at the nano level is much higher than the complexity at the conventional biological levels (from populations to the cell). Thus, any nanomedical research workflow inherently demands advanced information management. Unfortunately, Biomedical Informatics (BMI) has not yet provided the necessary framework to deal with such information challenges, nor adapted its methods and tools to the new research field. In this context, the novel area of nanoinformatics aims to build new bridges between medicine, nanotechnology and informatics, allowing the application of computational methods to solve informational issues at the wide intersection between biomedicine and nanotechnology.
The above observations determine the context of this doctoral dissertation, which is focused on analyzing the nanomedical domain in-depth, and developing nanoinformatics strategies and tools to map across disciplines, data sources, computational resources, and information extraction and text mining techniques, for leveraging available nanomedical data. The author analyzes, through real-life case studies, some research tasks in nanomedicine that would require or could benefit from the use of nanoinformatics methods and tools, illustrating present drawbacks and limitations of BMI approaches to deal with data belonging to the nanomedical domain. Three different scenarios, comparing both the biomedical and nanomedical contexts, are discussed as examples of activities that researchers would perform while conducting their research: i) searching over the Web for data sources and computational resources supporting their research; ii) searching the literature for experimental results and publications related to their research, and iii) searching clinical trial registries for clinical results related to their research. The development of these activities will depend on the use of informatics tools and services, such as web browsers, databases of citations and abstracts indexing the biomedical literature, and web-based clinical trial registries, respectively. For each scenario, this document provides a detailed analysis of the potential information barriers that could hamper the successful development of the different research tasks in both fields (biomedicine and nanomedicine), emphasizing the existing challenges for nanomedical research —where the major barriers have been found. The author illustrates how the application of BMI methodologies to these scenarios can be proven successful in the biomedical domain, whilst these methodologies present severe limitations when applied to the nanomedical context. 
To address such limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics of information at the nano level. This approach consists of an in-depth analysis of the scientific literature and available clinical trial registries to extract relevant information about experiments and results in nanomedicine —textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.—, followed by the development of mechanisms to automatically structure and analyze this information. This analysis resulted in the generation of a gold standard —a manually annotated training or reference set—, which was applied to the automatic classification of clinical trial summaries, distinguishing studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work aims to provide the necessary methods for organizing, curating and validating existing nanomedical data on a scale suitable for decision-making. Similar analysis for different nanomedical research tasks would help to detect which nanoinformatics resources are required to meet current goals in the field, as well as to generate densely populated and machine-interpretable reference datasets from the literature and other unstructured sources for further testing novel algorithms and inferring new valuable information for nanomedicine.
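A minimal example of the kind of classifier such a gold standard enables, here with TF-IDF features and logistic regression over invented trial summaries (the actual corpus, annotations and feature set are not reproduced):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented trial summaries and labels, standing in for the annotated set
texts = [
    "liposomal nanoparticle formulation of doxorubicin for solid tumours",
    "gold nanoshell mediated photothermal ablation device study",
    "oral aspirin versus placebo in cardiovascular prevention",
    "standard metformin dosing trial in type 2 diabetes",
]
labels = ["nano", "nano", "traditional", "traditional"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)
pred = clf.predict(["nanoparticle based contrast agent imaging study"])
```

The manually annotated gold standard plays the role of `texts`/`labels` here: it supplies the supervision needed before any such classifier can separate nano-focused registries from traditional pharmaceutical trials.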

Relevance: 30.00%

Abstract:

Most existing approaches to Twitter sentiment analysis assume that sentiment is explicitly expressed through affective words. Nevertheless, sentiment is often implicitly expressed via latent semantic relations, patterns and dependencies among words in tweets. In this paper, we propose a novel approach that automatically captures patterns of words of similar contextual semantics and sentiment in tweets. Unlike previous work on sentiment pattern extraction, our proposed approach neither relies on external, fixed sets of syntactic templates/patterns nor requires deep analysis of the syntactic structure of sentences in tweets. We evaluate our approach on tweet- and entity-level sentiment analysis tasks, using the extracted semantic patterns as classification features in both tasks. We use 9 Twitter datasets in our evaluation and compare the performance of our patterns against 6 state-of-the-art baselines. Results show that our patterns consistently outperform all baselines on all datasets, by 2.19% at the tweet level and 7.5% at the entity level in average F-measure.
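As a toy stand-in for the contextual-semantics grouping described above (not the authors' algorithm), words whose tweet co-occurrence contexts are nearly identical can be paired via cosine similarity and the resulting groups used as features:

```python
import numpy as np
from itertools import combinations

tweets = [
    "great phone amazing battery",
    "amazing phone great camera",
    "terrible service awful support",
    "awful service terrible experience",
]
vocab = sorted({w for t in tweets for w in t.split()})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric word-by-word co-occurrence counts within each tweet
co = np.zeros((len(vocab), len(vocab)))
for t in tweets:
    for a, b in combinations(t.split(), 2):
        co[idx[a], idx[b]] += 1
        co[idx[b], idx[a]] += 1

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

# Word pairs with near-identical contexts: candidate semantic patterns
pairs = [(a, b) for a, b in combinations(vocab, 2)
         if cos(co[idx[a]], co[idx[b]]) > 0.9]
```

On this toy corpus the threshold pairs "battery" with "camera" and "support" with "experience": words that never co-occur directly but share the same sentiment-bearing contexts, which is the intuition behind pattern-based features.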