919 results for Tools and techniques
Abstract:
Background and Purpose: Precise needle puncture of the kidney is a challenging and essential step for successful percutaneous nephrolithotomy (PCNL). Many devices and surgical techniques have been developed to easily achieve suitable renal access. This article presents a critical review to address the methodologies and techniques for conducting kidney targeting and the puncture step during PCNL. Based on this study, research paths are also provided for PCNL procedure improvement. Methods: Most relevant works concerning PCNL puncture were identified by a search of Medline/PubMed, ISI Web of Science, and Scopus databases from 2007 to December 2012. Two authors independently reviewed the studies. Results: A total of 911 abstracts and 346 full-text articles were assessed and discussed; 52 were included in this review as a summary of the main contributions to kidney targeting and puncturing. Conclusions: Multiple paths and technologic advances have been proposed in the field of urology and minimally invasive surgery to improve PCNL puncture. The most relevant contributions, however, have been provided by the application of medical imaging guidance, new surgical tools, motion tracking systems, robotics, and image processing and computer graphics. Despite the multiple research paths for PCNL puncture guidance, no widely acceptable solution has yet been reached, and it remains an active and challenging research field. Future developments should focus on real-time methods, robust and accurate algorithms, and radiation-free imaging techniques.
Abstract:
To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, new forms of organization are evidently being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organization and collaboration systems are thus important instruments for the success of collaborative networks of organizations, and have been researched over the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has paid little attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will increasingly happen in a multilingual setting, which implies overcoming the difficulties inherent to the presence of multiple languages, through the use of processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice, and its methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support to the development of knowledge representations - in particular ontologies - expressed in more than one language. Multilingual knowledge representation is thus an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences. This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these fields as applied to contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. In this workshop six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in the development of ongoing projects. In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining, Termontospider, a wiki crawler that aims to traverse Wikipedia optimally in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. In this paper the authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested. In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences.
The purpose of the comparative analysis presented by the author is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. To that end, datasets based on standardized, pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), have been used for the comparative analysis of the similarity measures; the comparison thus verifies the similarity measures against objectively developed datasets. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community. In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: terminologies are acquired by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance, leading to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves. In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues the authors present a collaborative platform - conceptME - where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification. In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the Ministry of Justice. The project aims at developing an advanced tool that embeds expert knowledge in the algorithms that extract specialized language from textual data (legal documents); its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion. Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, in which they propose to adapt, for subject librarians employed in large and multilingual academic institutions, the model used by translators working within European Union institutions.
The authors are using User Experience (UX) analysis to provide subject librarians with visual support, by means of "ontology tables" depicting the conceptual linking and connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.
Abstract:
Dissertation presented to obtain the degree of Doctor in Informatics from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Conservation and Restoration, with a specialization in painting on canvas.
Abstract:
As an introduction to a series of articles focused on the exploration of particular tools and/or methods for bringing together digital technology and historical research, the aim of this paper is mainly to highlight and discuss to what extent those methodological approaches can improve the analytical and interpretative capabilities available to historians. At a moment when the digital world presents us with an ever-increasing variety of tools to perform extraction, analysis and visualization of large amounts of text, we thought it relevant to bring the digital closer to the vast historical academic community. Rather than repeating the idea of a digital revolution in historical research, recurrent in the literature since the 1980s, the aim was to show the validity and usefulness of digital tools and methods as another set of highly relevant instruments that historians should consider. To this end, several case studies were used, combining the exploration of specific themes of historical knowledge with the development or discussion of digital methodologies, in order to highlight some changes and challenges that, in our opinion, are already affecting historians' work, such as a greater focus on interdisciplinarity and collaborative work, and the need for the communication of historical knowledge to become more interactive.
Abstract:
This paper examines the academic reading difficulties that Angolan second-year ELT students face at ISCED (Instituto Superior de Ciências da Educação) in Benguela, and focuses on a variety of reading strategies and techniques, as well as models for reading materials, to help improve academic reading skills. Finally, it recommends the use of appropriate reading strategies, techniques and materials, and the adoption of a more student-centred approach to teaching reading, to encourage the development of a reading culture for academic purposes.
Abstract:
Nowadays, authentication studies for paintings require a multidisciplinary approach, based not only on the analysis of visual features but also on the characterization of materials and techniques. Moreover, it is important that the assessment of the authorship of a painting be supported by technical studies of a selected number of original artworks covering the entire career of an artist. This dissertation is concerned with the work of the modernist painter Amadeo de Souza-Cardoso and is divided into three parts. In the first part, we propose a tool based on image processing that combines information obtained from brushstroke and materials analysis. The resulting tool provides a qualitative and quantitative evaluation of the authorship of a painting; the quantitative element is particularly relevant, as it could be crucial in solving authorship controversies, such as judicial disputes. The brushstroke analysis was performed by combining two algorithms for feature detection, namely the Gabor filter and the Scale-Invariant Feature Transform (SIFT). Thanks to this combination (and to the use of the Bag-of-Features model), the proposed method achieves an accuracy higher than 90% in distinguishing between images of Amadeo's paintings and images of artworks by other contemporary artists. For the molecular analysis, we implemented a semi-automatic system that uses hyperspectral imaging and elemental analysis. The system outputs an image that maps the pigments present, together with any areas made using materials inconsistent with Amadeo's palette. This visual output is a simple and effective way of assessing the results of the system. The tool based on the combination of brushstroke and molecular information was tested on twelve paintings, with promising results. The second part of the thesis presents a systematic study of four selected paintings made by Amadeo in 1917. Although untitled, three of these paintings are commonly known as BRUT, Entrada and Coty; they are considered among his most successful and genuine works. The materials and techniques of these artworks had never been studied before. The paintings were studied with a multi-analytical approach using micro-Energy-Dispersive X-ray Fluorescence spectroscopy, micro-Infrared and Raman spectroscopy, micro-spectrofluorimetry and Scanning Electron Microscopy. The characterization of the materials and techniques used in Amadeo's last paintings, as well as the investigation of some of the conservation problems that affect them, is essential to enrich our knowledge of this artist. Moreover, the study of the materials in the four paintings reveals commonalities between the paintings BRUT and Entrada. This observation is also supported by the analysis of the elements present in a photograph of a collage (conserved at the Art Library of the Calouste Gulbenkian Foundation), the only remaining evidence of a supposed maquette of these paintings. The final part of the thesis describes the application of the image processing tools developed in the first part to a set of case studies; this experience demonstrates the potential of the tool to support painting analysis and authentication studies. The brushstroke analysis was used as an additional analysis in the evaluation process of four paintings attributed to Amadeo, and the system based on hyperspectral analysis was applied to the painting dated 1917. The case studies therefore serve as a bridge between the first two parts of the dissertation.
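The brushstroke pipeline described above (a Gabor filter bank, SIFT feature detection and a Bag-of-Features model feeding a classifier) can be illustrated with a minimal sketch. This is not the dissertation's code: the filter-bank parameters, vocabulary size, file names and labels below are illustrative assumptions.

import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def gabor_sift_descriptors(gray):
    # Apply a small Gabor filter bank (4 orientations), then extract SIFT
    # descriptors from each filtered response, so the keypoints capture
    # oriented brushstroke texture rather than raw intensity alone.
    sift = cv2.SIFT_create()
    descriptors = []
    for theta in np.linspace(0, np.pi, 4, endpoint=False):
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
        response = cv2.filter2D(gray, cv2.CV_8U, kernel)
        _, desc = sift.detectAndCompute(response, None)
        if desc is not None:
            descriptors.append(desc)
    return np.vstack(descriptors)

def bof_histogram(descriptors, vocabulary):
    # Quantize descriptors against the visual vocabulary and return a
    # normalized Bag-of-Features histogram usable as classifier input.
    words = vocabulary.predict(descriptors.astype(np.float32))
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / hist.sum()

# Hypothetical training set: label 1 = Amadeo, 0 = other contemporary artist.
paths, labels = ["amadeo_01.jpg", "other_artist_01.jpg"], [1, 0]
descs = [gabor_sift_descriptors(cv2.imread(p, cv2.IMREAD_GRAYSCALE)) for p in paths]
vocabulary = KMeans(n_clusters=200, n_init=10).fit(np.vstack(descs).astype(np.float32))
features = np.array([bof_histogram(d, vocabulary) for d in descs])
classifier = SVC(kernel="rbf").fit(features, labels)

In practice the vocabulary would be trained on many images and the classifier validated on held-out paintings; the accuracy above 90% reported in the abstract refers to the authors' full method, not to this sketch.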
Abstract:
The expression of the P2Z/P2X7 purinoceptor in different cell types is well established. This receptor is a member of the ionotropic P2X receptor family, which is composed of seven cloned receptor subtypes (P2X1-P2X7). Interestingly, the P2Z/P2X7 receptor has the unique feature of being linked to a non-selective pore, which allows the passage of molecules of up to 900 Da, depending on the cell type. Early studies of the P2Z/P2X7 purinoceptor were based exclusively on classical pharmacology, but the recent tools of molecular biology have enriched the analysis of receptor expression. The majority of assays and techniques chosen so far to study the expression of the P2Z/P2X7 receptor exploit, directly or indirectly, the effects of the opening of the P2Z/P2X7-linked pore. In this review we describe the main techniques used to study the expression and functionality of the P2Z/P2X7 receptor. Additionally, we discuss the increasing need for, and importance of, a multifunctional analysis of P2Z/P2X7 expression based on flow cytometry, as well as the adoption of a more complete analysis of P2Z/P2X7 expression involving different techniques.
Abstract:
Introduction: The high prevalence of disease-related hospital malnutrition justifies the need for screening tools and the early detection of patients at risk of malnutrition, followed by an assessment targeted towards diagnosis and treatment. At the same time, there is clear undercoding of malnutrition diagnoses and of the procedures to correct them. Objectives: To describe the INFORNUT program/process and its development as an information system; to quantify performance in its different phases; to cite other tools used as a coding source; to calculate the coding rates for malnutrition diagnoses and related procedures; and to show the relationship to mean stay, mortality rate and urgent readmission, as well as to quantify the impact on the hospital complexity index and the effect on the justification of hospitalization costs. Material and methods: The INFORNUT® process is based on an automated screening program for the systematic detection and early identification of malnourished patients on hospital admission, as well as their assessment, diagnosis, documentation and reporting. Of the total of admissions with stays longer than three days recorded in 2008 and 2010, we recorded the patients who underwent analytical screening with an alert for a medium or high risk of malnutrition, as well as the subgroup of patients in whom we were able to administer the complete INFORNUT® process, generating a report for each.
Abstract:
Updated list of ant species (Hymenoptera, Formicidae) recorded in Santa Catarina State, southern Brazil, with a discussion of research advances and priorities. A first working list of the ant species recorded in Santa Catarina State, southern Brazil, was published recently. Since then, many ant studies have been conducted in the state. With data compiled from published studies and from collections in various regions of the state, we present here an updated list of 366 species (and 17 subspecies) in 70 ant genera in Santa Catarina, along with their geographical distribution across the seven state mesoregions. Two hundred and seven species are recorded in the Oeste mesoregion, followed by Vale do Itajaí (175), Grande Florianópolis (150), Norte (60), Sul (41), Meio Oeste (23) and Planalto Serrano (12). The increase in the number of records since 1999 results from the use of recently adopted sampling methods and techniques in regions and ecosystems that were poorly known before, and from the availability of new tools for the identification of ants. Our study highlights the Meio Oeste, Planalto Serrano, Sul and Norte mesoregions, as well as the deciduous forest, mangrove, grassland and coastal sand dune ecosystems, as priority study areas for attaining a more complete knowledge of the ant fauna of Santa Catarina State.
Abstract:
Surface topography and light scattering were measured on 15 samples ranging from those with smooth surfaces to others with ground surfaces. The measurement techniques included an atomic force microscope, mechanical and optical profilers, a confocal laser scanning microscope, angle-resolved scattering, and total scattering. The samples included polished and ground fused silica, silicon carbide, sapphire, electroplated gold, and diamond-turned brass. The measurement instruments and techniques had different surface spatial wavelength band limits, so the measured roughnesses were not directly comparable. Two-dimensional power spectral density (PSD) functions were calculated from the digitized measurement data, and rms roughnesses were obtained by integrating the areas under the PSD curves between fixed upper and lower band limits. In this way, roughnesses measured with different instruments and techniques could be directly compared. Small differences between measurement techniques remained in the calculated roughnesses, but these could mostly be explained by surface topographical features, such as isolated particles, that affected the instruments in different ways.
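As a minimal illustration of this band-limited comparison (a sketch only: the profile data, sampling step and band limits below are invented, and the paper's analysis used two-dimensional PSDs rather than the one-dimensional profile case shown here):

import numpy as np

def band_limited_rms(profile, dx, f_lo, f_hi):
    # rms roughness obtained by integrating the one-sided PSD of a surface
    # profile only between the spatial-frequency band limits f_lo..f_hi,
    # so results from instruments with different bandwidths are comparable.
    z = profile - profile.mean()             # remove the mean surface level
    spectrum = np.fft.rfft(z)
    freqs = np.fft.rfftfreq(len(z), d=dx)    # spatial frequencies, 1/length
    psd = (np.abs(spectrum) ** 2) * dx / len(z)
    psd[1:] *= 2                             # fold in negative frequencies
    band = (freqs >= f_lo) & (freqs <= f_hi)
    df = freqs[1] - freqs[0]
    return np.sqrt(psd[band].sum() * df)     # sigma^2 = integral of PSD df

# Synthetic profile: heights in nm sampled every 0.1 um; band 0.05-2.0 um^-1.
rng = np.random.default_rng(0)
profile = rng.normal(scale=5.0, size=4096)
print(band_limited_rms(profile, dx=0.1, f_lo=0.05, f_hi=2.0))

Integrating every instrument's PSD over the same fixed band is what makes the resulting rms values directly comparable.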
Abstract:
Legislative developments have led to increasing recourse to psychiatric expertise in order to evaluate the potential dangerousness of a subject. However, despite the development of techniques and tools for this evaluation, assessing the dangerousness of a subject is in practice extremely complex and remains debated in the scientific literature. The evolution from the concept of dangerousness towards risk assessment has involved a technicization of this evaluation, which should not make us forget the limits of these tools and the need to restore the subject, meaning and the clinical dimension to this evaluation.
Abstract:
Automated genome sequencing and annotation, as well as large-scale gene expression measurement methods, generate massive amounts of data for model organisms such as human and mouse. Searching for gene-specific or organism-specific information throughout all the different databases has become a very difficult task, and often results in fragmented and unrelated answers. A database that federates and integrates genomic and transcriptomic data can greatly improve both search speed and the quality of the results by allowing a direct comparison of expression results obtained by different techniques. The main goal of this project, called the CleanEx database, is thus to provide access to public gene expression data via unique gene names and to represent heterogeneous expression data produced by different technologies in a way that facilitates joint analysis and cross-dataset comparisons. A consistent and up-to-date gene nomenclature is achieved by associating each single gene expression experiment with a permanent target identifier consisting of a physical description of the targeted RNA population or the hybridization reagent used. These targets are then mapped at regular intervals to the growing and evolving catalogues of genes from model organisms, such as human and mouse. The completely automatic mapping procedure relies partly on external genome information resources such as UniGene and RefSeq. The central part of CleanEx is a weekly built gene index containing cross-references to all public expression data already incorporated into the system. In addition, the expression target database of CleanEx provides gene mapping and quality-control information for various types of experimental resources, such as cDNA clones or Affymetrix probe sets. The Affymetrix mapping files are accessible as text files, for further use in external applications, and as individual entries via the web-based interfaces. The CleanEx web-based query interfaces offer access to individual entries via text string searches or quantitative expression criteria, as well as cross-dataset analysis tools and cross-chip gene comparison. These tools have proven to be very efficient in expression data comparison and even, to a certain extent, in the detection of differentially expressed splice variants. The CleanEx flat files and tools are available online at http://www.cleanex.isb-sib.ch/.
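A minimal sketch of the target-to-gene mapping idea described above (illustrative only: the identifiers, accessions and index layout are invented stand-ins, not CleanEx's actual record formats or interfaces):

from collections import defaultdict

# Permanent target identifiers -> physical description of the assayed RNA
# (hypothetical entries; the real expression target records are richer).
targets = {
    "TARGET:0001": {"type": "cDNA clone", "accession": "CLONE_A1"},
    "TARGET:0002": {"type": "Affymetrix probe set", "accession": "1007_s_at"},
    "TARGET:0003": {"type": "cDNA clone", "accession": "CLONE_ZZ"},
}

# Current gene catalogue release (in reality derived from UniGene/RefSeq).
catalogue = {"CLONE_A1": "GENE1", "1007_s_at": "DDR1"}

def rebuild_gene_index(targets, catalogue):
    # Periodically rebuild the gene index: official gene name -> all targets
    # (and hence all expression datasets) that measure it. Targets that no
    # longer map to the current catalogue are kept aside for quality control.
    index, unmapped = defaultdict(list), []
    for target_id, record in targets.items():
        gene = catalogue.get(record["accession"])
        if gene:
            index[gene].append(target_id)
        else:
            unmapped.append(target_id)
    return dict(index), unmapped

index, unmapped = rebuild_gene_index(targets, catalogue)
print(index)     # {'GENE1': ['TARGET:0001'], 'DDR1': ['TARGET:0002']}
print(unmapped)  # ['TARGET:0003']

Keeping the target identifiers permanent while remapping them to each catalogue release is what lets the gene nomenclature stay current without invalidating stored expression data.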
Abstract:
The main objective of this thesis was to study how project portfolio management can support an organization's strategic steering and business. In addition, the key objectives were to describe the current state of project portfolio management in the case company, to reveal specific development needs, and finally to create a target state for the case company's project portfolio management. The literature review considered the role and objectives of project portfolio management, the process used in managing a project portfolio, and the methods and techniques by which the portfolio is managed. The empirical part of the work examined in depth the specific characteristics of project portfolio management in the case company. Careful analysis of the research results showed that earlier literature does not sufficiently take into account the need for a holistic, integrated approach and the importance of communication in project portfolio management. As conclusions of the study, a new integrated project portfolio management model was created, and a target state for project portfolio management was defined for the case company, together with the development steps the company should take in the near future.
Abstract:
The present thesis is focused on the minimization of experimental effort in the prediction of pollutant propagation in rivers, by means of mathematical modelling and knowledge reuse. The mathematical modelling is based on the well-known advection-dispersion equation, while the knowledge reuse approach employs the methods of case-based reasoning, graphical analysis and text mining. The thesis contributes to the pollutant transport research field with: (1) analytical and numerical models for pollutant transport prediction; (2) two novel techniques that enable the use of variable parameters along rivers in analytical models; (3) models for the estimation of the characteristic parameters of pollutant transport (velocity, dispersion coefficient and nutrient transformation rates) as functions of water flow, channel characteristics and/or seasonality; (4) a graphical analysis method for the identification of pollution sources along rivers; (5) a case-based reasoning tool for the identification of crucial information related to pollutant transport modelling; and (6) the application of a software tool for the reuse of information in pollutant transport modelling research. These support tools are applicable both in water quality research and in practice, as they can be involved in multiple activities. The models are capable of predicting pollutant propagation along rivers in cases of both ordinary pollution and accidents. They can also be applied to other, similar rivers when modelling pollutant transport with little available experimental concentration data, because the parameter estimation models developed in this thesis enable the calculation of the characteristic transport parameters as functions of river hydraulic parameters and/or seasonality. The similarity between rivers is assessed using case-based reasoning tools, and additional necessary information can be identified by using the software for information reuse. Such systems support users and open up possibilities for new modelling methods, monitoring facilities and better river water quality management tools. They are also useful for estimating the environmental impact of possible technological changes, and can be applied at the pre-design stage and/or in the practical operation of processes.
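For an instantaneous point release, the one-dimensional advection-dispersion equation dC/dt + v*dC/dx = D*d2C/dx2 has the classical analytical solution C(x,t) = M / (A*sqrt(4*pi*D*t)) * exp(-(x - v*t)^2 / (4*D*t)). The short sketch below evaluates this standard solution with illustrative parameter values; it is not code or data from the thesis.

import numpy as np

def concentration(x, t, M=10.0, A=25.0, v=0.5, D=2.0):
    # Concentration [kg/m^3] at distance x [m] and time t [s] after the
    # instantaneous release of mass M [kg] into a channel of cross-section
    # A [m^2], mean velocity v [m/s] and dispersion coefficient D [m^2/s].
    return (M / (A * np.sqrt(4.0 * np.pi * D * t))
            * np.exp(-((x - v * t) ** 2) / (4.0 * D * t)))

# Peak of the pollutant cloud two hours after release: the cloud centre has
# travelled v*t = 3600 m, so the maximum concentration is 3.6 km downstream.
print(concentration(x=3600.0, t=7200.0))

Making v and D functions of flow and seasonality, as the thesis proposes, amounts to replacing the constant defaults above with fitted parameter estimation models.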