956 results for Terrorism - Databases - Research
Abstract:
Globally, areas categorically known to be free of human visitation are rare, but still exist in Antarctica. Such areas may be among the most pristine locations remaining on Earth and, therefore, be valuable as baselines for future comparisons with localities impacted by human activities, and as sites preserved for scientific research using increasingly sophisticated future technologies. Nevertheless, unvisited areas are becoming increasingly rare as the human footprint expands in Antarctica. Therefore, an understanding of historical and contemporary levels of visitation at locations across Antarctica is essential to a) estimate likely cumulative environmental impact, b) identify regions that may have been impacted by non-native species introductions, and c) inform the future designation of protected areas under the Antarctic Treaty System. Currently, records of Antarctic tourist visits exist, but little detailed information is readily available on the spatial and temporal distribution of national governmental programme activities in Antarctica. Here we describe methods to fulfil this need. Using information within field reports and archive and science databases pertaining to the activities of the United Kingdom as an illustration, we describe the history and trends in its operational footprint in the Antarctic Peninsula since c. 1944. Based on this illustration, we suggest that these methodologies could be applied productively more generally.
Abstract:
Nanotechnology is a research area of recent development that deals with the manipulation and control of matter with dimensions ranging from 1 to 100 nanometers. At the nanoscale, materials exhibit singular physical, chemical and biological phenomena, very different from those manifested at the conventional scale. In medicine, nanosized compounds and nanostructured materials offer improved drug targeting and efficacy with respect to traditional formulations, and reveal novel diagnostic and therapeutic properties. Nevertheless, the complexity of information at the nano level is much higher than the complexity at the conventional biological levels (from populations to the cell). Thus, any nanomedical research workflow inherently demands advanced information management. Unfortunately, Biomedical Informatics (BMI) has not yet provided the necessary framework to deal with such information challenges, nor adapted its methods and tools to the new research field.
In this context, the novel area of nanoinformatics aims to build new bridges between medicine, nanotechnology and informatics, allowing the application of computational methods to solve informational issues at the wide intersection between biomedicine and nanotechnology. The above observations determine the context of this doctoral dissertation, which is focused on analyzing the nanomedical domain in depth, and on developing nanoinformatics strategies and tools to map across disciplines, data sources, computational resources, and information extraction and text mining techniques, in order to leverage the available nanomedical data. The author analyzes, through real-life case studies, some research tasks in nanomedicine that would require or could benefit from the use of nanoinformatics methods and tools, illustrating the present drawbacks and limitations of BMI approaches when dealing with data belonging to the nanomedical domain. Three different scenarios, comparing the biomedical and nanomedical contexts, are discussed as examples of activities that researchers perform while conducting their research: i) searching the Web for data sources and computational resources supporting their research; ii) searching the literature for experimental results and publications related to their research; and iii) searching clinical trial registries for clinical results related to their research. The development of these activities depends on the use of informatics tools and services, such as web browsers, databases of citations and abstracts indexing the biomedical literature, and web-based clinical trial registries, respectively. For each scenario, this document provides a detailed analysis of the potential information barriers that could hamper the successful development of the different research tasks in both fields (biomedicine and nanomedicine), emphasizing the existing challenges for nanomedical research, where the major barriers have been found. The author illustrates how the application of BMI methodologies to these scenarios proves successful in the biomedical domain, whilst the same methodologies present severe limitations when applied to the nanomedical context. To address such limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics of information at the nano level. This approach consists of an in-depth analysis of the scientific literature and of the available clinical trial registries to extract relevant information about experiments and results in nanomedicine (textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.), followed by the development of mechanisms to automatically structure and analyze this information. The analysis resulted in the generation of a gold standard (a manually annotated training or reference set), which was applied to the automatic classification of clinical trial summaries, distinguishing studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work aims to provide the necessary methods for organizing, curating and validating existing nanomedical data on a scale suitable for decision-making.
Similar analyses of other nanomedical research tasks would help to detect which nanoinformatics resources are required to meet the current goals of the field, and to generate densely populated, machine-interpretable reference datasets from the literature and other unstructured sources, on which novel algorithms can be tested and new valuable information for nanomedicine inferred.
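The classification step described in this abstract lends itself to a standard supervised text-mining pipeline. The sketch below is illustrative only and is not the dissertation's actual implementation: it assumes a small, invented gold standard of trial summaries labeled "nano" or "traditional" and uses scikit-learn's TF-IDF features with a logistic regression classifier.

```python
# Illustrative sketch: classifying clinical trial summaries as nano-focused
# vs. traditional, given a manually annotated gold standard. The data below
# is hypothetical; the dissertation's actual corpus, features, and model
# are not specified here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical gold standard: (trial summary, label) pairs.
summaries = [
    "Phase I study of PEGylated liposomal doxorubicin nanoparticles",
    "Randomized trial of metformin in type 2 diabetes",
    "Evaluation of a dendrimer-based MRI contrast agent",
    "Double-blind study of ibuprofen for postoperative pain",
    "Safety of gold nanoshell mediated photothermal therapy",
    "Open-label trial of amoxicillin for acute otitis media",
]
labels = ["nano", "traditional", "nano", "traditional", "nano", "traditional"]

X_train, X_test, y_train, y_test = train_test_split(
    summaries, labels, test_size=2, stratify=labels, random_state=0)

# TF-IDF over word unigrams/bigrams feeds a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```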
Abstract:
The ARKdb genome databases provide comprehensive public repositories for genome mapping data from farmed species and other animals (http://www.thearkdb.org) providing a resource similar in function to that offered by GDB or MGD for human or mouse genome mapping data, respectively. Because we have attempted to build a generic mapping database, the system has wide utility, particularly for those species for which development of a specific resource would be prohibitive. The ARKdb genome database model has been implemented for 10 species to date. These are pig, chicken, sheep, cattle, horse, deer, tilapia, cat, turkey and salmon. Access to the ARKdb databases is effected via the World Wide Web using the ARKdb browser and Anubis map viewer. The information stored includes details of loci, maps, experimental methods and the source references. Links to other information sources such as PubMed and EMBL/GenBank are provided. Responsibility for data entry and curation is shared amongst scientists active in genome research in the species of interest. Mirror sites in the United States are maintained in addition to the central genome server at Roslin.
Abstract:
Accurate multiple alignments of 86 domains that occur in signaling proteins have been constructed and used to provide a Web-based tool (SMART: simple modular architecture research tool) that allows rapid identification and annotation of signaling domain sequences. The majority of signaling proteins are multidomain in character, with a considerable variety of domain combinations known. Comparison with established databases showed that 25% of our domain set could not be deduced from SwissProt and 41% could not be annotated by Pfam. SMART is able to determine the modular architectures of single sequences or genomes; application to the entire yeast genome revealed that at least 6.7% of its genes contain one or more signaling domains, approximately 350 more than previously annotated. The process of constructing SMART predicted (i) novel domain homologues in unexpected locations, such as band 4.1-homologous domains in focal adhesion kinases; (ii) previously unknown domain families, including a citron-homology domain; (iii) putative functions of domain families after identification of additional family members, for example, a ubiquitin-binding role for ubiquitin-associated domains (UBA); (iv) cellular roles for proteins, such as predicted DEATH domains in netrin receptors, further implicating these molecules in axonal guidance; (v) signaling domains in known disease genes, such as SPRY domains in both marenostrin/pyrin and Midline 1; (vi) domains in unexpected phylogenetic contexts, such as diacylglycerol kinase homologues in yeast and bacteria; and (vii) likely protein misclassifications, exemplified by a predicted pleckstrin homology domain in a Candida albicans protein previously described as an integrin.
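As a toy illustration of the genome-scale tally reported above (at least 6.7% of yeast genes containing a signaling domain), the sketch below computes such a fraction from a gene-to-domain mapping. The gene names, domain set and annotations are invented, and this is not SMART's actual machinery.

```python
# Toy illustration (not SMART itself): given per-gene domain annotations,
# tally how many genes carry at least one signaling domain.
SIGNALING_DOMAINS = {"SH2", "SH3", "PH", "DEATH", "UBA", "SPRY"}  # sample set

# Hypothetical annotation: gene name -> detected domains.
genome_annotation = {
    "YFG1": ["SH3", "PH"],
    "YFG2": ["TIM_barrel"],
    "YFG3": [],
    "YFG4": ["UBA"],
}

hits = sum(
    1 for domains in genome_annotation.values()
    if SIGNALING_DOMAINS.intersection(domains)
)
total = len(genome_annotation)
print(f"{hits}/{total} genes ({100 * hits / total:.1f}%) "
      "contain one or more signaling domains")
```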
Abstract:
With the re-emergence of insurgency tied to terrorism, governments need to strategically manage their communications. This paper analyzes the effect of the Spanish government's messaging in the face of the Madrid bombing of March 11, 2004: unlike what happened with the 9/11 attacks in the USA and the 7/7 London attacks, the Spanish media did not support the government's framing of the events. Taking framing as a strategic action in a discursive form (Pan & Kosicki, 2003), and in the context of the attribution theory of responsibilities, this research uses the "cascading activation" model (Entman, 2003, 2004) to explore how a framing contest was generated in the press. Analysis of the coverage shows that the intended government frame triggered a battle among the major newspapers, leading editorials to shift their frames over the four days prior to the national elections. This research analyzes strategic contests in framing processes and contributes insight into the interactions among the different sides (government, parties, media, and citizens) to help bring about an understanding of the rebuttal effect of the government's intended frame. It also helps to develop an understanding of the role of the media and the influence of citizens' frames on media content.
Abstract:
Natural Language Interfaces to Query Databases (NLIDBs) have been an active research field since the 1960s. However, they have not been widely adopted. This article explores some of the biggest challenges and approaches for building NLIDBs and proposes techniques to reduce implementation and adoption costs. The article describes AskMe*, a new system that leverages some of these approaches and adds an innovative feature: query-authoring services, which lower the entry barrier for end users. The advantages of these approaches are demonstrated experimentally. Results confirm that, even though AskMe* can be automatically reconfigured for multiple domains, its accuracy is comparable to that of domain-specific NLIDBs.
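Query-authoring services aside, the core of any NLIDB is a mapping from constrained natural language to SQL. The sketch below is a deliberately minimal, hypothetical illustration (not AskMe*'s architecture, which the abstract does not detail): a single regex pattern mapped onto a SQL template over an in-memory SQLite table.

```python
# Minimal illustrative NLIDB sketch (not AskMe*'s actual design): map a
# constrained English question onto a SQL template via pattern matching.
import re
import sqlite3

# Hypothetical question pattern; real NLIDBs derive their grammar and
# schema knowledge from the target database rather than hard-coding them.
PATTERN = re.compile(r"how many (\w+) (?:are there )?in (\w+)\??", re.I)

def question_to_sql(question: str) -> str:
    m = PATTERN.fullmatch(question.strip())
    if not m:
        raise ValueError("question not understood; try rephrasing")
    table, region = m.group(1), m.group(2)
    # Direct string interpolation is unsafe outside a demo; a real system
    # would validate identifiers against the schema and parameterize values.
    return f"SELECT COUNT(*) FROM {table} WHERE region = '{region}'"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, region TEXT)")
conn.execute("INSERT INTO customers VALUES ('Ana', 'Lisbon'), ('Bo', 'Porto')")
sql = question_to_sql("How many customers in Lisbon?")
print(sql, "->", conn.execute(sql).fetchone()[0])
```

A query-authoring service of the kind the abstract describes would sit in front of such a translator, suggesting completions that are guaranteed to parse, which is one way to lower the entry barrier for end users.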
Abstract:
This dissertation examines novels that use terrorism to allegorize the threatened position of the literary author in contemporary culture. Allegory is a term that has been understood differently over time, but which has consistently been used by writers to articulate and construct their roles as authors. In the novels I look at, the terrorist challenge to authorship results in multiple deployments of allegory, each differently illustrating the way that allegory is used and authorship constructed in the contemporary American novel. Don DeLillo's Mao II (1991) first puts terrorists and authors in an oppositional pairing. The terrorist's ability to traffic in spectacle is presented as indicative of the author's fading importance in contemporary culture, and it is one way that terrorism allegorizes threats to authorship. In Philip Roth's Operation Shylock (1993), the allegorical pairing is between the text of the novel and outside texts (newspaper reports, legal cases, etc.) that the novel references and adapts in order to bolster its own narrative authority. Richard Powers's Plowing the Dark (1999) pairs the story of an imprisoned hostage, craving a single book, with employees of a tech firm who are creating interactive, virtual reality artworks. Focusing on the reader's experience, Powers's novel posits a form of authorship that the reader can take into consideration, but which does not seek to control the experience of the text. Finally, I look at two of Paul Auster's twenty-first-century novels, Travels in the Scriptorium (2007) and Man in the Dark (2008), to suggest that the relationship between representations of authors and terrorists changed after 9/11. Auster's author-figures put forward an ethics of authorship whereby novels can use narrative to buffer readers against the portrayal of violent acts in a culture that is suffused with traumatizing imagery.
Abstract:
Introduction – Based on a previous project of the University of Lisbon (UL) – a Bibliometric Benchmarking Analysis of the University of Lisbon for the period 2000-2009 – a database was created to support research information (ULSR). However, this system was not integrated with other existing systems at the University, such as the UL Libraries Integrated System (SIBUL) and the Repository of the University of Lisbon (Repositório.UL). Since libraries were called to be part of the process, the Faculty of Pharmacy Library team felt that it was very important to get all systems connected or, at least, to use that data in the library systems. Objectives – The main goals were to centralize all the scientific research produced at the Faculty of Pharmacy, make it available to the entire Faculty, involve researchers and the library team, and capitalize on and reinforce teamwork by integrating several distinct projects and reducing task redundancy. Methods – Our basis was the data collection imported from the ISI Web of Science (WoS), for the period 2000-2009, into ULSR. All the researchers and publications indexed in WoS were identified. A first validation was done to identify all the researchers and their affiliations (university, faculty, department and unit). The final validation was done by each researcher. In a second round, covering the same period, all Faculty of Pharmacy researchers identified their published scientific work in other databases/resources (NOT WoS). For our strategy, it was important to get all the references, and it was essential to relate them to the corresponding digital objects. Each previously identified researcher was asked to register all references to their 'NOT WoS' published works in ULSR. At the same time, they were asked to submit the PDF files (for both WoS and NOT WoS works) to a personal area of the Web server. This effort enabled a more reliable validation and allowed us to prepare the data and metadata to be imported into the Repository and the Library Catalogue, as sketched below. Results – 558 documents, related to 122 researchers, were added to ULSR. 1378 bibliographic records (WoS + NOT WoS) were converted into UNIMARC and Dublin Core formats. All records were integrated into the catalogue and the repository. Conclusions – Although different strategies could be adopted, according to each library team, we intend to share this experience and give some tips on what could be done and how the Faculty of Pharmacy created and implemented its strategy.
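The record-conversion step mentioned in the Methods can be pictured with a short sketch. The code below is hypothetical (the record fields and mapping are invented, and this is not the actual ULSR-to-Repositório.UL pipeline): it serializes a minimal bibliographic record as Dublin Core XML elements.

```python
# Hypothetical sketch of the bibliographic conversion step: mapping a
# minimal record onto Dublin Core XML (not the actual ULSR pipeline).
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

def record_to_dublin_core(record: dict) -> bytes:
    """Serialize a flat record dict as a Dublin Core XML fragment."""
    root = ET.Element("metadata")
    mapping = {  # record field -> Dublin Core element (illustrative)
        "title": "title",
        "authors": "creator",
        "year": "date",
        "source": "publisher",
    }
    for field, dc_element in mapping.items():
        values = record.get(field, [])
        for value in values if isinstance(values, list) else [values]:
            ET.SubElement(root, f"{{{DC_NS}}}{dc_element}").text = str(value)
    return ET.tostring(root, encoding="utf-8")

sample = {
    "title": "A study of drug delivery",
    "authors": ["Silva, A.", "Costa, B."],
    "year": 2007,
    "source": "J. Pharm. Sci.",
}
print(record_to_dublin_core(sample).decode())
```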
Abstract:
In July 2011, the European Commission published a Communication setting out different options for establishing a European terrorist finance tracking system (TFTS). The Communication followed the adoption of the EU-US agreement on the US Terrorist Finance Tracking Program (TFTP) in 2010. The agreement concluded a series of national, European and transatlantic negotiations that followed the disclosure, through the public media, of the US TFTP in 2006. This paper takes stock of the wide range of controversies surrounding this security-focused programme with dataveillance capabilities. After stressing the impact of the US TFTP on international relations, the paper argues that the EU-US agreement primarily has the effect of shifting information-sharing practices from the justice/judicial/penal/criminal-investigation framework into the security/intelligence/administrative/prevention context as the main rationale. The paper then questions the TFTP-related conception of mass intelligence through large-scale databases and the transnational communication of bulk data in the name of targeted surveillance. Following an examination of the project to create an EU system equivalent to the TFTP, the paper emphasises the fundamental paradox of transatlantic security matters, in which European criticism of American programmes tends ultimately to be translated into EU imitation of US dataveillance practices.
Abstract:
The Israeli economy has been subjected to a continuous flow of terror since its creation. The focus of our study is how terrorism impacts the level of production of the high-tech sector. The paper has two main objectives and novelties. First, it contributes to a still-new subfield of applied microeconomics that attempts to measure the impact of terror on economic outcomes such as FDI and industry output in general, here with a focus on the high-tech industries. Second, the research is designed to help us understand the impact of terror news, and especially U.S. citizens' reactions to terror in Israel. Our terror variable is not a casualty count but the number of negative articles regarding terror appearing in leading American newspapers. We focus on the volatility clustering of terror information arrivals in major US financial papers and its impact on high-tech production and FDI flows to Israel.
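"Volatility clustering" of news arrivals is conventionally captured with a GARCH-type specification. The sketch below is a generic, hypothetical illustration (not the paper's data or estimation): it fits a GARCH(1,1) to a simulated monthly series of changes in negative-article counts using the third-party arch package.

```python
# Generic illustration of volatility clustering in news-arrival counts,
# with simulated data (not the paper's series or estimation).
import numpy as np
from arch import arch_model  # pip install arch

rng = np.random.default_rng(0)
# Simulated monthly changes in negative-article counts: a calm regime
# followed by a turbulent one, to mimic clustering.
calm = rng.normal(0, 1.0, 120)
turbulent = rng.normal(0, 4.0, 60)
y = np.concatenate([calm, turbulent])

# GARCH(1,1): today's variance depends on yesterday's shock and variance.
model = arch_model(y, vol="GARCH", p=1, q=1, mean="Constant")
result = model.fit(disp="off")
print(result.summary())
# Persistence (alpha + beta near 1) is the signature of clustering in
# information arrivals.
print("alpha + beta =", result.params["alpha[1]"] + result.params["beta[1]"])
```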