918 results for Information interfaces and presentation
Abstract:
While a variety of crisis types loom as real risks for organizations and communities, and the media landscape continues to evolve, research is needed to help explain and predict how people respond to various kinds of crisis and disaster information. For example, despite the rising prevalence of digital and mobile media centered on still and moving visuals, and stark increases in Americans’ use of visual-based platforms for seeking and sharing disaster information, relatively little is known about how the presence or absence of disaster visuals online might prompt or deter resilience-related feelings, thoughts, and/or behaviors. Yet, with such insights, governmental and other organizational entities, as well as communities themselves, may best help individuals and communities prepare for, cope with, and recover from adverse events. Thus, this work uses the theoretical lens of the social-mediated crisis communication model (SMCC) coupled with the limited capacity model of motivated mediated message processing (LC4MP) to explore effects of disaster information source and visuals on viewers’ resilience-related responses to an extreme flooding scenario. Results from two experiments are reported. First, a preliminary 2 (disaster information source: organization/US National Weather Service vs. news media/USA Today) x 2 (disaster visuals: no visual podcast vs. moving visual video) factorial between-subjects online experiment with a convenience sample of university students probes effects of crisis source and visuals on a variety of cognitive, affective, and behavioral outcomes. A second between-subjects online experiment manipulating still and moving visual pace in online videos (no visual vs. still slow-pace visual vs. still medium-pace visual vs. still fast-pace visual vs. moving slow-pace visual vs. moving medium-pace visual vs. moving fast-pace visual) with a convenience sample recruited from Amazon’s Mechanical Turk (MTurk) similarly probes a variety of potentially resilience-related cognitive, affective, and behavioral outcomes. The role of biological sex as a quasi-experimental variable is also investigated in both studies. Various implications for community resilience and recommendations for risk and disaster communicators are explored. Implications for theory building and future research are also examined. Resulting modifications of the SMCC model (i.e., removing “message strategy” and adding the new category of “message content elements” under organizational considerations) are proposed.
Abstract:
This paper analyses the intermediary role of the technical bodies that support the use of budgetary and financial information by central government politicians in Portugal. The main findings show that information brokers are playing a central role in preparing this information in a credible, simple and understandable way. However, even if not intentionally, the information they present can be biased. Politicians need to be aware that the information brokers they rely on may not be giving them ‘neutral’ information.
Abstract:
Data without labels is commonly analysed with unsupervised machine learning techniques. Such techniques can provide representations of the data that are more meaningful, and more useful for understanding the problem at hand, than the raw data alone. Although abundant expert knowledge exists in many areas where unlabelled data is examined, such knowledge is rarely incorporated into automatic analysis. Incorporating expert knowledge is frequently a matter of combining multiple data sources from disparate hypothetical spaces. In cases where such spaces belong to different data types, this task becomes even more challenging. In this paper we present a novel immune-inspired method that enables the fusion of such disparate types of data for a specific set of problems. We show that our method provides a better visual understanding of one hypothetical space with the help of data from another hypothetical space. We believe that our model has implications for the field of exploratory data analysis and knowledge discovery.
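To make the fusion step concrete, here is a minimal illustrative sketch, not the paper's immune-inspired algorithm: dissimilarities are computed separately in a numeric space and a categorical space, blended into a single matrix, and embedded in 2-D for visual exploration. The data, the distance metrics, and the fusion weight alpha are all assumptions made for illustration.

```python
# Sketch: fuse dissimilarities from two disparate data spaces, then embed.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
X_num = rng.normal(size=(30, 5))          # hypothetical numeric space
X_cat = rng.integers(0, 3, size=(30, 4))  # hypothetical categorical "expert" space

D_num = squareform(pdist(X_num, metric="euclidean"))  # distances in space 1
D_cat = squareform(pdist(X_cat, metric="hamming"))    # distances in space 2

D_num /= D_num.max()      # normalise so neither space dominates
alpha = 0.5               # fusion weight (assumption)
D_fused = alpha * D_num + (1 - alpha) * D_cat

# Embed the fused dissimilarities in 2-D for visual inspection.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D_fused)
print(coords[:3])
```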
Abstract:
This paper proposes a principal-agent model between banks and firms with risk and asymmetric information. A mixed form of finance to firms is assumed. The capital structure of firms is a relevant determinant of the final aggregate level of investment in the economy. In the model analysed, there may be a separating equilibrium which is not economically efficient, because aggregate investment falls short of the first-best level. Based on European firm-level data, an empirical model is presented which validates the relevance of the capital structure of firms. The relative magnitude of equity in the capital structure makes a real difference to the profits obtained by firms in the economy.
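To make the separating-equilibrium logic easier to follow, here is a generic two-type screening sketch in the textbook style, not the paper's actual model: each firm type θ ∈ {G, B} (good or bad, with success probabilities p_G > p_B) chooses a contract specifying a repayment R and an equity share α, and separation requires the usual incentive-compatibility and participation constraints.

```latex
% Generic screening sketch (illustrative; not the paper's model).
\begin{align}
  U_G(R_G,\alpha_G) &\ge U_G(R_B,\alpha_B) && \text{(IC: good type prefers its own contract)} \\
  U_B(R_B,\alpha_B) &\ge U_B(R_G,\alpha_G) && \text{(IC: bad type prefers its own contract)} \\
  p_\theta R_\theta D_\theta &\ge (1+r) D_\theta && \text{(bank participation, each type } \theta\text{)}
\end{align}
```

In a standard separating equilibrium of this kind, one type bears a costly distortion in its capital structure in order to separate, which is the usual reason aggregate investment falls short of the first-best level.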
Abstract:
We investigate the ways young children’s use of mobile touchscreen interfaces is both understood and shaped by parents through the production of YouTube videos and discussions in associated comment threads. This analysis expands on, and departs from, theories of parental mediation, which have traditionally been framed through a media effects approach in analyzing how parents regulate their children’s use of broadcast media, such as television, within family life. We move beyond the limitations of an effects framing through more culturally and materially oriented theoretical lenses of mediation, considering the role mobile interfaces now play in the lives of infants through analysis of the ways parents intermediate between domestic spaces and networked publics. We propose the concept of intermediation, which builds on insights from critical interface studies as well as cultural industries literature to help account for these expanded aspects of digital parenting. Here, parents are not simply moderating children’s media use within the home, but instead operating as an intermediary in contributing to online representations and discourses of children’s digital culture. This intermediary role of parents engages with ideological tensions in locating notions of “naturalness”: the iPad’s gestural interface or the child’s digital dexterity.
Abstract:
This paper analyses reconfigurations of play in emergent digital materialities of game design. It extends recent work examining dimensions of hybridity in playful products by turning attention to interfaces, practices and spaces, rather than devices. We argue that the concept of hybrid play relies on predefining clear and distinct digital or material entities that then enter into hybrid situations. Drawing on concepts of the ‘interface’ and ‘postdigital’, we argue the distribution of computing devices creates difficulties for such presuppositions. Instead, we propose thinking these situations through an ‘aesthetic of recruitment’ that is able to accommodate the intensive entanglements and inherent openness of both the social and technical in postdigital play.
Abstract:
In knowledge technology work, as expressed by the scope of this conference, there are a number of communities, each uncovering new methods, theories, and practices. The Library and Information Science (LIS) community is one such community. This community, through tradition and innovation, theories and practice, organizes knowledge and develops knowledge technologies formed by iterative research hewn to the values of equal access and discovery for all. The Information Modeling community is another contributor to knowledge technologies. It concerns itself with the construction of symbolic models that capture the meaning of information and organize it in ways that are computer-based, but human understandable. A recent paper that examines certain assumptions in information modeling builds a bridge between these two communities, offering a forum for a discussion on common aims from a common perspective. In a June 2000 article, Parsons and Wand separate classes from instances in information modeling in order to free instances from what they call the “tyranny” of classes. They attribute a number of problems in information modeling to inherent classification – or the disregard for the fact that instances can be conceptualized independent of any class assignment. By faceting instances from classes, Parsons and Wand strike a sonorous chord with classification theory as understood in LIS. In the practice community and in the publications of LIS, faceted classification has shifted the paradigm of knowledge organization theory in the twentieth century. Here, with the critique of inherent classification and the resulting layered information modeling, a clear line joins both the LIS classification theory community and the information modeling community. Both communities have their eyes turned toward networked resource discovery, and with this conceptual conjunction a new paradigmatic conversation can take place. Parsons and Wand propose that the layered information model can facilitate schema integration, schema evolution, and interoperability. These three spheres in information modeling have their own connotation, but are not distant from the aims of classification research in LIS. In this new conceptual conjunction, established by Parsons and Wand, information modeling, through the layered information model, can expand the horizons of classification theory beyond LIS, promoting a cross-fertilization of ideas on the interoperability of subject access tools like classification schemes, thesauri, taxonomies, and ontologies. This paper examines the common ground between the layered information model and faceted classification, establishing a vocabulary and outlining some common principles. It then turns to the issue of schema and the horizons of conventional classification and the differences between Information Modeling and Library and Information Science. Finally, a framework is proposed that deploys an interpretation of the layered information modeling approach in a knowledge technologies context. In order to design subject access systems that will integrate, evolve and interoperate in a networked environment, knowledge organization specialists must consider semantic class independence of the kind Parsons and Wand propose for information modeling.
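To illustrate the layered idea in code, the sketch below is one possible interpretation, not Parsons and Wand's formal model: instances live as class-free property bags, and classes form a separate layer of membership predicates, so schemes can evolve without migrating instances. All names here are hypothetical.

```python
# Instance layer: things are just bundles of properties, free of any class.
instances = [
    {"title": "Dublin Core", "has_facets": False, "machine_readable": True},
    {"title": "Colon Classification", "has_facets": True, "machine_readable": False},
]

# Class layer: a class is a membership predicate defined over properties,
# independent of how the instances were created or stored.
classes = {
    "FacetedScheme": lambda x: x.get("has_facets", False),
    "MachineReadableScheme": lambda x: x.get("machine_readable", False),
}

def classify(instance, classes):
    """Return every class whose predicate this instance satisfies."""
    return [name for name, pred in classes.items() if pred(instance)]

for inst in instances:
    print(inst["title"], "->", classify(inst, classes))

# Schema evolution: adding or redefining a class touches only the class layer;
# the instances themselves never need to be migrated.
classes["SubjectAccessTool"] = lambda x: True
```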
Abstract:
This study aimed to survey farmers’ knowledge and practices on the management of pastures, stocking rates and markets of meat goat-producing enterprises within New South Wales and Queensland, Australia. An interview-based questionnaire was conducted on properties that derived a significant proportion of their income from goats. The survey covered 31 landholders with a total land area of 567 177 ha and a reported total of 160 010 goats. A total of 55% (17/31) of producers were involved in both opportunistic harvesting and commercial goat operations, and 45% (14/31) were specialised seedstock producers. Goats were the most important livestock enterprise on 55% (17/31) of surveyed properties. Stocking rate varied considerably (0.3–9.3 goats/ha) within and across surveyed properties and was found to be negatively associated with property size and positively associated with rainfall. Overall, 81% (25/31) of producers reported that the purpose of running goats on their properties was to target international markets. Producers also cited the importance of targeting markets as a way to increase profitability. Fifty-three percent of producers were located over 600 km from a processing plant, and the high cost of freight can limit the continuity of goats supplied to abattoirs. Fencing was an important issue for goat farmers, with many producers acknowledging that it could add to the capital costs associated with better goat management and production. Producers in the pastoral regions appear to have a low investment in pasture development, and opportunistic goat harvesting appears to be an important source of income.
Abstract:
The role of non-neuronal brain cells, called astrocytes, is emerging as crucial in brain function and dysfunction, superseding the neurocentric concept that envisioned glia as passive components. Ion and water channels and calcium signalling, expressed in functional micro- and nanodomains, underpin astrocytes’ homeostatic functions, synaptic transmission, and neurovascular coupling, acting both locally and globally. In this respect, a major issue arises concerning the mechanism through which astrocytes can control processes across scales. Finally, astrocytes can sense and react to extracellular stimuli, such as chemical, physical, mechanical, electrical, and photonic ones, at the nanoscale. Given their emerging importance and their sensing properties, my PhD research programme had the general goal of validating nanomaterial, interface, and device approaches developed ad hoc to study astrocytes. The results achieved are reported in the form of a collection of papers. Specifically, we demonstrated that i) electrospun nanofibers made of polycaprolactone and polyaniline conductive composites can shape primary astrocytes’ morphology without affecting their function; ii) gold-coated silicon nanowire devices enable extracellular recording of unprecedented slow waves in primary differentiated astrocytes; iii) colloidal hydrotalcite films provide insight into the cell volume regulation process in differentiated astrocytes and reveal novel cytoskeletal actin dynamics; iv) gold nanoclusters represent nanoprobes to modulate astrocyte structure and function; and v) nanopillars of photoexcitable organic polymer are a potential tool to achieve nanoscale photostimulation of astrocytes. The results were achieved by a multidisciplinary team working with national and international collaborators who are listed and acknowledged in the text. Collectively, the results show that astrocytes represent a novel opportunity and target for Nanoscience, and that the nanoglial interface might help to unveil clues about brain function or represent a novel therapeutic approach to treat brain dysfunctions.
Abstract:
Bioelectronic interfaces have advanced significantly in recent years, offering potential treatments for vision impairments, spinal cord injuries, and neurodegenerative diseases. However, the classical neurocentric vision drives technological development toward neurons. Emerging evidence highlights the critical role of glial cells in the nervous system. Among them, astrocytes significantly influence neuronal networks throughout life and are implicated in several neuropathological states. Although they are incapable of firing action potentials, astrocytes communicate through diverse calcium (Ca2+) signalling pathways, crucial for cognitive functions and the regulation of brain blood flow. Current bioelectronic devices are primarily designed to interface with neurons and are unsuitable for studying astrocytes. Graphene, with its unique electrical, mechanical, and biocompatibility properties, has emerged as a promising neural interface material. However, its use as an electrode interface to modulate astrocyte functionality remains unexplored. The aim of this PhD work was to exploit graphene oxide (GO)- and reduced GO (rGO)-coated electrodes to control Ca2+ signalling in astrocytes by electrical stimulation. We discovered that distinct Ca2+ dynamics in astrocytes can be evoked, in vitro and in brain slices, depending on the conductive/insulating properties of the rGO/GO electrodes. Stimulation by rGO electrodes induces an intracellular Ca2+ response with sharp peaks of oscillations (“P-type”), due exclusively to Ca2+ release from intracellular stores. Conversely, astrocytes stimulated by GO electrodes show a slower and sustained Ca2+ response (“S-type”), largely mediated by external Ca2+ influx through specific ion channels. Astrocytes respond faster than neurons and activate distinct G-protein-coupled receptor intracellular signalling pathways. We propose a resistive/insulating model, hypothesizing that the different conductivity of the substrate influences the electric field at the cell/electrolyte or cell/material interface, favouring, respectively, Ca2+ release from intracellular stores or extracellular Ca2+ influx. This research provides a simple tool to selectively control distinct Ca2+ signals in brain astrocytes for neuroscience and bioelectronic medicine.
Abstract:
Histidines 107 and 109 in the glycine receptor (GlyR) α1 subunit have previously been identified as determinants of the inhibitory zinc-binding site. Based on modeling of the GlyR α1 subunit extracellular domain by homology to the acetylcholine-binding protein crystal structure, we hypothesized that inhibitory zinc is bound within the vestibule lumen at subunit interfaces, where it is ligated by His107 from one subunit and His109 from an adjacent subunit. This was tested by co-expressing α1 subunits containing the H107A mutation with α1 subunits containing the H109A mutation. Although sensitivity to zinc inhibition is markedly reduced when either mutation is individually incorporated into all five subunits, the GlyRs formed by the co-expression of H107A mutant subunits with H109A mutant subunits exhibited an inhibitory zinc sensitivity similar to that of the wild type α1 homomeric GlyR. This constitutes strong evidence that inhibitory zinc is coordinated at the interface between adjacent α1 subunits. No evidence was found for β subunit involvement in the coordination of inhibitory zinc, indicating that a maximum of two zinc-binding sites per α1β receptor is sufficient for maximal zinc inhibition. Our data also show that two zinc-binding sites are sufficient for significant inhibition of α1 homomers. The binding of zinc at the interface between adjacent α1 subunits could restrict intersubunit movements, providing a feasible mechanism for the inhibition of channel activation by zinc.
Abstract:
The Web has witnessed an enormous growth in the amount of semantic information published in recent years. This growth has been stimulated to a large extent by the emergence of Linked Data. Although this brings us a big step closer to the vision of a Semantic Web, it also raises new issues such as the need for dealing with information expressed in different natural languages. Indeed, although the Web of Data can contain any kind of information in any language, it still lacks explicit mechanisms to automatically reconcile such information when it is expressed in different languages. This leads to situations in which data expressed in a certain language is not easily accessible to speakers of other languages. The Web of Data shows the potential for being extended to a truly multilingual web, as vocabularies and data can be published in a language-independent fashion, while associated language-dependent (linguistic) information supporting access across languages can be stored separately. In this sense, the multilingual Web of Data can be realized in our view as a layer of services and resources on top of the existing Linked Data infrastructure, adding i) linguistic information for data and vocabularies in different languages, ii) mappings between data with labels in different languages, and iii) services to dynamically access and traverse Linked Data across different languages. In this article we present this vision of a multilingual Web of Data. We discuss the challenges that need to be addressed to make this vision come true, as well as the role that techniques such as ontology localization, ontology mapping, and cross-lingual ontology-based information access and presentation will play in achieving it. Further, we propose an initial architecture and describe a roadmap that can provide a basis for the implementation of this vision.
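As a small, concrete illustration of the kind of service such a layer would offer, the sketch below retrieves language-tagged labels for a single Linked Data resource, so the same data can be presented in the reader's language. The public DBpedia endpoint, the resource, and the language list are illustrative choices, not part of the article's architecture.

```python
# Query language-tagged labels for one Linked Data resource over SPARQL.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")  # illustrative endpoint
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?lang ?label WHERE {
      <http://dbpedia.org/resource/Semantic_Web> rdfs:label ?label .
      BIND (lang(?label) AS ?lang)
      FILTER (lang(?label) IN ("en", "es", "de"))
    }
""")
sparql.setReturnFormat(JSON)

# Each binding pairs a language tag with the label in that language.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["lang"]["value"], "->", row["label"]["value"])
```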
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Environmental Engineering, Environmental Management and Systems profile
Abstract:
The profiling of MDMA tablets can be carried out using different sets of characteristics. The first measurements performed on MDMA tablets are of physical characteristics (i.e. post-tabletting characteristics). They yield preliminary profiling data that may be valuable at an early stage for investigative purposes. However, organic impurities (i.e. pre-tabletting characteristics) are generally considered to provide more reliable information, particularly for the presentation of evidence in court. This work therefore aimed to evaluate the added value of combining pre-tabletting and post-tabletting characteristics of seized MDMA tablets. In approximately half of the investigated cases, the post-tabletting links were confirmed by organic impurity analyses. In the remaining cases, post-tabletting batches (post-TBs) were divided into several pre-tabletting batches (pre-TBs), supporting the hypothesis that several production batches of MDMA powder (pre-TBs) were used to produce a single post-TB (i.e. tablets having the same shape, diameter, thickness, weight and score, but different organic impurity compositions). In view of the results obtained, the hypotheses are discussed through illustrative examples. In conclusion, both sets of characteristics were found relevant alone and combined together; they provide distinct information about illicit MDMA production and trafficking.
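To show schematically how the two characteristic sets can combine (an illustration, not the study's actual analytical protocol), tablets can first be linked by identical physical characteristics (a post-TB) and each such group then split by the similarity of their organic-impurity profiles (pre-TBs). The data, the cosine measure, and the 0.95 threshold are assumptions.

```python
# Group tablets by physical signature, then compare impurity profiles within groups.
import numpy as np
from itertools import groupby

tablets = [
    # (shape, diameter mm, thickness mm, logo),  impurity-profile vector
    (("round", 8.1, 3.2, "dove"), np.array([0.90, 0.10, 0.30])),
    (("round", 8.1, 3.2, "dove"), np.array([0.88, 0.12, 0.28])),
    (("round", 8.1, 3.2, "dove"), np.array([0.10, 0.80, 0.60])),  # same look, other batch
]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

tablets.sort(key=lambda t: t[0])           # post-TB link: identical physical traits
for sig, group in groupby(tablets, key=lambda t: t[0]):
    profiles = [p for _, p in group]
    for i in range(len(profiles)):
        for j in range(i + 1, len(profiles)):
            same_pre_tb = cosine(profiles[i], profiles[j]) > 0.95  # threshold assumed
            print(sig, f"tablets {i} and {j} share a pre-TB:", same_pre_tb)
```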
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information from the non-indexable part of the Web. Specifically, dynamic pages generated based on parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in searchers’ results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain some information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results, a set of dynamic pages that embed the required information from a database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary and key object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases.
Characterizing the deep Web: Though the term deep Web was coined in 2000, which is sufficiently long ago for any web-related concept/technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on the study of deep web sites in English. One can then expect that findings from these surveys may be biased, especially owing to a steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web.
Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assumed that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions do not hold true, mostly because of the large scale of the deep Web – indeed, for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web existing so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.
Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user. This is all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. In this way, the automation of querying and retrieving data behind search interfaces is desirable and essential for such tasks as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
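To make the form-querying task concrete, here is a minimal sketch with a hypothetical site, field names, and result-page layout; in the thesis these would come from the extracted interface model and result-page representation rather than being hard-coded.

```python
# Submit values through a web search form and extract embedded result records.
import requests
from bs4 import BeautifulSoup

FORM_URL = "https://example.org/library/search"  # hypothetical search interface

# Field names stand in for labels extracted from the form's data model.
response = requests.post(FORM_URL, data={"title": "deep web", "year": "2009"})
soup = BeautifulSoup(response.text, "html.parser")

# The CSS selector stands in for a learned result-page template.
for row in soup.select("table.results tr"):
    cells = [td.get_text(strip=True) for td in row.select("td")]
    if cells:
        print(cells)
```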