918 results for Information interfaces and presentation: Miscellaneous.


Relevance: 100.00%

Abstract:

This study aimed to survey farmers' knowledge and practices on the management of pastures, stocking rates and markets of meat goat-producing enterprises within New South Wales and Queensland, Australia. An interview-based questionnaire was conducted on properties that derived a significant proportion of their income from goats. The survey covered 31 landholders with a total land area of 567 177 ha and a reported total of 160 010 goats. A total of 55% (17/31) of producers were involved in both opportunistic harvesting and commercial goat operations, and 45% (14/31) were specialised seedstock producers. Goats were the most important livestock enterprise on 55% (17/31) of surveyed properties. Stocking rate varied considerably (0.3–9.3 goats/ha) within and across surveyed properties and was found to be negatively associated with property size and positively associated with rainfall. Overall, 81% (25/31) of producers reported that the purpose of running goats on their properties was to target international markets. Producers also cited the importance of targeting markets as a way to increase profitability. Fifty-three percent of producers were located over 600 km from a processing plant, and the high cost of freight can limit the continuity of goats supplied to abattoirs. Fencing was an important issue for goat farmers, with many producers acknowledging that it could add to the capital costs associated with better goat management and production. Producers in the pastoral regions appear to have a low investment in pasture development, and opportunistic goat harvesting appears to be an important source of income.

Relevance: 100.00%

Abstract:

The role of non-neuronal brain cells, called astrocytes, is emerging as crucial in brain function and dysfunction, superseding the neurocentric concept that envisioned glia as passive components. Ion and water channels and calcium signalling, expressed in functional micro- and nanodomains, underpin astrocytes' homeostatic functions, synaptic transmission and neurovascular coupling, acting both locally and globally. In this respect, a major question arises about the mechanism through which astrocytes can control processes across scales. Finally, astrocytes can sense and react to extracellular stimuli (chemical, physical, mechanical, electrical and photonic) at the nanoscale. Given their emerging importance and their sensing properties, my PhD research programme had the general goal of validating nanomaterial, interface and device approaches developed ad hoc to study astrocytes. The results achieved are reported in the form of a collection of papers. Specifically, we demonstrated that i) electrospun nanofibers made of polycaprolactone and polyaniline conductive composites can shape primary astrocytes' morphology without affecting their function; ii) gold-coated silicon nanowire devices enable extracellular recording of unprecedented slow waves in primary differentiated astrocytes; iii) colloidal hydrotalcite films give insight into the cell volume regulation process in differentiated astrocytes and reveal novel cytoskeletal actin dynamics; iv) gold nanoclusters serve as nanoprobes to modulate astrocyte structure and function; v) nanopillars of photoexcitable organic polymer are a potential tool for achieving nanoscale photostimulation of astrocytes. The results were achieved by a multidisciplinary team working with national and international collaborators who are listed and acknowledged in the text.
Collectively, the results show that astrocytes represent a novel opportunity and target for Nanoscience, and that the Nanoglial interface might help to unveil clues to brain function or represent a novel therapeutic approach to treating brain dysfunctions.

Relevance: 100.00%

Abstract:

Bioelectronic interfaces have advanced significantly in recent years, offering potential treatments for vision impairments, spinal cord injuries, and neurodegenerative diseases. However, the classical neurocentric vision has driven technological development toward neurons. Emerging evidence highlights the critical role of glial cells in the nervous system. Among them, astrocytes significantly influence neuronal networks throughout life and are implicated in several neuropathological states. Although they are incapable of firing action potentials, astrocytes communicate through diverse calcium (Ca2+) signalling pathways that are crucial for cognitive functions and the regulation of brain blood flow. Current bioelectronic devices are primarily designed to interface neurons and are unsuitable for studying astrocytes. Graphene, with its unique electrical, mechanical and biocompatibility properties, has emerged as a promising neural interface material. However, its use as an electrode interface to modulate astrocyte functionality remains unexplored. The aim of this PhD work was to exploit graphene oxide (GO)- and reduced GO (rGO)-coated electrodes to control Ca2+ signalling in astrocytes by electrical stimulation. We discovered that distinct Ca2+ dynamics can be evoked in astrocytes, in vitro and in brain slices, depending on the conductive/insulating properties of rGO/GO electrodes. Stimulation by rGO electrodes induces an intracellular Ca2+ response with sharp peaks of oscillations ("P-type"), due exclusively to Ca2+ release from intracellular stores. Conversely, astrocytes stimulated by GO electrodes show a slower and sustained Ca2+ response ("S-type"), largely mediated by external Ca2+ influx through specific ion channels. Astrocytes respond faster than neurons and activate distinct G-protein-coupled receptor intracellular signalling pathways.
We propose a resistive/insulating model, hypothesizing that the different conductivity of the substrate influences the electric field at the cell/electrolyte or cell/material interface, favouring, respectively, Ca2+ release from intracellular stores or extracellular Ca2+ influx. This research provides a simple tool to selectively control distinct Ca2+ signals in brain astrocytes, for use in neuroscience and bioelectronic medicine.

Relevance: 100.00%

Abstract:

Histidines 107 and 109 in the glycine receptor (GlyR) alpha(1) subunit have previously been identified as determinants of the inhibitory zinc-binding site. Based on modeling of the GlyR alpha(1) subunit extracellular domain by homology to the acetylcholine-binding protein crystal structure, we hypothesized that inhibitory zinc is bound within the vestibule lumen at subunit interfaces, where it is ligated by His(107) from one subunit and His(109) from an adjacent subunit. This was tested by co-expressing alpha(1) subunits containing the H107A mutation with alpha(1) subunits containing the H109A mutation. Although sensitivity to zinc inhibition is markedly reduced when either mutation is individually incorporated into all five subunits, the GlyRs formed by the co-expression of H107A mutant subunits with H109A mutant subunits exhibited an inhibitory zinc sensitivity similar to that of the wild-type alpha(1) homomeric GlyR. This constitutes strong evidence that inhibitory zinc is coordinated at the interface between adjacent alpha(1) subunits. No evidence was found for beta subunit involvement in the coordination of inhibitory zinc, indicating that a maximum of two zinc-binding sites per alpha(1)beta receptor is sufficient for maximal zinc inhibition. Our data also show that two zinc-binding sites are sufficient for significant inhibition of alpha(1) homomers. The binding of zinc at the interface between adjacent alpha(1) subunits could restrict intersubunit movements, providing a feasible mechanism for the inhibition of channel activation by zinc.

Relevance: 100.00%

Abstract:

The Web has witnessed an enormous growth in the amount of semantic information published in recent years. This growth has been stimulated to a large extent by the emergence of Linked Data. Although this brings us a big step closer to the vision of a Semantic Web, it also raises new issues, such as the need to deal with information expressed in different natural languages. Indeed, although the Web of Data can contain any kind of information in any language, it still lacks explicit mechanisms to automatically reconcile such information when it is expressed in different languages. This leads to situations in which data expressed in a certain language is not easily accessible to speakers of other languages. The Web of Data shows the potential to be extended to a truly multilingual web, as vocabularies and data can be published in a language-independent fashion, while the associated language-dependent (linguistic) information supporting access across languages can be stored separately. In this sense, the multilingual Web of Data can in our view be realized as a layer of services and resources on top of the existing Linked Data infrastructure, adding i) linguistic information for data and vocabularies in different languages, ii) mappings between data with labels in different languages, and iii) services to dynamically access and traverse Linked Data across different languages. In this article we present this vision of a multilingual Web of Data. We discuss the challenges that need to be addressed to make this vision come true, and examine the role that techniques such as ontology localization, ontology mapping, and cross-lingual ontology-based information access and presentation will play in achieving it. Further, we propose an initial architecture and describe a roadmap that can provide a basis for the implementation of this vision.
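The layered approach described above can be sketched in miniature: language-independent resource identifiers are kept apart from a separately stored linguistic layer of language-tagged labels, and a small lookup service resolves labels across languages. A minimal Python sketch, with all URIs and labels invented for illustration:

```python
# Language-independent data: resource URIs only (identifiers are invented).
resources = {"ex:City_Vienna", "ex:City_London"}

# Separately stored linguistic layer: URI -> {language tag: label}.
labels = {
    "ex:City_Vienna": {"en": "Vienna", "de": "Wien", "es": "Viena"},
    "ex:City_London": {"en": "London", "de": "London", "es": "Londres"},
}

def translate_label(label, source_lang, target_lang):
    """Resolve a label to its language-independent URI, then re-label it
    in the target language; None if no resource carries that label."""
    for uri, langs in labels.items():
        if langs.get(source_lang) == label:
            return langs.get(target_lang)
    return None

print(translate_label("Wien", "de", "en"))  # Vienna
```

A real implementation would operate over RDF (e.g. `rdfs:label` with language tags) rather than in-memory dictionaries, but the separation of language-independent data from the linguistic layer is the same.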

Relevance: 100.00%

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Master in Environmental Engineering, Environmental Management and Systems profile.

Relevance: 100.00%

Abstract:

The profiling of MDMA tablets can be carried out using different sets of characteristics. The first type of measurement performed on MDMA tablets covers physical characteristics (i.e. post-tabletting characteristics). These yield preliminary profiling data that may be valuable in a first stage for investigation purposes. However, organic impurities (i.e. pre-tabletting characteristics) are generally considered to provide more reliable information, particularly for the presentation of evidence in court. This work therefore aimed to evaluate the added value of combining pre-tabletting and post-tabletting characteristics of seized MDMA tablets. In approximately half of the investigated cases, the post-tabletting links were confirmed by organic impurity analyses. In the remaining cases, post-tabletting batches (post-TBs) were divided into several pre-tabletting batches (pre-TBs), supporting the hypothesis that several production batches of MDMA powder (pre-TBs) were used to produce a single post-TB (i.e. tablets having the same shape, diameter, thickness, weight and score, but different organic impurity compositions). In view of the results obtained, the hypotheses are discussed through illustrative examples. In conclusion, both sets of characteristics were found relevant, alone and in combination. Together they provide distinct information about illicit MDMA production and trafficking.
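The two-level grouping implied above (physical traits first, impurity profiles second) can be illustrated with a toy example. All tablet records and values below are invented; only the grouping logic mirrors the study's post-TB/pre-TB distinction:

```python
from collections import defaultdict

# Hypothetical tablet records: physical (post-tabletting) traits and a
# simplified organic impurity profile (all values invented for illustration).
tablets = [
    {"id": 1, "shape": "round", "diameter": 8.0, "weight": 250, "impurities": "A"},
    {"id": 2, "shape": "round", "diameter": 8.0, "weight": 250, "impurities": "A"},
    {"id": 3, "shape": "round", "diameter": 8.0, "weight": 250, "impurities": "B"},
    {"id": 4, "shape": "oval",  "diameter": 9.5, "weight": 300, "impurities": "C"},
]

def post_tabletting_batches(tablets):
    """Group tablets sharing the same physical characteristics (post-TBs)."""
    groups = defaultdict(list)
    for t in tablets:
        groups[(t["shape"], t["diameter"], t["weight"])].append(t)
    return list(groups.values())

def pre_tabletting_batches(post_tb):
    """Subdivide one post-TB by impurity profile (pre-TBs)."""
    groups = defaultdict(list)
    for t in post_tb:
        groups[t["impurities"]].append(t)
    return list(groups.values())

post_tbs = post_tabletting_batches(tablets)
# The first post-TB (tablets 1-3) splits into two pre-TBs, mirroring the
# finding that one post-TB can stem from several MDMA powder batches.
print([len(pre_tabletting_batches(tb)) for tb in post_tbs])  # [2, 1]
```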

Relevance: 100.00%

Abstract:

Background: In spite of the relapsing nature of inflammatory bowel diseases (IBD), on average 40% of IBD patients are non-adherent to treatment. On the other hand, they are often actively seeking information on their disease. The relationship between information-seeking behaviour and adherence to treatment is poorly documented. The main aim of this study was to examine this association among IBD patients. Methods: We used data from the Swiss IBD cohort study. Baseline data included questions on adherence to ongoing treatments. A survey was conducted in October 2009 to assess the information sources and themes searched by patients. Crude odds ratios (OR) and 95% CIs were calculated for the association between adherence and information seeking. Adjustment for potential confounders and the main known risk factors was performed using multivariate logistic regression. Differences in the proportions of information sources and themes were compared between adherent and non-adherent patients. Results: The number of eligible patients was 488. Nineteen percent (N = 99) were non-adherent to treatment and one third (N = 159) were active information seekers. The crude odds of non-adherence were 69% higher among information seekers than among non-seekers (OR = 1.69; 95% CI 0.99–2.87). The adjusted OR for non-adherence was 2.39 (95% CI 1.32–4.34) for information seekers compared to non-seekers. Family doctors were consulted 15.2% more often (p = 0.019) among patients who were adherent to treatment than among those who were not, as were books and TV (+13.1%; p = 0.048). No difference was observed for the internet or gastroenterologists as sources of information. Themes of information linked to tips for disease management were searched 14.2% more often among non-adherent patients (p = 0.028) than among adherent ones. No difference was observed for the other themes (research and development on IBD, therapies, basic information on the disease, sharing of patients' experiences, miscellaneous).
Conclusions: Active information seeking was shown to be strongly associated with non-adherence to treatment in a population of IBD patients in Switzerland. Surprisingly, themes related to therapies were not those on which non-adherent patients particularly focused. Indeed, management of symptoms and everyday life with the disease seemed to be the most pressing information concerns of patients. The results suggest that the family doctor plays an important role in the multidisciplinary care approach needed for IBD patients.
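For readers unfamiliar with the reported measures, a crude odds ratio and its Woolf 95% confidence interval can be computed from a 2x2 table as follows. The cell counts below are hypothetical, chosen only to be compatible with the reported margins (99 non-adherent patients, 159 seekers, 488 total); the study's actual table is not given in the abstract:

```python
from math import exp, log, sqrt

def odds_ratio(a, b, c, d):
    """Crude odds ratio for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    return (a * d) / (b * c)

def or_confint_95(a, b, c, d):
    """Woolf 95% CI, computed on the log-odds scale."""
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    point = log(odds_ratio(a, b, c, d))
    return exp(point - 1.96 * se), exp(point + 1.96 * se)

# Hypothetical cell counts (invented for illustration):
# non-adherent seekers, non-adherent non-seekers,
# adherent seekers, adherent non-seekers.
a, b, c, d = 42, 57, 117, 272
print(round(odds_ratio(a, b, c, d), 2))  # 1.71
```

With these invented counts the crude OR comes out near the reported 1.69; the adjusted OR in the study additionally controls for confounders via logistic regression.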

Relevance: 100.00%

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web, and hence web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriad databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, a long time ago by web standards, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on studies of deep web sites in English. One can therefore expect that findings from these surveys may be biased, especially owing to the steady increase in non-English web content.
In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from this national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Owing to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions rarely hold, mainly because of the large scale of the deep Web: for any given domain of interest, there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user. This is all the more so as the interfaces of conventional search engines are also web forms.
At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and, for complex queries, infeasible; yet such queries are essential for many web searches, especially in the area of e-commerce. The automation of querying and retrieving data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
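A data model for representing search interfaces might look, in outline, like the following sketch: fields carry both their HTML names and the human-readable labels extracted from the page, and a query is built by mapping label-keyed user values onto field names. Class and field names here are invented; the actual model in the thesis also covers client-side scripts and result-page representations:

```python
from dataclasses import dataclass, field
from urllib.parse import urlencode

@dataclass
class FormField:
    name: str          # HTML input name, as submitted to the server
    label: str         # human-readable label extracted from the page
    default: str = ""  # pre-filled value, if any

@dataclass
class SearchInterface:
    action_url: str
    method: str = "GET"
    fields: list = field(default_factory=list)

    def build_query(self, values):
        """Map label-keyed user values onto field names; keep defaults
        for fields the user did not fill in."""
        params = {f.name: values.get(f.label, f.default) for f in self.fields}
        return self.action_url + "?" + urlencode(params)

# A hypothetical two-field search interface:
iface = SearchInterface(
    action_url="http://example.org/search",
    fields=[FormField("q", "Title"), FormField("yr", "Year", "2010")],
)
print(iface.build_query({"Title": "deep web"}))
# http://example.org/search?q=deep+web&yr=2010
```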

Relevance: 100.00%

Abstract:

Efficient problem solving in cellular networks is important for enhancing network performance and reliability. Analysis of calls and packet-switched sessions at the protocol level between network elements is an important part of this process: it can provide very detailed information about error situations that would otherwise be difficult to recognise. In this thesis we seek solutions for monitoring GPRS/EDGE sessions in two specific interfaces simultaneously, in such a way that all information important to users is presented in an easily understandable form. The thesis focuses on the Abis and AGPRS interfaces of the GSM radio network and introduces a solution for managing the correlation between these interfaces by using signalling messages and common parameters as linking elements. Finally, this thesis presents an implementation of a GPRS/EDGE session monitoring application for the Abis and AGPRS interfaces and evaluates its benefits to end users. The application is implemented as part of a Windows-based 3G/GSM network analyser.
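The correlation approach described, using common parameters as linking elements, can be sketched as follows. The TLLI (Temporary Logical Link Identifier) is a real GPRS identifier visible on both sides of the link; the message records and values below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical captures from the two interfaces (message names and
# timestamps invented for illustration).
abis_msgs = [
    {"tlli": 0xC0000001, "msg": "UL-UNITDATA", "t": 1.00},
    {"tlli": 0xC0000002, "msg": "UL-UNITDATA", "t": 1.05},
]
agprs_msgs = [
    {"tlli": 0xC0000001, "msg": "LLC-PDU", "t": 1.01},
    {"tlli": 0xC0000001, "msg": "LLC-PDU", "t": 1.20},
]

def correlate(a_side, b_side, key="tlli"):
    """Join messages from both interfaces into per-identifier sessions,
    ordered by capture time, so one session can be viewed end to end."""
    sessions = defaultdict(list)
    for m in a_side + b_side:
        sessions[m[key]].append(m)
    for msgs in sessions.values():
        msgs.sort(key=lambda m: m["t"])
    return dict(sessions)

sessions = correlate(abis_msgs, agprs_msgs)
print(len(sessions[0xC0000001]))  # 3 messages seen for this TLLI across both sides
```

In practice the linking parameter may change over a session's lifetime, so a real correlator must also track identifier reassignments signalled on either interface.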

Relevance: 100.00%

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance: 100.00%

Abstract:

The Virtual Lightbox for Museums and Archives (VLMA) is a tool for collecting and reusing, in a structured fashion, the online contents of museum and archive datasets. It is not restricted to datasets with visual components, although VLMA includes a lightbox service that enables the comparison and manipulation of visual information. With VLMA, one can browse and search collections, construct personal collections, annotate them, export these collections to XML or Impress (OpenOffice) presentation format, and share collections with other VLMA users. VLMA was piloted as an e-learning tool as part of JISC's e-learning focus in its first phase (2004-2005); in its second phase (2005-2006) it incorporated new partner collections while improving and expanding its interfaces and services. This paper concerns its development as a research and teaching tool, especially for teachers using museum collections, and discusses the recent development of VLMA.

Relevance: 100.00%

Abstract:

This paper uses Shannon's information theory to give a quantitative definition of information flow in systems that transform inputs to outputs. For deterministic systems, the definition is shown to specialise to a simpler form when the information source and the known inputs jointly determine the outputs. For this special case, the definition is related to the classical security condition of non-interference, and an equivalence is established between non-interference and independence of random variables. Quantitative information flow for deterministic systems is then presented in relational form. With this presentation, it is shown how relational parametricity can be used to derive upper and lower bounds on information flows through families of functions defined in the second-order lambda calculus.
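For a deterministic system observed through its output alone, a Shannon-style flow measure reduces, for a uniformly distributed secret input, to the entropy of the output distribution. The sketch below is a generic illustration of that idea, not the paper's exact formalisation:

```python
from collections import Counter
from math import log2

def leakage(f, inputs):
    """H(f(X)) for X uniform over `inputs`: the flow from input to
    output, in bits, for the deterministic system f."""
    counts = Counter(f(x) for x in inputs)
    n = len(inputs)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A parity check on a 2-bit secret leaks exactly 1 bit:
print(leakage(lambda x: x % 2, range(4)))  # 1.0

# A constant function is non-interfering: the output is independent of
# the input, so the flow is zero bits.
assert leakage(lambda x: 0, range(4)) == 0
```

The two extremes bracket the general case: the identity function on a 2-bit secret leaks the full 2 bits, matching the intuition that non-interference corresponds to zero flow and total disclosure to the input entropy.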

Relevance: 100.00%

Abstract:

HydroShare is an online, collaborative system being developed for the open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop, or perform analyses in a distributed computing environment that may include grid, cloud or high-performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about, and collaboration around, hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These include the different data types used in the hydrology community, as well as models and workflows that require metadata on execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing, and analyzing geographic datasets. The integrated Rule-Oriented Data System (iRODS) is used to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows.
This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.
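The separation of system and science metadata in the Resource Data Model can be pictured with a small sketch. All field names below are illustrative, not HydroShare's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class SystemMetadata:
    """Elements common to every resource, managed by the system."""
    resource_id: str
    owner: str
    created: str            # ISO 8601 timestamp
    resource_type: str

@dataclass
class ScienceMetadata:
    """Elements describing the science content; the `extra` dict stands
    in for elements specific to each resource type."""
    title: str
    keywords: list = field(default_factory=list)
    extra: dict = field(default_factory=dict)

@dataclass
class Resource:
    system: SystemMetadata
    science: ScienceMetadata
    content_files: list = field(default_factory=list)

r = Resource(
    SystemMetadata("res-001", "alice", "2014-06-01T00:00:00Z", "TimeSeries"),
    ScienceMetadata("Streamflow at gauge X", ["hydrology", "streamflow"],
                    {"variable": "discharge", "units": "m3/s"}),
    ["site_x_discharge.csv"],
)
print(r.system.resource_type, r.science.extra["units"])
```

Keeping the two metadata blocks separate lets the system evolve its catalog and rule-based processing independently of the type-specific science elements.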

Relevance: 100.00%

Abstract:

The Short-term Water Information and Forecasting Tools (SWIFT) is a suite of tools for flood and short-term streamflow forecasting, consisting of a collection of hydrologic model components and utilities. Catchments are modelled using conceptual subareas and a node-link structure for channel routing. The tools comprise modules for calibration, model state updating, output error correction, ensemble runs and data assimilation. Given the combinatorial nature of the modelling experiments and the sub-daily time steps typically used for simulations, the volume of model configurations and time series data is substantial, and its management is not trivial. SWIFT is currently used mostly for research purposes but has also been used operationally, with intersecting but significantly different requirements. Early versions of SWIFT used mostly ad hoc text files handled via Fortran code, with limited use of netCDF for time series data. The configuration and data handling modules have since been redesigned. The model configuration now follows a design where the data model is decoupled from the on-disk persistence mechanism. For research purposes the preferred on-disk format is JSON, to leverage numerous software libraries in a variety of languages, while retaining the legacy option of custom tab-separated text formats when that is the researcher's preferred access arrangement. By decoupling the data model from data persistence, it is much easier to use, for instance, relational databases interchangeably, providing stricter provenance and audit-trail capabilities in an operational flood forecasting context. For the time series data, given the volume and required throughput, text-based formats are usually inadequate. A schema derived from CF conventions has been designed to handle time series efficiently for SWIFT.
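The decoupling of the configuration data model from on-disk persistence can be sketched as follows: the model objects expose a format-neutral view of themselves, and interchangeable backends render that view as JSON or as legacy tab-separated text. All class and field names are invented for illustration:

```python
import json

class SubareaConfig:
    """Format-neutral model configuration object: it knows nothing
    about how it is persisted on disk."""
    def __init__(self, name, area_km2):
        self.name = name
        self.area_km2 = area_km2
    def to_dict(self):
        return {"name": self.name, "area_km2": self.area_km2}

class JsonPersister:
    """Preferred research format: JSON, readable from many languages."""
    def dumps(self, configs):
        return json.dumps([c.to_dict() for c in configs])

class TsvPersister:
    """Legacy option: custom tab-separated text."""
    def dumps(self, configs):
        return "\n".join(f"{c.name}\t{c.area_km2}" for c in configs)

configs = [SubareaConfig("upper", 120.5), SubareaConfig("lower", 80.0)]
for persister in (JsonPersister(), TsvPersister()):
    print(persister.dumps(configs))
```

A relational-database persister would slot in behind the same `dumps`-style interface, which is what makes the stricter provenance and audit-trail requirements of operational use tractable without touching the data model.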