972 results for integrated web platform
Abstract:
Background: Semantic Web technologies have been widely applied in the life sciences, for example by data providers such as OpenLifeData and through web services frameworks such as SADI. The recently reported OpenLifeData2SADI project offers access to the vast OpenLifeData data store through SADI services. Findings: This article describes how to merge data retrieved from OpenLifeData2SADI with other SADI services using the Galaxy bioinformatics analysis platform, thus making this semantic data more amenable to complex analyses. This is demonstrated with a working example, which is made distributable and reproducible through a Docker image that includes SADI tools, along with the data and workflows that constitute the demonstration. Conclusions: The combination of Galaxy and Docker offers a solution for faithfully reproducing and sharing complex data-retrieval and analysis workflows based on the SADI Semantic Web service design patterns.
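At the level of plumbing, a SADI service is essentially an HTTP endpoint that consumes RDF describing input instances and returns RDF describing outputs. The Python sketch below illustrates that exchange in outline only: the service URL, class and instance names are hypothetical placeholders rather than actual OpenLifeData2SADI identifiers, and it assumes the requests and rdflib libraries are available.

    import requests
    from rdflib import Graph

    # Hypothetical SADI service endpoint -- a real URL would be discovered
    # from the OpenLifeData2SADI registry; this one is a placeholder.
    SERVICE_URL = "http://example.org/sadi/getProteinInfo"

    # Minimal RDF (Turtle) input naming the resource(s) to annotate.
    input_ttl = """
    @prefix ex: <http://example.org/> .
    ex:P12345 a ex:ProteinRecord .
    """

    # SADI services consume and produce RDF over a plain HTTP POST;
    # the content type accepted may differ from service to service.
    resp = requests.post(
        SERVICE_URL,
        data=input_ttl.encode("utf-8"),
        headers={"Content-Type": "text/turtle", "Accept": "text/turtle"},
        timeout=60,
    )
    resp.raise_for_status()

    # Parse the returned RDF and inspect the statements the service added.
    g = Graph()
    g.parse(data=resp.text, format="turtle")
    for s, p, o in g:
        print(s, p, o)

In the setup described here such calls would not be issued by hand; they are driven by the SADI tools bundled for Galaxy inside the Docker image.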
Abstract:
The Mouse Tumor Biology (MTB) Database serves as a curated, integrated resource for information about tumor genetics and pathology in genetically defined strains of mice (i.e., inbred, transgenic and targeted mutation strains). Sources of information for the database include the published scientific literature and direct data submissions by the scientific community. Researchers access MTB using Web-based query forms and can use the database to answer such questions as ‘What tumors have been reported in transgenic mice created on a C57BL/6J background?’, ‘What tumors in mice are associated with mutations in the Trp53 gene?’ and ‘What pathology images are available for tumors of the mammary gland regardless of genetic background?’. MTB has been available on the Web since 1998 from the Mouse Genome Informatics web site (http://www.informatics.jax.org). We have recently implemented a number of enhancements to MTB including new query options, redesigned query forms and results pages for pathology and genetic data, and the addition of an electronic data submission and annotation tool for pathology data.
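For a rough sense of what such a query does under the hood, the sketch below answers the Trp53 question against a tiny, invented relational schema; the table and column names (and the sample rows) are illustrative only and do not reflect MTB's actual database design.

    import sqlite3

    # Hypothetical, highly simplified schema for illustration only.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE tumor (id INTEGER PRIMARY KEY, tumor_type TEXT, strain TEXT);
    CREATE TABLE tumor_mutation (tumor_id INTEGER, gene_symbol TEXT);
    INSERT INTO tumor VALUES (1, 'lymphoma', 'C57BL/6J'), (2, 'osteosarcoma', '129/Sv');
    INSERT INTO tumor_mutation VALUES (1, 'Trp53'), (2, 'Trp53');
    """)

    # 'What tumors in mice are associated with mutations in the Trp53 gene?'
    rows = conn.execute("""
        SELECT DISTINCT t.tumor_type, t.strain
        FROM tumor t
        JOIN tumor_mutation m ON m.tumor_id = t.id
        WHERE m.gene_symbol = 'Trp53'
    """).fetchall()
    print(rows)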
Abstract:
The BioKnowledge Library is a relational database and web site (http://www.proteome.com) composed of protein-specific information collected from the scientific literature. Each Protein Report on the web site summarizes and displays published information about a single protein, including its biochemical function, role in the cell and in the whole organism, localization, mutant phenotype and genetic interactions, regulation, domains and motifs, interactions with other proteins and other relevant data. This report describes four species-specific volumes of the BioKnowledge Library, concerned with the model organisms Saccharomyces cerevisiae (YPD), Schizosaccharomyces pombe (PombePD) and Caenorhabditis elegans (WormPD), and with the fungal pathogen Candida albicans (CalPD™). Protein Reports of each species are unified in format, easily searchable and extensively cross-referenced between species. The relevance of these comprehensively curated resources to analysis of proteins in other species is discussed, and is illustrated by a survey of model organism proteins that have similarity to human proteins involved in disease.
Abstract:
In order to evaluate taxonomic and environmental controls on the preservation pattern of brachiopod accumulations, sedimentologic and taphonomic data have been integrated with those inferred from the structure of brachiopod accumulations from the easternmost Lower Jurassic Subbetic deposits in Spain. Two brachiopod communities (the Praesphaeroidothyris and Securina communities) were distinguished, both showing a mainly free-lying mode of life in soft-bottom habitats. Three taphofacies are discriminated based on the proportions of disarticulation, fragmentation, packing, and shell filling. Taphofacies 1 is represented by finely fragmented, dispersed brachiopod shells in wackestone beds. Taphofacies 2 is spatially restricted to small lenses where shells are poorly fragmented, rarely disarticulated, usually void-filled, and highly packed. Taphofacies 3 is represented by mud- or cement-filled, loosely packed, articulated brachiopods forming large pocket-like structures. Temporal and spatial averaging were minimally involved in taphofacies 2 and 3. This patchy preservation is interpreted as recording the original patchiness of the brachiopod communities on the sea floor. The origin of the shell-rich taphofacies (2 and 3) is related to rapid burial during episodic storm activity, while the shell-poor taphofacies 1 records background conditions. The nature and comparative diversity of these taphofacies underscore the importance of rapid burial for shell-bed preservation. Differences in preservation between taphofacies 2 and 3 are mainly related to environmental criteria, most importantly storm energy and water depth. In contrast, the taxon-specific pattern of the communities is a subordinate control, accounting only for minor within-taphofacies differences in preservation.
Abstract:
This study establishes a bridge between Web 2.0 and crowdfunding. Using a dataset of campaigns from the Kickstarter platform, it shows that there is a relationship between the creation of content and the money collected. In addition, the study explores how well society understands these matters: a survey was conducted at a higher education institution to evaluate awareness of topics such as crowdfunding and Web 2.0. The study started with a literature review supporting this theory, followed by two case studies: one builds a model that explains the relationship between Web 2.0 and crowdfunding campaigns, and the other examines society's awareness of crowdfunding and Web 2.0. The conclusions show that these subjects are still taking their first steps and that there is a relationship between some kinds of content creation through Web 2.0 and the money collected in a crowdfunding campaign.
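The claimed link between content creation and money collected is, in essence, a regression question. The sketch below shows one way such a check could be run; the variable names and the placeholder numbers are invented for illustration and are not drawn from the Kickstarter dataset used in the study.

    import numpy as np

    # Placeholder per-campaign data: counts of Web 2.0 content
    # (e.g. project updates, comments) and money pledged (USD).
    updates  = np.array([ 2,  5,  0,  9,  4,  12,  1,  7])
    comments = np.array([10, 40,  3, 80, 25, 150,  5, 60])
    pledged  = np.array([500, 2200, 150, 9000, 1800, 15000, 300, 5000])

    # Ordinary least squares of log-pledged on the content measures.
    X = np.column_stack([np.ones_like(updates), updates, comments])
    y = np.log1p(pledged)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("intercept, b_updates, b_comments:", coef)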
Abstract:
Despite the increased offering of online communication channels to support web-based retail systems, there is limited marketing research that investigates how these channels act singly, or in combination with offline channels, to influence an individual's intention to purchase online. If the marketer's strategy is to encourage online transactions, this requires a focus on consumer acceptance of the web-based transaction technology, rather than the purchase of the products per se. The exploratory study reported in this paper examines normative influences from referent groups in an individual's online and offline social communication networks that might affect their intention to use online transaction facilities. The findings suggest that for non-adopters, there is no normative influence from referents in either network. For adopters, one online and one offline referent norm positively influenced this group's intentions to use online transaction facilities. The implications of these findings are discussed together with future research directions.
Abstract:
Roiling financial markets, constantly changing tax law and the increasing complexity of planning transactions are increasing the demand for aggregated family wealth management (FWM) services. However, the current trend in developing such advisory systems focuses mainly on the financial or investment side. In addition, existing systems lack flexibility and are hard to integrate with other organizational information systems, such as CRM systems. In this paper, a novel architecture for Web-service-agent-based FWM systems is proposed. Multiple intelligent agents are wrapped as Web services and can communicate with each other via Web service protocols. On the one hand, these agents can collaborate with each other to provide comprehensive FWM advice. On the other hand, each service can work independently to achieve its own tasks. A prototype system for supporting financial advice is also presented to demonstrate the advantages of the proposed Web-service-agent-based FWM system architecture.
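A minimal way to picture an "intelligent agent wrapped as a Web service" is a small HTTP endpoint that receives a request, applies its own piece of advisory logic, and answers in a machine-readable form that other agents can call over the same protocol. The sketch below uses only the Python standard library; the endpoint path, message fields and the trivial rule are invented for illustration, since the paper's prototype is not specified at this level of detail.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class TaxAdvisorAgent(BaseHTTPRequestHandler):
        """One agent exposed as a Web service; peers POST JSON to /advise."""

        def do_POST(self):
            if self.path != "/advise":
                self.send_error(404)
                return
            length = int(self.headers.get("Content-Length", 0))
            request = json.loads(self.rfile.read(length) or b"{}")
            # Trivial stand-in for the agent's advisory reasoning.
            income = request.get("annual_income", 0)
            advice = {"agent": "tax",
                      "suggestion": "defer-income" if income > 250_000
                                    else "standard-filing"}
            body = json.dumps(advice).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8081), TaxAdvisorAgent).serve_forever()

A coordinating agent could call several such services (tax, investment, insurance) and aggregate their responses into a combined FWM recommendation, matching the collaborate-or-work-independently split described above.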
Abstract:
E-atmospherics have motivated an emerging body of research which reports that both virtual layouts and atmospherics encourage consumers to modify their shopping habits. While the literature has analyzed mainly the functional aspects of e-atmospherics, little has been done to link their characteristics to social (co-)creation. This paper focuses on the anatomy of the social dimension of e-atmospherics, which includes factors such as the aesthetic design of space, the influence of visual cues, the interpretation of shopping as a social activity and the meaning of appropriate interactivity. We argue that web designers are social agents who interact within intangible social reference sets, restricted by the social standards, values, beliefs, status and duties embedded within their local geographies. We review the current understanding of the importance and voluntary integration of social cues displayed by web designers from a mature market and an emerging market, and provide analysis-based recommendations towards the development of an integrated e-social atmospheric framework. We report findings from telephone interviews with an exploratory set of 10 web designers in each country, which allow us to re-interpret the web designers' reality regarding social e-atmospherics. We contend that by comprehending (before any consumer input) social capital, daily micro-practices, habits and routines, a deeper understanding of the preparatory and initial stages of social e-atmospherics and their expected functions will be acquired.
Abstract:
When constructing and using environmental models, it is typical that many of the inputs to the models will not be known perfectly. In some cases, it will be possible to make observations, or occasionally to use physics-based uncertainty propagation, to ascertain the uncertainty on these inputs. However, such observations are often either not available or not even possible, and another approach to characterising the uncertainty on the inputs must be sought. Even when observations are available, if the analysis is being carried out within a Bayesian framework then prior distributions will have to be specified. One option for gathering, or at least estimating, this information is to employ expert elicitation. Expert elicitation is well studied within statistics and psychology and involves the assessment of the beliefs of a group of experts about an uncertain quantity (for example, an input or parameter within a model), typically in terms of obtaining a probability distribution. One of the challenges in expert elicitation is to minimise the biases that might enter into the judgements made by the individual experts, and then to come to a consensus decision within the group of experts. Effort is made in the elicitation exercise to prevent biases clouding the judgements through well-devised questioning schemes. It is also important that, when reaching a consensus, the experts are exposed to the knowledge of the others in the group. Within the FP7 UncertWeb project (http://www.uncertweb.org/), there is a requirement to build a Web-based tool for expert elicitation. In this paper, we discuss some of the issues of building a Web-based elicitation system, covering both the technological aspects and the statistical and scientific issues. In particular, we demonstrate two tools: a Web-based system for the elicitation of continuous random variables and a system designed to elicit uncertainty about categorical random variables in the setting of landcover classification uncertainty. The first of these examples is a generic tool developed to elicit uncertainty about univariate continuous random variables. It is designed to be used within an application context and extends the existing SHELF method, adding a web interface and access to metadata. The tool is developed so that it can be readily integrated with environmental models exposed as web services. The second example was developed for the TREES-3 initiative, which monitors tropical landcover change through ground-truthing at confluence points. It allows experts to validate the accuracy of automated landcover classifications using site-specific imagery and local knowledge. Experts may provide uncertainty information at various levels: from a general rating of their confidence in a site validation to a numerical ranking of the possible landcover types within a segment. A key challenge in the web-based setting is the design of the user interface and the method of interaction between the problem owner and the problem experts. We show the workflow of the elicitation tool, and show how we can represent the final elicited distributions and confusion matrices using UncertML, ready for integration into uncertainty-enabled workflows. We also show how the metadata associated with the elicitation exercise is captured and can be referenced from the elicited result, providing crucial lineage information and thus traceability in the decision making process.
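For the continuous-variable tool, the statistical heart of a SHELF-style elicitation is fitting a parametric distribution to the judgements an expert provides, typically a small set of quantiles. The sketch below shows that single step in isolation, assuming SciPy is available; the quantile values are invented, and a normal distribution is used purely as an example of the families such a tool might offer.

    import numpy as np
    from scipy import stats, optimize

    # Invented expert judgements: 5th, 50th and 95th percentiles of the quantity.
    probs     = np.array([0.05, 0.50, 0.95])
    quantiles = np.array([12.0, 20.0, 31.0])

    def loss(params):
        """Squared distance between fitted and elicited quantiles."""
        mu, sigma = params
        if sigma <= 0:
            return np.inf
        return np.sum((stats.norm.ppf(probs, loc=mu, scale=sigma) - quantiles) ** 2)

    # Least-squares fit of a normal distribution to the elicited quantiles.
    res = optimize.minimize(loss, x0=[20.0, 5.0], method="Nelder-Mead")
    mu, sigma = res.x
    print(f"fitted normal: mean={mu:.2f}, sd={sigma:.2f}")

The fitted parameters are what would subsequently be serialised (in UncertWeb, as an UncertML-encoded distribution) together with the metadata describing the elicitation exercise.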
Abstract:
INTAMAP is a Web Processing Service for the automatic spatial interpolation of measured point data. Requirements were (i) using open standards for spatial data such as those developed in the context of the Open Geospatial Consortium (OGC), (ii) using a suitable environment for statistical modelling and computation, and (iii) producing an integrated, open-source solution. The system couples an open-source Web Processing Service (developed by 52°North), accepting data in the form of standardised XML documents (conforming to the OGC Observations and Measurements standard), with a computing back-end realised in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a markup language designed to encode uncertain data. Automatic interpolation needs to be useful for a wide range of applications, and the algorithms have been designed to cope with anisotropy, extreme values, and data with known error distributions. Besides a fully automatic mode, the system can be used with different levels of user control over the interpolation process.
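The statistical task the service automates is geostatistical prediction at unobserved locations together with a description of the interpolation error. The ordinary-kriging sketch below, with a fixed exponential variogram and invented observations, is a much-simplified local stand-in for what INTAMAP does automatically in its R back-end behind the Web Processing Service.

    import numpy as np

    def exp_variogram(h, nugget=0.0, sill=1.0, rng=10.0):
        """Exponential variogram model (all parameters fixed and illustrative)."""
        return nugget + (sill - nugget) * (1.0 - np.exp(-h / rng))

    def ordinary_kriging(xy, z, xy0):
        """Predict z at location xy0 from observations (xy, z); return (mean, variance)."""
        n = len(z)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = exp_variogram(d)
        A[n, n] = 0.0
        b = np.append(exp_variogram(np.linalg.norm(xy - xy0, axis=-1)), 1.0)
        w = np.linalg.solve(A, b)      # n kriging weights plus Lagrange multiplier
        return w[:n] @ z, w @ b        # prediction and kriging variance

    obs_xy = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 6.0], [8.0, 7.0]])
    obs_z  = np.array([1.2, 0.7, 1.9, 0.4])
    print(ordinary_kriging(obs_xy, obs_z, np.array([4.0, 4.0])))

The kriging variance returned here plays the role of the interpolation-error distribution that INTAMAP reports via UncertML, except that the real system also fits the variogram and handles anisotropy and extreme values automatically.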
Abstract:
Web-based distributed modelling architectures are gaining increasing recognition as potentially useful tools for building holistic environmental models, combining individual components in complex workflows. However, existing web-based modelling frameworks currently offer no support for managing uncertainty. On the other hand, the rich array of modelling frameworks and simulation tools which support uncertainty propagation in complex and chained models typically lack the benefits of web-based solutions such as ready publication, discoverability and easy access. In this article we describe the developments within the UncertWeb project which are designed to provide uncertainty support in the context of the proposed ‘Model Web’. We give an overview of uncertainty in modelling, review uncertainty management in existing modelling frameworks, and consider the semantic and interoperability issues raised by integrated modelling. We describe the scope and architecture required to support uncertainty management as developed in UncertWeb. This includes tools which support elicitation, aggregation/disaggregation, visualisation and uncertainty/sensitivity analysis. We conclude by highlighting areas that require further research and development in UncertWeb, such as model calibration and inference within complex environmental models.
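The kind of uncertainty propagation UncertWeb targets can be pictured as a Monte Carlo pass through a chain of model components, where the uncertain input is represented by a distribution rather than a single number. The toy components and numbers below are invented for illustration; in the Model Web each step would be a remote, web-exposed service exchanging uncertainty-annotated (e.g. UncertML) documents rather than a local function call.

    import numpy as np

    rng = np.random.default_rng(42)

    # Uncertain input: daily rainfall (mm), described by a distribution.
    rainfall = rng.gamma(shape=2.0, scale=5.0, size=10_000)

    def runoff_model(rain_mm):
        """Toy component 1: rainfall -> runoff (mm)."""
        return np.maximum(0.0, 0.6 * rain_mm - 2.0)

    def nutrient_model(runoff_mm):
        """Toy component 2: runoff -> nutrient load (kg/ha)."""
        return 0.05 * runoff_mm ** 1.2

    # Monte Carlo propagation of the input uncertainty through the chain.
    load = nutrient_model(runoff_model(rainfall))
    print(f"nutrient load: mean={load.mean():.3f}, "
          f"95% interval=({np.percentile(load, 2.5):.3f}, "
          f"{np.percentile(load, 97.5):.3f})")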