830 results for blended workflow
Abstract:
The service-oriented approach to performing distributed scientific research is potentially very powerful but is not yet widely used in many scientific fields. This is partly due to the technical difficulties involved in creating services and workflows and the inefficiency of many workflow systems with regard to handling large datasets. We present the Styx Grid Service, a simple system that wraps command-line programs and allows them to be run over the Internet exactly as if they were local programs. Styx Grid Services are very easy to create and use and can be composed into powerful workflows with simple shell scripts or more sophisticated graphical tools. An important feature of the system is that data can be streamed directly from service to service, significantly increasing the efficiency of workflows that use large data volumes. The status and progress of Styx Grid Services can be monitored asynchronously using a mechanism that places very few demands on firewalls. We show how Styx Grid Services can interoperate with Web Services and WS-Resources using suitable adapters.
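To make the streaming idea concrete, here is a minimal sketch in Python (not the actual Styx Grid Service API) of how two wrapped command-line programs can be chained so that data flows directly from one to the next without intermediate files; the program names `extract` and `plot` are hypothetical placeholders.

```python
import subprocess

# Hypothetical wrapped services: "extract" writes data to stdout and
# "plot" reads it from stdin; connecting the two streams the dataset
# directly between the steps instead of staging it on disk.
with open("out.png", "wb") as sink:
    extract = subprocess.Popen(["extract", "--region", "tropics"],
                               stdout=subprocess.PIPE)
    plot = subprocess.Popen(["plot", "--format", "png"],
                            stdin=extract.stdout, stdout=sink)
    extract.stdout.close()  # let "extract" see a broken pipe if "plot" exits early
    plot.wait()
```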
Abstract:
The reuse of treated wastewater (reclaimed water) for irrigation is a valuable strategy to maximise available water resources, but the often marginal quality of the water can present agricultural challenges. Semi-structured interviews were held with Jordanian farmers to explore how they perceive the quality of reclaimed water. Of the 11 interviewed farmers who irrigate directly with reclaimed water near treatment plants, 10 described reclaimed water either positively or neutrally. In contrast, 27 of the 39 farmers who use reclaimed water indirectly, after it is blended with fresh water, viewed the resource negatively, although 23 of the indirect-reuse farmers also recognised the nutrient benefits. Farmer perception of reclaimed water may be a function of its quality, but consideration should also be given to farmers’ capacity to manage the agricultural challenges associated with reclaimed water (salinity, irrigation system damage, marketing of produce), their actual and perceived capacity to control where and when reclaimed water is used, and their capacity to influence the quality of the water delivered to the farm.
Abstract:
Quantitation is an inherent requirement in comparative proteomics, and plant proteomics is no exception. Quantitative proteomics places high demands on the experimental workflow, requiring thorough design and often a complex multi-step structure. It has to include sufficient numbers of biological and technical replicates and methods able to deliver a quantitative signal read-out. Quantitative plant proteomics in particular poses many additional challenges, but because of the nature of plants it also offers some potential advantages. In general, the analysis of plants has been less prominent in proteomics. Low protein concentration, difficulties in protein extraction, genome multiploidy, high Rubisco abundance in green tissue, and the absence of well-annotated, complete genome sequences are some of the main challenges in plant proteomics. However, the latter is now changing, with several genomes emerging for model plants and crops such as potato, tomato, soybean, rice, maize and barley. This review discusses the current status of quantitative plant proteomics (MS-based and non-MS-based) and its challenges and potential. Both relative and absolute quantitation methods in plant proteomics, from DIGE to MS-based analysis after isotope labeling and label-free quantitation, are described and illustrated by published studies. In particular, we describe plant-specific quantitative methods, such as metabolic labeling methods that can take full advantage of plant metabolism and culture practices, and discuss other potential advantages and challenges that may arise from the unique properties of plants.
Abstract:
Metabolic stable isotope labeling is increasingly employed for accurate protein (and metabolite) quantitation using mass spectrometry (MS). It provides sample-specific isotopologues that can be used to facilitate comparative analysis of two or more samples. Stable Isotope Labeling by Amino acids in Cell culture (SILAC) has been used for almost a decade in proteomic research, and analytical software solutions have been established that provide an easy and integrated workflow for elucidating sample abundance ratios for most MS data formats. While SILAC is a discrete labeling method using specific amino acids, global metabolic stable isotope labeling using isotopes such as ¹⁵N labels the entire element content of the sample, i.e. for ¹⁵N the entire peptide backbone in addition to all nitrogen-containing side chains. Although global metabolic labeling can deliver advantages with regard to isotope incorporation and costs, the requirements for data analysis are more demanding because, for polypeptides for instance, the mass difference introduced by the label depends on the amino acid composition. Consequently, there has been less progress on the automation of the data processing and mining steps for this type of protein quantitation. Here, we present a new integrated software solution for the quantitative analysis of protein expression in differential samples and show the benefits of high-resolution MS data in quantitative proteomic analyses.
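As an illustration of the composition dependence described above, a minimal sketch, assuming standard per-residue nitrogen counts: for a fully ¹⁵N-labeled peptide, the mass shift is simply the number of nitrogen atoms times the ¹⁵N–¹⁴N mass difference, so it varies with sequence rather than being fixed as in SILAC. The function name is ours, not part of the software presented.

```python
# Total nitrogen atoms per residue (1 backbone N plus any side-chain N);
# residues not listed contribute only the single backbone nitrogen.
N_PER_RESIDUE = {"R": 4, "K": 2, "H": 3, "N": 2, "Q": 2, "W": 2}
MASS_SHIFT_PER_N = 15.0001089 - 14.0030740  # ~0.997035 Da per nitrogen

def n15_mass_shift(peptide: str) -> float:
    """Mass difference between a fully 15N-labeled peptide and its
    unlabeled counterpart, in daltons."""
    n_atoms = sum(N_PER_RESIDUE.get(aa, 1) for aa in peptide)
    return n_atoms * MASS_SHIFT_PER_N

print(round(n15_mass_shift("SAMPLER"), 4))  # 10 nitrogens -> 9.9703 Da
```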
Abstract:
The changes occurring in the levels of nutritionally relevant oil components were assessed during repeated frying of potato chips in a blend of palm olein and canola oil (1:1 w/w). The blend suffered minimal reductions in omega-3 and omega-6 polyunsaturated fatty acids. In all three cases, there was no significant difference between the fatty acid composition of the oil extracted from the product and that of the frying medium. The blend also contained a significant amount of tocols, which add nutritional value to the oil. The concentration of the tocols was satisfactorily retained over the period of oil usage, in contrast to the significant losses observed in the case of the individual oils. The blend also performed well when assessed by changes in total polar compounds, free fatty acids, and p-anisidine value. When fried in used oil, the product oil content increased progressively with oil usage time. This study shows that blended frying oils can combine good stability and nutritional quality.
Abstract:
Fieldwork in a major construction programme is used to examine what is meant by professionalism where large integrated digital systems are used to design, deliver, and maintain buildings and infrastructure. The increasing ‘professionalization’ of the client is found to change other professional roles and interactions in project delivery. New technologies for approvals and workflow monitoring are associated with new occupational groups, new kinds of professional accountability, and greater integration across professional roles. Conflicts also arise where occupational groups have different understandings of project deliverables and of how they are competently achieved. The preliminary findings are important for an increasing policy focus on shareable data, which aims to help building owners and operators improve the cost, value, handover and operation of complex buildings. However, this shift will also have an impact on wider public decision-making processes, professional autonomy, expertise and interdependence. These findings are considered in relation to extant literatures, which problematize the idea of professionalism, and to the shift from drawings to shareable data as deliverables. The implications for ethics in established professions and other occupational groups are discussed, and directions are suggested for further scholarship on professionalism in digitally mediated project work to improve practices that will better serve society.
Abstract:
Human-made transformations to the environment, and in particular the land surface, are having a large impact on the distribution (in both time and space) of rainfall, upon which all life is reliant. Focusing on precipitation, soil moisture and near-surface temperature, we compare data from Phase 5 of the Coupled Model Intercomparison Project (CMIP5), as well as blended observational–satellite data, to see how the interaction between rainfall and the land surface differs (or agrees) between the models and reality at daily timescales. As expected, the results suggest a strong positive relationship between precipitation and soil moisture when precipitation leads and is concurrent with soil moisture estimates, for the tropics as a whole. Conversely, a negative relationship is shown when soil moisture leads rainfall by a day or more. A weak positive relationship between precipitation and temperature is shown when either leads by one day, whereas a weak negative relationship is shown over the same time period between soil moisture and temperature. Temporally, in terms of lag and lead relationships, the models appear to agree on the overall patterns of correlation between rainfall and soil moisture. However, in terms of spatial patterns, a comparison of these relationships across all available models reveals considerable variability in the models' ability to reproduce the correlations between precipitation and soil moisture. There is also a difference in the timings of the correlations, with some models showing the highest positive correlations when precipitation leads soil moisture by one day. Finally, the results suggest that there are 'hotspots' of high linear gradients between precipitation and soil moisture, corresponding to regions experiencing heavy rainfall. These results point to an inability of the CMIP5 models to simulate a positive feedback between soil moisture and precipitation at daily timescales. Longer-timescale comparisons and experiments at higher spatial resolutions, where the impact of the spatial heterogeneity of rainfall on the initiation of convection and supply of moisture is included, would be expected to improve process understanding further.
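The lag/lead relationships described above amount to lagged cross-correlations between daily series. A toy sketch with synthetic data (not CMIP5 output), using the convention that a positive lag means precipitation leads soil moisture:

```python
import numpy as np

rng = np.random.default_rng(0)
precip = rng.gamma(2.0, 1.0, size=1000)      # synthetic daily rainfall
# Soil moisture "remembers" the last few days of rain, so rainfall leads it.
soil = np.convolve(precip, [0.5, 0.3, 0.2])[:1000]

def lagged_corr(x, y, lag):
    """Pearson correlation of x(t) with y(t + lag); lag > 0 means x leads y."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(x[-lag:], y[:lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

for lag in range(-3, 4):
    print(f"lag {lag:+d}: r = {lagged_corr(precip, soil, lag):.3f}")
```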
Abstract:
This Editorial presents the focus, scope and policies of the inaugural issue of Nature Conservation, a new open-access, peer-reviewed journal bridging natural sciences, social sciences and hands-on applications in conservation management. The journal covers all aspects of nature conservation and aims particularly at facilitating better interaction between scientists and practitioners. The journal will impose no restrictions on manuscript size or the use of colour. We will use an XML-based editorial workflow and several cutting-edge innovations in publishing and information dissemination. These include semantic mark-up of, and enhancements to, published text and data, as well as extensive cross-linking within the journal and to external sources. We believe the journal will make an important contribution to better linking science and practice, offering rapid, peer-reviewed and flexible publication for authors and unrestricted access to content.
Abstract:
In order to make the best use of limited medical resources, and to reduce the cost and improve the quality of medical treatment, we propose to build an interoperable regional healthcare system spanning several levels of medical treatment organizations. In this paper, our approaches are as follows: (1) an ontology-based approach is introduced as the methodology and technological solution for information integration; (2) an integration framework for data sharing among different organizations is proposed; and (3) a virtual database is established to realize data integration across hospital information systems. Our methods achieve the effective management and integration of the medical workflow and of the mass of information in the interoperable regional healthcare system. Furthermore, this research gives the interoperable regional healthcare system the characteristics of modularization and expansibility, and the stability of the system is enhanced by its hierarchical structure.
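As a toy illustration of the virtual database idea (record fields, source names and data are invented, not the paper's implementation), one query interface can fan out to several hospital information systems and merge the results at query time, so no data is copied into a central store:

```python
# Each "source" stands in for one organization's hospital information
# system; the virtual database federates them behind a single query.
hospital_a = [{"patient_id": "A-17", "diagnosis": "diabetes"}]
clinic_b = [{"patient_id": "B-03", "diagnosis": "diabetes"},
            {"patient_id": "B-09", "diagnosis": "asthma"}]

class VirtualDB:
    def __init__(self, sources):
        self.sources = sources

    def query(self, **criteria):
        for source in self.sources:
            for record in source:
                if all(record.get(k) == v for k, v in criteria.items()):
                    yield record

vdb = VirtualDB([hospital_a, clinic_b])
print(list(vdb.query(diagnosis="diabetes")))  # matches from both organizations
```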
Abstract:
Novel acid-terminated hyperbranched polymers (HBPs) containing adipic acid and oxazoline monomers derived from oleic and linoleic acid have been synthesized via a bulk polymerization procedure. Branching was achieved as a consequence of an acid-catalyzed opening of the oxazoline ring to produce a trifunctional monomer in situ, which delivered branching levels of >45% as determined by ¹H and ¹³C NMR spectroscopy. The HBPs were soluble in common solvents such as CHCl₃, acetone, tetrahydrofuran, dimethylformamide, and dimethyl sulfoxide, and were further functionalized by addition of citronellol to afford white-spirit-soluble materials that could be used in coating formulations. During end-group modification, a reduction in the branching levels of the HBPs (down to 12–24%) was observed, predominantly on account of oxazoline ring reformation and trans-esterification processes under the reaction conditions used. In comparison to commercial alkyd resin paint coatings, formulations of the citronellol-functionalized hyperbranched materials blended with a commercial alkyd resin exhibited dramatic decreases in blend viscosity as the HBP content was increased. The curing characteristics of the HBP/alkyd blend formulations were studied by dynamic mechanical analysis, which revealed that the new coatings cured more quickly and produced tougher materials than otherwise identical coatings prepared from only the commercial alkyd resins.
Abstract:
Traditionally, the formal scientific output in most fields of natural science has been limited to peer-reviewed academic journal publications, with less attention paid to the chain of intermediate data results and their associated metadata, including provenance. In effect, this has constrained the representation and verification of data provenance to the confines of the related publications. Detailed knowledge of a dataset’s provenance is essential to establish the pedigree of the data for its effective re-use, and to avoid redundant re-enactment of the experiment or computation involved. Determining the authenticity and quality of open-access data is increasingly important, especially considering the growing volumes of datasets appearing in the public domain. To address these issues, we present an approach that combines the Digital Object Identifier (DOI) – a widely adopted citation technique – with existing, widely adopted climate science data standards to formally publish the detailed provenance of a climate research dataset as an associated scientific workflow. This is integrated with linked-data-compliant data re-use standards (e.g. OAI-ORE) to enable a seamless link between a publication and the complete lineage trail of the corresponding dataset, including the dataset itself.
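A minimal sketch of the kind of linkage described, written as an OAI-ORE-style aggregation in JSON; all identifiers and field choices are placeholders rather than the authors' actual record layout:

```python
import json

# One aggregation ties together the dataset, the workflow that records
# its provenance, and the related publication, each addressed by an
# identifier (placeholder values shown).
aggregation = {
    "@id": "https://doi.org/10.xxxx/example-dataset",
    "ore:aggregates": [
        {"@id": "https://example.org/data/dataset.nc",
         "dc:format": "application/x-netcdf"},
        {"@id": "https://example.org/provenance/workflow.xml",
         "dc:description": "scientific workflow capturing the dataset's lineage"},
        {"@id": "https://doi.org/10.yyyy/related-paper",
         "dc:relation": "publication describing the dataset"},
    ],
}
print(json.dumps(aggregation, indent=2))
```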
Abstract:
JASMIN is a super-data-cluster designed to provide a high-performance, high-volume data analysis environment for the UK environmental science community. Thus far JASMIN has been used primarily by the atmospheric science and earth observation communities, both to support their direct scientific workflow and to curate data products in the STFC Centre for Environmental Data Archival (CEDA). The initial JASMIN configuration and first experiences are reported here, and useful improvements in scientific workflow are presented. It is clear from the explosive growth in stored data and use that there was pent-up demand for a suitable big-data analysis environment. This demand is not yet satisfied, in part because JASMIN does not yet have enough compute, the storage is fully allocated, and not all software needs are met. Plans to address these constraints are introduced.
Abstract:
There is a growing need for massive computational resources for the analysis of new astronomical datasets. To tackle this problem, we present here our first steps towards marrying two new and emerging technologies: the Virtual Observatory (e.g. AstroGrid) and the computational grid (e.g. TeraGrid, COSMOS, etc.). We discuss the construction of VOTechBroker, which is a modular software tool designed to abstract the tasks of submission and management of a large number of computational jobs to a distributed computer system. The broker will also interact with the AstroGrid workflow and MySpace environments. We discuss our planned usages of the VOTechBroker in computing a huge number of n-point correlation functions from the SDSS data and massive model-fitting of millions of CMBfast models to WMAP data. We also discuss other applications, including the determination of the XMM Cluster Survey selection function and the construction of new WMAP maps.
Abstract:
We outline our first steps towards marrying two new and emerging technologies: the Virtual Observatory (e.g. AstroGrid) and the computational grid. We discuss the construction of VOTechBroker, which is a modular software tool designed to abstract the tasks of submission and management of a large number of computational jobs to a distributed computer system. The broker will also interact with the AstroGrid workflow and MySpace environments. We present our planned usage of the VOTechBroker in computing a huge number of n-point correlation functions from the SDSS, as well as fitting over a million CMBfast models to the WMAP data.
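To illustrate the kind of abstraction such a broker provides (class and method names here are ours, not the actual VOTechBroker API), a caller describes many jobs once and the broker handles submission and status tracking against some distributed back end:

```python
from dataclasses import dataclass

@dataclass
class Job:
    command: str
    status: str = "pending"

class Broker:
    """Collects job descriptions and tracks their states; a real broker
    would dispatch each job to a grid node and poll its actual status."""
    def __init__(self):
        self.jobs = []

    def submit(self, command):
        job = Job(command, status="submitted")
        self.jobs.append(job)
        return job

    def summary(self):
        counts = {}
        for job in self.jobs:
            counts[job.status] = counts.get(job.status, 0) + 1
        return counts

broker = Broker()
for i in range(1000):                   # e.g. one job per correlation-function bin
    broker.submit(f"npoint --bin {i}")  # hypothetical command line
print(broker.summary())                 # {'submitted': 1000}
```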