900 results for Web services. Service Composition. PEWS. Runtime systems
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them also involves complicated workflows implemented as shell scripts. A new grid middleware system that is well suited to climate modelling applications is presented in this paper. Grid Remote Execution (G-Rex) allows climate models to be deployed as Web services on remote computer systems and then launched and controlled as if they were running on the user's own computer. Output from the model is transferred back to the user while the run is in progress to prevent it from accumulating on the remote system and to allow the user to monitor the model. G-Rex has a REST architectural style, featuring a Java client program that can easily be incorporated into existing scientific workflow scripts. Some technical details of G-Rex are presented, with examples of its use by climate modellers.
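The abstract describes a REST-style service whose Java client streams model output back while the run is in progress. A minimal Python sketch of that interaction pattern is given below; the service root, endpoint paths and response fields are hypothetical illustrations, not G-Rex's actual API (the real G-Rex client is a Java program).

```python
# Sketch of a G-Rex-style REST interaction: submit a model run, then pull
# output back during the run so it never accumulates on the remote system.
import time
import requests

BASE = "http://example.org/grex"  # hypothetical service root


def run_model(config_path: str) -> None:
    # Submit the run; assume the service returns the new job's URL.
    with open(config_path, "rb") as f:
        resp = requests.post(f"{BASE}/jobs", files={"config": f})
    resp.raise_for_status()
    job_url = resp.json()["job_url"]

    # Stream output back while the job runs, so the user can monitor it
    # locally and the remote system stays clear of accumulated data.
    with open("model_output.nc", "ab") as out:
        while True:
            out_resp = requests.get(f"{job_url}/output", stream=True)
            for block in out_resp.iter_content(chunk_size=65536):
                out.write(block)
            state = requests.get(f"{job_url}/status").json()["state"]
            if state in ("FINISHED", "FAILED"):
                break
            time.sleep(10)


run_model("run_config.xml")
```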
Abstract:
Managing a construction project supply chain effectively and efficiently is extremely difficult due to the involvement of numerous sectors supported by ineffective communication systems. An efficient construction supply chain system ensures the delivery of materials and other services to the construction site while minimising costs and rewarding all sectors based on the value added to the supply chain. The advancement of information, communication and wireless technologies is driving construction companies to deploy supply chain management strategies to seek better outputs. As part of the emerging wireless technologies, context-aware computing capability represents the next generation of ICT for construction services. Conceptually, context-awareness could be integrated with Web Services in order to ensure the delivery of pertinent information to the construction site and enhance construction supply chain collaboration. An initial study has indicated that this integrated system has the potential to serve and improve construction services delivery through access to context-specific data, information and services on an as-needed basis.
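The proposed integration can be pictured as attaching the requester's context to a Web service call so only pertinent information reaches the site. A minimal sketch, with a hypothetical service URL and parameter names of my own invention:

```python
# Sketch of a context-aware service call: context parameters (site,
# location, trade) let the service filter its response to what is
# pertinent on site, instead of returning the full schedule.
import requests


def fetch_deliveries(site_id: str, lat: float, lon: float, trade: str):
    params = {"site": site_id, "lat": lat, "lon": lon, "trade": trade}
    resp = requests.get("http://example.org/scm/deliveries", params=params)
    resp.raise_for_status()
    return resp.json()


print(fetch_deliveries("site-42", 53.48, -2.24, "concrete"))
```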
Abstract:
With the rapid development of information technology, learners demand effective personalised learning support, which imposes a new learning paradigm on learning content management. Standards, as well as best practice in industry and the research community, have emerged to address this paradigm shift. With respect to this trend, it is recognised that finding learning content that meets personal learning requirements remains challenging. This paper describes a model of e-learning services provision which integrates best practice in e-learning and Web services technology so that learning content management is capable of supporting applications of learning services.
Abstract:
Web Services for Remote Portlets (WSRP) is gaining attention among portal developers and vendors as a way to enable easy development, increased richness in functionality, pluggability, and flexibility of deployment. Whilst currently not supporting all WSRP functionalities, open-source portal frameworks could in the future use WSRP Consumers to access remote portlets found through a WSRP Producer registry service. This implies that we need a central registry for the remote portlets and a more expressive WSRP Consumer interface to implement the remote portlet functions. This paper reports on an investigation into a new system architecture, which includes a Web Services repository, registry, and client interface. The Web Services repository holds portlets as remote resource producers. A new data structure for expressing remote portlets is defined and published by populating a Universal Description, Discovery and Integration (UDDI) registry. A remote portlet publish and search engine for UDDI has also been developed. Finally, a remote portlet client interface was developed as a Web application. The client interface supports remote portlet features, as well as window status and mode functions. Copyright (c) 2007 John Wiley & Sons, Ltd.
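The paper's actual UDDI data structure is not reproduced in the abstract. Purely as an illustration, a registry entry carrying WSRP-style mode and window-state information, with a keyword search over a set of entries, might look like this:

```python
# Illustrative registry entry for a remote portlet plus a keyword search;
# field names are assumptions, not the paper's published schema.
from dataclasses import dataclass, field


@dataclass
class PortletEntry:
    name: str
    producer_url: str  # WSRP Producer endpoint offering the portlet
    modes: list = field(default_factory=lambda: ["view"])          # view/edit/help
    window_states: list = field(default_factory=lambda: ["normal"])
    keywords: list = field(default_factory=list)


registry = [
    PortletEntry("weather", "http://example.org/wsrp", keywords=["forecast"]),
    PortletEntry("news", "http://example.org/wsrp", keywords=["headlines"]),
]


def search(term: str):
    # Match the term against portlet names and keyword lists.
    term = term.lower()
    return [p for p in registry if term in p.name or term in p.keywords]


print([p.name for p in search("forecast")])  # ['weather']
```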
Abstract:
By 2030, the world’s human population could rise to 8 billion people and world food demand may increase by 50%. Although food production outpaced population growth in the 20th century, it is clear that the environmental costs of these increases cannot be sustained into the future. This challenges us to re-think the way we produce food. We argue that viewing food production systems within an ecosystems context provides the basis for 21st century food production. An ecosystems view recognises that food production systems depend on ecosystem services but also have ecosystem impacts. These dependencies and impacts are often poorly understood by many people and frequently overlooked. We provide an overview of the key ecosystem services involved in different food production systems, including crop and livestock production, aquaculture and the harvesting of wild nature. We highlight the important ecosystem impacts of food production systems, including habitat loss and degradation, changes to water and nutrient cycles across a range of scales, and biodiversity loss. These impacts often undermine the very ecosystem services on which food production systems depend, as well as other ecosystem services unrelated to food. We argue that addressing these impacts requires us to re-design food production systems to recognise and manage the limitations on production imposed by the ecosystems within which they are embedded, and increasingly embrace a more multifunctional view of food production systems and associated ecosystems. In this way, we should be able to produce food more sustainably whilst inflicting less damage on other important ecosystem services.
Abstract:
There is increasing concern that the intensification of dairy production reduces the concentrations of nutritionally desirable compounds in milk. This study therefore compared important quality parameters (protein and fatty acid profiles; α-tocopherol and carotenoid concentrations) in milk from four dairy systems with contrasting production intensities (in terms of feeding regimens and milking systems). The concentrations of several nutritionally desirable compounds (β-lactoglobulin, omega-3 fatty acids, omega-3/omega-6 ratio, conjugated linoleic acid c9t11, and/or carotenoids) decreased with increasing feeding intensity (organic outdoor ≥ conventional outdoor ≥ conventional indoors). Milking system intensification (use of robotic milking parlors) had a more limited effect on milk composition, but increased mastitis incidence. Multivariate analyses indicated that differences in milk quality were mainly linked to contrasting feeding regimens and that milking system and breed choice also contributed to differences in milk composition between production systems.
Abstract:
Ubiquitous computing aims at providing services to users in everyday environments such as the home. One research theme in this area is the building of capture and access applications, which allow information to be recorded (captured) during a live experience toward automatically producing documents for review (accessed). The recording demands instrumented environments with devices such as microphones, cameras, sensors and electronic whiteboards. Since each experience is usually related to many others (e.g. several meetings of a project), there is a demand for mechanisms supporting automatic linking among documents relating to different experiences. In this paper we present original results relative to the integration of our previous efforts in the Infrastructure for Capturing, Accessing, Linking, Storing and Presenting information (CALiSP).
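The abstract does not detail CALiSP's linking mechanism. Purely as an illustration of the idea, captured sessions that share metadata (here, a project tag of my own invention) can be linked automatically so a reviewer can navigate between related experiences:

```python
# Toy illustration of automatic linking: sessions sharing a project tag
# are cross-linked; this is not CALiSP's actual algorithm.
from collections import defaultdict

sessions = [
    {"id": "mtg-01", "project": "alpha", "date": "2006-03-01"},
    {"id": "mtg-02", "project": "alpha", "date": "2006-03-15"},
    {"id": "demo-01", "project": "beta", "date": "2006-03-10"},
]

by_project = defaultdict(list)
for s in sessions:
    by_project[s["project"]].append(s["id"])

# Link each session's document to the other sessions of the same project.
links = {
    s["id"]: [x for x in by_project[s["project"]] if x != s["id"]]
    for s in sessions
}
print(links)  # {'mtg-01': ['mtg-02'], 'mtg-02': ['mtg-01'], 'demo-01': []}
```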
Abstract:
HydroShare is an online, collaborative system being developed for open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing, and analyzing geographic datasets. The integrated Rule-Oriented Data System (iRODS) is being used to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.
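The Resource concept described above, with system metadata common to all resources kept separate from type-specific science metadata, can be pictured with a small sketch; the field names below are illustrative assumptions, not HydroShare's actual schema.

```python
# Sketch of a Resource: shared system metadata, type-specific science
# metadata (here, a time-series resource), and the content files.
from dataclasses import dataclass, field


@dataclass
class SystemMetadata:
    resource_id: str
    owner: str
    created: str
    sharing: str = "private"  # e.g. private/public


@dataclass
class TimeSeriesScienceMetadata:
    variable: str
    units: str
    site: str


@dataclass
class Resource:
    system: SystemMetadata
    science: TimeSeriesScienceMetadata
    files: list = field(default_factory=list)


r = Resource(
    SystemMetadata("abc123", "jdoe", "2014-05-01"),
    TimeSeriesScienceMetadata("discharge", "m3/s", "USGS 02087500"),
    files=["discharge.csv"],
)
print(r.system.resource_id, r.science.variable)
```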
Abstract:
The objective of this study is to develop a Pollution Early Warning System (PEWS) for efficient management of water quality in oyster harvesting areas. To that end, this paper presents a web-enabled, user-friendly PEWS for managing water quality in oyster harvesting areas along the Louisiana Gulf Coast, USA. The PEWS consists of (1) an Integrated Space-Ground Sensing System (ISGSS) gathering data on environmental factors influencing water quality, (2) an Artificial Neural Network (ANN) model for predicting the level of fecal coliform bacteria, and (3) a web-enabled, user-friendly Geographic Information System (GIS) platform for issuing water pollution advisories and managing oyster harvesting waters. The ISGSS (data acquisition system) collects near real-time environmental data from various sources, including the NASA MODIS Terra and Aqua satellites and in-situ sensing stations managed by the USGS and NOAA. The ANN model is developed using the ANN program in the MATLAB Toolbox. The ANN model involves a total of 6 independent environmental variables, including rainfall, tide, wind, salinity, temperature, and weather type, along with 8 different combinations of the independent variables. The ANN model is constructed and tested using environmental and bacteriological data collected monthly from 2001 to 2011 by the Louisiana Molluscan Shellfish Program at seven oyster harvesting areas along the Louisiana coast, USA. The ANN model is capable of explaining about 76% of the variation in fecal coliform levels for model training data and 44% for independent data. The web-based GIS platform is developed using ArcView GIS and ArcIMS. The web-based GIS system can be employed for mapping fecal coliform levels predicted by the ANN model and potential risks of norovirus outbreaks in oyster harvesting waters. The PEWS is able to inform decision-makers of potential risks of fecal pollution and virus outbreaks on a daily basis, greatly reducing the risk contaminated oysters pose to human health.
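The study builds its model with the MATLAB ANN toolbox; as a rough illustration of the setup, the sketch below trains a small neural network on the six environmental variables named in the abstract. scikit-learn stands in for MATLAB here, and the data are synthetic placeholders, not the Louisiana measurements.

```python
# Sketch of an ANN regression on six environmental predictors
# (rainfall, tide, wind, salinity, temperature, weather type).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 6))  # synthetic stand-ins for the six variables
# Synthetic "fecal coliform level" with a known dependence plus noise.
y = X @ np.array([1.5, 0.4, 0.3, -1.0, 0.8, 0.2]) + rng.normal(0, 0.2, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)

# R^2 on held-out data, analogous to the variance-explained figures
# reported for training vs. independent data.
print(f"R^2 on held-out data: {model.score(X_te, y_te):.2f}")
```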
Abstract:
Existing distributed hydrologic models are complex and computationally demanding to use as a rapid-forecasting, policy-decision tool, or even as a classroom educational tool. In addition, platform dependence, specific input/output data structures and non-dynamic data interaction with pluggable software components inside existing proprietary frameworks restrict these models to specialized user groups. RWater is a web-based hydrologic analysis and modeling framework that utilizes the widely used R software within the HUBzero cyberinfrastructure of Purdue University. RWater is designed as an integrated framework for distributed hydrologic simulation, along with subsequent parameter optimization and visualization schemes. RWater provides a platform-independent web-based interface, flexible data integration capacity, grid-based simulations, and user extensibility. RWater uses RStudio to simulate hydrologic processes on raster-based data obtained through conventional GIS pre-processing. The program integrates the Shuffled Complex Evolution (SCE) algorithm for parameter optimization. Moreover, RWater enables users to produce different descriptive statistics and visualizations of the outputs at different temporal resolutions. The applicability of RWater will be demonstrated by application to two watersheds in Indiana for multiple rainfall events.
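RWater pairs R-based simulation with SCE calibration. The sketch below illustrates the calibration idea in Python on a toy linear-reservoir runoff model, with SciPy's differential evolution standing in for SCE; the model, data and optimizer are illustrative assumptions, not RWater internals.

```python
# Calibrate a one-parameter linear-reservoir runoff model against
# synthetic observations using a global optimizer (stand-in for SCE).
import numpy as np
from scipy.optimize import differential_evolution

rain = np.array([0, 5, 12, 3, 0, 0, 8, 2, 0, 0], dtype=float)
observed = np.array([0.0, 1.0, 3.2, 3.0, 2.2, 1.6, 2.8, 2.6, 1.9, 1.4])


def simulate(k: float) -> np.ndarray:
    # Linear reservoir: rainfall fills storage; runoff = k * storage.
    storage, flows = 0.0, []
    for p in rain:
        storage += p
        q = k * storage
        storage -= q
        flows.append(q)
    return np.array(flows)


def rmse(params) -> float:
    return float(np.sqrt(np.mean((simulate(params[0]) - observed) ** 2)))


result = differential_evolution(rmse, bounds=[(0.01, 0.99)], seed=0)
print(f"calibrated k = {result.x[0]:.3f}, RMSE = {result.fun:.3f}")
```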
Abstract:
This report consolidates the research work carried out between April 2005 and April 2006 on the state of adoption of, and opportunities for using, new information technologies in government processes. The expansion of boundaries beyond the traditional limits of organizations brings a new and stronger demand for flexibility, enabling the integrated handling of bodies with different constitutions, architectures and operational processes, not to mention different information systems. This is even more important in public organizations. On the other hand, one of the main negative characteristics of public agencies is the slowness and bureaucracy of administrative and citizen-service processes. The lack of a modern technological vision, that is, of an Information Technology Master Plan (PDTI) oriented toward new solutions such as BPM, combined with the lack of integration between systems and processes, means that many government agencies are moving against the current of technological development. This research project is therefore of high interest, since it focuses on the possibilities and impacts of adopting new process-oriented and Web services technologies (BPM - Business Process Management and BPMS - Business Process Management Systems) in the government sector, which is largely devoid of integrated service solutions for citizens and businesses. These new technologies bring paradigms completely different from those adopted so far in the implementation of information systems and process automation. Despite the difficulties inherent in dealing with a new and complex topic, all the more so in government bodies, we believe we have developed a thorough piece of work that meets the objectives established in the original plan, with the necessary adjustments to the route and focus of the work. We also believe that this work establishes a relevant reference in the body of knowledge on improving government processes through new technologies. The planned and delivered by-products, included in the volume of annexes to this report, comprise content already developed for the publication of one or two books on the subject, several articles, and several events held at EAESP on the project theme, which provided opportunities for excellent exchanges of experience. This report, presented in an objective and concise form and focusing only on the main aspects addressed, is complemented by extensive supplementary content delivered in a volume of Annexes.
Abstract:
The spread of the Web boosted the dissemination of Web-based Information Systems (IS). To support the implementation of these systems, several technologies emerged or evolved, notably programming languages. The Technology Acceptance Model (TAM) (Davis, 1986) was conceived to evaluate the acceptance and use of information technologies by their users. Many studies and applications have used the TAM; however, no mention was found in the literature of applying the model to the use of programming languages. This study investigates which factors influence developers' use of programming languages in the development of Web systems, applying an extension of the TAM proposed in this work. To do so, a survey was conducted with Web developers in two Yahoo groups, java-br and python-brasil, in which 26 Java questionnaires and 39 Python questionnaires were fully answered. The questionnaire contained general questions as well as questions measuring intrinsic and extrinsic factors of the programming languages, perceived usefulness, perceived ease of use, attitude toward use, and programming language use. Most respondents were men with university degrees, between 20 and 30 years old, working in the southeast and south regions. The research was descriptive with respect to its objectives. Descriptive statistics, principal component analysis and linear regression were used for the data analysis. The main results were: Java and Python offer machine independence, extensibility, generality and reliability; Java and Python are used more by corporations and international organizations than supported by the government or educational institutions; there are more Java programmers than Python programmers; perceived usefulness is influenced by perceived ease of use; generality and extensibility are intrinsic factors of programming languages that influence perceived ease of use; and perceived ease of use influences attitude toward using the programming language.
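The reported analysis regresses TAM constructs on one another. A minimal sketch of that kind of test, run on synthetic stand-in Likert scores rather than the survey's actual responses:

```python
# Regress perceived usefulness on perceived ease of use across
# respondents; scores are fabricated 1-7 Likert stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ease = rng.integers(1, 8, size=65).astype(float)  # 26 + 39 respondents
usefulness = np.clip(0.6 * ease + rng.normal(2.0, 0.8, 65), 1, 7)

res = stats.linregress(ease, usefulness)
# A positive, significant slope supports "ease of use -> usefulness".
print(f"slope={res.slope:.2f}, r^2={res.rvalue**2:.2f}, p={res.pvalue:.4f}")
```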
Abstract:
The visualization of three-dimensional (3D) images is increasingly being used in the area of medicine, helping physicians diagnose disease. The advances achieved in the scanners used for the acquisition of these 3D exams, such as computerized tomography (CT) and magnetic resonance imaging (MRI), enable the generation of images with higher resolutions, thus generating much larger files. Currently, the visualization of these images is a computationally expensive task, demanding the use of a high-end computer. Direct remote access to these images through the Internet is also inefficient, since all images have to be transferred to the user's equipment before the 3D visualization process can start. With these problems in mind, this work proposes and analyses a solution for the remote rendering of 3D medical images, called Remote Rendering (RR3D). In RR3D, the whole rendering process is performed on a server or a cluster of servers with high computational power, and only the resulting image is transferred to the client, while still allowing the client to perform operations such as rotation, zoom, etc. The solution was developed using Web services written in Java and an architecture that uses the scientific visualization package ParaView, the ParaViewWeb framework and the PACS server DCM4CHEE. The solution was tested in two scenarios, in which the rendering process was performed by a server with graphics hardware (GPUs) and by a server without GPUs. In the scenario without GPUs, the solution was executed in parallel with varying numbers of cores (processing units) dedicated to it. In order to compare our solution to other medical visualization applications, a third scenario was used in which the rendering process was done locally. In all three scenarios, the solution was tested at different network speeds. The solution satisfactorily solved the problem of the delay in the transfer of DICOM files, while allowing the use of low-end computers, and even tablets and smartphones, as clients for visualizing the exams.
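RR3D itself is built from Java web services, ParaView/ParaViewWeb and DCM4CHEE; the sketch below only illustrates the thin-client pattern the abstract describes, with hypothetical endpoint and parameter names. The server renders the volume and only a 2D image crosses the network.

```python
# Thin-client sketch: request a server-rendered view of a 3D study,
# passing camera parameters; only the resulting PNG is transferred.
import requests

BASE = "http://example.org/rr3d"  # hypothetical rendering service


def fetch_view(study_id: str, azimuth: float, elevation: float, zoom: float):
    params = {"study": study_id, "az": azimuth, "el": elevation, "zoom": zoom}
    resp = requests.get(f"{BASE}/render", params=params)
    resp.raise_for_status()
    with open("view.png", "wb") as f:
        f.write(resp.content)


# Rotating or zooming on the client just means requesting a new view.
fetch_view("CT-0042", azimuth=30.0, elevation=15.0, zoom=1.5)
```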
Abstract:
In recent years there has been exponential growth in the offering of Web-enabled distance courses and in the number of enrolments in corporate and higher education using this modality. However, the lack of efficient mechanisms to ensure user authentication in this sort of environment, both at system login and throughout the session, has been pointed out as a serious deficiency. Some studies have been conducted on possible biometric applications for Web authentication; however, password-based authentication still prevails. With the popularization of biometric-enabled devices and the resulting fall in prices for collecting biometric traits, biometrics is being reconsidered as a secure form of remote authentication for Web applications. In this work, the accuracy of face recognition, captured online by a webcam in an Internet environment, is investigated, simulating the natural interaction of a person in the context of a distance course environment. Partial results show that this technique can be successfully applied to confirm the presence of users throughout attendance of a distance education course. An efficient client/server architecture is also proposed. © 2009 Springer Berlin Heidelberg.
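A minimal sketch of the client side of such an architecture, with a hypothetical verification endpoint: a webcam frame is captured periodically and posted to the server, which is assumed to match it against the enrolled face of the logged-in user.

```python
# Periodically capture a webcam frame and send it for face verification,
# confirming the user's presence throughout the session.
import time

import cv2
import requests

VERIFY_URL = "http://example.org/auth/verify"  # hypothetical endpoint


def verify_presence(user_id: str, interval_s: int = 300) -> None:
    cap = cv2.VideoCapture(0)  # default webcam
    try:
        while True:  # runs until interrupted, re-checking each interval
            ok, frame = cap.read()
            if ok:
                _, jpg = cv2.imencode(".jpg", frame)
                resp = requests.post(
                    VERIFY_URL,
                    data={"user": user_id},
                    files={"frame": jpg.tobytes()},
                )
                print("verified:", resp.json().get("match"))
            time.sleep(interval_s)
    finally:
        cap.release()


verify_presence("student-123")
```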