849 results for Service Programming Model
Abstract:
We describe ncWMS, an implementation of the Open Geospatial Consortium’s Web Map Service (WMS) specification for multidimensional gridded environmental data. ncWMS can read data in a large number of common scientific data formats – notably the NetCDF format with the Climate and Forecast conventions – then efficiently generate map imagery in thousands of different coordinate reference systems. It is designed to require minimal configuration from the system administrator and, when used in conjunction with a suitable client tool, provides end users with an interactive means for visualizing data without the need to download large files or interpret complex metadata. It is also used as a “bridging” tool providing interoperability between the environmental science community and users of geographic information systems. ncWMS implements a number of extensions to the WMS standard in order to fulfil some common scientific requirements, including the ability to generate plots representing timeseries and vertical sections. We discuss these extensions and their impact upon present and future interoperability. We discuss the conceptual mapping between the WMS data model and the data models used by gridded data formats, highlighting areas in which the mapping is incomplete or ambiguous. We discuss the architecture of the system and particular technical innovations of note, including the algorithms used for fast data reading and image generation. ncWMS has been widely adopted within the environmental data community and we discuss some of the ways in which the software is integrated within data infrastructures and portals.
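The core interaction pattern ncWMS implements is the standard WMS GetMap request, with the TIME and ELEVATION dimension parameters carrying the extra axes of multidimensional gridded data. As a minimal sketch, the following Python snippet requests a map image from a hypothetical ncWMS endpoint; the server URL, layer name and dimension values are illustrative assumptions, not taken from the paper.

```python
# A hedged sketch: fetch a map image from a hypothetical ncWMS endpoint.
import requests

WMS_URL = "http://example.org/ncWMS/wms"  # assumed endpoint, not from the paper

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "sea_water_temperature",  # assumed layer name
    "STYLES": "",
    "CRS": "EPSG:4326",
    "BBOX": "-90,-180,90,180",  # WMS 1.3.0 uses latitude-first axis order for EPSG:4326
    "WIDTH": 1024,
    "HEIGHT": 512,
    "FORMAT": "image/png",
    "TIME": "2020-01-01T00:00:00Z",  # standard WMS dimension for the time axis
    "ELEVATION": "0",                # standard WMS dimension for the vertical axis
}

resp = requests.get(WMS_URL, params=params, timeout=30)
resp.raise_for_status()
with open("map.png", "wb") as f:
    f.write(resp.content)
```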
Abstract:
The Canadian Middle Atmosphere Modelling (MAM) project is a collaboration between the Atmospheric Environment Service (AES) of Environment Canada and several Canadian universities. Its goal is the development of a comprehensive General Circulation Model of the troposphere-stratosphere-mesosphere system, starting from the AES/CCCma third-generation atmospheric General Circulation Model. This paper describes the basic features of the first-generation Canadian MAM and some aspects of its radiative-dynamical climatology. Standard first-order mean diagnostics are presented for monthly means and for the annual cycle of zonal-mean winds and temperatures. The mean meridional circulation is examined, and comparison is made between the steady diabatic, downward-controlled, and residual stream functions. It is found that downward control holds quite well in the monthly mean through most of the middle atmosphere, even during equinoctial periods. The relative roles of different drag processes in determining the mean downwelling over the wintertime polar middle stratosphere are examined, and the vertical structure of the drag is quantified.
Abstract:
This paper presents a hierarchical clustering method for semantic Web service discovery. The method aims to improve the accuracy and efficiency of traditional service discovery based on the vector space model. Each Web service is converted into a standard vector representation from its Web service description document. With the help of WordNet, a semantic analysis is conducted to reduce the dimensionality of the term vectors and to semantically expand the user's service request. The process and algorithm of hierarchical-clustering-based semantic Web service discovery are discussed, and the approach is validated on a test dataset.
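As a rough illustration of the vector-space-plus-clustering pipeline the abstract describes, the sketch below converts service descriptions into TF-IDF vectors, groups them by agglomerative (hierarchical) clustering, and matches a query by cosine similarity. The WordNet-based dimensionality reduction and semantic query expansion are omitted, and all service texts are invented for illustration.

```python
# A minimal sketch of vector-space service matching with hierarchical clustering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from scipy.cluster.hierarchy import linkage, fcluster

descriptions = [
    "book a hotel room for travel",      # illustrative service descriptions
    "reserve hotel accommodation",
    "currency exchange rate lookup",
    "convert money between currencies",
]

# Convert each service description into a term-weight vector.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(descriptions).toarray()

# Group similar services; at discovery time only the best-matching
# cluster needs to be searched rather than the whole registry.
Z = linkage(X, method="average", metric="cosine")
labels = fcluster(Z, t=0.5, criterion="distance")

# Match a user request against the service vectors by cosine similarity.
query = vectorizer.transform(["find a hotel to stay"]).toarray()
sims = cosine_similarity(query, X)[0]
best = max(range(len(descriptions)), key=lambda i: sims[i])
print(f"clusters={labels.tolist()}, best match: {descriptions[best]!r}")
```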
Abstract:
Spatially dense observations of gust speeds are necessary for various applications, but their availability is limited in space and time. This work presents an approach to help overcome this problem. The main objective is the generation of synthetic wind gust velocities. With this aim, theoretical wind and gust distributions are estimated from 10 yr of hourly observations collected at 123 synoptic weather stations provided by the German Weather Service. As pre-processing, an exposure correction is applied to measurements of the mean wind velocity to reduce the influence of local urban and topographic effects. The wind gust model is built as a transfer function between the distribution parameters of wind and gust velocities. The aim of this procedure is to estimate the gust parameters at stations where only wind speed data are available. These parameters can be used to generate synthetic gusts, which can improve the accuracy of return periods at test sites lacking observations. The second objective is to determine return periods much longer than the nominal length of the original time series by means of extreme value statistics. Estimates for both local maximum return periods and average return periods for single historical events are provided. The comparison of maximum and average return periods shows that even storms with short average return periods may lead to local wind gusts with return periods of several decades. Despite uncertainties caused by the short length of the observational records, the method leads to consistent results, enabling a wide range of possible applications.
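The two-step idea can be sketched as follows, under stated assumptions: a Weibull distribution is fitted to (here synthetic) mean wind speeds, a hypothetical linear transfer function maps the wind parameters to gust parameters, and a Gumbel fit to annual maxima yields a return level. The transfer coefficients are placeholders, not the station-calibrated values from the paper.

```python
# A hedged sketch of a wind-to-gust parameter transfer plus return levels.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
wind = rng.weibull(2.0, size=10_000) * 6.0  # synthetic hourly mean winds (m/s)

# Fit a Weibull distribution to the (exposure-corrected) mean wind.
shape, _, scale = stats.weibull_min.fit(wind, floc=0.0)

# Hypothetical transfer function: gust parameters as a linear function of
# the wind parameters, standing in for the station-calibrated relation.
gust_shape = 0.9 * shape
gust_scale = 1.6 * scale

# Generate synthetic gusts and estimate a 50-yr return level from
# annual maxima via a Gumbel fit (classical extreme value statistics).
gusts = stats.weibull_min.rvs(gust_shape, scale=gust_scale,
                              size=(10, 8760), random_state=42)
annual_maxima = gusts.max(axis=1)
loc, beta = stats.gumbel_r.fit(annual_maxima)
T = 50.0
return_level = loc - beta * np.log(-np.log(1.0 - 1.0 / T))
print(f"estimated {T:.0f}-yr gust return level: {return_level:.1f} m/s")
```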
Abstract:
Polymers that can respond to externally applied stimuli have found much application in the biomedical field due to their (reversible) coil–globule transitions. Polymers displaying a lower critical solution temperature (LCST) are the most commonly used, but for blood-borne (i.e., soluble) biomedical applications the application of heat is not always possible or practical. Here we report the design and synthesis of poly(oligoethylene glycol methacrylate)-based polymers whose cloud points are easily varied by alkaline phosphatase-mediated dephosphorylation. By fine-tuning the density of phosphate groups on the backbone, it was possible to induce an isothermal transition: a change in solubility triggered by removal of a small number of phosphate esters from the side chains, activating the LCST-type response. As no temperature change was involved, this serves as a model of a cell-instructed polymer response. Finally, both polymers were found to be non-cytotoxic against MCF-7 cells (at 1 mg·mL⁻¹), which confirms their promise for biomedical applications.
Abstract:
Bloom filters are a data structure for storing data in a compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes-no Bloom filter, a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if the objects included in the no-filter are chosen so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Given the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, making use of a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best in comparison with a number of heuristics as well as the CPLEX built-in branch-and-bound solver, and it is what can be recommended for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
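To make the two-filter structure concrete, here is a minimal sketch of a yes-no Bloom filter in Python. The sizes, the hash construction and the naive selection of false positives for the no-filter are illustrative; the paper's ILP/ADP-based optimal selection is not reproduced.

```python
# A hedged sketch of a yes-no Bloom filter: the no-filter stores known
# false positives of the yes-filter so queries can reject them.
import hashlib

class BloomFilter:
    def __init__(self, m: int, k: int):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, item: str):
        # Derive k bit positions from salted SHA-256 digests (illustrative choice).
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item: str) -> bool:
        return all(self.bits[p] for p in self._positions(item))

class YesNoBloomFilter:
    def __init__(self, m: int, k: int):
        self.yes = BloomFilter(m, k)
        self.no = BloomFilter(m // 4, k)  # assumed split of the total length

    def build(self, keys, candidate_negatives):
        for key in keys:
            self.yes.add(key)
        # Naively add every observed false positive of the yes-filter;
        # the paper instead selects them optimally under the constraint
        # that the no-filter recognise no true positives.
        for x in candidate_negatives:
            if x in self.yes and x not in keys:
                self.no.add(x)

    def __contains__(self, item: str) -> bool:
        # Accept only if the yes-filter matches and the no-filter does not.
        return item in self.yes and item not in self.no

keys = {"alpha", "beta", "gamma"}
f = YesNoBloomFilter(m=64, k=3)
f.build(keys, candidate_negatives=[f"neg{i}" for i in range(1000)])
print("alpha" in f, "neg1" in f)
```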
Abstract:
The polynyas of the Laptev Sea are regions of particular interest due to the strong formation of Arctic sea-ice. In order to simulate the polynya dynamics and to quantify ice production, we apply the Finite Element Sea-Ice Ocean Model FESOM. In previous simulations FESOM has been forced with daily atmospheric NCEP (National Centers for Environmental Prediction) reanalysis 1 data. For the periods 1 April to 9 May 2008 and 1 January to 8 February 2009 we examine the impact of different forcing data: daily and 6-hourly NCEP reanalysis 1 (1.875° × 1.875°), 6-hourly NCEP reanalysis 2 (1.875° × 1.875°), 6-hourly analyses from the GME (Global Model of the German Weather Service) (0.5° × 0.5°) and high-resolution hourly COSMO (Consortium for Small-Scale Modeling) data (5 km × 5 km). In all FESOM simulations, except for those with 6-hourly and daily NCEP 1 data, the openings and closings of polynyas are simulated in general agreement with satellite products. Over the fast-ice area the wind fields of all atmospheric data sets are similar and close to in situ measurements. Over the polynya areas, however, there are strong differences between the forcing data with respect to air temperature and turbulent heat flux. These differences have a strong impact on sea-ice production rates. Depending on the forcing fields, polynya ice production ranges from 1.4 km³ to 7.8 km³ during 1 April to 9 May 2008 and from 25.7 km³ to 66.2 km³ during 1 January to 8 February 2009. Therefore, atmospheric forcing data with high spatial and temporal resolution which account for the presence of the polynyas are needed to reduce the uncertainty in quantifying ice production in polynyas.
Abstract:
Trust is one of the most important factors influencing the successful application of network service environments, such as e-commerce, wireless sensor networks, and online social networks. Computation models of trust and reputation have received special attention in both the computer science and service science communities in recent years. In this paper, a dynamical computation model of reputation for B2C e-commerce is proposed. Firstly, concepts associated with trust and reputation are introduced, and a mathematical formula for trust in B2C e-commerce is given. A dynamical computation model of reputation is then proposed based on this concept of trust and the relationship between trust and reputation. Within the proposed model, typical evolution processes of reputation in B2C e-commerce are discussed. Furthermore, the iterative trust and reputation computation models are formulated via a set of difference equations based on a closed-loop feedback mechanism. Finally, a group of numerical simulation experiments is performed to illustrate the proposed model of trust and reputation. Experimental results show that the proposed model is effective in simulating the dynamical processes of trust and reputation for B2C e-commerce.
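The flavour of such a difference-equation formulation can be sketched as follows; the update rule and coefficients below are illustrative assumptions, not the paper's exact model.

```python
# A hedged sketch: reputation evolving as a difference equation driven by
# per-transaction trust ratings fed back in a closed loop.
import numpy as np

def simulate_reputation(transactions, alpha=0.2, r0=0.5):
    """r_{t+1} = (1 - alpha) * r_t + alpha * trust_t, where trust_t is
    the buyer-rated outcome of transaction t in [0, 1] (illustrative rule)."""
    r = r0
    history = [r]
    for trust_t in transactions:
        r = (1.0 - alpha) * r + alpha * trust_t
        history.append(r)
    return np.array(history)

# A seller behaving well, then cheating, then recovering:
outcomes = [1.0] * 20 + [0.0] * 5 + [1.0] * 20
rep = simulate_reputation(outcomes)
print(f"peak={rep[:21].max():.2f}, after cheating={rep[25]:.2f}, "
      f"final={rep[-1]:.2f}")
```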
Abstract:
Purpose – Recognizing the heterogeneity of services, this paper aims to clarify the characteristics of the forward and corresponding reverse supply chains of different services. Design/methodology/approach – The paper develops a two-dimensional typology matrix representing four main clusters of services according to the degree of input standardization and the degree of output tangibility. Based on this matrix, the paper develops a typology and parsimonious conceptual models illustrating the characteristics of the forward and corresponding reverse supply chains of each cluster of services. Findings – The four main clusters of service supply chains have different characteristics. This provides the basis for the identification, presentation and explanation of the different characteristics of their corresponding reverse service supply chains. Research limitations/implications – The findings of this research can help future researchers to analyse, map and model forward and reverse service supply chains, and to identify potential research gaps in the area. Practical implications – The findings can help managers of service firms to gain better visibility of their forward and reverse supply chains, and to refine their business models to help extend their reverse/closed-loop activities. Furthermore, the findings can help managers to better optimize their service operations, reduce service gaps and potentially secure new value-adding opportunities. Originality/value – This paper is the first, to the authors' knowledge, to conceptualize the basic structure of forward and reverse service supply chains while dealing with the high level of heterogeneity of services.
Abstract:
Low birth weight has been associated with increased obesity in adulthood. It has been shown that dietary salt restriction during intrauterine life induces low birth weight and insulin resistance in adult Wistar rats. The present study had a two-fold objective: to evaluate the effects that low salt intake during pregnancy and lactation has on the amount and distribution of adipose tissue; and to determine whether the phenotypic changes in fat mass in this model are associated with alterations in the activity of the renin-angiotensin system. Maternal salt restriction was found to reduce birth weight in male and female offspring. In adulthood, the female offspring of dams fed the low-salt diet presented higher adiposity indices than those seen in the offspring of dams fed a normal-salt diet. This was attributed to the fact that adipose tissue mass (retroperitoneal but not gonadal, mesenteric or inguinal) was greater in those rats than in the offspring of dams fed a normal diet. The adult offspring of dams fed the low-salt diet, compared to those dams fed a normal-salt diet, presented the following: plasma leptin levels higher in males and lower in females; plasma renin activity higher in males but not in females; and no differences in body weight, mean arterial blood pressure or serum angiotensin-converting enzyme activity. Therefore, low salt intake during pregnancy might lead to the programming of obesity in adult female offspring.
Abstract:
Security administrators face the challenge of designing, deploying and maintaining a variety of configuration files related to security systems, especially in large-scale networks. These files have heterogeneous syntaxes and follow differing semantic concepts. Nevertheless, they are interdependent, because security services have to cooperate and their configurations must be consistent with each other so that global security policies are completely and correctly enforced. To tackle this problem, our approach supports a comfortable definition of an abstract high-level security policy and provides an automated derivation of the desired configuration files. It is an extension of policy-based management and policy hierarchies, combining model-based management (MBM) with system modularization. MBM employs an object-oriented model of the managed system to obtain the details needed for automated policy refinement. The modularization into abstract subsystems (ASs) segments the system, and the model, into units which more closely encapsulate related system components and provide focused abstract views. As a result, scalability is achieved and even comprehensive IT systems can be modelled in a unified manner. The associated tool MoBaSeC (Model-Based-Service-Configuration) supports interactive graphical modelling, automated model analysis and policy refinement with the derivation of configuration files. We describe the MBM and AS approaches, outline the tool functions and exemplify their applications and the results obtained.
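The refinement step can be pictured with a small, self-contained sketch: an object-oriented model of hosts and services is traversed to derive concrete packet-filter rules from one abstract policy. The class names and the generated syntax are illustrative assumptions and do not reflect MoBaSeC's actual model or output.

```python
# A hedged sketch of model-based policy refinement to configuration entries.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    ip: str

@dataclass
class Service:
    host: Host
    port: int

@dataclass
class Policy:
    """Abstract high-level rule: which subnet may reach which service."""
    subject_net: str
    target: Service
    allow: bool

def refine_to_firewall(policies):
    """Derive concrete packet-filter rules from the abstract policies."""
    for p in policies:
        action = "ACCEPT" if p.allow else "DROP"
        yield (f"-A FORWARD -s {p.subject_net} -d {p.target.host.ip} "
               f"-p tcp --dport {p.target.port} -j {action}")

web = Service(Host("www1", "10.0.0.5"), 443)
rules = refine_to_firewall([Policy("192.168.1.0/24", web, allow=True)])
print("\n".join(rules))
```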
Abstract:
Architectures based on Coordinated Atomic action (CA action) concepts have been used to build concurrent fault-tolerant systems. This conceptual model combines concurrent exception handling with action nesting to provide a general mechanism both for enclosing interactions among system components and for coordinating forward error recovery measures. This article presents an architectural model to guide the formal specification of concurrent fault-tolerant systems. The architecture provides built-in Communicating Sequential Processes (CSP) constructs and predefined channels to coordinate the exception handling of user-defined components. Hence some safety properties concerning action scoping and concurrent exception handling can be proved using the FDR (Failures-Divergences Refinement) verification tool. As a result, a formal and general architecture supporting software fault tolerance is ready to be used and proved as users define components with normal and exceptional behaviors.
Abstract:
The p-median problem is often used to locate p service centers by minimizing their distances to a geographically distributed demand (n). The optimal locations are sensitive to geographical context, such as the road network and demand points, especially when these are asymmetrically distributed in the plane. Most studies focus on evaluating the performance of the p-median model when p and n vary. To our knowledge, how the solutions behave when the road network is altered is not a well-studied problem, especially in a real-world context. The aim of this study is to analyze how the optimal location solutions of the p-median model vary when the density of the road network is altered. The investigation is conducted by means of a case study in Dalecarlia, a region in Sweden with an asymmetrically distributed population (15,000 weighted demand points). To locate 5 to 50 service centers we use the national transport administration's official road network (NVDB), which consists of 1.5 million nodes. To find the optimal locations we start with 500 candidate nodes in the network and increase the number of candidate nodes in steps up to 67,000. We use a simulated annealing algorithm with adaptive tuning of the temperature. The results show that there is limited improvement in the optimal solutions when the number of nodes in the road network increases and p is low. When p is high the improvements are larger. The results also show that the choice of the best network depends on p: the larger p is, the denser the network needs to be.
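For readers unfamiliar with the heuristic, the following sketch shows simulated annealing applied to a toy p-median instance; it substitutes Euclidean distances and a simple geometric cooling schedule for the paper's road-network distance matrix and adaptive temperature tuning.

```python
# A hedged sketch: simulated annealing for a toy p-median problem.
import math
import random

random.seed(1)
demand = [(random.random(), random.random()) for _ in range(500)]
candidates = [(random.random(), random.random()) for _ in range(100)]
p = 5

def cost(centers):
    # Sum over demand points of the distance to the nearest open center.
    return sum(min(math.dist(d, candidates[c]) for c in centers)
               for d in demand)

current = random.sample(range(len(candidates)), p)
current_cost = cost(current)
best, best_cost = list(current), current_cost
T = 1.0
while T > 1e-3:
    # Swap move: replace one open center with a closed candidate.
    nxt = list(current)
    nxt[random.randrange(p)] = random.choice(
        [i for i in range(len(candidates)) if i not in current])
    nxt_cost = cost(nxt)
    delta = nxt_cost - current_cost
    if delta < 0 or random.random() < math.exp(-delta / T):
        current, current_cost = nxt, nxt_cost
        if current_cost < best_cost:
            best, best_cost = list(current), current_cost
    T *= 0.995  # geometric cooling in place of the paper's adaptive tuning
print(f"best objective: {best_cost:.2f}")
```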
Abstract:
A major problem in e-service development is the prioritization of the requirements of different stakeholders. The main stakeholders are governments and their citizens, all of whom have different and sometimes conflicting requirements. In this paper, the prioritization problem is addressed by combining a value-based approach with an illustration technique. The paper examines the following research question: how can multiple stakeholder requirements be illustrated from a value-based perspective in order to be prioritizable? We used an e-service development case taken from a Swedish municipality to elaborate on our approach. Our contributions are: 1) a model of the domains relevant to requirement prioritization – government, citizens, technology, finances, and laws and regulations; and 2) a requirement fulfillment analysis (RFA) tool that consists of a requirement-goal-value matrix (RGV) and a calculation and illustration module (CIM). The model reduces cognitive load, helps developers to focus on value fulfillment in e-service development and supports them in the formulation of requirements. It also offers an input to public policy makers, should they aim to target values in the design of e-services.
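The CIM's core calculation can be pictured as a chain of weighted matrices, as in the following hedged sketch; all names and weights are invented for illustration and are not taken from the municipality case.

```python
# A hedged sketch of a requirement-goal-value matrix (RGV) calculation:
# requirement fulfillment scores are propagated through goal weights to
# per-stakeholder value scores.
import numpy as np

requirements = ["single sign-on", "mobile access", "audit logging"]
goals = ["usability", "transparency"]
values = ["citizen benefit", "government efficiency"]

# How strongly each requirement contributes to each goal (rows x cols).
R_to_G = np.array([[0.8, 0.2],
                   [0.9, 0.1],
                   [0.1, 0.9]])
# How strongly each goal supports each value.
G_to_V = np.array([[0.7, 0.3],
                   [0.4, 0.6]])

# Degree to which each requirement is fulfilled in the current design.
fulfillment = np.array([1.0, 0.5, 0.0])

value_scores = fulfillment @ R_to_G @ G_to_V
for v, s in zip(values, value_scores):
    print(f"{v}: {s:.2f}")
```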
Abstract:
Background: In Chile, mothers and newborns are separated after caesarean sections. The caesarean section rate in Chile is approximately 40%. Once separated, newborns miss out on the benefits of early contact unless a suitable model of early newborn contact after caesarean section is initiated. Aim: To describe mothers' experiences and perceptions of a continuous parental model of newborn care after caesarean section during mother-infant separation. Methods: A questionnaire with 4 open-ended questions was used to gather data on the experiences and perceptions of 95 mothers in the obstetric service of Sótero Del Rio Hospital in Chile between 2009 and 2012. Data were analyzed using qualitative content analysis. Results: One theme, family-friendly practice after caesarean section, and four categories emerged. Mothers described the benefits of this model of caring. The father's presence was important to mother and baby. Mothers were reassured that the baby was not left alone with staff. It was important for the mothers to see that the father could love the baby as much as the mother. This model of care helped create ties between the father and newborn during the period of mother-infant separation, and later with the mother. Conclusions: Family-friendly practice after caesarean section was an important health care intervention for the whole family. This model could be implemented in the Chilean context in the case of complicated births and all caesarean sections. Clinical Implications: In the Chilean context, there is the potential to increase the number of parents who get to hold their baby immediately after birth and for as long as they like. When the mother and infant are separated after birth, parents can be informed about the benefits of this caring model. Further research using randomized controlled trials may support biological advantages.