108 results for grid code
Abstract:
Environmental policy in the United Kingdom (UK) is witnessing a shift from command-and-control approaches towards more innovation-orientated environmental governance arrangements. Such governance approaches require institutions that support actors within a domain in learning not only about policy options, but also about their own interests and preferences. The need for construction actors to understand, engage with and influence this process is critical to establishing policies which support innovation that satisfies each constituent’s needs. This capacity is particularly salient in an era where the expanding raft of environmental regulation is ushering in system-wide innovation in the construction sector. In this paper, the Code for Sustainable Homes (the Code) in the UK is used to demonstrate the emergence and operation of these new governance arrangements. The Code sets out a significant innovation challenge for the house-building sector, including, for example, a requirement that all new houses must be zero-carbon by 2016. Drawing upon boundary organisation theory, the journey from the Code as a government aspiration to the Code as a catalyst for the formation of the Zero Carbon Hub (ZCH), a new institution, is traced and discussed. The case study reveals that the ZCH has demonstrated boundary organisation properties in its ability to be flexible to the needs and constraints of its constituent actors, yet robust enough to maintain and promote a common identity across regulation and industry boundaries.
Abstract:
The arbitrarily structured C-grid, TRiSK (Thuburn, Ringler, Skamarock and Klemp, 2009, 2010), is being used in the "Model for Prediction Across Scales" (MPAS) and is being considered by the UK Met Office for their next dynamical core. However, the hexagonal C-grid supports a branch of spurious Rossby modes which lead to erroneous grid-scale oscillations of potential vorticity (PV). It is shown how these modes can be harmlessly controlled by using upwind-biased interpolation schemes for PV. A number of existing advection schemes for PV are tested, including that used in MPAS, and none are found to give adequate results for all grids and all cases. Therefore a new scheme is proposed: continuous, linear-upwind stabilised transport (CLUST), a blend between centred and linear-upwind interpolation, with the blend depending on the flow direction with respect to the cell edge. A diagnostic of grid-scale oscillations is proposed which discriminates between schemes more finely than potential enstrophy alone; indeed, some schemes are found to destroy potential enstrophy while grid-scale oscillations grow. CLUST performs well on hexagonal-icosahedral grids and unrotated skipped latitude-longitude grids of the sphere for various shallow water test cases. Despite the computational modes, the hexagonal-icosahedral grid performs well since these modes are easily and harmlessly filtered. As a result, TRiSK appears to perform better than a spectral shallow water model.
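To make the idea of a flow-direction-dependent blend of centred and linear-upwind interpolation concrete, here is a minimal one-dimensional sketch in the spirit of CLUST. It is not the scheme as published: the uniform periodic grid, the fixed blend weight, and the fact that flow direction only selects the upwind stencil (rather than setting the blend itself) are simplifying assumptions for illustration.

```python
import numpy as np

def blended_face_values(q, u_face, blend=0.25):
    """Interpolate cell-centre values q to faces on a periodic 1D grid.

    Blends a centred average with a linear-upwind (second-order upwind)
    extrapolation; the sign of the face velocity picks the upwind side.
    blend = 0 gives pure centred, blend = 1 gives pure linear-upwind.
    """
    n = len(q)
    q_face = np.empty(n)
    for i in range(n):                      # face i sits between cells i-1 and i
        qm2, qm1, qp0 = q[i - 2], q[i - 1], q[i]
        qp1 = q[(i + 1) % n]
        centred = 0.5 * (qm1 + qp0)
        if u_face[i] >= 0.0:                # flow from cell i-1 towards cell i
            upwind = 1.5 * qm1 - 0.5 * qm2
        else:                               # flow from cell i towards cell i-1
            upwind = 1.5 * qp0 - 0.5 * qp1
        q_face[i] = (1.0 - blend) * centred + blend * upwind
    return q_face

# Example: a smooth tracer advected by a uniform positive velocity.
q = np.sin(np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False))
u = np.ones(32)
print(blended_face_values(q, u)[:4])
```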
Abstract:
This is one of the first papers in which arguments are given for treating code-switching and borrowing as similar phenomena. It is argued that distinguishing the two phenomena is theoretically undesirable and empirically very problematic. A probabilistic account of code-switching and a hierarchy of switched constituents (similar to hierarchies of borrowability) are proposed, which account for the fact that some constituents are more likely to be borrowed/switched than others. It is argued that the same kinds of constraints apply to both code-switching and borrowing.
Abstract:
With the increasing awareness of protein folding disorders, the explosion of genomic information, and the need for efficient ways to predict protein structure, protein folding and unfolding has become a central issue in molecular sciences research. Molecular dynamics computer simulations are increasingly employed to understand the folding and unfolding of proteins. Running protein unfolding simulations is computationally expensive, and finding ways to enhance performance is a grid issue in its own right. However, more and more groups run such simulations and generate a myriad of data, which raises new challenges in managing and analyzing these data. Because of the vast range of proteins researchers want to study and simulate, the computational effort needed to generate data, the large data volumes involved, and the different types of analyses scientists need to perform, it is desirable to provide a public repository allowing researchers to pool and share protein unfolding data. This paper describes efforts to provide a grid-enabled data warehouse for protein unfolding data. We outline the challenges and present first results on the design and implementation of the data warehouse.
Abstract:
The P-found protein folding and unfolding simulation repository is designed to allow scientists to perform analyses across large, distributed simulation data sets. There are two storage components in P-found: a primary repository of simulation data and a data warehouse. Here we demonstrate how grid technologies can support multiple, distributed P-found installations. In particular, we look at two aspects: first, how grid data management technologies can be used to access the distributed data warehouses; and second, how the grid can be used to transfer analysis programs to the primary repositories. This is an important and challenging aspect of P-found because the data volumes involved are too large to be centralised. The grid technologies we are developing with the P-found system will allow new large data sets of protein folding simulations to be accessed and analysed in novel ways, with significant potential for enabling new scientific discoveries.
Abstract:
The P-found protein folding and unfolding simulation repository is designed to allow scientists to perform data mining and other analyses across large, distributed simulation data sets. There are two storage components in P-found: a primary repository of simulation data that is used to populate the second component, and a data warehouse that contains important molecular properties. These properties may be used for data mining studies. Here we demonstrate how grid technologies can support multiple, distributed P-found installations. In particular, we look at two aspects: firstly, how grid data management technologies can be used to access the distributed data warehouses; and secondly, how the grid can be used to transfer analysis programs to the primary repositories — this is an important and challenging aspect of P-found, due to the large data volumes involved and the desire of scientists to maintain control of their own data. The grid technologies we are developing with the P-found system will allow new large data sets of protein folding simulations to be accessed and analysed in novel ways, with significant potential for enabling scientific discovery.
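The two access patterns described above, querying distributed data warehouses through a single interface and shipping the analysis code to the primary repository rather than centralising the bulky trajectory data, can be sketched roughly as follows. All class, method, and field names here are invented for illustration and do not correspond to the actual P-found or grid middleware APIs.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SimulationRecord:
    protein: str
    temperature_K: float
    rmsd_nm: float          # example molecular property held in a warehouse

class WarehouseSite:
    """One distributed data-warehouse installation (hypothetical interface)."""
    def __init__(self, name: str, records: List[SimulationRecord]):
        self.name = name
        self._records = records

    def query(self, predicate: Callable[[SimulationRecord], bool]) -> List[SimulationRecord]:
        return [r for r in self._records if predicate(r)]

class PrimaryRepository:
    """Holds the bulky trajectory data; analyses travel to it, not the reverse."""
    def __init__(self, name: str, trajectories: Dict[str, list]):
        self.name = name
        self._trajectories = trajectories

    def run_analysis(self, analysis: Callable[[list], float], protein: str) -> float:
        # The analysis code is executed next to the data; only the small result returns.
        return analysis(self._trajectories[protein])

# Federated query across two warehouse installations.
sites = [
    WarehouseSite("site-A", [SimulationRecord("villin", 360.0, 0.42)]),
    WarehouseSite("site-B", [SimulationRecord("villin", 400.0, 0.61)]),
]
hot_unfolding = [r for s in sites for r in s.query(lambda r: r.temperature_K > 380)]

# Shipping a simple analysis (mean of a per-frame property) to the repository.
repo = PrimaryRepository("repo-A", {"villin": [0.3, 0.5, 0.7]})
print(hot_unfolding, repo.run_analysis(lambda frames: sum(frames) / len(frames), "villin"))
```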
Abstract:
Although the tube theory is successful in describing entangled polymers qualitatively, a more quantitative description requires precise and consistent definitions of its parameters. Here we investigate the simplest model of entangled polymers, namely a single Rouse chain in a cubic lattice of line obstacles, and illustrate the typical problems and uncertainties of the tube theory. In particular, we show that in general one needs three entanglement-related parameters, but only two combinations of them are relevant for the long-time dynamics. Conversely, the plateau modulus cannot be determined from these two parameters and requires a more detailed model of entanglements with explicit entanglement forces, such as the slip-spring model. It is shown that for the grid model the Rouse time within the tube is larger than the Rouse time of the free chain, in contrast to what the standard tube theory assumes.
Abstract:
Stakeholder analysis plays a critical role in business analysis. However, the majority of stakeholder identification and analysis methods focus on activities and processes and ignore the artefacts being processed by human beings. By focusing on the outputs of the organisation, an artefact-centric view helps create a network of artefacts and a component-based structure of the organisation and its supply chain participants. Since the relationships are based on these components, the interdependency between stakeholders and the focal organisation can be measured once the stakeholders are identified. Each stakeholder is associated with two types of dependency, namely the stakeholder’s dependency on the focal organisation and the focal organisation’s dependency on the stakeholder. We identify three factors for each type of dependency and propose equations that calculate the dependency indexes. Once both dependency indexes are calculated, each stakeholder can be placed and categorised into one of four groups, namely critical stakeholder, mutual benefits stakeholder, replaceable stakeholder, and easy care stakeholder. The mutual dependency grid and the dependency gap analysis, which further investigates the priority of each stakeholder by calculating the weighted dependency gap between the focal organisation and the stakeholder, subsequently help the focal organisation to better understand its stakeholders and manage its stakeholder relationships.
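The abstract does not reproduce the dependency equations, but the overall procedure (aggregate three factors per dependency direction, place each stakeholder in one of four quadrants of the mutual dependency grid, then compute a weighted dependency gap) can be sketched as below. The simple mean as the aggregation, the 0 to 1 factor scale, the 0.5 quadrant threshold, and the quadrant labelling are assumptions for illustration, not the equations proposed in the paper.

```python
from statistics import mean

def dependency_index(factors):
    """Aggregate three factor scores (assumed to lie in [0, 1]) into one index.
    The paper proposes specific equations; a simple mean stands in here."""
    assert len(factors) == 3
    return mean(factors)

def categorise(stakeholder_dep, organisation_dep, threshold=0.5):
    """Place a stakeholder in one of the four groups of the mutual dependency grid."""
    if organisation_dep >= threshold and stakeholder_dep >= threshold:
        return "critical stakeholder"
    if organisation_dep >= threshold:
        return "mutual benefits stakeholder"   # assumed quadrant mapping
    if stakeholder_dep >= threshold:
        return "easy care stakeholder"         # assumed quadrant mapping
    return "replaceable stakeholder"

def weighted_dependency_gap(stakeholder_dep, organisation_dep, weight=1.0):
    """Dependency gap analysis: weighted difference between the two indexes."""
    return weight * (organisation_dep - stakeholder_dep)

s_dep = dependency_index([0.8, 0.6, 0.7])   # stakeholder's dependency on the organisation
o_dep = dependency_index([0.9, 0.8, 0.6])   # organisation's dependency on the stakeholder
print(categorise(s_dep, o_dep), round(weighted_dependency_gap(s_dep, o_dep), 2))
```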
Abstract:
We consider the numerical treatment of second kind integral equations on the real line of the form $\phi(s) = \psi(s) + \int_{-\infty}^{+\infty} \kappa(s-t)\, z(t)\, \phi(t)\, dt$, $s \in \mathbb{R}$ (abbreviated $\phi = \psi + K_z \phi$), in which $\kappa \in L_1(\mathbb{R})$, $z \in L_\infty(\mathbb{R})$ and $\psi \in BC(\mathbb{R})$, the space of bounded continuous functions on $\mathbb{R}$, are assumed known and $\phi \in BC(\mathbb{R})$ is to be determined. We first derive sharp error estimates for the finite section approximation (reducing the range of integration to $[-A, A]$) via bounds on $(I - K_z)^{-1}$ as an operator on spaces of weighted continuous functions. Numerical solution by a simple discrete collocation method on a uniform grid on $\mathbb{R}$ is then analysed: in the case when $z$ is compactly supported this leads to a coefficient matrix which allows a rapid matrix-vector multiply via the FFT. To utilise this possibility we propose a modified two-grid iteration, a feature of which is that the coarse grid matrix is approximated by a banded matrix, and analyse convergence and computational cost. In cases where $z$ is not compactly supported, a combined finite section and two-grid algorithm can be applied, and we extend the analysis to this case. As an application we consider acoustic scattering in the half-plane with a Robin or impedance boundary condition, which we formulate as a boundary integral equation of the class studied. Our final result is that if $z$ (related to the boundary impedance in the application) takes values in an appropriate compact subset $Q$ of the complex plane, then the difference between $\phi(s)$ and its finite section approximation computed numerically using the iterative scheme proposed is $\le C_1 \left[\, kh \log(1/(kh)) + (1-\Theta)^{-1/2} (kA)^{-1/2} \,\right]$ in the interval $[-\Theta A, \Theta A]$ ($\Theta < 1$) for $kh$ sufficiently small, where $k$ is the wavenumber and $h$ the grid spacing. Moreover, this numerical approximation can be computed in $\le C_2 N \log N$ operations, where $N = 2A/h$ is the number of degrees of freedom. The values of the constants $C_1$ and $C_2$ depend only on the set $Q$ and not on the wavenumber $k$ or the support of $z$.
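The rapid matrix-vector multiply mentioned above relies on the convolution structure of the kernel $\kappa(s-t)$ on a uniform grid: the collocation matrix is Toeplitz, so its action can be applied in $O(N \log N)$ operations by embedding it in a circulant matrix and using the FFT. The following is a minimal sketch of that standard trick; the exponential kernel and the grid are arbitrary illustrative choices, not the kernel from the paper.

```python
import numpy as np

def toeplitz_matvec_fft(kernel_column, kernel_row, x):
    """Apply a Toeplitz matrix (given by its first column and first row) to x
    in O(N log N) by embedding it in a circulant matrix and using the FFT."""
    n = len(x)
    # First column of the 2n x 2n circulant embedding (the extra entry is arbitrary).
    c = np.concatenate([kernel_column, [0.0], kernel_row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Collocation on a uniform grid for a kernel of the form kappa(s - t).
h = 0.1
s = np.arange(0.0, 20.0, h)                 # illustrative grid
kappa = lambda d: np.exp(-np.abs(d))        # illustrative kernel, not from the paper
col = h * kappa(s - s[0])                   # entries kappa(s_i - t_0)
row = h * kappa(s[0] - s)                   # entries kappa(s_0 - t_j)
phi = np.random.default_rng(0).standard_normal(len(s))

fast = toeplitz_matvec_fft(col, row, phi)
dense = (h * kappa(s[:, None] - s[None, :])) @ phi   # direct O(N^2) check
print(np.max(np.abs(fast - dense)))
```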
Abstract:
We outline our first steps towards marrying two new and emerging technologies: the Virtual Observatory (e.g., AstroGrid) and the computational grid. We discuss the construction of VOTechBroker, a modular software tool designed to abstract the tasks of submitting and managing a large number of computational jobs on a distributed computer system. The broker will also interact with the AstroGrid workflow and MySpace environments. We present our planned usage of the VOTechBroker in computing a huge number of n-point correlation functions from the SDSS, as well as fitting over a million CMBfast models to the WMAP data.
Abstract:
In any wide-area distributed system there is a need to communicate and interact with a range of networked devices and services, from computer-based resources (CPU, memory and disk), to network components (hubs, routers, gateways), to specialised data sources (embedded devices, sensors, data-feeds). In order for the ensemble of underlying technologies to provide an environment suitable for virtual organisations to flourish, the resources that comprise the fabric of the Grid must be monitored in a seamless manner that abstracts away from the underlying complexity. Furthermore, as various competing Grid middleware offerings are released and evolve, an independent overarching monitoring service should act as a cornerstone that ties these systems together. GridRM is a standards-based approach that is independent of any given middleware and that can utilise legacy and emerging resource-monitoring technologies. The main objective of the project is to produce a standardised and extensible architecture that provides seamless mechanisms to interact with native monitoring agents across heterogeneous resources.
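The idea of a single monitoring layer driving heterogeneous native agents can be illustrated with a small adapter-style sketch. The interface, the two example drivers, and the returned metrics are hypothetical and are not the actual GridRM API; real drivers would wrap native agents rather than return stub values.

```python
from abc import ABC, abstractmethod
from typing import Dict

class ResourceDriver(ABC):
    """Uniform interface over one native monitoring agent (hypothetical)."""
    @abstractmethod
    def read_metrics(self) -> Dict[str, float]:
        ...

class HostAgentDriver(ResourceDriver):
    """Would normally wrap a host-level agent (e.g. /proc- or SNMP-based); stubbed here."""
    def read_metrics(self) -> Dict[str, float]:
        return {"cpu_load": 0.42, "free_disk_gb": 118.0}

class RouterAgentDriver(ResourceDriver):
    """Would normally wrap a network device's native agent; stubbed here."""
    def read_metrics(self) -> Dict[str, float]:
        return {"throughput_mbps": 940.0, "packet_loss": 0.001}

class MonitoringGateway:
    """Presents all registered resources through one seamless query interface."""
    def __init__(self):
        self._drivers: Dict[str, ResourceDriver] = {}

    def register(self, resource_id: str, driver: ResourceDriver) -> None:
        self._drivers[resource_id] = driver

    def poll_all(self) -> Dict[str, Dict[str, float]]:
        return {rid: d.read_metrics() for rid, d in self._drivers.items()}

gateway = MonitoringGateway()
gateway.register("compute-node-01", HostAgentDriver())
gateway.register("edge-router", RouterAgentDriver())
print(gateway.poll_all())
```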
Abstract:
Monitoring resources is an important aspect of the overall efficient usage and control of any distributed system. In this paper, we describe a generic open-source resource monitoring architecture that has been specifically designed for the Grid. The paper consists of three main sections. In the first section, we outline our motivation and briefly detail similar work in the area. In the second section, we describe the general monitoring architecture and its components. In the final section of the paper, we summarise our experiences so far and outline our future work.