932 results for computation- and data-intensive applications


Relevance:

100.00%

Publisher:

Abstract:

In this work we have made significant contributions in three areas of interest: therapeutic protein stabilization, the thermodynamics of natural gas clathrate hydrates, and zeolite catalysis. In all three fields, our computational techniques allowed us to elucidate phenomena that are difficult or impossible to explain experimentally. More specifically, for proteins in mixed-solvent systems we developed a statistical-mechanical method to model the thermodynamic effects of additives in molecular-level detail. It was the first method demonstrated to have truly predictive capability (no adjustable parameters) for real protein systems. We also describe a novel mechanism that slows protein association reactions, called the "gap effect." We developed a comprehensive picture of methionine oxidation by hydrogen peroxide that allows accurate prediction of protein oxidation and provides a rationale for developing strategies to control it. The solvent-accessible-area (SAA) method was shown not to correlate well with oxidation rates. A new property, the averaged two-shell water coordination number (2SWCN), was identified and shown to correlate well with oxidation rates. Reference parameters for the van der Waals-Platteeuw model of clathrate hydrates were found for structure I and structure II. These reference parameters are independent of the potential form (unlike the commonly used parameters) and have been validated by calculating phase behavior and structural transitions for mixed hydrate systems. The calculations are validated against experimental data for both structures and for systems that undergo transitions from one structure to another. This is the first method of calculating hydrate thermodynamics to demonstrate predictive capability for phase equilibria, structural changes, and occupancy in pure and mixed hydrate systems. Finally, we have computed a new mechanism for the methanol coupling reaction to form ethanol and water in the zeolite chabazite. The mechanism at 400°C proceeds via stable intermediates of water, methane, and protonated formaldehyde.
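As a concrete illustration of the hydrate thermodynamics involved, the sketch below evaluates the standard van der Waals-Platteeuw expressions for cage occupancy and the chemical-potential difference of the filled lattice. The Langmuir constants, fugacity and temperature are illustrative placeholders, not the reference parameters found in this work:

```python
import math

R = 8.314  # J/(mol K)

def cage_occupancy(langmuir, fugacity):
    """Fractional occupancy theta_ij = C_ij f_j / (1 + sum_k C_ik f_k)
    for one cage type; langmuir and fugacity map guest -> value."""
    denom = 1.0 + sum(langmuir[g] * fugacity[g] for g in langmuir)
    return {g: langmuir[g] * fugacity[g] / denom for g in langmuir}

def delta_mu(cages, fugacity, T):
    """Chemical-potential difference between the empty lattice and the
    filled hydrate: -RT * sum_i nu_i * ln(1 - sum_j theta_ij)."""
    total = 0.0
    for nu, langmuir in cages:  # nu = cages of this type per water molecule
        theta = cage_occupancy(langmuir, fugacity)
        total += nu * math.log(1.0 - sum(theta.values()))
    return -R * T * total

# Structure I: 2 small + 6 large cages per 46 waters (nu = 2/46, 6/46).
# Langmuir constants in 1/Pa are illustrative, not fitted values.
cages = [(2 / 46, {"CH4": 7.0e-7}), (6 / 46, {"CH4": 3.0e-6})]
print(delta_mu(cages, {"CH4": 5.0e6}, T=275.0))  # fugacity ~5 MPa
```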

Relevance:

100.00%

Publisher:

Abstract:

The general expansion of operators as a linear combination of projectors is defined, and its generalized application to the calculation of molecular integrals is presented. As a numerical example, it is applied to the calculation of electron-repulsion integrals between four s-type functions centred at different points; both the results of the calculation and the definition of scaling with respect to a reference value are shown, which will facilitate the optimization of the expansion for arbitrary parameters. Results fitted to the exact value are given.
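For context, the exact four-centre integral that serves as the reference value has a closed form for s-type Gaussians (the (ss|ss) case, via the Boys function F0). The sketch below evaluates it; the exponents and centres are arbitrary illustrative choices, and this is not the projector-expansion method itself:

```python
import math

def boys_f0(t):
    """Boys function F0(t) = (1/2) sqrt(pi/t) erf(sqrt(t)); F0(0) = 1."""
    if t < 1e-12:
        return 1.0 - t / 3.0  # series limit near t = 0
    return 0.5 * math.sqrt(math.pi / t) * math.erf(math.sqrt(t))

def eri_ssss(a, A, b, B, c, C, d, D):
    """(ab|cd) for four unnormalised s-type Gaussians with exponents
    a..d at centres A..D (3-vectors, bohr)."""
    p, q = a + b, c + d
    P = [(a * A[i] + b * B[i]) / p for i in range(3)]
    Q = [(c * C[i] + d * D[i]) / q for i in range(3)]
    AB2 = sum((A[i] - B[i]) ** 2 for i in range(3))
    CD2 = sum((C[i] - D[i]) ** 2 for i in range(3))
    PQ2 = sum((P[i] - Q[i]) ** 2 for i in range(3))
    pre = 2 * math.pi ** 2.5 / (p * q * math.sqrt(p + q))
    return (pre * math.exp(-a * b / p * AB2)
                * math.exp(-c * d / q * CD2)
                * boys_f0(p * q / (p + q) * PQ2))

# Four s functions on different centres, illustrative exponents:
print(eri_ssss(1.0, (0, 0, 0), 0.8, (0, 0, 1.4),
               1.2, (1.0, 0, 0), 0.5, (1.0, 0, 1.4)))
```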

Relevance:

100.00%

Publisher:

Abstract:

Mainframes and corporate and central servers are becoming information servers. The requirement for more powerful information servers is the best opportunity to exploit the potential of parallelism. ICL recognized the opportunity of the 'knowledge spectrum', namely the conversion of raw data into information and then into high-grade knowledge. Its response to this, and to the underlying search problems, was to introduce the CAFS retrieval engine. The CAFS product demonstrates that it is possible to move functionality within an established architecture, introduce a different technology mix and exploit parallelism to achieve radically new levels of performance. CAFS also demonstrates the benefit of achieving this transparently, behind existing interfaces. ICL is now working with Bull and Siemens to develop the information servers of the future by exploiting new technologies as they become available. The objective of the joint Esprit II European Declarative System project is to develop a smoothly scalable, highly parallel computer system, EDS. EDS will in the main be an SQL server and an information server. It will support the many data-intensive applications which the companies foresee; it will also support application-intensive and logic-intensive systems.

Relevance:

100.00%

Publisher:

Abstract:

Details of the parameters of kinetic systems are crucial for progress in both medical and industrial research, including drug development, clinical diagnosis and biotechnology applications. Such details must be collected through a series of kinetic experiments and investigations. Correct experimental design is essential for collecting data suitable for analysis, modelling and deriving the correct information. We have developed a systematic and iterative Bayesian method, and sets of rules, for the design of enzyme kinetic experiments. Our method selects the optimum design for collecting data suitable for accurate modelling and analysis, and minimises the error in the estimated parameters. The rules select features of the design such as the substrate range and the number of measurements. We show here that this method can be applied directly to the study of other important kinetic systems, including drug transport, receptor binding, microbial culture and cell transport kinetics. It is possible to reduce the errors in the estimated parameters and, most importantly, to increase efficiency and cost-effectiveness by reducing the number of experiments and data points required. (C) 2003 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
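A minimal sketch of this kind of design logic, using classical D-optimality for the Michaelis-Menten model rather than the paper's full iterative Bayesian method: candidate substrate concentrations are scored by the determinant of the Fisher information for (Vmax, Km). The prior parameter guesses and substrate range are assumptions:

```python
import numpy as np

Vmax, Km = 10.0, 2.0                       # prior point estimates
candidates = np.linspace(0.1, 20.0, 200)   # feasible substrate range

def sensitivities(S):
    """Partial derivatives of v = Vmax*S/(Km+S) w.r.t. Vmax and Km."""
    return np.array([S / (Km + S), -Vmax * S / (Km + S) ** 2])

def log_det_information(design):
    X = np.array([sensitivities(S) for S in design])
    sign, logdet = np.linalg.slogdet(X.T @ X)
    return logdet if sign > 0 else -np.inf

# Greedy D-optimal selection of 8 measurements (replication allowed).
# The first pick is arbitrary: one observation cannot identify two
# parameters, so every singleton design scores -inf.
design = []
for _ in range(8):
    best = max(candidates, key=lambda S: log_det_information(design + [S]))
    design.append(float(best))
print(sorted(round(S, 2) for S in design))
```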

Relevance:

100.00%

Publisher:

Abstract:

Knowledge elicitation is a common technique for producing rules about the operation of a plant from the knowledge available from human expertise. Similarly, data mining is becoming a popular technique for extracting rules from the data generated by the operation of a plant. In the work reported here, knowledge was required to enable the supervisory control of an aluminium hot strip mill through the determination of mill set-points. A method was developed to fuse knowledge elicitation and data mining, incorporating the best aspects of each technique whilst avoiding their known problems. The knowledge was used through an expert system, which determined schedules of set-points and provided information to human operators. The results show that the method proposed in this paper was effective in producing rules for the on-line control of a complex industrial process. (C) 2005 Elsevier Ltd. All rights reserved.
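A minimal sketch of one possible fusion policy, in which elicited rules act as hard constraints that override mined rules where both apply; the rule contents, thresholds and precedence scheme are illustrative assumptions, not those of the paper:

```python
def elicited_rules(strip):
    # Rule captured from operator expertise: a hard constraint.
    if strip["exit_gauge_mm"] < 2.0:
        return {"speed_mps": 10.0, "reason": "elicited: thin-gauge limit"}
    return None

def mined_rules(strip):
    # Rules extracted from historical mill data (e.g., by a decision tree).
    if strip["width_mm"] > 1500:
        return {"speed_mps": 8.5, "reason": "mined: wide-strip cluster"}
    return {"speed_mps": 9.5, "reason": "mined: default cluster"}

def set_point(strip):
    """Elicited knowledge overrides mined rules where both apply."""
    return elicited_rules(strip) or mined_rules(strip)

print(set_point({"exit_gauge_mm": 1.8, "width_mm": 1600}))
print(set_point({"exit_gauge_mm": 3.0, "width_mm": 1600}))
```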

Relevance:

100.00%

Publisher:

Abstract:

Wireless Sensor Networks (WSNs) detect events using one or more sensors, then collect data about the detected events using those sensors. The data are aggregated and forwarded to a base station (sink) through wireless communication to provide the required operations. Different kinds of MAC and routing protocols need to be designed for WSNs in order to guarantee data delivery from the source nodes to the sink. This paper discusses some of the proposed MAC protocols for WSNs, with their techniques, advantages and disadvantages, in terms of their suitability for real-time applications. We conclude that most of these protocols cannot be applied to real-time applications without improvement.
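A back-of-envelope sketch of one reason such protocols struggle with real-time traffic: in a duty-cycled, S-MAC-like protocol a sender may wait up to a full listen/sleep frame for the receiver's next listen window, so worst-case end-to-end latency grows with hop count. The values below are illustrative, not taken from any specific protocol in the survey:

```python
listen_s = 0.1                      # listen window per frame (s)
duty_cycle = 0.10                   # fraction of time spent awake
frame_s = listen_s / duty_cycle     # full listen + sleep frame (s)
hops = 10

worst_case = hops * frame_s         # up to one frame of waiting per hop
print(f"frame = {frame_s:.1f} s, {hops}-hop worst case = {worst_case:.0f} s")
```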

Relevance:

100.00%

Publisher:

Abstract:

Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca. 2% from the estimate obtained using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes.
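A minimal sketch of two-dimensional Tikhonov regularisation applied to a downscaling problem of this kind: a fine field is recovered from coarse block means under a Laplacian smoothness penalty weighted by γ. The block-averaging operator, grid sizes and γ value are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

n, block = 32, 4          # fine grid (n x n) and coarsening factor
m = n // block            # coarse grid (m x m)

# A: block-averaging operator mapping the fine grid to the coarse grid.
rows, cols = [], []
for bi in range(m):
    for bj in range(m):
        for di in range(block):
            for dj in range(block):
                rows.append(bi * m + bj)
                cols.append((bi * block + di) * n + (bj * block + dj))
A = sparse.csr_matrix((np.full(len(rows), 1.0 / block**2), (rows, cols)),
                      shape=(m * m, n * n))

# L: 2-D Laplacian built from 1-D second-difference operators.
D = sparse.diags([1, -2, 1], [-1, 0, 1], shape=(n, n))
I = sparse.identity(n)
L = sparse.kron(I, D) + sparse.kron(D, I)

# Synthetic coarse observations from a smooth "true" field.
xx, yy = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
x_true = np.sin(2 * np.pi * xx) * np.cos(2 * np.pi * yy)
y = A @ x_true.ravel()

# gamma plays the role of the paper's Lagrange multiplier:
# minimise ||A x - y||^2 + gamma ||L x||^2 via the normal equations.
gamma = 0.1
x_hat = spsolve((A.T @ A + gamma * (L.T @ L)).tocsc(), A.T @ y)
print("RMSE vs truth:", np.sqrt(np.mean((x_hat - x_true.ravel()) ** 2)))
```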

Relevance:

100.00%

Publisher:

Abstract:

Datasets to locate and identify water bodies have been generated from data locating static water bodies at a resolution of about 300 m (1/360 deg), recently released by the Land Cover Climate Change Initiative (LC CCI) of the European Space Agency. The LC CCI water-bodies dataset was obtained from multi-temporal metrics based on time series of the backscattered intensity recorded by ASAR on Envisat between 2005 and 2010. The newly derived datasets coherently provide distance to land, distance to water, water-body identifiers and lake-centre locations. The water-body identifier dataset locates the water bodies using the identifiers of the Global Lakes and Wetlands Database (GLWD), and lake centres are defined for inland waters for which GLWD IDs were determined. The new datasets therefore link recent lake/reservoir/wetland extents to the GLWD, together with a set of coordinates which unambiguously locates the water bodies in the database. Information on the distance to land for each water cell and the distance to water for each land cell has many potential applications in remote sensing, where the applicability of geophysical retrieval algorithms may be affected by the presence of water or land within a satellite field of view (image pixel). During the generation and validation of the datasets, some limitations of the GLWD database and of the LC CCI water-bodies mask were found. Some examples of these inaccuracies and limitations are presented and discussed. Temporal change in water-body extent is common; future versions of the LC CCI dataset are planned to represent temporal variation, which will permit these derived datasets to be updated.
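A minimal sketch of how distance-to-land and distance-to-water layers, plus water-body identifiers, can be derived from a binary water mask with a Euclidean distance transform and connected-component labelling; the tiny mask and cell size below stand in for the 300 m LC CCI grid:

```python
import numpy as np
from scipy import ndimage

water = np.array([[0, 0, 0, 0, 0],
                  [0, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 0],
                  [0, 0, 0, 0, 0]], dtype=bool)

cell_km = 0.3  # approximate size of a 1/360-degree cell at the equator

# distance_transform_edt measures distance to the nearest zero cell:
dist_to_land = ndimage.distance_transform_edt(water) * cell_km    # water cells
dist_to_water = ndimage.distance_transform_edt(~water) * cell_km  # land cells

# Water-body identifiers from connected-component labelling:
labels, n_bodies = ndimage.label(water)
print(f"{n_bodies} water body(ies); "
      f"max distance to land: {dist_to_land.max():.2f} km")
```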

Relevance:

100.00%

Publisher:

Abstract:

Parallel execution is a very efficient means of processing vast amounts of data in a small amount of time. Creating parallel applications has never been easy and requires considerable knowledge of the task and of the execution environment used to run the parallel processes. The process of creating parallel applications can be made easier by using a compiler that automatically parallelises a supplied application. Executing the parallel application is also simplified when a well-designed execution environment is used, since such an environment transparently provides very powerful operations to the programmer. The aim of this research is to combine a parallelising compiler and an execution environment into a fully automated parallelisation and execution tool. The advantage of such a fully automated tool is that the user does not need to provide any additional input to gain the benefits of parallel execution. This report presents the tool and shows how it transparently supports the programmer in creating parallel applications and in executing them.
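A minimal sketch of the kind of rewrite such a tool performs: a loop whose iterations are independent becomes a parallel map, with no extra input from the programmer. The workload is illustrative:

```python
from multiprocessing import Pool

def expensive(x):
    return sum(i * i for i in range(x))  # stand-in for real per-item work

data = [200_000] * 8

if __name__ == "__main__":
    # What the programmer wrote (sequential):
    sequential = [expensive(x) for x in data]

    # What the tool effectively generates (parallel), valid because no
    # iteration depends on the result of another:
    with Pool() as pool:
        parallel = pool.map(expensive, data)

    assert parallel == sequential
```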

Relevance:

100.00%

Publisher:

Abstract:

Current scientific applications are often structured as workflows and rely on workflow systems to compile abstract experiment designs into enactable workflows that utilise the best available resources. The automation of this step, and of the workflow enactment, hides the details of how results have been produced. Knowing how compilation and enactment occurred allows results to be reconnected with the experiment design. We investigate how provenance helps scientists to connect their results with the actual execution that took place, their original experiment, and its inputs and parameters.
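A minimal sketch of the provenance idea: recording, at enactment time, enough detail (step, inputs, parameters, timing, output) to reconnect each result with its experiment design. The record fields and API are assumptions, not those of any particular workflow system:

```python
import json, time, uuid

def enact_step(step_name, func, inputs, parameters, log):
    """Run one workflow step and append a provenance record to log."""
    record = {
        "id": str(uuid.uuid4()),
        "step": step_name,
        "inputs": inputs,
        "parameters": parameters,
        "started": time.time(),
    }
    result = func(*inputs, **parameters)
    record["finished"] = time.time()
    record["output"] = result
    log.append(record)
    return result

log = []
total = enact_step("normalise", lambda xs, scale: [x * scale for x in xs],
                   inputs=([1, 2, 3],), parameters={"scale": 0.5}, log=log)
print(json.dumps(log, indent=2))
```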

Relevance:

100.00%

Publisher:

Abstract:

Objective: To highlight the importance of sampling and data collection processes in qualitative interview studies, and to discuss the contribution of these processes to determining the strength of the evidence generated and thereby to decisions for public health practice and policy.

Approach:
This discussion is informed by a hierarchy-of-evidence-for-practice model. The paper provides succinct guidelines for key sampling and data collection considerations in qualitative research involving interview studies. The importance of allowing time for immersion in a given community, to become familiar with the context and population, is discussed, as well as the practical constraints that sometimes operate against this stage. The role of theory in guiding sample selection is discussed, both in terms of identifying likely sources of rich data and in understanding the issues emerging from the data. It is noted that sampling further assists in confirming the developing evidence and also illuminates data that do not seem to fit. The importance of clearly reporting sampling and data collection processes is highlighted, to enable others to assess both the strength of the evidence and the broader applications of the findings.

Conclusion:
Sampling and data collection processes are critical to determining the quality of a study and the generalisability of the findings. We argue that these processes should operate within the parameters of the research goal, be guided by emerging theoretical considerations, cover a range of relevant participant perspectives, and be clearly outlined in research reports with an explanation of any research limitations.

Relevance:

100.00%

Publisher:

Abstract:

The peer-to-peer content distribution network (PCDN) has recently attracted considerable attention, and it has huge potential for massive data-intensive applications on the Internet. One of the challenges in PCDNs is routing to data sources and data delivery. In this paper, we study a type of network model formed by dynamic autonomy areas, structured source servers and proxy servers. Based on this network model, we propose a number of algorithms to address routing and data delivery. To cope with the highly dynamic nature of the autonomy areas, we developed dynamic tree-structure proliferation routing, proxy routing and resource-searching algorithms. Simulation results show that the proposed network model and algorithms perform stably.
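A minimal sketch of tree-structured routing over autonomy areas, loosely in the spirit of the approach above (the dynamic proliferation rules are not reproduced): each area knows its parent and children, and a lookup walks up towards the root until a subtree holding the resource is found, then descends:

```python
class Area:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        self.resources = set()
        if parent:
            parent.children.append(self)

    def holds(self, res):
        """True if this area or any descendant stores the resource."""
        return res in self.resources or any(c.holds(res) for c in self.children)

    def route(self, res, path=None):
        """Return the sequence of areas visited to reach the resource."""
        path = (path or []) + [self.name]
        if res in self.resources:
            return path
        for child in self.children:   # descend only into a subtree that
            if child.holds(res):      # is known to hold the resource
                return child.route(res, path)
        if self.parent:               # otherwise escalate towards the root
            return self.parent.route(res, path)
        return None                   # not present anywhere in the tree

root = Area("root")
a, b = Area("A", root), Area("B", root)
leaf = Area("A1", a)
leaf.resources.add("video-42")
print(b.route("video-42"))  # ['B', 'root', 'A', 'A1']
```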

Relevance:

100.00%

Publisher:

Abstract:

This work focuses on two areas within the field of general relativity. The first is the history and implications of the long-standing conjecture that general-relativistic, shear-free perfect fluids obeying a barotropic equation of state p = p(w) with w + p ≠ 0 are either non-expanding or non-rotating. The second is the application of the computer algebra system Maple to tetrad formalisms in general relativity.

Relevance:

100.00%

Publisher:

Abstract:

The agent paradigm has been used successfully in a large number of research areas. MAPFS, a parallel file system, constitutes one successful application of agents to the I/O field, providing a multiagent I/O architecture. The use of a multiagent system implies coordination and cooperation among its agents. MAPFS is oriented to clusters of workstations, where agents are used to provide features such as caching and prefetching. The adaptation of MAPFS to a grid environment is called MAPFS-Grid. Agents can help to increase the performance of data-intensive applications running on top of the grid.

This paper describes the conceptual agent framework and the communication model used in MAPFS-Grid, which provides the management of data resources in a grid environment. The evaluation of our proposal shows the advantages of using agents in a data grid.
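A minimal sketch of the kind of agent cooperation described: a cache agent serves file blocks while a prefetch agent stages the next blocks of a sequentially read file in the background. The block size, lookahead and threading model are illustrative assumptions; this is a sketch of the idea, not the MAPFS-Grid implementation:

```python
from concurrent.futures import ThreadPoolExecutor

BLOCK, LOOKAHEAD = 64 * 1024, 2

class CacheAgent:
    def __init__(self, path):
        self.path, self.cache = path, {}
        self.prefetcher = ThreadPoolExecutor(max_workers=1)  # prefetch agent

    def _fetch(self, block_no):
        with open(self.path, "rb") as f:
            f.seek(block_no * BLOCK)
            self.cache[block_no] = f.read(BLOCK)

    def read_block(self, block_no):
        if block_no not in self.cache:           # cache miss: fetch now
            self._fetch(block_no)
        for ahead in range(1, LOOKAHEAD + 1):    # stage likely next blocks
            if block_no + ahead not in self.cache:
                self.prefetcher.submit(self._fetch, block_no + ahead)
        return self.cache[block_no]

# Usage (the path is hypothetical): CacheAgent("data.bin").read_block(0)
```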