Abstract:
This paper is based on alkyl nitrate measurements made over the North Atlantic as part of the International Consortium for Atmospheric Research on Transport and Transformation (ICARTT). The focus is on the analysis of air samples collected on the UK BAe-146 aircraft during the Intercontinental Transport of Ozone and Precursors (ITOP) project, but air samples collected on board the NASA DC-8 and NOAA WP-3D aircraft as part of a Lagrangian experiment are also used. The ratios between the alkyl nitrates and their parent hydrocarbons are compared with those expected from chemical theory. Further, a box model is run to investigate the temporal evolution of the alkyl nitrates in three Lagrangian case studies, and the results are compared to observations. The air samples collected during ITOP do not appear to be strongly influenced by oceanic sources, but rather by emissions from the N.E. United States and from Alaskan fires. There also appears to be a widespread common source of ethyl nitrate and 1-propyl nitrate other than from their parent hydrocarbons. The general agreement between the alkyl nitrate data and photochemical theory suggests that, during the first few days of transport from the source region, photochemical production of alkyl nitrates, and thus ozone, had taken place. The observations in the more photochemically processed air masses are consistent with the alkyl nitrate production reactions no longer dominating over the peroxy radical self/cross reactions. Further, the results suggest that the rates of photochemical processing in the Alaskan smoke plumes were small.
Abstract:
In this paper we consider the 2D Dirichlet boundary value problem for Laplace's equation in a non-locally perturbed half-plane, with data in the space of bounded and continuous functions. We show uniqueness of solution, using standard Phragmén-Lindelöf arguments. The main result is to propose a boundary integral equation formulation, to prove equivalence with the boundary value problem, and to show that the integral equation is well posed by applying a recent partial generalisation of the Fredholm alternative in Arens et al. [J. Int. Equ. Appl. 15 (2003), pp. 1-35]. This then leads to an existence proof for the boundary value problem. Keywords: Boundary integral equation method, Water waves, Laplace's equation.
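For concreteness, the problem class described can be stated schematically as follows; this is a generic statement, and the paper's precise smoothness and function-space hypotheses may differ.

```latex
% Dirichlet problem for Laplace's equation in a non-locally perturbed half-plane
% (schematic statement; the paper's precise hypotheses may differ)
\begin{aligned}
  & D = \{(x,z) \in \mathbb{R}^2 : z > f(x)\}, \qquad f \in BC(\mathbb{R}),\\
  & \Delta u = 0 \ \text{ in } D, \qquad u = g \ \text{ on } \partial D, \qquad g \in BC(\partial D),\\
  & u \ \text{ bounded and continuous on } \overline{D}.
\end{aligned}
```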
Abstract:
We consider a class of boundary integral equations that arise in the study of strongly elliptic BVPs in unbounded domains of the form $D = \{(x, z)\in \mathbb{R}^{n+1} : x\in \mathbb{R}^n, z > f(x)\}$ where $f : \mathbb{R}^n \to\mathbb{R}$ is a sufficiently smooth bounded and continuous function. A number of specific problems of this type, for example acoustic scattering problems, problems involving elastic waves, and problems in potential theory, have been reformulated as second kind integral equations $u+Ku = v$ in the space $BC$ of bounded, continuous functions. Having recourse to the so-called limit operator method, we address two questions for the operator $A = I + K$ under consideration, with an emphasis on the function space setting $BC$. Firstly, under which conditions is $A$ a Fredholm operator, and, secondly, when is the finite section method applicable to $A$?
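Schematically, the finite section question asked here is whether the full equation can be approximated by truncated versions of itself; the notation below is generic and not taken from the paper.

```latex
% Generic finite section scheme for a second-kind equation on an unbounded domain
% (schematic notation, not taken from the paper)
(I + K)\,u = v \quad \text{on } \mathbb{R}^n,
\qquad
P_\rho (I + K) P_\rho\, u_\rho = P_\rho v,
\qquad
(P_\rho w)(x) =
\begin{cases}
  w(x), & |x| \le \rho,\\
  0,    & |x| > \rho .
\end{cases}
```

Applicability of the finite section method then means that the truncated equations are uniquely solvable for all sufficiently large \(\rho\) and that \(u_\rho\) converges to \(u\) in the relevant sense as \(\rho \to \infty\).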
Condition number estimates for combined potential boundary integral operators in acoustic scattering
Abstract:
We study the classical combined field integral equation formulations for time-harmonic acoustic scattering by a sound-soft bounded obstacle, namely the indirect formulation due to Brakhage-Werner/Leis/Panic, and the direct formulation associated with the names of Burton and Miller. We obtain lower and upper bounds on the condition numbers for these formulations, emphasising dependence on the frequency, the geometry of the scatterer, and the coupling parameter. Of independent interest, we also obtain upper and lower bounds on the norms of two oscillatory integral operators, namely the classical acoustic single- and double-layer potential operators.
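For orientation, the indirect (Brakhage-Werner/Leis/Panic type) formulation combines the acoustic single- and double-layer operators into a single operator; one standard form is shown below, noting that sign and scaling conventions vary between authors.

```latex
% One standard form of the combined-field (combined potential) operator for the
% sound-soft problem; sign and scaling conventions vary between authors.
% S_k, D_k: acoustic single- and double-layer operators at wavenumber k;
% \eta: the coupling parameter.
A_{k,\eta} \;=\; \tfrac{1}{2} I \;+\; D_k \;-\; \mathrm{i}\,\eta\, S_k .
```

Bounds on the norms of \(S_k\) and \(D_k\), of the kind mentioned in the abstract, feed directly into condition number estimates for operators of this form.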
Abstract:
The human gut microbiota comprises a diverse microbial consortium closely co-evolved with the human genome and diet. The importance of the gut microbiota in regulating human health and disease has, however, been largely overlooked due to the inaccessibility of the intestinal habitat, the complexity of the gut microbiota itself and the fact that many of its members resist cultivation and are in fact new to science. With the emergence of 16S rRNA molecular tools and "post-genomics" high resolution technologies for examining microorganisms as they occur in nature without the need for prior laboratory culture, this limited view of the gut microbiota is rapidly changing. This review will discuss the application of molecular microbiological tools to study the human gut microbiota in a culture-independent manner. Genomics or metagenomics approaches have a tremendous capability to generate compositional data and to measure the metabolic potential encoded by the combined genomes of the gut microbiota. Another post-genomics approach, metabonomics, has the capacity to measure the metabolic kinetics or flux of metabolites through an ecosystem at a particular point in time or over a time course. Metabonomics thus derives data on the function of the gut microbiota in situ and how it responds to different environmental stimuli, e.g. substrates like prebiotics, antibiotics and other drugs, and in response to disease. Recently these two culture-independent, high resolution approaches have been combined into a single "transgenomic" approach which allows correlation of changes in metabolite profiles within human biofluids with microbiota compositional metagenomic data. Such approaches are providing novel insight into the composition, function and evolution of our gut microbiota.
Abstract:
Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5 m or less are possible, with a height accuracy of 0.15 m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art is the LiDAR data provided by the Environment Agency (EA), which has been processed by their in-house software to convert the raw data to a ground DTM and a separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc.

Most vegetation removal software ignores short vegetation less than, say, 1 m high, yet such vegetation typically covers most of a floodplain. We have attempted to extend vegetation height measurement to short vegetation using local height texture. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying; this obviates the need to calibrate a global floodplain friction coefficient. It is not yet clear whether the method is useful, but it is worth testing further.

The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. We are attempting to use digital map data (Mastermap structured topography data) to help distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed, as will the related problem of how best to merge historic river cross-section data with a LiDAR DTM.

LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that, for example, hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc., as well as trees and hedges. A dominant points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes; however, the mesh generated may be useful to allow a high-resolution finite element model to act as a benchmark for a more practical lower-resolution model.

A further problem discussed is how best to exploit data redundancy due to the high resolution of the LiDAR compared with that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: for a 5 m wide embankment within a raster grid model with a 15 m cell size, the maximum embankment height locally could be assigned to each cell covering the embankment, but how could a 5 m wide ditch be represented? This redundancy has also been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
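As a minimal sketch of the spatially varying friction idea described above, the snippet below maps a LiDAR-derived vegetation height raster to Manning's n values; the height classes and n values are illustrative assumptions, not calibrated figures from the work itself.

```python
import numpy as np

def manning_n_from_vegetation_height(veg_height_m):
    """Map a LiDAR-derived vegetation height raster (metres) to a spatially
    varying Manning's n friction raster.

    The height classes and n values below are illustrative assumptions,
    not calibrated figures from the work described above.
    """
    veg = np.asarray(veg_height_m, dtype=float)
    n = np.empty_like(veg)
    n[veg < 0.1] = 0.03                       # bare earth / very short grass
    n[(veg >= 0.1) & (veg < 1.0)] = 0.05      # short vegetation, e.g. crops
    n[(veg >= 1.0) & (veg < 5.0)] = 0.10      # hedges and scrub
    n[veg >= 5.0] = 0.15                      # trees
    return n

# Example: a small patch of floodplain vegetation heights (metres)
heights = np.array([[0.05, 0.40, 0.60],
                    [0.70, 2.50, 8.00],
                    [0.05, 0.05, 7.50]])
print(manning_n_from_vegetation_height(heights))
```

In a 2D flood model, each cell's n value would then feed into the bed friction term directly, replacing a single calibrated floodplain friction coefficient.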
Abstract:
Intercontinental Transport of Ozone and Precursors (ITOP), part of the International Consortium for Atmospheric Research on Transport and Transformation (ICARTT), was an intense research effort to measure long-range transport of pollution across the North Atlantic and its impact on O3 production. During the aircraft campaign, plumes were encountered containing large concentrations of CO plus other tracers and aerosols from forest fires in Alaska and Canada. A chemical transport model, p-TOMCAT, and new biomass burning emissions inventories are used to study the long-range transport of these emissions and their impact on the tropospheric O3 budget. The fire plume structure is modeled well over long distances until it encounters convection over Europe. The CO values within the simulated plumes closely match aircraft measurements near North America and over the Atlantic, and agree well with MOPITT CO data. O3 and NOx values were initially too high in the model plumes. However, by including additional vertical mixing of O3 above the fires, and using a lower NO2/CO emission ratio (0.008) for boreal fires, O3 concentrations are reduced to values closer to the aircraft measurements, with NO2 closer to SCIAMACHY data. Too little PAN is produced within the simulated plumes, and the simplicity of our VOC scheme may be another reason for the O3 and NOx model-data discrepancies. In the p-TOMCAT simulations the fire emissions lead to increased tropospheric O3 over North America, the North Atlantic and western Europe through photochemical production and transport. The increase in Northern Hemisphere O3 in the simulations reaches a peak in July 2004 in the range 2.0 to 6.2 Tg, over a baseline of about 150 Tg.
Abstract:
Many producers of geographic information are now disseminating their data using open web service protocols, notably those published by the Open Geospatial Consortium. There are many challenges inherent in running robust and reliable services at reasonable cost. Cloud computing provides a new kind of scalable infrastructure that could address many of these challenges. In this study we implement a Web Map Service for raster imagery within the Google App Engine environment. We discuss the challenges of developing GIS applications within this framework and the performance characteristics of the implementation. Results show that the application scales well to multiple simultaneous users and performance will be adequate for many applications, although concerns remain over issues such as latency spikes. We discuss the feasibility of implementing services within the free usage quotas of Google App Engine and the possibility of extending the approaches in this paper to other GIS applications.
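As an illustration of the kind of request handling such a service involves, the sketch below parses the core parameters of a WMS 1.1.1 GetMap request; the layer name is hypothetical, the rendering backend is left abstract, and this is not the implementation described in the paper.

```python
from urllib.parse import parse_qs

def parse_getmap(query_string):
    """Parse the core parameters of a WMS 1.1.1 GetMap request.

    The actual rendering (fetching imagery from a datastore, reprojecting,
    compositing, encoding to PNG) is deliberately left abstract here.
    """
    q = {k.upper(): v[0] for k, v in parse_qs(query_string).items()}
    if q.get("REQUEST", "").lower() != "getmap":
        raise ValueError("not a GetMap request")
    minx, miny, maxx, maxy = (float(s) for s in q["BBOX"].split(","))
    return {
        "layers": q["LAYERS"].split(","),
        "srs": q.get("SRS", "EPSG:4326"),
        "bbox": (minx, miny, maxx, maxy),
        "width": int(q["WIDTH"]),
        "height": int(q["HEIGHT"]),
        "format": q.get("FORMAT", "image/png"),
    }

# Hypothetical request for a 512 x 512 PNG of a layer called "bluemarble"
print(parse_getmap(
    "SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&LAYERS=bluemarble"
    "&SRS=EPSG:4326&BBOX=-10,50,2,60&WIDTH=512&HEIGHT=512&FORMAT=image/png"))
```

On Google App Engine this parsing would sit inside a request handler, with the image data served from the datastore or a cache; those details vary with the framework and are not reproduced here.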
Abstract:
Platelet endothelial cell adhesion molecule-1 (PECAM-1) inhibits the platelet response to collagen and may also inhibit responses to two other major platelet agonists, ADP and thrombin, although this has been less well explored. We hypothesized that the combined effect of inhibiting these three platelet-activating pathways may act to significantly inhibit thrombus formation. We demonstrate a negative relationship between PECAM-1 surface expression and platelet response to cross-linked collagen-related peptide (CRP-XL) and ADP, and an inhibitory effect of PECAM-1 clustering on the platelet response to CRP-XL, ADP and thrombin. This combined inhibition of multiple signaling pathways results in a marked reduction in thrombus formation.
Abstract:
The production of sufficient quantities of protein is an essential prelude to a structure determination, but for many viral and human proteins this cannot be achieved using prokaryotic expression systems. Groups in the Structural Proteomics In Europe (SPINE) consortium have developed and implemented high-throughput (HTP) methodologies for cloning, expression screening and protein production in eukaryotic systems. Studies focused on three systems: yeast (Pichia pastoris and Saccharomyces cerevisiae), baculovirus-infected insect cells and transient expression in mammalian cells. Suitable vectors for HTP cloning are described and results from their use in expression screening and protein-production pipelines are reported. Strategies for coexpression, selenomethionine labelling (in all three eukaryotic systems) and control of glycosylation (for secreted proteins in mammalian cells) are assessed.
Abstract:
Building refurbishment is key to reducing the carbon footprint and improving comfort in the built environment. However, quantifying the real benefit of a facade change, which can bring advantages to owners (value), occupants (comfort) and society (sustainability), is not a simple task. At a building physics level, the changes in kWh per m2 of heating/cooling load can be readily quantified. However, there are many subtle layers of operation and maintenance below these headline figures which determine how sustainable a building is in reality, such as quality of life factors. This paper considers the range of approaches taken by a facade refurbishment consortium to assess refurbishment solutions for multi-storey, multi-occupancy buildings and how to critically evaluate them. Each of the applied tools spans one or more of the three building parameters of people, product and process. 'Decision making' analytical network process and parametric building analysis tools are described and their potential impact on the building refurbishment process evaluated.
Abstract:
Firms form consortia in order to win contracts. Once a project has been awarded to a consortium, each member then concentrates on its own contract with the client. Therefore, consortia are marketing devices which present the impression of teamworking, but the production process is just as fragmented as under conventional procurement methods. In this way, the consortium forms a barrier between the client and the actual construction production process. Firms form consortia not as a simple development of normal ways of working, but because the circumstances of specific projects make them a necessary vehicle. These circumstances include projects that are too large or too complex to undertake alone, or projects that require ongoing services which cannot be provided by the individual firms in-house. It is not a preferred way of working, because participants carry extra risk in the form of liability for the actions of their partners in the consortium. The behaviour of members of consortia is determined by their relative power, based on several factors including financial commitment and ease of replacement. The level of supply chain visibility to the public sector client and to the industry is reduced by the existence of a consortium, because the consortium forms an additional obstacle between the client and the firms undertaking the actual construction work. Supply chain visibility matters to the client, who otherwise loses control over the process of construction or service provision while remaining accountable for cost overruns. To overcome this separation there is a convincing argument in favour of adopting the approach put forward in the Project Partnering Contract 2000 (PPC2000) Agreement. Members of consortia do not necessarily go on to work in the same consortia again, because members need to respond flexibly to opportunities as and when they arise. Decision-making processes within consortia tend to be on an ad hoc basis. Construction risk is taken by the contractor and the construction supply chain, but the reputational risk is carried by all the firms associated with a consortium. There is wide variation in the manner in which consortia are formed, determined by the individual circumstances of each project: its requirements, size and complexity, and the attitude of individual project leaders. However, there are a number of close working relationships based on generic models of consortia-like arrangements for the purpose of building production, such as the Housing Corporation Guidance Notes and the PPC2000.
Abstract:
Attempts to reduce the energy consumed in UK homes have met with limited success. One reason for this is a lack of understanding of how people interact with domestic technology – heating systems, lights, electrical equipment and so forth. Attaining such an understanding is hampered by a chronic shortage of detailed energy use data matched to descriptions of the house, the occupants, the internal conditions and the installed services and appliances. Without such information it is impossible to produce transparent and valid models for understanding and predicting energy use. The Carbon Reduction in Buildings (CaRB) consortium of five UK universities plans to develop socio-technical models of energy use, underpinned by a flow of data from a longitudinal monitoring campaign involving several hundred UK homes. This paper outlines the models proposed, the preliminary monitoring work and the structure of the proposed longitudinal study.
Abstract:
This paper examines the life cycle GHG emissions from existing UK pulverized coal (PC) power plants. The life cycle of the electricity generation plant includes construction, operation and decommissioning. The operation phase is extended to upstream and downstream processes. Upstream processes include the mining and transport of coal, including methane leakage, and the production and transport of limestone and ammonia, which are necessary for flue gas clean-up. Downstream processes, on the other hand, include waste disposal and the recovery of land used for surface mining. The methodology used is material-based process analysis, which allows calculation of the total emissions for each process involved. A simple model for predicting the energy and material requirements of the power plant is developed. Preliminary calculations reveal that for a typical UK coal fired plant, the life cycle emissions amount to 990 g CO2-e/kWh of electricity generated, which compares well with previous UK studies. The majority of these emissions result from direct fuel combustion (882 g/kWh, or 89%), with methane leakage from mining operations accounting for 60% of indirect emissions. In total, mining operations (including methane leakage) account for 67.4% of indirect emissions, while limestone and other material production and transport account for 31.5%. The methodology developed is also applied to a typical IGCC power plant. It is found that IGCC life cycle emissions are 15% less than those from PC power plants. Furthermore, upon investigating the influence of power plant parameters on life cycle emissions, it is determined that, while the effect of changing the load factor is negligible, increasing efficiency from 35% to 38% can reduce emissions by 7.6%. The current study is funded by the UK Natural Environment Research Council (NERC) and is undertaken as part of the UK Carbon Capture and Storage Consortium (UKCCSC). Future work will investigate the life cycle emissions from other power generation technologies with and without carbon capture and storage. The current paper suggests that, when CCS is employed, the emissions during generation may decrease to a level where the emissions from upstream processes (i.e. coal production and transport) become dominant, and so the life cycle efficiency of the CCS system can be significantly reduced. The location of coal, coal composition and mining method are important in determining the overall impacts. In addition to studying the net emissions from CCS systems, future work will also investigate the feasibility and technoeconomics of these systems as a means of carbon abatement.
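A quick back-of-envelope cross-check of the headline figures quoted above, using only the numbers given in the abstract (all in g CO2-e per kWh):

```python
# Cross-check of the headline life-cycle figures quoted in the abstract.
total = 990.0                        # total life cycle emissions, g CO2-e/kWh
direct = 882.0                       # direct fuel combustion (~89% of total)
indirect = total - direct            # upstream + downstream processes -> 108 g/kWh

mining = 0.674 * indirect            # mining incl. methane leakage   -> ~72.8 g/kWh
methane_leakage = 0.60 * indirect    # methane leakage alone          -> ~64.8 g/kWh
limestone_other = 0.315 * indirect   # limestone/other materials      -> ~34.0 g/kWh

igcc = total * (1 - 0.15)            # IGCC quoted as ~15% lower      -> ~841.5 g/kWh

print(f"direct share: {direct / total:.1%}")   # ~89.1%
print(f"indirect total: {indirect:.0f} g/kWh "
      f"(mining {mining:.1f}, methane {methane_leakage:.1f}, "
      f"limestone/other {limestone_other:.1f})")
print(f"IGCC estimate: {igcc:.0f} g CO2-e/kWh")
```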
Abstract:
Interdisciplinary research presents particular challenges for unambiguous communication. Frequently, the meanings of words differ markedly between disciplines, leading to apparent consensus masking fundamental misunderstandings. Researchers can agree on the need for models, but conceive of models fundamentally differently. While mathematics is frequently seen as an elitist language reinforcing disciplinary distinctions, both mathematics and modelling can also offer scope to bridge disciplinary epistemological divisions and create common ground on which very different disciplines can meet. This paper reflects on the role and scope for mathematics and modelling to present a common epistemological space in interdisciplinary research spanning the social, natural and engineering sciences.