38 results for "Large amounts"
Abstract:
In this paper the meteorological processes responsible for transporting tracer during the second ETEX (European Tracer EXperiment) release are determined using the UK Met Office Unified Model (UM). The UM-predicted distribution of tracer is also compared with observations from the ETEX campaign. The dominant meteorological process is a warm conveyor belt which transports large amounts of tracer away from the surface up to a height of 4 km over a 36 h period. Convection is also an important process, transporting tracer to heights of up to 8 km. Potential sources of error when using an operational numerical weather prediction model to forecast air quality are also investigated. These potential sources of error include model dynamics, model resolution and model physics. In the UM a semi-Lagrangian monotonic advection scheme is used with cubic polynomial interpolation. This scheme can predict unrealistic negative values of tracer, which are subsequently set to zero, and hence results in an overprediction of tracer concentrations. In order to conserve mass in the UM tracer simulations it was necessary to include a flux-corrected transport method. Model resolution can also affect the accuracy of predicted tracer distributions. Low-resolution simulations (50 km grid length) were unable to resolve a change in wind direction observed during ETEX 2, which led to an error in the transport direction and hence in the tracer distribution. High-resolution simulations (12 km grid length) captured the change in wind direction and hence produced a tracer distribution that compared better with the observations. The representation of convective mixing was found to have a large effect on the vertical transport of tracer. Turning off the convective mixing parameterisation in the UM significantly reduced the vertical transport of tracer. Finally, air quality forecasts were found to be sensitive to the timing of synoptic-scale features. Errors of only 1 h in the position of the cold front relative to the tracer release location resulted in changes in the predicted tracer concentrations that were of the same order of magnitude as the absolute tracer concentrations.
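A minimal 1D Python sketch of the issue described above (not the UM's actual advection code): cubic semi-Lagrangian interpolation undershoots near sharp tracer gradients, clipping the resulting negatives to zero inflates the total tracer mass, and a crude global rescaling, a much simpler stand-in for the paper's flux-corrected transport, restores conservation. The grid, wind speed and square-wave tracer pulse are arbitrary illustrative choices.

```python
import numpy as np

nx, dx, u, dt = 200, 1.0, 1.0, 0.4            # grid points, spacing, wind, time step
x = np.arange(nx) * dx
q0 = np.where((x > 40) & (x < 80), 1.0, 0.0)  # square-wave tracer pulse

def semi_lagrangian_step(q):
    """One advection step: cubic Lagrange interpolation at the departure points.
    Like any high-order interpolation, it can undershoot and produce negatives."""
    xd = (x - u * dt) % (nx * dx)             # departure points (periodic domain)
    i = np.floor(xd / dx).astype(int)         # grid index just left of each point
    s = xd / dx - i                           # fractional position in [0, 1)
    qm1, qc, qp1, qp2 = (q[(i + k) % nx] for k in (-1, 0, 1, 2))
    return (-s * (s - 1) * (s - 2) / 6 * qm1
            + (s + 1) * (s - 1) * (s - 2) / 2 * qc
            - (s + 1) * s * (s - 2) / 2 * qp1
            + (s + 1) * s * (s - 1) / 6 * qp2)

mass0 = q0.sum()
q_clip, q_fix = q0.copy(), q0.copy()
for _ in range(50):
    # "monotonic" fix alone: clip negatives to zero -> total tracer grows
    q_clip = np.clip(semi_lagrangian_step(q_clip), 0.0, None)
    # clip plus a crude global rescaling back to the pre-clipping total
    stepped = semi_lagrangian_step(q_fix)
    clipped = np.clip(stepped, 0.0, None)
    q_fix = clipped * (stepped.sum() / clipped.sum())

print(f"mass change, clipping only        : {(q_clip.sum() - mass0) / mass0:+.2%}")
print(f"mass change, clipping + rescaling : {(q_fix.sum() - mass0) / mass0:+.2%}")
```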
Abstract:
Road transport and shipping are copious sources of aerosols, which exert a significant radiative forcing compared to, for example, the CO2 emitted by these sectors. An advanced atmospheric general circulation model, coupled to a mixed-layer ocean, is used to calculate the climate response to the direct radiative forcing from such aerosols. The cases considered include imposed distributions of black carbon and sulphate aerosols from road transport, and sulphate aerosols from shipping; these are compared to the climate response due to CO2 increases. The difficulties in calculating the climate response due to small forcings are discussed, as the actual forcings have to be scaled by large amounts to enable a climate response to be easily detected. Despite the much greater geographical inhomogeneity in the sulphate forcing, the patterns of zonal and annual-mean surface temperature response (although opposite in sign) closely resemble those resulting from homogeneous changes in CO2. The surface temperature response to black carbon aerosols from road transport is shown to be notably non-linear in the scaling applied, probably due to the semi-direct response of clouds to these aerosols. For the aerosol forcings considered here, the most widespread method of calculating radiative forcing significantly overestimates their effect, relative to CO2, compared to surface temperature changes calculated using the climate model.
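One standard way to express this last comparison (a common framing in the climate literature, not necessarily the exact metric used in the paper) is the climate efficacy of a forcing agent x, i.e. its temperature response per unit forcing relative to that of CO2:

```latex
E_x = \frac{\Delta T_x / F_x}{\Delta T_{\mathrm{CO_2}} / F_{\mathrm{CO_2}}}
```

An efficacy below one means that comparing forcings alone overstates the agent's surface temperature impact relative to CO2, which is the sense in which the conventional forcing calculation overestimates the aerosol effects here.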
Abstract:
The National Grid Company plc. owns and operates the electricity transmission network in England and Wales; the day-to-day running of the network is carried out by teams of engineers within the national control room. The task of monitoring and operating the transmission network involves the transfer of large amounts of data and a high degree of cooperation between these engineers. The purpose of the research detailed in this paper is to investigate the use of interfacing techniques within the control room scenario, in particular the development of an agent-based architecture for the support of cooperative tasks. The proposed architecture revolves around the use of interface and user supervisor agents. Primarily, these agents are responsible for the flow of information to and from individual users and user groups. The agents are also responsible for tackling the synchronisation and control issues arising during the completion of cooperative tasks. In this paper a novel approach to human-computer interaction (HCI) for power systems incorporating an embedded agent infrastructure is presented. The agent architectures used to form the base of the cooperative task support system are discussed, as is the nature of the support system and the tasks it is intended to support.
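A toy Python sketch of the interface-agent / user-supervisor-agent split described above (hypothetical code, not the paper's implementation): interface agents stand in for individual engineers, while a supervisor agent broadcasts updates to the group and serialises conflicting actions on shared plant. The engineer names and the circuit-breaker identifier are made up.

```python
import threading

class InterfaceAgent:
    """Represents one engineer; routes cooperative actions via the supervisor."""
    def __init__(self, engineer, supervisor):
        self.engineer = engineer
        self.supervisor = supervisor
        supervisor.register(self)

    def notify(self, message):
        print(f"[{self.engineer}] received: {message}")

    def request_switching(self, element):
        self.supervisor.coordinate(self, element)

class SupervisorAgent:
    """Keeps the group informed and serialises actions on shared equipment."""
    def __init__(self):
        self.agents = []
        self.lock = threading.Lock()      # one switching action at a time

    def register(self, agent):
        self.agents.append(agent)

    def coordinate(self, requester, element):
        with self.lock:
            for agent in self.agents:
                if agent is not requester:
                    agent.notify(f"{requester.engineer} is switching {element}")

supervisor = SupervisorAgent()
alice = InterfaceAgent("Alice", supervisor)
bob = InterfaceAgent("Bob", supervisor)
alice.request_switching("circuit breaker CB-401")   # Bob is notified
```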
Abstract:
Bacterioferritin (BFR) from Escherichia coli is a member of the ferritin family of iron storage proteins and has the capacity to store very large amounts of iron as an Fe(3+) mineral inside its central cavity. The ability of organisms to tap into their cellular stores in times of iron deprivation requires that iron be released from ferritin mineral stores. Currently, relatively little is known about the mechanisms by which this occurs, particularly in prokaryotic ferritins. Here we show that the bis-Met-coordinated heme groups of E. coli BFR, which are not found in other members of the ferritin family, play an important role in iron release from the BFR iron biomineral: kinetic iron release experiments revealed that the transfer of electrons into the internal cavity is the rate-limiting step of the release reaction and that the rate and extent of iron release were significantly increased in the presence of heme. Despite previous reports that a high-affinity Fe(2+) chelator is required for iron release, we show that a large proportion of BFR core iron is released in the absence of such a chelator and further that chelators are not passive participants in iron release reactions. Finally, we show that the catalytic ferroxidase center, which is central to the mechanism of mineralization, is not involved in iron release; thus, core mineralization and release processes utilize distinct pathways.
Abstract:
Nitrogen flows from European watersheds to coastal marine waters

Executive summary

Nature of the problem
• Most regional watersheds in Europe constitute managed human territories importing large amounts of new reactive nitrogen.
• As a consequence, groundwater, surface freshwater and coastal seawater are undergoing severe nitrogen contamination and/or eutrophication problems.

Approaches
• A comprehensive evaluation of net anthropogenic inputs of reactive nitrogen (NANI) through atmospheric deposition, crop N fixation, fertiliser use and import of food and feed has been carried out for all European watersheds. A database on N, P and Si fluxes delivered at the basin outlets has been assembled.
• A number of modelling approaches based on either statistical regression analysis or mechanistic description of the processes involved in nitrogen transfer and transformations have been developed for relating N inputs to watersheds to outputs into coastal marine ecosystems.

Key findings/state of knowledge
• Throughout Europe, NANI represents 3700 kgN/km2/yr (range, 0–8400 depending on the watershed), i.e. five times the background rate of natural N2 fixation.
• A mean of approximately 78% of NANI does not reach the basin outlet, but instead is stored (in soils, sediments or groundwater) or eliminated to the atmosphere as reactive N forms or as N2.
• N delivery to the European marine coastal zone totals 810 kgN/km2/yr (range, 200–4000 depending on the watershed), about four times the natural background. In areas of limited availability of silica, these inputs cause harmful algal blooms.

Major uncertainties/challenges
• The exact dimension of anthropogenic N inputs to watersheds is still imperfectly known and requires pursuing monitoring programmes and data integration at the international level.
• The exact nature of ‘retention’ processes, which potentially represent a major management lever for reducing N contamination of water resources, is still poorly understood.
• Coastal marine eutrophication depends to a large degree on local morphological and hydrographic conditions as well as on estuarine processes, which are also imperfectly known.

Recommendations
• Better control and management of the nitrogen cascade at the watershed scale is required to reduce N contamination of ground- and surface water, as well as coastal eutrophication.
• In spite of the potential of these management measures, there is no choice at the European scale but to reduce the primary inputs of reactive nitrogen to watersheds, through changes in agriculture, human diet and other N flows related to human activity.
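As a rough arithmetic check on how the mean figures above fit together (treating the delivered flux as dominated by anthropogenic inputs and neglecting the natural background contribution), the quoted retention fraction follows directly from the two mean fluxes:

```latex
1 - \frac{810\ \mathrm{kg\,N\,km^{-2}\,yr^{-1}}}{3700\ \mathrm{kg\,N\,km^{-2}\,yr^{-1}}} \approx 0.78
```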
Abstract:
The UK has a target for an 80% reduction in CO2 emissions by 2050 from a 1990 base. Domestic energy use accounts for around 30% of total emissions. This paper presents a comprehensive review of existing models and modelling techniques and indicates how they might be improved by considering individual buying behaviour. Macro (top-down) and micro (bottom-up) models have been reviewed and analysed. It is found that bottom-up models can project technology diffusion due to their higher resolution. The weakness of existing bottom-up models at capturing individual green technology buying behaviour has been identified. Consequently, Markov chains, neural networks and agent-based modelling are proposed as possible methods to incorporate buying behaviour within a domestic energy forecast model. Among the three methods, agent-based models are found to be the most promising, although a successful agent approach requires large amounts of input data. A prototype agent-based model has been developed and tested, which demonstrates the feasibility of an agent approach. This model shows that an agent-based approach is promising as a means to predict the effectiveness of various policy measures.
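A minimal Python sketch of the kind of agent-based model the abstract refers to (illustrative only, not the authors' prototype): each household agent compares an effective technology cost, reduced by a policy subsidy and by social influence from neighbours who have already adopted, against its own willingness to pay. All parameter values are made-up assumptions.

```python
import random

random.seed(1)

class Household:
    def __init__(self):
        self.willingness_to_pay = random.uniform(0.2, 1.0)   # heterogeneous agents
        self.adopted = False

    def step(self, subsidy, neighbour_rate, social_weight=0.3):
        if self.adopted:
            return
        # effective cost falls with the subsidy and with peer adoption
        effective_cost = 1.0 - subsidy - social_weight * neighbour_rate
        if effective_cost < self.willingness_to_pay:
            self.adopted = True

def run(subsidy, n_households=1000, years=15):
    agents = [Household() for _ in range(n_households)]
    for _ in range(years):
        rate = sum(a.adopted for a in agents) / n_households
        for a in agents:
            a.step(subsidy, rate)
    return sum(a.adopted for a in agents) / n_households

# compare two illustrative policy levels
for subsidy in (0.1, 0.3):
    print(f"subsidy {subsidy:.0%}: adoption after 15 years = {run(subsidy):.0%}")
```

Even this toy version shows the qualitative behaviour such models are used to explore: adoption diffuses through the population over time and responds to the policy lever.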
Abstract:
Distributed and collaborative data stream mining in a mobile computing environment is referred to as Pocket Data Mining (PDM). Large amounts of available data streams, which smart phones can subscribe to or sense, coupled with the increasing computational power of handheld devices, motivate the development of PDM as a decision-making system. This emerging area of study has been shown to be feasible in an earlier study using the technological enablers of mobile software agents and stream mining techniques [1]. A typical PDM process would start by having mobile agents roam the network to discover relevant data streams and resources. Then other (mobile) agents encapsulating stream mining techniques visit the relevant nodes in the network in order to build evolving data mining models. Finally, a third type of mobile agent roams the network consulting the mining agents for a final collaborative decision, when required by one or more users. In this paper, we propose the use of distributed Hoeffding trees and Naive Bayes classifiers in the PDM framework over vertically partitioned data streams. Mobile policing, health monitoring and stock market analysis are among the possible applications of PDM. An extensive experimental study is reported showing the effectiveness of the collaborative data mining with the two classifiers.
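A compact batch sketch (Python, using scikit-learn) of the collaborative scheme over vertically partitioned data: each mining agent trains its own Naive Bayes model on the feature subset it can see, and a collecting step averages their class probabilities into a final decision. This is illustrative only; the paper's classifiers are incremental stream learners (Hoeffding trees and Naive Bayes), whereas a batch Gaussian Naive Bayes and a standard bundled dataset stand in here.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# vertical partitioning: split the feature columns across three "mining agents"
partitions = np.array_split(np.arange(X.shape[1]), 3)
agents = [GaussianNB().fit(X_tr[:, cols], y_tr) for cols in partitions]

# collaborative decision: average the per-agent class probabilities
proba = np.mean([agent.predict_proba(X_te[:, cols])
                 for agent, cols in zip(agents, partitions)], axis=0)
y_pred = proba.argmax(axis=1)
print(f"collaborative accuracy: {(y_pred == y_te).mean():.3f}")
```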
Abstract:
The fast increase in the size and number of databases demands data mining approaches that are scalable to large amounts of data. This has led to the exploration of parallel computing technologies in order to perform data mining tasks concurrently using several processors. Parallelization seems to be a natural and cost-effective way to scale up data mining technologies. One of the most important of these data mining technologies is the classification of newly recorded data. This paper surveys advances in parallelization in the field of classification rule induction.
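A small Python sketch of the data-parallel primitive that underlies much of the work surveyed here (illustrative only, not any specific system from the survey): the attribute-value class counts repeatedly needed during rule induction are computed concurrently over shards of the data and then merged. The random dataset is a stand-in.

```python
from collections import Counter
from concurrent.futures import ProcessPoolExecutor
import random

def shard_counts(shard):
    """Count (attribute, value, class) triples over one shard of the data."""
    counts = Counter()
    for row, label in shard:
        for attribute, value in enumerate(row):
            counts[(attribute, value, label)] += 1
    return counts

def parallel_counts(data, n_workers=4):
    shards = [data[i::n_workers] for i in range(n_workers)]   # round-robin split
    merged = Counter()
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        for partial in pool.map(shard_counts, shards):
            merged.update(partial)                            # merge partial counts
    return merged

if __name__ == "__main__":
    random.seed(0)
    data = [([random.randint(0, 2) for _ in range(4)], random.randint(0, 1))
            for _ in range(10_000)]
    counts = parallel_counts(data)
    # e.g. how often attribute 0 takes value 1 among class-1 examples
    print(counts[(0, 1, 1)])
```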
Abstract:
In order to gain knowledge from large databases, scalable data mining technologies are needed. Data are captured on a large scale and thus databases are growing at a fast pace. This leads to the utilisation of parallel computing technologies in order to cope with large amounts of data. In the area of classification rule induction, parallelisation has focused on the divide-and-conquer approach, also known as Top-Down Induction of Decision Trees (TDIDT). An alternative approach to classification rule induction is separate-and-conquer, which has only recently become a focus of parallelisation efforts. This work introduces and empirically evaluates a framework for the parallel induction of classification rules generated by members of the Prism family of algorithms. All members of the Prism family follow the separate-and-conquer approach.
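A serial Python sketch of the separate-and-conquer strategy used by the Prism family (a simplified greedy version; the framework in the paper parallelises this process, which the sketch does not attempt): each rule is grown by adding the attribute-value term with the highest target-class probability until the covered examples are pure, then the covered examples are removed and the next rule is learned.

```python
def learn_rules_for_class(rows, target):
    """rows: list of (features_dict, class_label). Returns a list of rules,
    each a dict of attribute -> value, that together cover the target class."""
    rules = []
    pool = list(rows)                                   # examples not yet covered
    remaining = [r for r in pool if r[1] == target]     # uncovered target examples
    while remaining:
        covered, rule = list(pool), {}
        # grow one rule: keep adding the term with the highest target-class
        # probability until the rule covers only target-class examples
        while any(label != target for _, label in covered):
            best, best_p = None, -1.0
            for features, _ in covered:
                for attr, val in features.items():
                    if attr in rule:
                        continue
                    matches = [lbl for f, lbl in covered if f[attr] == val]
                    p = sum(lbl == target for lbl in matches) / len(matches)
                    if p > best_p:
                        best, best_p = (attr, val), p
            if best is None:                            # no further term available
                break
            rule[best[0]] = best[1]
            covered = [(f, lbl) for f, lbl in covered if f[best[0]] == best[1]]
        rules.append(rule)
        # remove the examples covered by the new rule and repeat
        matches_rule = lambda r: all(r[0].get(a) == v for a, v in rule.items())
        pool = [r for r in pool if not matches_rule(r)]
        remaining = [r for r in remaining if not matches_rule(r)]
    return rules

# tiny worked example
weather = [({"outlook": "sunny", "windy": "no"},  "play"),
           ({"outlook": "sunny", "windy": "yes"}, "dont"),
           ({"outlook": "rain",  "windy": "no"},  "play"),
           ({"outlook": "rain",  "windy": "yes"}, "dont")]
print(learn_rules_for_class(weather, "play"))   # -> [{'windy': 'no'}]
```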
Abstract:
Property ownership can tie up large amounts of capital and management energy that business could employ more productively elsewhere. Competitive pressures, accounting changes and increasingly sophisticated occupier requirements are building demand for new and innovative ways to satisfy corporate occupation needs. The investment climate is also changing. Falling interest rates and falling inflation can be expected to undermine returns from the traditional FRI lease. In future, investment returns will be more dependent on active and innovative management geared to the needs of occupiers on whom income depends. Occupier and investor interests, therefore, look set to coincide, but unlocking the potential for both parties will depend on developing new finance and investment vehicles that align their respective needs. In the UK, examples include PFI in the public sector and off-balance sheet financing in the private sector. In the USA, “synthetic lease” structures have also become popular. Growing investment market experience in assessing risks and returns suggests scope for further innovative arrangements in the corporate sector. But how can such arrangements be structured? What are the risks, drivers and barriers?
Abstract:
During the cold period of the Last Glacial Maximum (LGM, about 21 000 years ago) atmospheric CO2 was around 190 ppm, much lower than the pre-industrial concentration of 280 ppm. The causes of this substantial drop remain partially unresolved, despite intense research. Understanding the origin of reduced atmospheric CO2 during glacial times is crucial to comprehend the evolution of the different carbon reservoirs within the Earth system (atmosphere, terrestrial biosphere and ocean). In this context, the ocean is believed to play a major role as it can store large amounts of carbon, especially in the abyss, which is a carbon reservoir that is thought to have expanded during glacial times. To create this larger reservoir, one possible mechanism is to produce very dense glacial waters, thereby stratifying the deep ocean and reducing the carbon exchange between the deep and upper ocean. The existence of such very dense waters has been inferred in the LGM deep Atlantic from sediment pore water salinity and δ18O-inferred temperature. Based on these observations, we study the impact of a brine mechanism on the glacial carbon cycle. This mechanism relies on the formation and rapid sinking of brines, the very salty water released during sea ice formation, which brings salty dense water down to the bottom of the ocean. It provides two major features: a direct link from the surface to the deep ocean along with an efficient way of setting up a strong stratification. We show with the CLIMBER-2 carbon-climate model that such a brine mechanism can account for a significant decrease in atmospheric CO2 and contribute to the glacial-interglacial change. This mechanism can be amplified by low vertical diffusion resulting from the brine-induced stratification. The modeled glacial distribution of oceanic δ13C as well as the deep ocean salinity are substantially improved and agree better with reconstructions from sediment cores, suggesting that such a mechanism could have played an important role during glacial times.
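A conceptual atmosphere-plus-two-box-ocean sketch in Python (illustrative, hand-tuned parameters, not the CLIMBER-2 model) of the stratification argument above: with a fixed biological export of carbon to depth, weaker surface-deep mixing steepens the deep-minus-surface DIC gradient, so more carbon is locked in the abyss and atmospheric CO2 falls.

```python
def equilibrium_co2(mixing, export=15.0, total_c=39000.0, years=20000):
    vol_s, vol_d = 1.0, 50.0           # relative surface / deep ocean volumes
    k_gas, alpha = 0.2, 0.805          # air-sea exchange rate; solubility-like factor
    atm, surf, deep = 600.0, 900.0, total_c - 1500.0     # initial carbon split (PgC)
    for _ in range(years):             # explicit Euler with 1-year steps
        c_s, c_d = surf / vol_s, deep / vol_d
        air_to_sea = k_gas * (atm - alpha * c_s)         # linearised gas exchange
        upwelled = mixing * (c_d - c_s)                  # mixing returns deep carbon
        atm -= air_to_sea
        surf += air_to_sea + upwelled - export           # export: biological pump
        deep += export - upwelled
    return atm / 2.13                  # PgC -> approximate ppm

print(f"well-mixed ocean       : {equilibrium_co2(mixing=1.0):5.0f} ppm")
print(f"brine-stratified ocean : {equilibrium_co2(mixing=0.1):5.0f} ppm")
```

At steady state the export flux balances the mixing return flux, so the deep-minus-surface carbon gradient scales as export/mixing: cutting the mixing coefficient enlarges the deep reservoir at the expense of the surface ocean and atmosphere, which is the qualitative effect the brine mechanism produces.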
Abstract:
It is often necessary to selectively attend to important information, at the expense of less important information, especially if you know you cannot remember large amounts of information. The present study examined how younger and older adults select valuable information to study when given unrestricted choices about how to allocate study time. Participants were shown a display of point values ranging from 1–30. Participants could choose which values to study, and the associated word was then shown. Study time, and the choice to restudy words, was under the participant's control during the 2-minute study session. Overall, both age groups selected high-value words to study and studied these more than the lower-value words. However, older adults allocated a disproportionately greater amount of study time to the higher-value words, and age differences in recall were reduced or eliminated for the highest-value words. In addition, older adults capitalized on recency effects in a strategic manner, by studying high-value items often but also immediately before the test. A multilevel mediation analysis indicated that participants strategically remembered items with higher point values, and older adults showed a similar or even stronger strategic process that may help to compensate for poorer memory. These results demonstrate efficient (and different) metacognitive control operations in younger and older adults, which can allow for strategic regulation of study choices and allocation of study time when remembering important information. The findings are interpreted in terms of life span models of agenda-based regulation and discussed in terms of practical applications.
Abstract:
Body Sensor Networks (BSNs) have been recently introduced for the remote monitoring of human activities in a broad range of application domains, such as health care, emergency management, fitness and behaviour surveillance. BSNs can be deployed in a community of people and can generate large amounts of contextual data that require a scalable approach for storage, processing and analysis. Cloud computing can provide a flexible storage and processing infrastructure to perform both online and offline analysis of data streams generated in BSNs. This paper proposes BodyCloud, a SaaS approach for community BSNs that supports the development and deployment of Cloud-assisted BSN applications. BodyCloud is a multi-tier application-level architecture that integrates a Cloud computing platform and BSN data streams middleware. BodyCloud provides programming abstractions that allow the rapid development of community BSN applications. This work describes the general architecture of the proposed approach and presents a case study for the real-time monitoring and analysis of cardiac data streams of many individuals.
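A toy end-to-end Python sketch of the multi-tier idea (hypothetical code, not the BodyCloud programming abstractions): simulated body sensor nodes push readings into a stream, a middleware tier keeps a sliding window per community member, and an analysis tier computes a simple online statistic that a Cloud service could expose. The names, window size and fake heart-rate samples are all made up.

```python
import random
from collections import defaultdict, deque

class SensorNode:
    """Simulated wearable node producing heart-rate samples for one person."""
    def __init__(self, person_id):
        self.person_id = person_id
    def read(self):
        return self.person_id, random.gauss(72, 8)      # fake BPM sample

class StreamMiddleware:
    """Buffers the most recent samples per person (sliding window)."""
    def __init__(self, window=50):
        self.windows = defaultdict(lambda: deque(maxlen=window))
    def publish(self, person_id, value):
        self.windows[person_id].append(value)

class AnalysisTier:
    """Online analysis over the middleware's per-person windows."""
    def __init__(self, middleware):
        self.mw = middleware
    def mean_heart_rate(self, person_id):
        w = self.mw.windows[person_id]
        return sum(w) / len(w) if w else None

middleware = StreamMiddleware()
nodes = [SensorNode(pid) for pid in ("alice", "bob")]
for _ in range(100):                      # simulate a short monitoring session
    for node in nodes:
        middleware.publish(*node.read())

analysis = AnalysisTier(middleware)
print({pid: round(analysis.mean_heart_rate(pid), 1) for pid in ("alice", "bob")})
```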
Abstract:
Analysis of microbial gene expression during host colonization provides valuable information on the nature of interaction, beneficial or pathogenic, and the adaptive processes involved. Isolation of bacterial mRNA for in planta analysis can be challenging where host nucleic acid may dominate the preparation, or inhibitory compounds affect downstream analysis, e.g., quantitative reverse transcriptase PCR (qPCR), microarray, or RNA-seq. The goal of this work was to optimize the isolation of bacterial mRNA of food-borne pathogens from living plants. Reported methods for recovery of phytopathogen-infected plant material, using hot phenol extraction and high concentration of bacterial inoculation or large amounts of infected tissues, were found to be inappropriate for plant roots inoculated with Escherichia coli O157:H7. The bacterial RNA yields were too low and increased plant material resulted in a dominance of plant RNA in the sample. To improve the yield of bacterial RNA and reduce the number of plants required, an optimized method was developed which combines bead beating with directed bacterial lysis using SDS and lysozyme. Inhibitory plant compounds, such as phenolics and polysaccharides, were counteracted with the addition of high-molecular-weight polyethylene glycol and hexadecyltrimethyl ammonium bromide. The new method increased the total yield of bacterial mRNA substantially and allowed assessment of gene expression by qPCR. This method can be applied to other bacterial species associated with plant roots, and also in the wider context of food safety.
Abstract:
Scattering and absorption by aerosol in anthropogenically perturbed air masses over Europe have been measured using instrumentation flown on the UK’s BAe-146-301 large Atmospheric Research Aircraft (ARA), operated by the Facility for Airborne Atmospheric Measurements (FAAM), on 14 flights during the EUCAARI-LONGREX campaign in May 2008. The geographical and temporal variations of the derived shortwave optical properties of the aerosol are presented. Values of single scattering albedo of dry aerosol at 550 nm varied considerably from 0.86 to near unity, with a campaign average of 0.93 ± 0.03. Dry aerosol optical depths ranged from 0.030 ± 0.009 to 0.24 ± 0.07. An optical properties closure study comparing calculations from composition data and Mie scattering code with the measured properties is presented. Agreement to within the measurement uncertainties of 30% can be achieved for both scattering and absorption, but the latter is shown to be sensitive to the refractive indices chosen for organic aerosols, and to a lesser extent black carbon, as well as being highly dependent on the accuracy of the absorption measurements. Agreement with the measured absorption can be achieved either if organic carbon is assumed to be weakly absorbing, or if the organic aerosol is purely scattering and the absorption measurement is an overestimate due to the presence of large amounts of organic carbon. Refractive indices could not be inferred conclusively because of this uncertainty, despite the enhancement in methodology over previous studies that derives from the use of the black carbon measurements. Hygroscopic growth curves derived from the wet nephelometer indicate moderate water uptake by the aerosol, with a campaign-mean f(RH) value (the ratio in scattering) of 1.5 (range 1.23 to 1.63) at 80% relative humidity. This value is qualitatively consistent with the major chemical components of the aerosol measured by the aerosol mass spectrometer, which are primarily mixed organics and nitrate with some sulphate.
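A small Python helper sketch of the standard optical quantities quoted above (generic definitions, not the FAAM/EUCAARI processing code); the numerical inputs in the example calls are made-up values:

```python
def single_scattering_albedo(scat, absn):
    """SSA = scattering / (scattering + absorption)."""
    return scat / (scat + absn)

def optical_depth(ext_per_layer_Mm, layer_thickness_km):
    """Column AOD from per-layer extinction coefficients (Mm^-1) and depths (km)."""
    return sum(ext * 1e-6 * dz * 1e3            # Mm^-1 -> m^-1, km -> m
               for ext, dz in zip(ext_per_layer_Mm, layer_thickness_km))

def f_rh(scat_wet, scat_dry):
    """Hygroscopic scattering enhancement f(RH) = sigma_scat(RH) / sigma_scat(dry)."""
    return scat_wet / scat_dry

# illustrative numbers only
print(round(single_scattering_albedo(scat=50.0, absn=4.0), 3))   # ~0.93
print(round(optical_depth([60, 40, 15], [1.0, 1.0, 1.0]), 3))
print(round(f_rh(scat_wet=75.0, scat_dry=50.0), 2))              # 1.5
```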