440 results for Extensive reading


Relevance: 20.00%

Abstract:

Tendering is one of the stages in construction procurement that requires extensive information and document exchange. However, tender documents are not always clear in practice. The aim of this study was to ascertain the clarity and adequacy of tender documents used in practice. Access was negotiated into two UK construction firms and the whole tender process for two projects was shadowed for 6-7 weeks in each firm using an ethnographic approach. A significant number of tender queries, amendments and addenda were recorded. This showed that the quality of tender documentation is still a problem in construction despite the existence of standards such as Co-ordinated Project Information (1987) and British Standard 1192 (1984 and 1990) that are meant to help in producing clear and consistent project information. Poor-quality tender documents are a source of inaccurate estimates, claims and disputes on contracts. Six recommendations are presented to help improve the quality of tender documentation. Further research into these recommendations is needed, perhaps in conjunction with an industry-wide investigation into the level of incorporation of CPI principles in practice.

Relevance: 20.00%

Abstract:

The River Lugg has particular problems with high sediment loads that have resulted in detrimental impacts on ecology and fisheries. A new dynamic, process-based model of hydrology and sediments (INCA-SED) has been developed and applied to the River Lugg system using an extensive data set from 1995–2008. The model simulates sediment sources and sinks throughout the catchment and gives a good representation of the sediment response at 22 reaches along the River Lugg. A key question considered in using the model is the management of sediment sources so that concentrations and bed loads can be reduced in the river system. Altogether, five sediment management scenarios were selected for testing on the River Lugg, including land use change, contour tillage, hedging and buffer strips. Running the model with parameters altered to simulate these five scenarios produced some interesting results. All scenarios achieved some reduction in sediment levels, with the 40% land use change achieving the best result, a 19% reduction. The other scenarios also achieved significant reductions of between 7% and 9%, with buffer strips producing the best of these at close to 9%. The results suggest that if hedge introduction, contour tillage and buffer strips were all applied, sediment reductions would total 24%, considerably improving the current sediment situation. We present a novel cost-effectiveness analysis of our results where we use percentage of land removed from production as our cost function. Given the minimal loss of land associated with contour tillage, hedges and buffer strips, we suggest that these management practices are the most cost-effective combination to reduce sediment loads.
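The cost function described above (percentage of land removed from production) can be sketched as a simple ranking. The sediment-reduction figures come from the abstract; the land-loss percentages are hypothetical placeholders, since the abstract only states that contour tillage, hedges and buffer strips involve minimal loss of land.

```python
# Illustrative cost-effectiveness ranking of sediment-management scenarios.
scenarios = {
    # name: (sediment reduction %, land removed from production %)
    "40% land use change": (19.0, 40.0),
    "contour tillage":     (7.0, 0.5),   # land loss assumed
    "hedges":              (8.0, 1.0),   # land loss assumed
    "buffer strips":       (9.0, 2.0),   # land loss assumed
}

def cost_effectiveness(reduction, land_lost):
    """Sediment reduction achieved per unit of land removed from production."""
    return reduction / land_lost

ranked = sorted(scenarios.items(),
                key=lambda kv: cost_effectiveness(*kv[1]),
                reverse=True)
for name, (red, land) in ranked:
    print(f"{name}: {cost_effectiveness(red, land):.1f} % reduction per % land lost")
```

Under these assumed land-loss figures, the minimal-land measures dominate the ranking even though the 40% land use change gives the largest absolute reduction, which is the qualitative point the abstract makes.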

Relevance: 20.00%

Abstract:

The cloud-air transition zone at stratiform cloud edges is an electrically active region where droplet charging has been predicted. Cloud edge droplet charging is expected from vertical flow of cosmic ray generated atmospheric ions in the global electric circuit. Experimental confirmation of stratiform cloud edge electrification is presented here, through charge and droplet measurements made within an extensive layer of supercooled stratiform cloud, using a specially designed electrostatic sensor. Negative space charge up to 35 pC m−3 was found in a thin (<100 m) layer at the lower cloud boundary associated with the clear air-cloud conductivity gradient, agreeing closely with space charge predicted from the measured droplet concentration using ion-aerosol theory. Such charge levels carried by droplets are sufficient to influence collision processes between cloud droplets.

Relevance: 20.00%

Abstract:

Given the extensive use of polymers in the modern age with applications ranging from aerospace components to microcircuitry, the ability to regain the mechanical and physical characteristics of complex pristine materials after damage is an attractive proposition. This tutorial review focusses upon the key chemical concepts that have been successfully utilised in the design of healable polymeric materials.

Relevance: 20.00%

Abstract:

In 2003, through a conference presentation in Vancouver and a series of exchanges with Lemon, Leonidas convinced Adobe to substantially extend the coverage of the Greek script in forthcoming Adobe typefaces. The revised brief for Garamond was extended to include, for the first time in a digital typeface, extensive polytonic support, full archaic characters, and small capitals with optional polytonic diacritics; these features should be implemented with respect for the Greek language’s complex rules for case conversion, allowing full dictionary support regardless of the features applied. This project was the first where these issues were addressed, both from a documentation and a development point of view. Leonidas’ responsibilities lay with researching historical and current conventions, developing specifications for the appearance and behaviour of the typefaces, editing glyph outlines, and testing of development versions.

Relevance: 20.00%

Abstract:

Temporal discounting (TD) matures with age, alongside other markers of increased impulse control, and coherent, self-regulated behaviour. Discounting paradigms quantify the ability to refrain from preference of immediate rewards, in favour of delayed, larger rewards. As such, they measure temporal foresight and the ability to delay gratification, functions that develop slowly into adulthood. We investigated the neural maturation that accompanies the previously observed age-related behavioural changes in discounting, from early adolescence into mid-adulthood. We used functional magnetic resonance imaging of a hypothetical discounting task with monetary rewards delayed in the week to year range. We show that age-related reductions in choice impulsivity were associated with changes in activation in ventromedial prefrontal cortex (vmPFC), anterior cingulate cortex (ACC), ventral striatum (VS), insula, inferior temporal gyrus, and posterior parietal cortex. Limbic frontostriatal activation changes were specifically associated with age-dependent reductions in impulsive choice, as part of a more extensive network of brain areas showing age-related changes in activation, including dorsolateral PFC, inferior parietal cortex, and subcortical areas. The maturational pattern of functional connectivity included strengthening in activation coupling between ventromedial and dorsolateral PFC, parietal and insular cortices during selection of delayed alternatives, and between vmPFC and VS during selection of immediate alternatives. We conclude that maturational mechanisms within limbic frontostriatal circuitry underlie the observed post-pubertal reductions in impulsive choice with increasing age, and that this effect is dependent on increased activation coherence within a network of areas associated with discounting behaviour and inter-temporal decision-making.
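The trade-off such discounting paradigms probe can be sketched with a simple choice model. The abstract does not specify a discounting function; the hyperbolic form V = A / (1 + kD), with discount rate k (higher k = more impulsive), is a common assumption in this literature and is used here purely for illustration.

```python
# Minimal sketch of a temporal-discounting choice model (hyperbolic form assumed).
def discounted_value(amount, delay_days, k):
    """Hyperbolic subjective value of a reward delayed by delay_days."""
    return amount / (1.0 + k * delay_days)

def chooses_delayed(immediate, delayed, delay_days, k):
    """True if the delayed reward's discounted value beats the immediate one."""
    return discounted_value(delayed, delay_days, k) > immediate

# A more impulsive chooser (larger k) rejects a delay a patient one accepts:
print(chooses_delayed(20, 50, delay_days=180, k=0.005))  # patient -> True
print(chooses_delayed(20, 50, delay_days=180, k=0.05))   # impulsive -> False
```

Age-related reductions in choice impulsivity, as in the study above, correspond to a falling k: the same delayed offer is accepted more often as k decreases.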

Relevance: 20.00%

Abstract:

We present an extensive thermodynamic analysis of a hysteresis experiment performed on a simplified yet Earth-like climate model. We slowly vary the solar constant by 20% around the present value and detect that for a large range of values of the solar constant the realization of snowball or of regular climate conditions depends on the history of the system. Using recent results on the global climate thermodynamics, we show that the two regimes feature radically different properties. The efficiency of the climate machine monotonically increases with decreasing solar constant in present climate conditions, whereas the opposite takes place in snowball conditions. Instead, entropy production is monotonically increasing with the solar constant in both branches of climate conditions, and its value is about four times larger in the warm branch than in the corresponding cold state. Finally, the degree of irreversibility of the system, measured as the fraction of excess entropy production due to irreversible heat transport processes, is much higher in the warm climate conditions, with an explosive growth in the upper range of the considered values of solar constants. Whereas in the cold climate regime a dominating role is played by changes in the meridional albedo contrast, in the warm climate regime changes in the intensity of latent heat fluxes are crucial for determining the observed properties. This substantiates the importance of addressing correctly the variations of the hydrological cycle in a changing climate. An interpretation of the climate transitions at the tipping points based upon macro-scale thermodynamic properties is also proposed. Our results support the adoption of a new generation of diagnostic tools based on the second law of thermodynamics for auditing climate models and outline a set of parametrizations to be used in conceptual and intermediate-complexity models or for the reconstruction of the past climate conditions. 
Copyright © 2010 Royal Meteorological Society

Relevance: 20.00%

Abstract:

This paper derives an efficient algorithm for constructing sparse kernel density (SKD) estimates. The algorithm first selects a very small subset of significant kernels using an orthogonal forward regression (OFR) procedure based on the D-optimality experimental design criterion. The weights of the resulting sparse kernel model are then calculated using a modified multiplicative nonnegative quadratic programming algorithm. Unlike most SKD estimators, the proposed D-optimality regression approach is an unsupervised construction algorithm and does not require an empirical desired response for the kernel selection task. The strength of the D-optimality OFR lies in the fact that the algorithm automatically selects a small subset of the most significant kernels related to the largest eigenvalues of the kernel design matrix, which accounts for most of the energy of the kernel training data, and this also guarantees the most accurate kernel weight estimate. The proposed method is also computationally attractive in comparison with many existing SKD construction algorithms. Extensive numerical investigation demonstrates the ability of this regression-based approach to efficiently construct a very sparse kernel density estimate with excellent test accuracy, and our results show that the proposed method compares favourably with other existing sparse methods, in terms of test accuracy, model sparsity and complexity, for constructing kernel density estimates.
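A rough sketch of the two-stage construction described above, with two explicit stand-ins: greedy selection by orthogonalised residual energy serves as a proxy for the D-optimality OFR criterion, and SciPy's `nnls` replaces the paper's multiplicative nonnegative quadratic programming step. The bandwidth, sample size and model size are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.optimize import nnls  # stand-in for the multiplicative NNQP step

def gaussian_design(x, centres, h):
    """n x m matrix of normalised Gaussian kernels evaluated at the samples."""
    d = x[:, None] - centres[None, :]
    return np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2.0 * np.pi))

def select_kernels(phi, m):
    """Greedy orthogonal forward selection of m columns by residual energy."""
    chosen = []
    residual = phi.copy()
    for _ in range(m):
        energies = (residual ** 2).sum(axis=0)
        energies[chosen] = -np.inf              # never re-pick a selected column
        k = int(np.argmax(energies))
        chosen.append(k)
        q = residual[:, k] / np.linalg.norm(residual[:, k])
        residual -= np.outer(q, q @ residual)   # orthogonalise remaining columns
    return chosen

rng = np.random.default_rng(0)
x = rng.normal(size=200)
phi = gaussian_design(x, x, h=0.3)     # every sample is a candidate centre
idx = select_kernels(phi, m=8)         # small subset of significant kernels
parzen = phi.mean(axis=1)              # full Parzen-window density at each sample
w, _ = nnls(phi[:, idx], parzen)       # nonnegative weights for the sparse model
w /= w.sum()                           # mixture weights sum to one
```

The result is a density model with 8 kernels instead of 200, fitted without any labelled response, which is the unsupervised character the abstract emphasises.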

Relevance: 20.00%

Abstract:

We have conducted the first extensive field test of two new methods to retrieve optical properties for overhead clouds that range from patchy to overcast. The methods use measurements of zenith radiance at 673 and 870 nm wavelengths and require the presence of green vegetation in the surrounding area. The test was conducted at the Atmospheric Radiation Measurement Program Oklahoma site during September–November 2004. These methods work because at 673 nm (red) and 870 nm (near infrared (NIR)), clouds have nearly identical optical properties, while vegetated surfaces reflect quite differently. The first method, dubbed REDvsNIR, retrieves not only cloud optical depth τ but also radiative cloud fraction. Because of the 1-s time resolution of our radiance measurements, we are able for the first time to capture changes in cloud optical properties at the natural timescale of cloud evolution. We compared values of τ retrieved by REDvsNIR to those retrieved from downward shortwave fluxes and from microwave brightness temperatures. The flux method generally underestimates τ relative to the REDvsNIR method. Even for overcast but inhomogeneous clouds, differences between REDvsNIR and the flux method can be as large as 50%. In addition, REDvsNIR agreed to better than 15% with the microwave method for both overcast and broken clouds. The second method, dubbed COUPLED, retrieves τ by combining zenith radiances with fluxes. While extra information from fluxes was expected to improve retrievals, this is not always the case. In general, however, the COUPLED and REDvsNIR methods retrieve τ to within 15% of each other.

Relevance: 20.00%

Abstract:

Based on the potential benefits to human health, there is interest in developing sustainable nutritional strategies to enhance the concentration of long-chain n-3 fatty acids in ruminant-derived foods. Four Aberdeen Angus steers fitted with rumen and duodenal cannulae were used in a 4 × 4 Latin square experiment with 21 d experimental periods to examine the potential of fish oil (FO) in the diet to enhance the supply of 20 : 5n-3 and 22 : 6n-3 available for absorption in growing cattle. Treatments consisted of total mixed rations based on maize silage fed at a rate of 85 g DM/kg live weight0·75/d containing 0, 8, 16 and 24 g FO/kg diet DM. Supplements of FO reduced linearly (P < 0·01) DM intake and shifted (P < 0·01) rumen fermentation towards propionate at the expense of acetate and butyrate. FO in the diet enhanced linearly (P < 0·05) the flow of trans-16 : 1, trans-18 : 1, trans-18 : 2, 20 : 5n-3 and 22 : 6n-3, and decreased linearly (P < 0·05) 18 : 0 and 18 : 3n-3 at the duodenum. Increases in the flow of trans-18 : 1 were isomer dependent and were determined primarily by higher amounts of trans-11 reaching the duodenum. In conclusion, FO alters ruminal lipid metabolism of growing cattle in a dose-dependent manner consistent with an inhibition of ruminal biohydrogenation, and enhances the amount of long-chain n-3 fatty acids at the duodenum, but the increases are marginal due to extensive biohydrogenation in the rumen.

Relevance: 20.00%

Abstract:

Collected papers of the University of Reading Stenton Symposium, 2008.

Relevance: 20.00%

Abstract:

The dinuclear complex [(tpy)Ru-II(PCP-PCP)Ru-II(tpy)]Cl-2 (bridging PCP-PCP = 3,3',5,5'-tetrakis(diphenylphosphinomethyl)biphenyl, [C6H2(CH2PPh2)(2)-3,5](2)(2-)) was prepared via a transcyclometalation reaction of the bis-pincer ligand [PC(H)P-PC(H)P] and the Ru(II) precursor [Ru(NCN)(tpy)]Cl (NCN = [C6H3(CH2NMe2)(2)-2,6](-)) followed by a reaction with 2,2':6',2''-terpyridine (tpy). Electrochemical and spectroscopic properties of [(tpy)Ru-II(PCP-PCP)Ru-II(tpy)]Cl-2 are compared with those of the closely related [(tpy)Ru-II(NCN-NCN)Ru-II(tpy)](PF6)(2) (NCN-NCN = [C6H2(CH2NMe2)(2)-3,5](2)(2-)) obtained by two-electron reduction of [(tpy)Ru-III(NCN-NCN)Ru-III(tpy)](PF6)(4). The molecular structure of the latter complex has been determined by single-crystal X-ray structure determination. One-electron reduction of [(tpy)Ru-III(NCN-NCN)Ru-III(tpy)](PF6)(4) and one-electron oxidation of [(tpy)Ru-II(PCP-PCP)Ru-II(tpy)]Cl-2 yielded the mixed-valence species [(tpy)Ru-III(NCN-NCN)Ru-II(tpy)](3+) and [(tpy)Ru-III(PCP-PCP)Ru-II(tpy)](3+), respectively. The comproportionation equilibrium constants K-c (900 and 748 for [(tpy)Ru-III(NCN-NCN)Ru-III(tpy)](4+) and [(tpy)Ru-II(PCP-PCP)Ru-II(tpy)](2+), respectively) determined from cyclic voltammetric data reveal comparable stability of the [Ru-III-Ru-II] state of both complexes. Spectroelectrochemical measurements and near-infrared (NIR) spectroscopy were employed to further characterize the different redox states, with special focus on the mixed-valence species and their NIR bands. Analysis of these bands in the framework of Hush theory indicates that the mixed-valence complexes [(tpy)Ru-III(PCP-PCP)Ru-II(tpy)](3+) and [(tpy)Ru-III(NCN-NCN)Ru-II(tpy)](3+) belong to strongly coupled borderline Class II/Class III and intrinsically coupled Class III systems, respectively. Preliminary DFT calculations suggest that extensive delocalization of the spin density over the metal centers and the bridging ligand exists. TD-DFT calculations then suggested a substantial MLCT character of the NIR electronic transitions. The results obtained in this study point to a decreased metal-metal electronic interaction accommodated by the double-cyclometalated bis-pincer bridge when strong sigma-donor NMe2 groups are replaced by weak sigma-donor, pi-acceptor PPh2 groups.
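The comproportionation constants quoted above relate to the separation dE between the two one-electron redox waves in the cyclic voltammogram through the standard Nernstian relation K_c = exp(nF dE / RT). A minimal sketch assuming n = 1 and 298 K; the dE values it prints are back-calculated from the quoted K_c figures and are not reported in the abstract itself.

```python
import math

F = 96485.33    # Faraday constant, C/mol
R = 8.31446     # gas constant, J/(mol K)
T = 298.15      # temperature assumed, K

def comproportionation_constant(dE, n=1):
    """K_c from the redox-wave separation dE (in volts)."""
    return math.exp(n * F * dE / (R * T))

def wave_separation(Kc, n=1):
    """Invert the relation to recover dE from a measured K_c."""
    return R * T * math.log(Kc) / (n * F)

for Kc in (900, 748):
    print(f"K_c = {Kc}  ->  dE = {wave_separation(Kc) * 1000:.0f} mV")
```

The two quoted constants thus correspond to wave separations of roughly 175 and 170 mV, which is why the abstract describes the stability of the two [Ru-III-Ru-II] states as comparable.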

Relevance: 20.00%

Abstract:

A large number of urban surface energy balance models now exist with different assumptions about the important features of the surface and exchange processes that need to be incorporated. To date, no comparison of these models has been conducted; in contrast, models for natural surfaces have been compared extensively as part of the Project for Intercomparison of Land-surface Parameterization Schemes. Here, the methods and first results from an extensive international comparison of 33 models are presented. The aim of the comparison overall is to understand the complexity required to model energy and water exchanges in urban areas. The degree of complexity included in the models is outlined and impacts on model performance are discussed. During the comparison there have been significant developments in the models with resulting improvements in performance (root-mean-square error falling by up to two-thirds). Evaluation is based on a dataset containing net all-wave radiation, sensible heat, and latent heat flux observations for an industrial area in Vancouver, British Columbia, Canada. The aim of the comparison is twofold: to identify those modeling approaches that minimize the errors in the simulated fluxes of the urban energy balance and to determine the degree of model complexity required for accurate simulations. There is evidence that some classes of models perform better for individual fluxes but no model performs best or worst for all fluxes. In general, the simpler models perform as well as the more complex models based on all statistical measures. Generally the schemes have best overall capability to model net all-wave radiation and least capability to model latent heat flux.
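The evaluation metric used above is root-mean-square error between simulated and observed fluxes. A minimal sketch of that metric; the flux values below are made-up illustrations, not data from the comparison.

```python
import math

def rmse(simulated, observed):
    """Root-mean-square error between paired simulated and observed values."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed))
                     / len(observed))

observed = [310.0, 120.0, 45.0]   # net radiation, sensible, latent flux (W m-2)
initial  = [250.0, 160.0, 90.0]   # hypothetical first-round model output
improved = [295.0, 130.0, 60.0]   # hypothetical output after model development

print(rmse(initial, observed) > rmse(improved, observed))  # True
```

Falling RMSE across development rounds is exactly the kind of improvement the comparison reports (up to a two-thirds reduction).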

Relevance: 20.00%

Abstract:

K-Means is a popular clustering algorithm which adopts an iterative refinement procedure to determine data partitions and to compute their associated centres of mass, called centroids. The straightforward implementation of the algorithm is often referred to as `brute force' since it computes a proximity measure from each data point to each centroid at every iteration of the K-Means process. Efficient implementations of the K-Means algorithm have been predominantly based on multi-dimensional binary search trees (KD-Trees). A combination of an efficient data structure and geometrical constraints allows the number of distance computations required at each iteration to be reduced. In this work we present a general space partitioning approach for improving the efficiency and the scalability of the K-Means algorithm. We propose to adopt approximate hierarchical clustering methods to generate binary space partitioning trees in contrast to KD-Trees. In the experimental analysis, we have tested the performance of the proposed Binary Space Partitioning K-Means (BSP-KM) when a divisive clustering algorithm is used. We have carried out extensive experimental tests to compare the proposed approach to the one based on KD-Trees (KD-KM) in a wide range of the parameter space. BSP-KM is more scalable than KD-KM, while keeping the deterministic nature of the `brute force' algorithm. In particular, the proposed space partitioning approach has been shown to overcome the well-known limitation of KD-Trees in high-dimensional spaces and can also be adopted to improve the efficiency of other algorithms in which KD-Trees have been used.
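For reference, the `brute force' baseline that tree-based variants accelerate can be sketched in a few lines: every iteration computes the full point-to-centroid distance matrix, which is precisely the cost KD-Tree and BSP-Tree structures prune. A minimal sketch, not the paper's implementation; the two-blob data and parameter values are illustrative.

```python
import numpy as np

def kmeans_brute_force(x, k, iters=20, seed=0):
    """Lloyd's algorithm with the full n x k distance matrix each iteration."""
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), size=k, replace=False)]
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        # the all-pairs computation that KD-Tree / BSP-Tree variants avoid
        d2 = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):                      # recompute centres of mass
            members = x[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels

rng = np.random.default_rng(1)
x = np.vstack([rng.normal(0.0, 0.3, (50, 2)),   # two well-separated blobs
               rng.normal(3.0, 0.3, (50, 2))])
centroids, labels = kmeans_brute_force(x, k=2)
```

Because the assignment step here is exhaustive, the result is deterministic for a fixed seed, which is the property the abstract says BSP-KM preserves while reducing the distance computations.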

Relevance: 20.00%

Abstract:

Dense deployments of wireless local area networks (WLANs) are becoming a norm in many cities around the world. However, increased interference and traffic demands can severely limit the aggregate throughput achievable unless an effective channel assignment scheme is used. In this work, a simple and effective distributed channel assignment (DCA) scheme is proposed. It is shown that in order to maximise throughput, each access point (AP) simply chooses the channel with the minimum number of active neighbour nodes (i.e. nodes associated with neighbouring APs that have packets to send). However, application of such a scheme in practice depends critically on its ability to estimate the number of neighbour nodes in each channel, for which no practical estimator has been proposed before. In view of this, an extended Kalman filter (EKF) estimator and an estimate of the number of nodes per AP are proposed. These not only provide fast and accurate estimates but can also exploit channel-switching information of neighbouring APs. Extensive packet-level simulation results show that the proposed minimum neighbour and EKF estimator (MINEK) scheme is highly scalable and can provide significant throughput improvement over other channel assignment schemes.
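The minimum-neighbour rule above can be sketched as follows. A scalar Kalman filter stands in for the paper's extended Kalman filter, and the noise variances and observation sequence are illustrative assumptions, not values from the paper.

```python
# Sketch: each AP tracks a per-channel estimate of active neighbour nodes
# and picks the channel with the smallest estimate.
class ChannelEstimator:
    def __init__(self, n_channels, q=0.5, r=2.0):
        self.est = [0.0] * n_channels   # estimated active neighbours per channel
        self.var = [10.0] * n_channels  # estimate variance (large = uninformed)
        self.q, self.r = q, r           # process / measurement noise (assumed)

    def update(self, channel, measured_busy_nodes):
        # predict: neighbour activity drifts over time, so inflate the variance
        self.var[channel] += self.q
        # correct with the new (noisy) observation of activity on that channel
        gain = self.var[channel] / (self.var[channel] + self.r)
        self.est[channel] += gain * (measured_busy_nodes - self.est[channel])
        self.var[channel] *= (1.0 - gain)

    def best_channel(self):
        """Minimum-neighbour rule: pick the least-loaded channel."""
        return min(range(len(self.est)), key=lambda c: self.est[c])

ap = ChannelEstimator(n_channels=3)
for obs in [(0, 6), (1, 2), (2, 9), (0, 5), (1, 3), (2, 8)]:
    ap.update(*obs)
print(ap.best_channel())  # channel 1 has the fewest active neighbours
```

Filtering the raw counts rather than using them directly is what gives the scheme the "fast and accurate estimates" the abstract claims, since individual observations of channel activity are noisy.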