23 results for DIFFERENT ADHESIVE SYSTEMS
in CentAUR: Central Archive University of Reading - UK
Determination of digesta flow entering the omasal canal of dairy cows using different marker systems
Abstract:
Four studies were conducted to compare the effect of four indigestible markers (LiCoEDTA, Yb-acetate, Cr-mordanted straw and indigestible neutral-detergent fibre (INDF)) and three marker systems on the flow of digesta entering the omasal canal of lactating dairy cows. Samples of digesta aspirated from the omasal canal were pooled and separated using filtration and high-speed centrifugation into three fractions defined as the liquid phase, small particulate and large particulate matter. Co was primarily associated with the liquid phase, Yb was concentrated in small particulate matter, whilst Cr and INDF were associated with large particles. Digesta flow was calculated based on single markers or using the reconstitution system based on combinations of two (Co + Yb, Co + Cr and Co + INDF) or three markers (Co + Yb + Cr and Co + Yb + INDF). Use of single markers resulted in large differences between estimates of organic matter (OM) flow entering the omasal canal suggesting that samples were not representative of true digesta. Digesta appeared to consist of at least three phases that tended to separate during sampling. OM was concentrated in particulate matter, whilst the liquid phase consisted mainly of volatile fatty acids and inorganic matter. Yb was intimately associated with nitrogenous compounds, whereas Cr and INDF were concentrated in fibrous material. Current data indicated that marker systems based on Yb in combination with Cr or INDF are required for the accurate determination of OM, N and neutral-detergent fibre flow. In cases where the flow of water-soluble nutrients entering the omasal canal is also required, the marker system should also include Co.
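The double-marker reconstitution described above can be sketched as a small linear system: assuming the infusion rate of each marker and its concentration in each digesta phase are known, the fluid and particulate flows are the solution of a 2x2 system. All numbers below are hypothetical, not data from the study.

```python
# Double-marker reconstitution sketch (hypothetical numbers).
# Unknowns: fluid flow Ff and particulate flow Fp (kg/h) such that the
# total marker recovery matches the infusion rate of each marker.

def reconstitute_flows(infusion, conc_fluid, conc_part):
    """Solve Ff*cf + Fp*cp = I for each of two markers (2x2 system)."""
    i1, i2 = infusion          # marker infusion rates, mg/h
    c1f, c2f = conc_fluid      # marker concentrations in the fluid phase, mg/kg
    c1p, c2p = conc_part       # marker concentrations in the particulate phase, mg/kg
    det = c1f * c2p - c1p * c2f
    ff = (i1 * c2p - c1p * i2) / det   # fluid-phase flow, kg/h
    fp = (c1f * i2 - i1 * c2f) / det   # particulate-phase flow, kg/h
    return ff, fp

# Example: a Co-type marker concentrated in the fluid phase, a Yb-type
# marker concentrated in particulate matter.
ff, fp = reconstitute_flows(infusion=(100.0, 100.0),
                            conc_fluid=(9.0, 1.0),
                            conc_part=(2.0, 18.0))
print(ff, fp)  # reconstituted fluid and particulate flows, kg/h
```

A nutrient flow (e.g. OM) would then follow as `ff * om_fluid + fp * om_part`, given the OM content of each phase.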
Abstract:
The objective of this article is to review the scientific literature on airflow distribution systems and ventilation effectiveness to identify and assess the most suitable room air distribution methods for various spaces. In this study, different ventilation systems are classified according to specific requirements and assessment procedures. This study shows that eight ventilation methods have been employed in the built environment for different purposes and tasks. The investigation shows that numerous studies have been carried out on ventilation effectiveness, but few have addressed other aspects of air distribution. Amongst existing types of ventilation systems, the performance of each ventilation method varies from one case to another, owing to the different uses of the ventilation system in a room and the different assessment indices used. This review shows that the assessment of ventilation effectiveness or efficiency should be determined according to each task of the ventilation system, such as removing heat, removing pollutants, supplying fresh air to the breathing zone or protecting the occupant from cross-infection. The analysis results form a basic framework regarding the application of airflow distribution for the benefit of designers, architects, engineers, installers and building owners.
Abstract:
A field plot experiment was set up on a sandy loam soil in SW England in order to determine the efficiency of nitrogen use from different cattle manures. The manure treatments were low and high dry matter cattle slurries and one farmyard manure, applied at a target rate of 200 kg total N ha(-1) year(-1), plus an untreated control. There were three different cropping systems: a ryegrass/clover mixture, maize/rye and maize/bare soil, which were evaluated during 1998/99 and 1999/00. Measurements were made of N losses, N uptake and herbage DM yields. Results showed that manure type had a significant effect on N utilisation only for maize. N balances were negative in maize (approximately -247 to -10 kg N) compared to grass (approximately 5-158 kg N). Agronomic management was more important than manure type in influencing N losses, where soil cultivation appeared to be a key factor when comparing maize and grass systems. (C) 2004 Elsevier Ltd. All rights reserved.
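The sign convention for the N balances quoted above (negative in maize, positive in grass) can be illustrated with a one-line calculation; the input and offtake figures below are illustrative, not values from the study.

```python
# Simple field nitrogen balance sketch (illustrative numbers only).
def n_balance(n_applied, n_offtake):
    """N balance (kg N/ha): positive means a surplus remains in the system,
    negative means the crop removed more N than was applied."""
    return n_applied - n_offtake

maize = n_balance(n_applied=200.0, n_offtake=350.0)  # heavy crop offtake
grass = n_balance(n_applied=200.0, n_offtake=120.0)
print(maize, grass)
```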
Abstract:
This article is a commentary on several research studies conducted on the prospects for aerobic rice production systems that aim at reducing the demand for irrigation water which in certain major rice producing areas of the world is becoming increasingly scarce. The research studies considered, as reported in published articles mainly under the aegis of the International Rice Research Institute (IRRI), have a narrow scope in that they test only 3 or 4 rice varieties under different soil moisture treatments obtained with controlled irrigation, but with other agronomic factors of production held constant. Consequently, these studies do not permit an assessment of the interactions among agronomic factors that will be of critical significance to the performance of any production system. Varying the production factor of "water" will also seriously affect the levels of the other factors required to optimise the performance of a production system. The major weakness in the studies analysed in this article originates from not taking account of the interactions between experimental and non-experimental factors involved in the comparisons between different production systems. This applies to the experimental field design used for the research studies as well as to the subsequent statistical analyses of the results. The existence of such interactions is a serious complicating element that makes meaningful comparisons between different crop production systems difficult. Consequently, the data and conclusions drawn from such research readily become biased towards proposing standardised solutions for possible introduction to farmers through a linear technology transfer process. Yet, the variability and diversity encountered in the real-world farming environment demand more flexible solutions and approaches in the dissemination of knowledge-intensive production practices through "experiential learning" types of processes, such as those employed by farmer field schools.
This article illustrates, based on experience with the 'system of rice intensification' (SRI), that several cost-effective and environment-friendly agronomic solutions to reduce the demand for irrigation water, other than the asserted need for the introduction of new cultivars, are feasible. Further, these agronomic solutions can offer immediate benefits of reduced water requirements and increased net returns that would be readily accessible to a wide range of rice producers, particularly resource-poor smallholders. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
We have investigated the signalling properties of the chemokine receptor, CCR5, using several assays for agonism: stimulation of changes in intracellular Ca2+ or CCR5 internalisation in CHO cells expressing CCR5 or stimulation of [S-35]GTP gamma S binding in membranes of CHO cells expressing CCR5. Four isoforms of the chemokine CCL3 with different amino termini (CCL3, CCL3(2-70), CCL3(5-70), CCL3L1) were tested in these assays in order to probe structure/activity relationships. Each isoform exhibited agonism. The pattern of agonism (potency, maximal effect) was different in the three assays, although the rank order was the same with CCL3L1 being the most potent and efficacious. The data show that the amino terminus of the chemokine is important for signalling. A proline at position 2 (CCL3L1) provides for high potency and efficacy but the isoform with a serine at position 2 (CCL3(2-70)) is as efficacious in some assays showing that the proline is not the only determinant of high efficacy. We also increased the sensitivity of CCR5 signalling by treating cells with sodium butyrate, thus increasing the receptor/G protein ratio. This allowed the detection of a change in intracellular Ca2+ after treatment with CCL7 and Met-RANTES showing that these ligands possess measurable but low efficacy. This study therefore shows that sodium butyrate treatment increases the sensitivity of signalling assays and enables the detection of efficacy in ligands previously considered as antagonists. The use of different assay systems, therefore, provides different estimates of efficacy for some ligands at this receptor. (c) 2006 Elsevier Inc. All rights reserved.
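The notions of potency (EC50) and efficacy (maximal effect) used above can be illustrated with the standard Hill concentration-response equation; the parameter values below are hypothetical and are not measured values for the CCL3 isoforms.

```python
def hill_response(conc, emax, ec50, n=1.0):
    """Fractional response of an agonist at concentration `conc`
    (same units as ec50), following the Hill equation."""
    return emax * conc**n / (ec50**n + conc**n)

# Hypothetical ligands: A is both more potent (lower EC50) and more
# efficacious (higher Emax) than B, as CCL3L1 was relative to the
# other isoforms in these assays.
resp_a = hill_response(conc=1.0, emax=1.0, ec50=0.1)
resp_b = hill_response(conc=1.0, emax=0.6, ec50=1.0)
print(resp_a, resp_b)
```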
Abstract:
This paper deals with the energy consumption and the evaluation of the performance of air supply systems for a ventilated room involving high- and low-level supplies. The energy performance assessment is based on the airflow rate, which is related to the fan power consumption for achieving the same environmental quality performance in each case. Four different ventilation systems are considered: wall displacement ventilation, confluent jets ventilation, impinging jet ventilation and a high-level mixing ventilation system. The ventilation performance of these systems is examined by means of achieving the same Air Distribution Index (ADI) for different cases. The widely used high-level supplies require much more fan power than low-level supplies for achieving the same value of ADI. In addition, the supply velocity, and hence the supply dynamic pressure, for a high-level supply is much larger than for low-level supplies; this further increases the power consumption of high-level supply systems. The paper considers these factors and attempts to provide some guidelines on the difference in the energy consumption associated with high- and low-level air supply systems. This will be useful information for designers, and to the authors' knowledge there is a lack of information in the literature on this area of room air distribution. The energy performance of the above-mentioned ventilation systems has been evaluated on the basis of the fan power consumed, which is related to the airflow rate required to provide an equivalent indoor environment. The Air Distribution Index (ADI) is used to evaluate the indoor environment produced in the room by the ventilation strategy being used. The results reveal that mixing ventilation requires the highest fan power and confluent jets ventilation the lowest in order to achieve nearly the same value of ADI.
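The link between airflow rate and fan power that the comparison above rests on can be sketched with the standard fan-power relation P = Q * dp / eta; the flow rates, pressure rises and efficiency below are illustrative assumptions, not figures from the paper.

```python
# Fan power needed to deliver airflow Q against pressure rise dp.
def fan_power(q_m3_per_s, dp_pa, eta=0.6):
    """Fan power (W) for airflow q (m^3/s), pressure rise dp (Pa),
    and overall fan efficiency eta (dimensionless)."""
    return q_m3_per_s * dp_pa / eta

# A high-level (mixing) supply typically needs a higher airflow and a
# higher supply dynamic pressure than a low-level supply for the same
# ADI, hence more fan power (all numbers hypothetical).
p_high = fan_power(q_m3_per_s=0.5, dp_pa=300.0)
p_low = fan_power(q_m3_per_s=0.3, dp_pa=100.0)
print(p_high, p_low)
```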
Abstract:
Most studies on the interoperability of systems integration focus on the technical and semantic levels, but rarely extend their investigations to the pragmatic level. Our past work has addressed pragmatic interoperability, which is concerned with the relationship between signs and the potential behaviour and intention of responsible agents. We also define pragmatic interoperability as a level concerned with the aggregation and optimisation of various business processes for achieving the intended purposes of different information systems. This paper, as an extension of our previous research, proposes an assessment method for measuring the pragmatic interoperability of information systems. We first propose an interoperability analysis framework, which is based on the concept of semiosis. We then develop a pragmatic interoperability assessment process along two dimensions comprising six aspects (informal, formal, technical, substantive, communication, and control). Finally, we illustrate the assessment process with an example.
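A minimal sketch of one possible aggregation step over the six aspects named above, assuming each aspect receives a normalised score in [0, 1] and an optional weight; the weighted-mean rule is an assumption for illustration, not the paper's method.

```python
# Hypothetical aggregation of per-aspect pragmatic-interoperability scores.
ASPECTS = ("informal", "formal", "technical",
           "substantive", "communication", "control")

def pragmatic_score(scores, weights=None):
    """Weighted mean of per-aspect scores, each in [0, 1]."""
    if weights is None:
        weights = {a: 1.0 for a in ASPECTS}
    total_w = sum(weights[a] for a in ASPECTS)
    return sum(scores[a] * weights[a] for a in ASPECTS) / total_w

# Illustrative assessment of one pair of information systems.
example = dict(zip(ASPECTS, (0.8, 0.6, 0.9, 0.5, 0.7, 0.7)))
print(pragmatic_score(example))
```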
Abstract:
This paper explores the criticism that system dynamics is a ‘hard’ or ‘deterministic’ systems approach. This criticism is seen to have four interpretations and each is addressed from the perspectives of social theory and systems science. Firstly, system dynamics is shown to offer not prophecies but Popperian predictions. Secondly, it is shown to involve the view that system structure only partially, not fully, determines human behaviour. Thirdly, the field's assumptions are shown not to constitute a grand content theory—though its structural theory and its attachment to the notion of causality in social systems are acknowledged. Finally, system dynamics is shown to be significantly different from systems engineering. The paper concludes that such confusions have arisen partially because of limited communication at the theoretical level from within the system dynamics community but also because of imperfect command of the available literature on the part of external commentators. Improved communication on theoretical issues is encouraged, though it is observed that system dynamics will continue to justify its assumptions primarily from the point of view of practical problem solving. The answer to the question in the paper's title is therefore: on balance, no.
Abstract:
The impact of selected observing systems on the European Centre for Medium-Range Weather Forecasts (ECMWF) 40-yr reanalysis (ERA40) is explored by mimicking observational networks of the past. This is accomplished by systematically removing observations from the present observational data base used by ERA40. The observing systems considered are a surface-based system typical of the period prior to 1945/50, obtained by only retaining the surface observations, a terrestrial-based system typical of the period 1950-1979, obtained by removing all space-based observations, and finally a space-based system, obtained by removing all terrestrial observations except those for surface pressure. Experiments using these different observing systems have been limited to seasonal periods selected from the last 10 yr of ERA40. The results show that the surface-based system has severe limitations in reconstructing the atmospheric state of the upper troposphere and stratosphere. The terrestrial system has major limitations in generating the circulation of the Southern Hemisphere with considerable errors in the position and intensity of individual weather systems. The space-based system is able to analyse the larger-scale aspects of the global atmosphere almost as well as the present observing system but performs less well in analysing the smaller-scale aspects as represented by the vorticity field. Here, terrestrial data such as radiosondes and aircraft observations are of paramount importance. The terrestrial system in the form of a limited number of radiosondes in the tropics is also required to analyse the quasi-biennial oscillation phenomenon in a proper way. The results also show the dominance of the satellite observing system in the Southern Hemisphere. These results all indicate that care is required in using current reanalyses in climate studies due to the large inhomogeneity of the available observations, in particular in time.
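The observing-system experiments described above amount to filtering the present observation database by platform type; the record structure and platform names below are hypothetical, used only to show the three subset definitions.

```python
# Hypothetical observation records tagged by platform type.
obs = [
    {"platform": "synop",      "space_based": False, "surface": True},
    {"platform": "radiosonde", "space_based": False, "surface": False},
    {"platform": "aircraft",   "space_based": False, "surface": False},
    {"platform": "satellite",  "space_based": True,  "surface": False},
]

# Surface-based system (pre-1945/50): surface observations only.
surface_system = [o for o in obs if o["surface"]]
# Terrestrial system (1950-1979): everything except space-based data.
terrestrial_system = [o for o in obs if not o["space_based"]]
# Space-based system: satellite data plus surface-pressure observations.
space_system = [o for o in obs if o["space_based"] or o["platform"] == "synop"]

print(len(surface_system), len(terrestrial_system), len(space_system))
```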
Abstract:
A new 'storm-tracking approach' to analysing the prediction of storms by different forecast systems has recently been developed. This paper provides a brief illustration of the type of results/information that can be obtained using the approach. It also describes in detail how eScience methodologies have been used to help apply the storm-tracking approach to very large datasets.
Abstract:
This paper aims to summarise the current performance of ozone data assimilation (DA) systems, to show where they can be improved, and to quantify their errors. It examines 11 sets of ozone analyses from 7 different DA systems. Two are numerical weather prediction (NWP) systems based on general circulation models (GCMs); the other five use chemistry transport models (CTMs). The systems examined contain either linearised or detailed ozone chemistry, or no chemistry at all. In most analyses, MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) ozone data are assimilated; two assimilate SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric Chartography) observations instead. Analyses are compared to independent ozone observations covering the troposphere, stratosphere and lower mesosphere during the period July to November 2003. Biases and standard deviations are largest, and show the largest divergence between systems, in the troposphere, in the upper-troposphere/lower-stratosphere, in the upper-stratosphere and mesosphere, and the Antarctic ozone hole region. However, in any particular area, apart from the troposphere, at least one system can be found that agrees well with independent data. In general, none of the differences can be linked to the assimilation technique (Kalman filter, three or four dimensional variational methods, direct inversion) or the system (CTM or NWP system). Where results diverge, a main explanation is the way ozone is modelled. It is important to correctly model transport at the tropical tropopause, to avoid positive biases and excessive structure in the ozone field. In the southern hemisphere ozone hole, only the analyses which correctly model heterogeneous ozone depletion are able to reproduce the near-complete ozone destruction over the pole. In the upper-stratosphere and mesosphere (above 5 hPa), some ozone photochemistry schemes caused large but easily remedied biases. 
The diurnal cycle of ozone in the mesosphere is not captured, except by the one system that includes a detailed treatment of mesospheric chemistry. These results indicate that when good observations are available for assimilation, the first priority for improving ozone DA systems is to improve the models. The analyses benefit strongly from the good quality of the MIPAS ozone observations. Using the analyses as a transfer standard, it is seen that MIPAS is ~5% higher than HALOE (Halogen Occultation Experiment) in the mid and upper stratosphere and mesosphere (above 30 hPa), and of order 10% higher than ozonesonde and HALOE in the lower stratosphere (100 hPa to 30 hPa). Analyses based on SCIAMACHY total column are almost as good as the MIPAS analyses; analyses based on SCIAMACHY limb profiles are worse in some areas, due to problems in the SCIAMACHY retrievals.
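The bias and standard-deviation statistics used in the comparison above can be computed from collocated analysis-minus-observation differences; the collocation values below are synthetic, chosen only to show the bookkeeping.

```python
import statistics

def bias_and_std(analysis, observed):
    """Mean and standard deviation of analysis-minus-observation
    differences, expressed in percent of the observed value."""
    rel = [100.0 * (a - o) / o for a, o in zip(analysis, observed)]
    return statistics.mean(rel), statistics.stdev(rel)

# Synthetic collocations: an analysis running ~5% high against sondes.
analysis = [3.15, 4.20, 5.25, 6.30]  # ozone, arbitrary units
observed = [3.00, 4.00, 5.00, 6.00]
bias, sd = bias_and_std(analysis, observed)
print(bias, sd)
```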
Abstract:
A full assessment of para-virtualization is important, because without knowledge of the various overheads, users cannot judge whether using virtualization is a good idea. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as on para-virtualization. The idea is to see what the overheads of para-virtualization are, as well as looking at the overheads of turning on monitoring and logging. The knowledge from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These different virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). In order to assess these virtualization systems, we run the benchmarks on bare metal, then on the para-virtualization, and finally we turn on monitoring and logging. The latter is important because users are interested in the Service Level Agreements (SLAs) used by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. In this paper we assess the virtualization systems on three different systems: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), which are all servers available at the University of Reading. A functional virtualization system is multi-layered and is driven by the privileged components. Virtualization systems can host multiple guest operating systems, each running in its own domain, and the system schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources. The guest operating system schedules each application accordingly.
Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application; the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines; these guest operating systems are aware that they are running on a virtual machine, and provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is an OS-assisted virtualization, where some modifications are made in the guest operating system to enable better performance. In this kind of virtualization, the guest operating system is aware of the fact that it is running on virtualized hardware and not on the bare hardware. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system, reducing the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed. It has been shown [0] that para-virtualization does not impose significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The "apparent" improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation.
In order to support this hypothesis, first it is necessary to define exactly what is meant by a “class” of application, and secondly it will be necessary to observe application performance, both within a virtual machine and when executing on bare hardware. A further potential complication is associated with the need for Cloud service providers to support Service Level Agreements (SLA), so that system utilisation can be audited.
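The observation step described above, comparing the same benchmark inside a virtual machine and on bare hardware, reduces to timing two runs and computing the relative slowdown. The kernel and the virtualized timing below are stand-ins, not the Netlib benchmarks or measurements from the paper.

```python
import time

def relative_overhead(t_baseline, t_virtualized):
    """Percentage slowdown of the virtualized run relative to bare metal."""
    return 100.0 * (t_virtualized - t_baseline) / t_baseline

def time_benchmark(fn, *args):
    """Wall-clock a single run of a benchmark kernel."""
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

# Stand-in compute kernel for a Netlib-style benchmark (hypothetical).
def kernel(n):
    s = 0.0
    for i in range(1, n):
        s += 1.0 / i
    return s

t_bare = time_benchmark(kernel, 200_000)
# In practice the same binary would be re-run inside the guest VM; here a
# made-up virtualized timing stands in to show the bookkeeping.
t_vm = t_bare * 1.08
print(relative_overhead(t_bare, t_vm))
```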
Abstract:
Smallholdings in the rural areas of northwest Syria are a result of land fragmentation due to inheritance. Because of rapid population growth combined with land fragmentation, these smallholdings are increasing in number and cannot sustain the rural households whose sizes and needs are also increasing rapidly. This situation has led to increasing numbers of males migrating to urban areas in Syria and to neighbouring countries looking for work opportunities. In addition, recent agricultural intensification trends seem to have led to the emergence of a waged labour force which, in the absence of male workers owing to significant rates of migration, is now predominantly female. Agricultural labour use depends upon household characteristics and resources (type of labour used; gender of labour; waged/exchanged/familial). The article attempts to present a comprehensive analysis of household labour use in distinctive farming systems in one region of Syria that has undergone great change in recent decades, and examines the changes in the composition of the agricultural labour force. Secondary information, rapid rural appraisals and formal farm surveys were used to gather information on the households in a study area where different farming systems coexist. The results show that the decrease in landholding size, the resulting male migration, and land intensification have resulted in the expansion of female labour in agricultural production, which has been termed in this research a 'feminization of agricultural labour'. This suggests that agricultural research and extension services will have to work more with women farmers and farm workers, seek their wisdom and involve them in technology transfer. This is not easy in conservative societies, but it requires research and extension institutions to take this reality into consideration in their programmes.