976 results for Push-out test
Abstract:
Ecological risk assessments must increasingly consider the effects of chemical mixtures on the environment as anthropogenic pollution continues to grow in complexity. Yet testing every possible mixture combination is impractical; thus, there is an urgent need for models that can accurately predict mixture toxicity from single-compound data. Two models are currently used for this purpose: concentration addition (CA) and independent action (IA). The accuracy of their predictions is still debated and needs to be resolved before their use in risk assessments can be fully justified. The present study addresses this issue by determining whether the IA model adequately described the toxicity of binary mixtures of five pesticides and other environmental contaminants (cadmium, chlorpyrifos, diuron, nickel, and prochloraz), each with a dissimilar mode of action, on the reproduction of the nematode Caenorhabditis elegans. In three out of 10 cases, the IA model failed to describe mixture toxicity adequately, with significant synergy or antagonism being observed. In a further three cases, there were indications of synergy, antagonism, and effect-level-dependent deviations, respectively, but these were not statistically significant. The extent of the significant deviations varied, but all were such that the predicted percentage effect on reproductive output would have been wrong by 18 to 35% (i.e., the effect concentration expected to cause a 50% effect led to an 85% effect). The presence of such a high number and variety of deviations has important implications for the use of existing mixture toxicity models in risk assessments, especially where all or part of the deviation is synergistic.
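For reference, the two models can be written compactly: under independent action the predicted mixture effect is E_mix = 1 - prod_i(1 - E_i), while concentration addition predicts an effect level x when sum_i(c_i / ECx_i) = 1. The short sketch below is illustrative only, not code or data from the study; the log-logistic curve parameters are hypothetical.

# Illustrative sketch: IA and CA predictions for a binary mixture,
# assuming hypothetical log-logistic concentration-response curves.
import numpy as np

def effect(conc, ec50, slope):
    """Fractional effect (0-1) of a single compound under a log-logistic model."""
    return 1.0 / (1.0 + (ec50 / conc) ** slope) if conc > 0 else 0.0

def ia_effect(concs, ec50s, slopes):
    """Independent action: E_mix = 1 - prod_i(1 - E_i)."""
    return 1.0 - np.prod([1.0 - effect(c, e, s) for c, e, s in zip(concs, ec50s, slopes)])

def ca_toxic_units(concs, ecx):
    """Concentration addition: the mixture causes effect x when sum_i(c_i / ECx_i) = 1."""
    return sum(c / e for c, e in zip(concs, ecx))

# Hypothetical binary mixture with each compound at half its EC50
print(ia_effect([1.0, 2.0], ec50s=[2.0, 4.0], slopes=[2.0, 2.0]))  # IA-predicted effect (0.36)
print(ca_toxic_units([1.0, 2.0], ecx=[2.0, 4.0]))                  # toxic units = 1.0 -> 50% effect under CA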
Abstract:
Anomalous heavy snow during winter or spring has long been regarded as a possible precursor of deficient Indian monsoon rainfall during the subsequent summer. However, previous work in this field is inconclusive, both in terms of the mechanism that communicates snow anomalies to the summer monsoon and even the region from which snow has the most impact. In this study we explore these issues in coupled and atmosphere-only versions of the Hadley Centre model. A 1050-year control integration of the HadCM3 coupled model, which represents the seasonal cycle of snow cover over the Eurasian continent well, is analysed and shows evidence for weakened monsoons being preceded by strong snow forcing (in the absence of ENSO) over either the Himalaya/Tibetan Plateau or north/west Eurasia regions. However, empirical orthogonal function (EOF) analysis of springtime interannual variability in snow depth shows the leading mode to have opposite signs between these two regions, suggesting that competing mechanisms may be at work. To determine the dominant region, ensemble integrations are carried out using HadAM3, the atmospheric component of HadCM3, with a variety of anomalous snow-forcing initial conditions obtained from the control integration of the coupled model. Forcings are applied during spring in separate experiments over the Himalaya/Tibetan Plateau and north/west Eurasia regions, in conjunction with climatological SSTs in order to avoid the direct effects of ENSO. With the aid of idealized forcing conditions in sensitivity tests, we demonstrate that forcing from the Himalaya region is dominant in this model via a Blanford-type mechanism involving reduced surface sensible heat and longwave fluxes, reduced heating of the troposphere over the Tibetan Plateau and, consequently, a reduced meridional tropospheric temperature gradient which weakens the monsoon during early summer. Snow albedo is shown to be key to the mechanism, explaining around 50% of the perturbation in sensible heating over the Tibetan Plateau and accounting for the majority of the cooling through the troposphere.
A refined LEED analysis of water on Ru{0001}: an experimental test of the partial dissociation model
Abstract:
Despite a number of earlier studies that seemed to confirm molecular adsorption of water on close-packed surfaces of late transition metals, new controversy has arisen over recent theoretical work by Feibelman, according to which partial dissociation occurs on the Ru{0001} surface, leading to a mixed (H2O + OH + H) superstructure. Here, we present a refined LEED-IV analysis of the (√3 × √3)R30°-D2O-Ru{0001} structure, explicitly testing this new model by Feibelman. Our results favour the model proposed earlier by Held and Menzel, which assumes intact water molecules with almost coplanar oxygen atoms and out-of-plane hydrogen atoms atop the slightly higher oxygen atoms. The partially dissociated model, with an almost identical arrangement of oxygen atoms, cannot, however, be unambiguously excluded, especially if the single hydrogen atoms are not present in the surface unit cell. In contrast to the earlier LEED-IV analysis, we can clearly exclude a buckled geometry of the oxygen atoms.
Abstract:
The spatial distribution of CO2 levels in a classroom, examined in previous field work, demonstrated that there is some evidence of variation in CO2 concentration across a classroom space. Significant fluctuations in CO2 concentration were found at different sampling points depending on the ventilation strategies and environmental conditions prevailing in individual classrooms. However, how these variations are affected by the emitting sources and the room air movement remains unknown. Hence, it was concluded that a detailed investigation of the CO2 distribution needed to be performed on a smaller scale. It was therefore decided to use an environmental chamber with various methods and rates of ventilation, for the same internal temperature and heat loads, to study the effect of ventilation strategy and air movement on the distribution of CO2 concentration in a room. The role of human exhalation and its interaction with the plume induced by the body's convective flow and with the room air movement produced by different ventilation strategies was studied in a chamber at the University of Reading. These phenomena are considered important for understanding and predicting the flow patterns in a space and how they affect the distribution of contaminants. This paper studies the CO2 dispersion and distribution in the exhalation zone of two people sitting in a chamber, as well as throughout the occupied zone of the chamber. The horizontal and vertical distributions of CO2 were sampled at locations where CO2 variation was expected to be high. Although room size, source location, ventilation rate and the location of air supply and extract devices can all influence the CO2 distribution, this article gives general guidelines on the optimum positioning of a CO2 sensor in a room.
Abstract:
OBJECTIVE: The present study was carried out to investigate the effects of meals rich in either saturated fatty acids (SFA), n-6 fatty acids or n-3 fatty acids on postprandial plasma lipid and hormone concentrations as well as post-heparin plasma lipoprotein lipase (LPL) activity. DESIGN: The study was a randomized single-blind study comparing responses to three test meals. SETTING: The volunteers attended the Clinical Investigation Unit of the Royal Surrey County Hospital on three separate occasions in order to consume the meals. SUBJECTS: Twelve male volunteers with an average age of 22.5 +/- 1.4 years (mean +/- SD) were selected from the University of Surrey student population; one subject dropped out of the study because he found the test meal unpalatable. INTERVENTIONS: Three meals were given in the early evening and postprandial responses were followed overnight for 11 h. The oils used to prepare the three test meals were: a mixed oil rich in SFA, which mimicked the fatty acid composition of the current UK diet; corn oil, rich in n-6 fatty acids; and a fish oil concentrate (MaxEPA), rich in n-3 fatty acids. The oil under investigation (40 g) was incorporated into the test meals, which were otherwise identical [208 g carbohydrate, 35 g protein, 5.65 MJ (1350 kcal) energy]. Postprandial plasma triacylglycerol (TAG), gastric inhibitory polypeptide (GIP) and insulin responses, as well as post-heparin LPL activity (measured at 12 h postprandially only), were investigated. RESULTS: Fatty acids of the n-3 series significantly reduced plasma TAG responses compared with the mixed oil meal (P < 0.05) and increased post-heparin LPL activity 15 min after the injection of heparin (P < 0.01). A biphasic TAG response was observed, with peaks occurring at 1 h and between 3 and 7 h postprandially. GIP and insulin showed similar responses to the three test meals, with no significant differences observed. CONCLUSION: We conclude that fish oils can decrease postprandial plasma TAG levels partly through an increase in post-heparin LPL activity, which, however, is not due to increased GIP or insulin concentrations.
Abstract:
Developing models to predict the effects of social and economic change on agricultural landscapes is an important challenge. Model development often involves deciding which aspects of the system require detailed description and which are reasonably insensitive to the assumptions made. However, important components of the system are often left out because parameter estimates are unavailable. In particular, the relative influence of objectives other than profit, such as risk aversion and environmental management, on farmer decision making has proven difficult to quantify. We describe a model that can predict land use on the basis of profit alone or with the inclusion of explicit additional objectives. Importantly, our model is specifically designed to use parameter estimates for these additional objectives obtained from farmer interviews. By statistically comparing the outputs of this model with a large farm-level land-use data set, we show that cropping patterns in the United Kingdom contain a significant contribution from farmers' preferences for objectives other than profit. In particular, we found that risk aversion affected the accuracy of model predictions, whereas preference for growing a particular number of crops was less important. While objectives other than profit have frequently been identified as factors in farmers' decision making, our results take this analysis further by demonstrating the relationship between these preferences and actual cropping patterns.
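As a rough illustration of the weighted-objective idea described above, the sketch below scores candidate crop allocations by expected profit minus penalties for risk and for deviating from a preferred number of crops. The utility form, weights and crop figures are assumptions chosen for illustration; they are not the published model or its interview-derived parameters.

# Illustrative sketch (not the published model): score crop allocations by expected
# profit plus weighted penalties for additional objectives. All numbers are hypothetical.
import numpy as np

def utility(areas, mean_margin, cov_margin, w_risk, w_ncrops, preferred_n):
    profit = areas @ mean_margin                      # expected farm profit
    risk = areas @ cov_margin @ areas                 # variance of profit as a risk proxy
    n_crops = np.count_nonzero(areas)
    diversity_penalty = (n_crops - preferred_n) ** 2  # deviation from preferred crop count
    return profit - w_risk * risk - w_ncrops * diversity_penalty

# Compare two hypothetical allocations (hectares of three crops)
mean_margin = np.array([500.0, 420.0, 480.0])   # gross margin per hectare
cov_margin = np.diag([90.0, 60.0, 120.0])       # margin variances (no covariance, for simplicity)
for areas in (np.array([100.0, 0.0, 0.0]), np.array([60.0, 20.0, 20.0])):
    print(areas, utility(areas, mean_margin, cov_margin, w_risk=0.001, w_ncrops=50.0, preferred_n=2))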
Abstract:
Purpose – The purpose of this study is to address a recent call for additional research on electronic word-of-mouth (eWOM). In response to this call, this study draws on the social network paradigm and the uses and gratification theory (UGT) to propose and empirically test a conceptual framework of key drivers of two types of eWOM, namely in-group and out-of-group. Design/methodology/approach – The proposed model, which examines the impact of usage motivations on eWOM in-group and eWOM out-of-group, is tested in a sample of 302 internet users in Portugal. Findings – Results from the survey show that the different drivers (i.e. mood-enhancement, escapism, experiential learning and social interaction) vary in terms of their impact on the two different types of eWOM. Surprisingly, while results show a positive relationship between experiential learning and eWOM out-of-group, no relationship is found between experiential learning and eWOM in-group. Research limitations/implications – This is the first study investigating the drivers of both eWOM in-group and eWOM out-of-group. Additional research in this area will contribute to the development of a general theory of eWOM. Practical implications – By understanding the drivers of different eWOM types, this study provides guidance to marketing managers on how to allocate resources more efficiently in order to achieve the company's strategic objectives. Originality/value – No published study has investigated the determinants of these two types of eWOM. This is the first study offering empirical considerations of how the various drivers differentially impact eWOM in-group and eWOM out-of-group.
Abstract:
Purpose: The relative efficacy of different eye exercise regimes is unclear, and in particular the influences of practice, placebo and the amount of effort required are rarely considered. This study measured conventional clinical measures after different regimes in typical young adults. Methods: 156 asymptomatic young adults were directed to carry out eye exercises three times daily for two weeks. Exercise regimes targeted blur responses (accommodation), disparity responses (convergence), both in a naturalistic relationship, convergence in excess of accommodation, or accommodation in excess of convergence; a placebo regime was also included. These groups were compared with two control groups, neither of which was given exercises, but the second of which was asked to make maximum effort during the second testing. Results: Instruction set and participant effort were more effective than many exercises. Convergence exercises independent of accommodation were the most effective treatment, followed by accommodation exercises, and both regimes produced changes in both vergence and accommodation test responses. Exercises targeting convergence and accommodation working together were less effective than those in which they were separated. Accommodation measures were prone to large instruction/effort effects, and monocular accommodation facility was subject to large practice effects. Conclusions: Separating convergence and accommodation exercises appeared more effective than exercising both systems concurrently, suggesting that stimulation of accommodation and convergence may act in an additive fashion to aid responses. Instruction/effort effects are large and should be carefully controlled if claims for the efficacy of any exercise regime are to be made.
Abstract:
In recent years, computational fluid dynamics (CFD) has been widely used as a method of simulating airflow and addressing indoor environment problems. The complexity of airflows within the indoor environment makes experimental investigation difficult to undertake and also imposes significant challenges on turbulence modelling for flow prediction. This research examines, through CFD visualization, how air is distributed within a room. Measurements of air temperature and air velocity were performed at a number of points in an environmental test chamber with a human occupant. To complement the experimental results, CFD simulations were carried out, and the results enabled detailed analysis and visualization of the spatial distribution of airflow patterns and allowed the effect of different parameters to be predicted. The results demonstrate the complexity of modelling human exhalation within a ventilated enclosure and shed some light on how to achieve more realistic predictions of the airflow within an occupied enclosure.
Abstract:
Short-term memory (STM) impairments are prevalent in adults with acquired brain injuries. While there are several published tests to assess these impairments, the majority require speech production, e.g. digit span (Wechsler, 1987). This feature may make them unsuitable for people with aphasia and motor speech disorders because of word-finding difficulties and speech demands, respectively. If patients perceive the speech demands of the test to be high, they may not engage with testing. Furthermore, existing STM tests are mainly 'pen-and-paper' tests, which can jeopardise accuracy. To address these shortcomings, we designed and standardised a novel computerised test that does not require speech output and, because of its computerised delivery, enables clinicians to identify STM impairments with greater precision than current tests. The matching listening span task, similar to the non-normed PALPA 13 (Kay, Lesser & Coltheart, 1992), is used to test short-term memory for the serial order of spoken items. Sequences of digits are presented in pairs: the person hears the first sequence, followed by the second sequence, and decides whether the two sequences are the same or different. In the computerised test, the sequences are presented as live-voice recordings on a portable computer through a software application (Molero Martin, Laird, Hwang & Salis, 2013). We collected normative data from healthy older adults (N = 22-24) using digits, real words (one and two syllables) and non-words (one and two syllables). Performance was scored using two systems; under the Highest Span system, the score was the longest sequence length (e.g. 2-8) at which a participant responded correctly to more than 7 out of 10 trials. Test-retest reliability was also assessed in a subgroup of participants. The test will be available free of charge for clinicians and researchers to use.
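A minimal sketch of the Highest Span scoring rule as described above; the data layout, function name and example data are assumptions, not the published scoring procedure.

# Illustrative scoring sketch: longest span length with more than 7 of 10 trials correct.
from collections import defaultdict

def highest_span(trials):
    """trials: iterable of (span_length, correct) pairs, e.g. (3, True)."""
    correct_by_span = defaultdict(int)
    for span, correct in trials:
        correct_by_span[span] += int(correct)
    passed = [span for span, n_correct in correct_by_span.items() if n_correct > 7]
    return max(passed) if passed else None

# Hypothetical participant: 9/10 correct at span 2, 8/10 at span 3, 6/10 at span 4
example = ([(2, i < 9) for i in range(10)]
           + [(3, i < 8) for i in range(10)]
           + [(4, i < 6) for i in range(10)])
print(highest_span(example))  # -> 3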
Abstract:
The Boyadjian et al. dental wash technique provides, in certain contexts, the only chance to analyze and quantify the use of plants by past populations and is therefore an important milestone for the reconstruction of paleodiet. With this paper we present recent investigations and results on the influence of this method on teeth. A series of six teeth from a three-thousand-year-old Brazilian shellmound (Jabuticabeira II) was examined before and after dental wash. The main focus was documenting the alteration of the surfaces and microstructures. The status of all teeth was documented using macrophotography, optical light microscopy, and atmospheric Secondary Electron Microscopy (aSEM) prior to and after applying the dental wash technique. The comparison of pictures taken before and after dental wash showed the different degrees of variation and damage done to the teeth but also provided additional information about microstructures that had not been visible before. Consequently, we suggest that dental wash should be carried out only if absolutely necessary, and only after dental pathology, dental morphology and microwear studies have been completed. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
Testing is an area of system development. Tests can be performed manually or be automated, and test activities can be supported by Word documents and Excel sheets for documenting and executing test cases and for follow-up, but there are also test tools designed to support and facilitate the testing process and its activities. This study describes manual testing and identifies the strengths and weaknesses of manual testing with a testing tool, Microsoft Test Manager (MTM), and of manual testing using the test case and test log templates developed by the testers at Sogeti. The results, which emerged from the problem and strength analysis together with the analysis of literature studies and first-hand experience of creating, documenting and executing test cases, point to the following strengths and weaknesses. The strength of the test tool is that it contains the needed functionality in one place, available when needed without having to open other programs, which saves many activity steps. The main strengths of testing without tool support are that it is easy to learn, gives a good overview, makes it easy to format text as desired, and is flexible to changes during the execution of a test case. Weaknesses of testing with tool support include that it is difficult to get a good overview of the entire test case, that it is not possible to format the text in the test steps, and that the test steps cannot be modified during execution. It is also difficult to use some of the test design techniques of TMap, for example a checklist, when using MTM. The weakness of testing without MTM is that the tester must perform many more activity steps to accomplish the same tasks, and there is more to remember because the documents the tester uses are not directly linked. Altogether, the strengths of the test tool stand out when it comes to supporting the testing process.
Abstract:
An international standard, ISO/DP 9459-4, has been proposed to establish a uniform standard of quality for small, factory-made solar heating systems. In this proposal, system components are tested separately and total system performance is calculated using system simulations based on component model parameter values validated against the results of the component tests. Another approach is to test the whole system in operation under representative conditions, where the results can be used as a measure of the general system performance. The advantage of system testing of this form is that it does not depend on simulations and the possible inaccuracies of the models; its disadvantage is that it is restricted to the boundary conditions of the test. Component testing with system simulation is flexible, but requires an accurate and reliable simulation model. The heat store is a key component concerning system performance. This work therefore focuses on the storage system, consisting of the store, electrical auxiliary heater, heat exchangers and tempering valve. Four different storage system configurations with a volume of 750 litres were tested in an indoor system test using a six-day test sequence. A store component test and system simulation were carried out on one of the four configurations, applying the proposed standard for stores, ISO/DP 9459-4A. Three newly developed test sequences for internal load-side heat exchangers, not in the proposed ISO standard, were also carried out. The MULTIPORT store model was used for this work. This paper discusses the results of the indoor system test, the store component test, the validation of the store model parameter values and the system simulations.