880 results for Cross-system comparison
Abstract:
Pesticide risk indicators provide simple support for assessing the environmental and health risks of pesticide use, and can therefore inform policies that foster a sustainable interaction of agriculture with the environment. Owing to their relative simplicity, indicators may be particularly useful where data availability and resources are limited, such as in Less Developed Countries (LDCs). However, indicator complexity can vary significantly, in particular between indicators that rely on an exposure–toxicity ratio (ETR) and those that do not. In addition, pesticide risk indicators are usually developed for Western contexts, which may lead to inaccurate estimates in LDCs. This study investigated the appropriateness of seven pesticide risk indicators for use in LDCs, with reference to smallholder agriculture in Colombia. Seven farm-level indicators, of which three relied on an ETR approach (POCER, EPRIP, PIRI) and four did not (EIQ, PestScreen, OHRI, Dosemeci et al., 2002), were calculated and then compared by means of the Spearman rank correlation test. Indicators were also compared with respect to key indicator characteristics, i.e. user-friendliness and ability to represent the system under study. The comparison of the indicators in terms of total environmental risk suggests that indicators not relying on an ETR approach cannot be used as a reliable proxy for the more complex ETR indicators. ETR indicators, when user-friendly, show a comparative advantage over non-ETR indicators in best combining the need for a relatively simple tool usable in contexts of limited data availability and resources with the need for a reliable estimate of environmental risk. Non-ETR indicators remain useful and accessible tools for discriminating between different pesticides prior to application. Concerning human health, simple algorithms seem more appropriate for risk assessment in LDCs.
However, further research on health risk indicators and their validation under LDC conditions is needed.
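The abstract's core comparison method, the Spearman rank correlation test, can be sketched in a few lines. This is an illustrative implementation with made-up indicator scores, not the study's data or indicator formulas; it simply shows how two indicators' farm-level rankings would be compared.

```python
# Minimal Spearman rank correlation sketch (stdlib only).
# The scores below are illustrative, not from the study.

def ranks(values):
    """Average ranks (1-based); ties share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend the group while the next sorted value equals this one.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = mean_rank
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical farm-level risk scores from two indicators:
etr_scores = [2.1, 5.4, 1.0, 7.8, 3.3]        # e.g. an ETR indicator
non_etr_scores = [10, 40, 12, 90, 20]         # e.g. a non-ETR indicator
print(round(spearman(etr_scores, non_etr_scores), 3))  # prints 0.9
```

A rho near 1 would mean the simple indicator ranks farms (or pesticides) much like the complex one; the study's point is that for total environmental risk this agreement was not reliable.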
Abstract:
A Bond Graph is a graphical modelling technique that represents the flow of energy between the components of a system. When modelling power electronic systems, bond graph elements must be incorporated to represent a switch. In this paper, three different methods of modelling switching devices are compared and contrasted: the Modulated Transformer with a binary modulation ratio (MTF), the ideal switch element, and the Switched Power Junction (SPJ) method. These three methods are used to model a dc-dc boost converter, and simulations are then run in MATLAB/Simulink. To provide a reference against which to compare results, the converter is also simulated in PSPICE. Both quantitative and qualitative comparisons are made to determine the suitability of each of the three Bond Graph switch models in specific power electronics applications.
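To make the "ideal switch" notion concrete outside the bond graph formalism: a boost converter with an ideal binary switch reduces to two alternating sets of state equations. The sketch below (component values and duty cycle are made-up, and this is a plain forward-Euler simulation, not the paper's bond graph models) shows the structure such a switch model imposes.

```python
# Illustrative ideal-switch model of a dc-dc boost converter,
# integrated with forward Euler. Values are arbitrary examples.

def simulate_boost(v_in=12.0, L=100e-6, C=100e-6, R=10.0,
                   duty=0.5, f_sw=50e3, dt=1e-7, t_end=0.02):
    i_L, v_C = 0.0, 0.0          # inductor current, capacitor voltage
    t = 0.0
    period = 1.0 / f_sw
    while t < t_end:
        switch_on = (t % period) < duty * period   # ideal binary switch
        if switch_on:             # inductor charges; diode blocks output
            di = v_in / L
            dv = -v_C / (R * C)
        else:                     # inductor feeds the output capacitor
            di = (v_in - v_C) / L
            dv = (i_L - v_C / R) / C
        i_L = max(i_L + di * dt, 0.0)   # diode prevents reverse current
        v_C += dv * dt
        t += dt
    return v_C

# Ideal continuous-conduction steady state: V_out = V_in / (1 - D),
# i.e. about 24 V here for D = 0.5.
print(simulate_boost())
```

The two if-branches are exactly what the MTF, ideal-switch and SPJ elements encode inside a bond graph: a single binary signal selecting between two energy-flow topologies.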
Abstract:
The prediction of Northern Hemisphere (NH) extratropical cyclones by nine different ensemble prediction systems (EPSs), archived as part of The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE), has recently been explored using a cyclone tracking approach. This paper provides a continuation of this work, extending the analysis to the Southern Hemisphere (SH). While the EPSs have larger errors in all cyclone properties in the SH, the relative performance of the different EPSs remains broadly consistent between the two hemispheres. Some interesting differences are also shown. The Chinese Meteorological Administration (CMA) EPS has a significantly lower level of performance in the SH compared to the NH. Previous NH results showed that the Centro de Previsao de Tempo e Estudos Climaticos (CPTEC) EPS underpredicts cyclone intensity; the results of this current study show that this bias is significantly larger in the SH. The CPTEC EPS also has very little spread in both hemispheres. As with the NH results, cyclone propagation speed is underpredicted by all the EPSs in the SH. To investigate this further, the bias was also computed for the ECMWF high-resolution deterministic forecast and found to be significantly smaller than that of the lower-resolution ECMWF EPS.
Abstract:
Housebuilding is frequently viewed as an industry full of small firms. However, large firms exist in many countries. Here, a comparative analysis is made of the housebuilding industries in Australia, Britain and the USA. Housebuilding output is found to be much higher in Australia and the USA than in Britain when measured on a per capita basis. At the same time, the degree of market concentration in Australia and the USA is relatively low, but in Britain it is far greater, with a few firms having quite substantial market shares. Investigation of the size distribution of the top 100 or so firms ranked by output also shows that the decline in firm size from the largest downwards is more rapid in Britain than elsewhere. The exceptionalism of the British case is attributed to two principal factors. First, the close proximity of Britain's regions enables housebuilders to diversify successfully across different markets. The gains from such diversification are best achieved by large firms, because they can gain scale benefits in any particular market segment. Second, land shortages induced by a restrictive planning system encourage firms to take over one another as a quick and beneficial means of acquiring land. The institutional rules of planning also make it difficult for new entrants to come in at the bottom end of the size hierarchy. In this way, concentration grows and a handful of large producers emerge. These conditions do not hold in the other two countries, so their industries are less concentrated. Given the degree of rivalry between firms over land purchases and takeovers, it is difficult to envisage them behaving in a long-term collusive manner, so competition in British housebuilding is probably not unduly compromised by the exceptional degree of firm concentration.
Reforms that lower the restrictions, improve the slow responsiveness and reduce the uncertainties associated with the British planning system's role in housing supply are likely to greatly improve the ability of new firms to enter housebuilding, and of all firms to increase output in response to rising housing demand. Such reforms would also probably lower overall housebuilding firm concentration over time.
Abstract:
Current methods for estimating vegetation parameters are generally sub-optimal in the way they exploit information and do not generally consider uncertainties. We look forward to a future where operational data assimilation schemes improve estimates by tracking land surface processes and exploiting multiple types of observations. Data assimilation schemes seek to combine observations and models in a statistically optimal way, taking into account uncertainty in both, but have not yet been much exploited in this area. The EO-LDAS scheme and prototype, developed under ESA funding, is designed to exploit the anticipated wealth of data that will be available under GMES missions, such as the Sentinel family of satellites, to provide improved mapping of land surface biophysical parameters. This paper describes the EO-LDAS implementation and explores some of its core functionality. EO-LDAS is a weak-constraint variational data assimilation system. The prototype provides a mechanism for constraint based on a prior estimate of the state vector, a linear dynamic model, and Earth Observation data (top-of-canopy reflectance here). The observation operator is a non-linear optical radiative transfer model for a vegetation canopy with a soil lower boundary, operating over the range 400 to 2500 nm. Adjoint codes for all model and operator components are provided in the prototype by automatic differentiation of the computer codes. In this paper, EO-LDAS is applied to the problem of daily estimation of six of the parameters controlling the radiative transfer operator over the course of a year (> 2000 state vector elements). Zero- and first-order process model constraints are implemented and explored as the dynamic model. The assimilation estimates all state vector elements simultaneously.
This is performed in the context of a typical Sentinel-2 MSI operating scenario, using synthetic MSI observations simulated with the observation operator, with uncertainties typical of those achieved by such optical sensors assumed for the data. The experiments consider a baseline state vector estimation case and assess the impact of applying dynamic constraints on the a posteriori uncertainties. The results demonstrate that reductions in uncertainty by a factor of up to two might be obtained by applying the sorts of dynamic constraints used here. The hyperparameters (dynamic model uncertainty) required to control the assimilation are estimated by a cross-validation exercise. The result of the assimilation is seen to be robust to missing observations, even with quite large data gaps.
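In generic variational notation (the symbols below are standard textbook choices, not necessarily EO-LDAS's own), a weak-constraint scheme of the kind described minimises a cost function that sums the three constraints named in the abstract, prior, observations and dynamic model:

```latex
J(\mathbf{x}) =
    (\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\,\mathbf{C}_{\mathrm{prior}}^{-1}\,(\mathbf{x}-\mathbf{x}_b)
  + \sum_{t}\bigl(\mathbf{y}_t - H(\mathbf{x}_t)\bigr)^{\mathsf T}\,\mathbf{C}_{\mathrm{obs}}^{-1}\,\bigl(\mathbf{y}_t - H(\mathbf{x}_t)\bigr)
  + \sum_{t}\bigl(\mathbf{x}_{t+1} - \mathbf{M}\,\mathbf{x}_t\bigr)^{\mathsf T}\,\mathbf{C}_{\mathrm{model}}^{-1}\,\bigl(\mathbf{x}_{t+1} - \mathbf{M}\,\mathbf{x}_t\bigr)
```

Here \(H\) is the (radiative transfer) observation operator, \(\mathbf{M}\) the linear dynamic model (the identity for a zero-order, random-walk constraint), and the \(\mathbf{C}\) matrices the respective uncertainty covariances. The constraint is "weak" because the third term penalises, rather than forbids, departures from the dynamic model; the adjoint codes mentioned above supply the gradient of \(J\) needed for minimisation.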
Abstract:
Several methods are examined that produce forecasts for time series in the form of probability assignments. The necessary concepts are presented, addressing questions such as how to assess the performance of a probabilistic forecast. One class of models, cluster weighted models (CWMs), is given particular attention. CWMs, originally proposed for deterministic forecasts, can be employed for probabilistic forecasting with little modification. Two examples are presented. The first involves estimating the state of (numerically simulated) dynamical systems from noise-corrupted measurements, a problem also known as filtering. There is an optimal solution to this problem, called the optimal filter, to which the considered time series models are compared. (The optimal filter requires the dynamical equations to be known.) In the second example, we aim at forecasting the chaotic oscillations of an experimental bronze spring system. Both examples demonstrate that the considered time series models, and especially the CWMs, provide useful probabilistic information about the underlying dynamical relations; in particular, they provide more than just an approximation to the conditional mean.
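One standard answer to "how to assess the performance of a probabilistic forecast" is a proper scoring rule such as the ignorance (negative log-likelihood) score. The sketch below is illustrative only; the forecast probabilities and outcomes are invented, not the paper's, and the score is just one of the assessment tools such work considers.

```python
# Ignorance score: mean -log2 of the probability the forecaster
# assigned to the outcome that actually occurred. Lower is better.
import math

def ignorance(forecast_probs, outcomes):
    """forecast_probs: one dict {state: probability} per time step;
    outcomes: the observed state at each step."""
    return sum(-math.log2(p[o])
               for p, o in zip(forecast_probs, outcomes)) / len(outcomes)

# Two hypothetical forecasters over three possible states:
sharp = [{0: 0.8, 1: 0.1, 2: 0.1}, {0: 0.1, 1: 0.8, 2: 0.1}]
vague = [{0: 1/3, 1: 1/3, 2: 1/3}, {0: 1/3, 1: 1/3, 2: 1/3}]
obs = [0, 1]                      # what actually happened
print(round(ignorance(sharp, obs), 3))  # 0.322 bits
print(round(ignorance(vague, obs), 3))  # 1.585 bits (uniform baseline)
```

A forecast that concentrates probability on what actually happens scores fewer "bits of surprise" than the uniform baseline; a deterministic point forecast cannot even be scored this way, which is why probability assignments carry more information than the conditional mean alone.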
Abstract:
Some of the techniques used to model nitrogen (N) and phosphorus (P) discharges from a terrestrial catchment to an estuary are discussed and applied to the River Tamar and Tamar Estuary system in Southwest England, U.K. Data are presented for dissolved inorganic nutrient concentrations in the Tamar Estuary and compared with those from the contrasting, low turbidity and rapidly flushed Tweed Estuary in Northeast England. In the Tamar catchment, simulations showed that effluent nitrate loads for typical freshwater flows contributed less than 1% of the total N load. The effect of effluent inputs on ammonium loads was more significant (∼10%). Cattle, sheep and permanent grassland dominated the N catchment export, with diffuse-source N export greatly dominating that due to point sources. Cattle, sheep, permanent grassland and cereal crops generated the greatest rates of diffuse-source P export. This reflected the higher rates of P fertiliser applications to arable land and the susceptibility of bare, arable land to P export in wetter winter months. N and P export to the Tamar Estuary from human sewage was insignificant. Non-conservative behaviour of phosphate was particularly marked in the Tamar Estuary. Silicate concentrations were slightly less than conservative levels, whereas nitrate was essentially conservative. The coastal sea acted as a sink for these terrestrially derived nutrients. A pronounced sag in dissolved oxygen that was associated with strong nitrite and ammonium peaks occurred in the turbidity maximum region of the Tamar Estuary. Nutrient behaviour within the Tweed was very different. The low turbidity and rapid flushing ensured that nutrients there were essentially conservative, so that flushing of nutrients to the coastal zone from the river occurred with little estuarine modification.
Abstract:
In this paper, numerical analyses of the thermal performance of an indirect evaporative air cooler incorporating an M-cycle cross-flow heat exchanger have been carried out. The numerical model was established by solving the coupled governing equations for heat and mass transfer between the product and working air, using the finite-element method. The model was developed in the EES (Engineering Equation Solver) environment and validated against published experimental data. Correlations between the cooling (wet-bulb) effectiveness, system COP and a number of air flow/exchanger parameters were developed. It was found that lower channel air velocity, lower inlet air relative humidity, and a higher working-to-product air ratio yielded higher cooling effectiveness. The recommended average air velocities in the dry and wet channels should not exceed 1.77 m/s and 0.7 m/s, respectively. The optimum working-to-product air flow ratio for this cooler is 50%. The channel geometric sizes, i.e. channel length and height, also have a significant impact on system performance. Longer channels and smaller channel heights increase the system cooling effectiveness but reduce the system COP. The recommended channel height is 4 mm, and the dimensionless channel length, i.e. the ratio of channel length to height, should be in the range 100 to 300. The numerical results indicated that this new type of M-cycle heat and mass exchanger can achieve 16.7% higher cooling effectiveness than the conventional cross-flow heat and mass exchanger for indirect evaporative cooling. A model of this kind is new and has not yet been reported in the literature. The results of the study help with the design and performance analysis of this new type of indirect evaporative air cooler and, further, help increase the market share of the technology within the building air-conditioning sector, which is currently dominated by conventional compression refrigeration technology.
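The two headline metrics above, wet-bulb effectiveness and COP, have simple textbook definitions that can be sketched directly. The temperatures, air flow and power figures below are illustrative placeholders, not the paper's results, and the COP here counts only sensible cooling against fan/pump electrical input.

```python
# Back-of-envelope definitions of the two performance metrics for an
# indirect evaporative cooler. All numbers are made-up examples.

def wet_bulb_effectiveness(t_in, t_out, t_wb_in):
    """eps_wb = (T_in - T_out) / (T_in - T_wb_in): the achieved dry-bulb
    temperature drop over the maximum possible drop, i.e. cooling all
    the way down to the inlet air's wet-bulb temperature."""
    return (t_in - t_out) / (t_in - t_wb_in)

def cooling_cop(mass_flow, t_in, t_out, fan_pump_power, cp=1006.0):
    """COP = sensible cooling output / electrical input.
    cp is the specific heat of dry air in J/(kg K)."""
    q_cooling = mass_flow * cp * (t_in - t_out)   # W, product-air side
    return q_cooling / fan_pump_power

eps = wet_bulb_effectiveness(t_in=30.0, t_out=21.0, t_wb_in=19.0)
cop = cooling_cop(mass_flow=0.05, t_in=30.0, t_out=21.0,
                  fan_pump_power=40.0)
print(round(eps, 3))   # 0.818, i.e. ~82% of the wet-bulb limit
print(cop)
```

These definitions also explain the trade-off reported above: longer, narrower channels raise the temperature drop (higher effectiveness) but increase fan pressure drop, raising the electrical input and hence lowering COP.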
Abstract:
This paper provides a comparative study of the performance of cross-flow and counter-flow M-cycle heat exchangers for dew point cooling. It is recognised that evaporative cooling systems offer a low-energy alternative to conventional air conditioning units. The recently emerged dew point cooling configuration, a renovated form of evaporative cooling, is claimed to deliver much higher cooling output than conventional evaporative modes owing to its use of M-cycle heat exchangers. Cross-flow and counter-flow heat exchangers, the available structures for M-cycle dew point cooling, were theoretically and experimentally investigated to identify the difference in cooling effectiveness between the two under parallel structural/operational conditions, to optimise the geometrical sizes of the exchangers and to suggest their favourable operational conditions. Through development of a dedicated computer model and case-by-case experimental testing and validation, a parametric study of the cooling performance of the counter-flow and cross-flow heat exchangers was carried out. The results showed that the counter-flow exchanger offered greater (around 20% higher) cooling capacity, as well as greater (15%–23% higher) dew-point and wet-bulb effectiveness, when equal in physical size and under the same operating conditions. The cross-flow system, however, had a higher (by 10%) energy efficiency (COP). As the increased cooling effectiveness leads to a reduced air volume flow rate, smaller system size and lower cost, and as size and cost are the inherent barriers to the use of dew point cooling as an alternative to conventional cooling systems, the counter-flow system is considered to offer practical advantages over the cross-flow system that would aid the uptake of this low-energy cooling alternative.
In line with the increased global demand for energy for cooling buildings, driven largely by the economic boom in emerging developing nations and by recognised global warming, the research results are of significant importance in promoting deployment of the low-energy dew point cooling system, helping to reduce energy use in cooling buildings and to cut the associated carbon emissions.
Abstract:
G protein-coupled receptors (GPCRs) are expressed throughout the nervous system where they regulate multiple physiological processes, participate in neurological diseases, and are major targets for therapy. Given that many GPCRs respond to neurotransmitters and hormones that are present in the extracellular fluid and which do not readily cross the plasma membrane, receptor trafficking to and from the plasma membrane is a critically important determinant of cellular responsiveness. Moreover, trafficking of GPCRs throughout the endosomal system can initiate signaling events that are mechanistically and functionally distinct from those operating at the plasma membrane. This review discusses recent advances in the relationship between signaling and trafficking of GPCRs in the nervous system. It summarizes how receptor modifications influence trafficking, discusses mechanisms that regulate GPCR trafficking to and from the plasma membrane, reviews the relationship between trafficking and signaling, and considers the implications of GPCR trafficking to drug development.