Abstract:
BACKGROUND: Monitoring studies revealed high concentrations of pesticides in the drainage canal of paddy fields. It is important to have a way to predict these concentrations under different management scenarios as an assessment tool. A simulation model for predicting the pesticide concentration in a paddy block (PCPF-B) was evaluated and then used to assess the effect of water management practices for controlling pesticide runoff from paddy fields. RESULTS: The PCPF-B model achieved acceptable performance. The model was applied in a constrained probabilistic approach using the Monte Carlo technique to evaluate best management practices for reducing runoff of pretilachlor into the canal. The probabilistic model predictions, based on actual pesticide-use data and hydrological data from the canal, showed that the water holding period (WHP) and the excess water storage depth (EWSD) effectively reduced the loss and concentration of pretilachlor from paddy fields to the drainage canal. The WHP also reduced the timespan of pesticide exposure in the drainage canal. CONCLUSIONS: It is recommended that: (1) the WHP be applied for as long as possible, but for at least 7 days, depending on the pesticide and field conditions; and (2) an EWSD greater than 2 cm be maintained to store substantial rainfall in order to prevent paddy runoff, especially during the WHP.
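A minimal Monte Carlo sketch of this kind of assessment is given below. It is not the PCPF-B model: it assumes simple first-order pesticide dissipation in the paddy water, exponentially distributed daily rainfall, and runoff to the canal only after the WHP ends and only when rainfall exceeds the EWSD; all parameter values are hypothetical.

```python
# Hypothetical Monte Carlo sketch (NOT the PCPF-B model) of how WHP and EWSD
# affect the fraction of applied pesticide lost from a paddy to the canal.
import numpy as np

rng = np.random.default_rng(0)

def mean_runoff_loss(whp_days, ewsd_cm, n_runs=5000, horizon=30):
    """Mean fraction of applied pesticide reaching the canal (hypothetical)."""
    losses = np.empty(n_runs)
    for i in range(n_runs):
        mass = 1.0                              # applied mass, normalized
        k = rng.uniform(0.1, 0.4)               # 1/day dissipation rate
        lost = 0.0
        for day in range(horizon):
            mass *= np.exp(-k)                  # first-order decay in water
            rain = rng.exponential(0.8)         # daily rainfall, cm
            if day >= whp_days:                 # outflow held back during WHP
                spill = min(max(rain - ewsd_cm, 0.0) / 10.0, 0.5)
                lost += mass * spill            # storm overflow to the canal
                mass *= 1.0 - spill
        losses[i] = lost
    return losses.mean()

for whp in (0, 3, 7):
    for ewsd in (0.0, 2.0):
        print(f"WHP={whp} d, EWSD={ewsd} cm: "
              f"mean loss {mean_runoff_loss(whp, ewsd):.3f}")
```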
Abstract:
Pesticide use in paddy rice production may contribute to adverse ecological effects in surface waters. Risk assessments conducted for regulatory purposes depend on the use of simulation models to determine predicted environmental concentrations (PECs) of pesticides. Often tiered approaches are used, in which assessments at lower tiers are based on relatively simple models with conservative scenarios, while those at higher tiers have more realistic representations of physical and biochemical processes. This chapter reviews models commonly used for predicting the environmental fate of pesticides in rice paddies. Theoretical considerations, unique features, and applications are discussed. This review is expected to provide information to guide model selection for pesticide registration, regulation, and mitigation in rice production areas.
Abstract:
A whole-of-factory model of a raw sugar factory was developed in SysCAD software to assess and improve factory operations. The integrated sugar factory model ‘Sugar-SysCAD’ includes individual models for milling, heating and clarification, evaporation, crystallisation, the steam cycle, the sugar dryer, and the process and injection water circuits. These individual unit-operation models can be used either as standalone models to optimise a unit operation or in integrated mode to provide more accurate prediction of the effects of changes in any part of the process on the outputs of the whole factory process. Using the integrated sugar factory model, the effect of specific process operations can be understood and practical solutions can be determined to address process problems. The paper presents two factory scenarios to show the capabilities of the whole-of-factory model.
Abstract:
The MFG test is a family-based association test that detects genetic effects contributing to disease in offspring, including offspring allelic effects, maternal allelic effects and MFG incompatibility effects. Like many other family-based association tests, it assumes that offspring survival and offspring-parent genotypes are conditionally independent given that the offspring is affected. However, when the putative disease-increasing locus can affect another competing phenotype, for example offspring viability, the conditional independence assumption fails and these tests can lead to incorrect conclusions regarding the role of the gene in disease. We propose the v-MFG test to adjust for the genetic effects on one phenotype, e.g. viability, when testing the effects of that locus on another phenotype, e.g. disease. Using genotype data from nuclear families containing parents and at least one affected offspring, the v-MFG test models the distribution of family genotypes conditional on offspring phenotypes. It simultaneously estimates genetic effects on two phenotypes, viability and disease. Simulations show that the v-MFG test produces accurate estimates of genetic effects on disease as well as on viability under several different scenarios. It generates accurate type-I error rates and provides adequate power with moderate sample sizes to detect genetic effects on disease risk when viability is reduced. We demonstrate the v-MFG test with HLA-DRB1 data from study participants with rheumatoid arthritis (RA) and their parents, showing that it successfully detects an MFG incompatibility effect on RA while simultaneously adjusting for a possible viability loss.
Abstract:
Many software applications extend their functionality by dynamically loading libraries into their allocated address space. However, shared libraries are also often of unknown provenance and quality and may contain accidental bugs or, in some cases, deliberately malicious code. Most sandboxing techniques which address these issues require recompilation of the libraries using custom tool chains, require significant modifications to the libraries, do not retain the benefits of single address-space programming, do not completely isolate guest code, or incur substantial performance overheads. In this paper we present LibVM, a sandboxing architecture for isolating libraries within a host application without requiring any modifications to the shared libraries themselves, while still retaining the benefits of a single address space and also introducing a system call interposition layer that allows complete arbitration over a shared library’s functionality. We show how to utilize contemporary hardware virtualization support towards this end with reasonable performance overheads and, in the absence of such hardware support, our model can also be implemented using a software-based mechanism. We ensure that our implementation conforms as closely as possible to existing shared library manipulation functions, minimizing the amount of effort needed to apply such isolation to existing programs. Our experimental results show that it is easy to gain immediate benefits in scenarios where the goal is to guard the host application against unintentional programming errors when using shared libraries, as well as in more complex scenarios, where a shared library is suspected of being actively hostile. In both cases, no changes are required to the shared libraries themselves.
Abstract:
In this report an artificial neural network (ANN) based automated emergency landing site selection system for unmanned aerial vehicles (UAV) and general aviation (GA) is described. The system aims to increase the safety of UAV operation by emulating pilot decision making in emergency landing scenarios, using an ANN to select a safe landing site from available candidates. The strength of an ANN in modelling complex input relationships makes it well suited to the multicriteria decision making (MCDM) process of emergency landing site selection. The ANN operates by identifying the more favorable of two landing sites when provided with an input vector derived from both landing sites' parameters, the aircraft's current state and wind measurements. The system consists of a feed-forward ANN, a pre-processor class which produces ANN input vectors, and a class in charge of creating a ranking of landing site candidates using the ANN. The system was successfully implemented in C++ using the FANN C++ library and ROS. Results obtained from ANN training, and from simulations using landing sites randomly generated by a site-detection simulator, verify the feasibility of an ANN-based automated emergency landing site selection system.
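The pairwise-comparison ranking scheme can be illustrated compactly. The sketch below uses numpy in place of the FANN/ROS stack, hypothetical site features, and an untrained comparator network, purely to show how a two-site comparator induces a ranking via a round-robin tournament; in the described system the weights would come from supervised training.

```python
# Sketch: ranking landing sites by round-robin pairwise comparison with a
# small feed-forward network. Features and weights are hypothetical.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Hypothetical per-site features: [distance, slope, length, wind_alignment]
sites = rng.random((6, 4))

# One hidden layer; in practice these weights come from supervised training.
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)

def prefers_first(a, b):
    """Comparator: True if the net scores site a above site b."""
    x = np.concatenate([a, b])          # input vector built from both sites
    h = np.tanh(W1 @ x + b1)
    return (W2 @ h + b2).item() > 0.0

# Round-robin tournament: rank sites by number of pairwise wins.
wins = np.zeros(len(sites), dtype=int)
for i, j in combinations(range(len(sites)), 2):
    if prefers_first(sites[i], sites[j]):
        wins[i] += 1
    else:
        wins[j] += 1

print("site ranking (best first):", np.argsort(-wins))
```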
Abstract:
We report on a plan to establish a 'Dictionary of LHC Signatures', an initiative that started at the WHEPP-X workshop in Chennai, January 2008. This study aims at a strategy for distinguishing three classes of dark-matter-motivated scenarios: R-parity-conserving supersymmetry, little Higgs models with T-parity conservation, and universal extra dimensions with KK-parity, for generic cases of their realization in a wide range of the model space. Discriminating signatures are tabulated and will need further detailed analysis.
Abstract:
Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together with, helping, and coexisting with humans in daily life. In all of these, a clear need arises to deal with a more unstructured, changing environment. I herein present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation. The main focus of research is to investigate the use of visual feedback for improving the reaching and grasping capabilities of complex robots. To facilitate this, an integrated combination of computer vision and machine learning techniques is employed. From a robot vision point of view, combining domain knowledge from both image processing and machine learning techniques can expand the capabilities of robots. I present a novel framework called Cartesian Genetic Programming for Image Processing (CGP-IP). CGP-IP can be trained to detect objects in the incoming camera streams and has been successfully demonstrated on many different problem domains. The approach is fast, scalable and robust, yet requires only small training sets: it was tested with 5 to 10 images per experiment. Additionally, it can generate human-readable programs that can be further customized and tuned. While CGP-IP is a supervised-learning technique, I show an integration on the iCub that allows for the autonomous learning of object detection and identification. Finally, this dissertation includes two proofs of concept that integrate the motion and action sides. First, reactive reaching and grasping is shown. It allows the robot to avoid obstacles detected in the visual stream while reaching for the intended target object. Furthermore, the integration enables use of the robot in non-static environments, i.e. the reaching is adapted on-the-fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. The second integration highlights the capabilities of these frameworks by improving visual detection through object manipulation actions.
Abstract:
- Objective Driver sleepiness is a major crash risk factor, but may be under-recognized as a risky driving behavior. Sleepy driving is usually rated as less of a road safety issue than more well-known risky driving behaviors, such as drink driving and speeding. The objective of this study was to compare perceptions of the crash risk of sleepy driving, drink driving, and speeding. - Methods In total, 300 Australian drivers completed a questionnaire that assessed crash risk perceptions for sleepy driving, drink driving, and speeding. Additionally, the participants' perception of crash risk was assessed for five different contextual scenarios that included different levels of sleepiness (low, high), driving duration (short, long), and time of day/circadian influences (afternoon, night-time). - Results The analysis confirmed that sleepy driving was considered a risky driving behavior, but not as risky as high levels of speeding (p < .05). Yet, the risk of crashing at 4 am was considered as equally risky as low levels of speeding (10 km/h over the limit). The comparisons of the contextual scenarios revealed that driving scenarios which would arguably be perceived as quite risky due to time of day/circadian influences were not reported as high risk. - Conclusions The results suggest a lack of awareness or appreciation of circadian rhythm functioning, particularly the descending phase of the circadian rhythm that promotes increased sleepiness in the afternoon and during the early hours of the morning. Yet, the results suggested an appreciation of the danger associated with long-distance driving and driver sleepiness. Further efforts are required to improve the community’s awareness of the impairing effects of sleepiness and, in particular, knowledge regarding the human circadian rhythm and the increased sleep propensity during the circadian nadir.
Abstract:
Cooperation among unrelated individuals is an enduring evolutionary riddle, and a number of possible solutions have been suggested. Most of these suggestions attempt to refine cooperative strategies, while little attention is given to the fact that novel defection strategies can also evolve in the population. Especially in the presence of punishment of defectors and public knowledge of the strategies employed by the players, a defecting strategy that avoids getting punished by selectively cooperating only with the punishers can gain a selective benefit over non-conditional defectors. Furthermore, if punishment ensures cooperation from such discriminating defectors, defectors who punish other defectors can evolve as well. We show that such discriminating and punishing defectors can evolve in the population by natural selection in a Prisoner’s Dilemma game scenario, even if discrimination is a costly act. These refined defection strategies destabilize unconditional defectors. They themselves are, however, unstable in the population. Discriminating defectors give a selective benefit to the punishers in the presence of non-punishers by cooperating with them and defecting with others. However, since these players also defect with other discriminators, they suffer fitness loss in the pure population. Among the punishers, punishing cooperators always benefit in contrast to the punishing defectors, as the latter not only defect with other punishing defectors but also punish them and get punished. As a consequence of both these scenarios, punishing cooperators get stabilized in the population. We thus show, ironically, that refined defection strategies stabilize cooperation. Furthermore, cooperation stabilized by such defectors can work under a wide range of initial conditions and is robust to mistakes.
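A discrete replicator-dynamics sketch of the four strategy types discussed here (unconditional defectors, discriminating defectors, punishing cooperators, punishing defectors) is given below. The payoff, punishment, and discrimination-cost parameters are hypothetical, and the implementation is a simplification of the game analysed in the paper, intended only to show how such interactions can be encoded.

```python
# Sketch: replicator dynamics over D (unconditional defector),
# DD (discriminating defector: cooperates only with punishers, pays a
# discrimination cost), PC (punishing cooperator), PD (punishing defector).
# All parameter values are hypothetical.
import numpy as np

b, c = 3.0, 1.0      # benefit to partner / cost of cooperating
f, k = 3.0, 0.8      # punishment fine / cost of punishing
d = 0.2              # cost of discrimination

PUNISHES = {"D": False, "DD": False, "PC": True, "PD": True}

def cooperates(me, partner):
    if me == "PC":
        return True
    if me == "DD":
        return PUNISHES[partner]       # cooperate only with punishers
    return False                       # D and PD always defect

def payoff(me, partner):
    p = 0.0
    if cooperates(me, partner):
        p -= c
    if cooperates(partner, me):
        p += b
    if PUNISHES[partner] and not cooperates(me, partner):
        p -= f                         # fined for defecting on a punisher
    if PUNISHES[me] and not cooperates(partner, me):
        p -= k                         # pay the cost of punishing
    if me == "DD":
        p -= d                         # discrimination is costly
    return p

strategies = ["D", "DD", "PC", "PD"]
A = np.array([[payoff(i, j) for j in strategies] for i in strategies])

x = np.full(4, 0.25)                   # initial strategy frequencies
for _ in range(2000):                  # discrete replicator dynamics
    fit = A @ x
    fit = fit - fit.min() + 1e-9       # shift fitnesses to be positive
    x = x * fit / (x @ fit)

print("final frequencies:", dict(zip(strategies, np.round(x, 3))))
```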
Abstract:
Commercial environments may receive only a fraction of the genetic gains for growth rate predicted from the selection environment. This fraction is the result of undesirable genotype-by-environment interactions (G x E) and is measured by the genetic correlation (r(g)) of growth between environments. Precise estimates of genetic correlation obtained within one generation are notoriously difficult to achieve. A new design is proposed in which genetic correlations can be estimated using artificial mating from cryopreserved semen and unfertilised eggs stripped from a single female. We compare a traditional phenotype analysis of growth to a threshold model where only the largest fish are genotyped for sire identification. The threshold model was robust to differences in family mortality of up to 30%. The design is unique in that it negates potential re-ranking of families caused by an interaction between common maternal environmental effects and the growing environment. The design is suitable for rapid assessment of G x E over one generation, with a true genetic correlation of 0.70 yielding standard errors as low as 0.07. Different design scenarios were tested for bias and accuracy with a range of heritability values, numbers of half-sib families created, numbers of progeny within each full-sib family, numbers of fish genotyped, numbers of fish stocked, differing family survival rates, and various simulated genetic correlation levels.
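The flavour of such a design simulation can be sketched as follows: half-sib families are reared in two environments with sire effects correlated at a known true r(g), and the genetic correlation is then recovered from the correlation of family means. The variance structure below is deliberately simplified (no maternal effects, equal family sizes and survival, no threshold genotyping step), so it is illustrative only.

```python
# Sketch: recovering a known genetic correlation from half-sib family means
# simulated in two environments. All values are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n_sires, n_prog, h2, rg_true = 100, 40, 0.3, 0.70

# Sire breeding values in the two environments, correlated at rg_true;
# under a half-sib model the sire variance is h2/4 of phenotypic variance.
cov = np.array([[1.0, rg_true], [rg_true, 1.0]]) * h2 / 4
sire_bv = rng.multivariate_normal([0.0, 0.0], cov, size=n_sires)

resid_sd = np.sqrt(1 - h2 / 4)
means = np.empty((n_sires, 2))
for e in range(2):                     # two growing environments
    prog = sire_bv[:, [e]] + rng.normal(0, resid_sd, (n_sires, n_prog))
    means[:, e] = prog.mean(axis=1)

# Family-mean correlation, de-attenuated by the reliability of a family mean.
r_means = np.corrcoef(means[:, 0], means[:, 1])[0, 1]
t = (h2 / 4) / (h2 / 4 + resid_sd**2 / n_prog)
print(f"family-mean correlation: {r_means:.2f}, "
      f"approx. genetic correlation: {r_means / t:.2f}")
```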
Abstract:
We consider the development of statistical models for prediction of the constituent concentrations of riverine pollutants, which is a key step in load estimation from frequent flow-rate data and less frequently collected concentration data. We consider how to capture the impacts of past flow patterns via the average discounted flow (ADF), which discounts the past flux based on the time elapsed: more recent fluxes are given more weight. However, the effectiveness of the ADF depends critically on the choice of the discount factor, which reflects the unknown environmental accumulation process of the concentration compounds. We propose to choose the discount factor by maximizing the adjusted R^2 value or the Nash-Sutcliffe model efficiency coefficient. The R^2 values are adjusted to take account of the number of parameters in the model fit. The resulting optimal discount factor can be interpreted as a measure of the constituent exhaustion rate during flood events. To evaluate the performance of the proposed regression estimators, we examine two different sampling scenarios by resampling fortnightly and opportunistically from two real daily datasets, which come from two United States Geological Survey (USGS) gaging stations located in the Des Plaines River and Illinois River basins. The generalized rating-curve approach produces biased estimates of the total sediment loads, by -30% to 83%, whereas the new approaches produce much lower biases, ranging from -24% to 35%. This substantial improvement in the estimates of the total load is due to the fact that the predictability of concentration is greatly improved by the additional predictors.
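The abstract does not reproduce the exact ADF formula, but a common recursive exponential-discounting form, ADF_t = delta * ADF_{t-1} + (1 - delta) * q_t, is enough to illustrate choosing the discount factor delta by maximizing adjusted R^2. The sketch below uses synthetic flow and concentration data; the recursive form and all values are assumptions for illustration.

```python
# Sketch: selecting an ADF discount factor by maximizing adjusted R^2,
# assuming a recursive exponential-discounting ADF and synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n = 365
flow = rng.lognormal(0.0, 0.8, n)               # synthetic daily flow

def adf(flow, delta):
    out, acc = np.empty_like(flow), flow[0]
    for t, q in enumerate(flow):
        acc = delta * acc + (1 - delta) * q     # discounted past flux
        out[t] = acc
    return out

# Synthetic "true" concentration driven by flow and a delta = 0.9 ADF.
conc = 0.5 * flow - 0.3 * adf(flow, 0.9) + rng.normal(0, 0.2, n)

def adjusted_r2(y, X):
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    r2 = 1 - (y - X1 @ beta).var() / y.var()
    p = X1.shape[1] - 1                         # number of predictors
    return 1 - (1 - r2) * (len(y) - 1) / (len(y) - p - 1)

grid = np.linspace(0.5, 0.99, 50)
scores = [adjusted_r2(conc, np.column_stack([flow, adf(flow, dl)]))
          for dl in grid]
print(f"optimal discount factor: {grid[int(np.argmax(scores))]:.2f}")
```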
Abstract:
The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.
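The ingredient GEE fits that such a hybrid would combine are easy to produce with statsmodels; the sketch below fits the same marginal model under three working correlation structures on simulated data. The empirical-likelihood step that actually combines the estimating equations (the hybrid method itself) is not shown, and the data and variable names are hypothetical.

```python
# Sketch: one marginal model fitted under several working correlation
# structures; the hybrid method would combine these estimating equations.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_subj, n_obs = 100, 4
groups = np.repeat(np.arange(n_subj), n_obs)
time = np.tile(np.arange(n_obs), n_subj)
x = rng.normal(size=n_subj * n_obs)
u = np.repeat(rng.normal(0, 0.7, n_subj), n_obs)   # within-subject effect
y = 1.0 + 0.5 * x + u + rng.normal(0, 1.0, n_subj * n_obs)
X = sm.add_constant(x)

for name, cs in [("independence", sm.cov_struct.Independence()),
                 ("exchangeable", sm.cov_struct.Exchangeable()),
                 ("AR(1)", sm.cov_struct.Autoregressive())]:
    res = sm.GEE(y, X, groups=groups, time=time, cov_struct=cs).fit()
    print(f"{name:>12}: slope = {res.params[1]:.3f} (SE {res.bse[1]:.3f})")
```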
Abstract:
We consider rank-based regression models for clustered data analysis. A weighted Wilcoxon rank method is proposed to take account of within-cluster correlations and varying cluster sizes. The asymptotic normality of the resulting estimators is established. A method to estimate the covariance of the estimators is also given, which can bypass estimation of the density function. Simulation studies are carried out to compare different estimators for a number of scenarios on the correlation structure, presence/absence of outliers and different correlation values. The proposed methods appear to perform well; in particular, the one incorporating the correlation in the weighting achieves the highest efficiency and robustness against misspecification of the correlation structure and outliers. A real example is provided for illustration.
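A minimal sketch of weighted Wilcoxon rank regression is shown below, using simple inverse-cluster-size weights (the paper's correlation-based weights are more refined), a numerical minimization of the rank dispersion for the slope, and the median residual for the intercept, which is standard practice in rank regression. Data and parameter values are hypothetical.

```python
# Sketch: weighted Wilcoxon (rank-based) regression for clustered data.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
n_clust, m = 50, 5
x = rng.normal(size=(n_clust, m))
u = rng.normal(0, 0.8, (n_clust, 1))               # cluster random effects
y = 2.0 + 1.5 * x + u + rng.standard_t(3, size=(n_clust, m))

X, Y = x.ravel(), y.ravel()
w = np.full(X.size, 1.0 / m)     # placeholder inverse-cluster-size weights

def dispersion(beta):
    """Weighted Wilcoxon dispersion of the residuals for slope beta."""
    e = Y - beta * X
    ranks = e.argsort().argsort() + 1              # ranks of residuals
    scores = np.sqrt(12) * (ranks / (e.size + 1) - 0.5)
    return np.sum(w * scores * e)

slope = minimize_scalar(dispersion, bounds=(-10, 10), method="bounded").x
intercept = np.median(Y - slope * X)               # location (median) step
print(f"intercept {intercept:.3f}, slope {slope:.3f}")
```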
Abstract:
The goal of this article is to provide a new design framework and its corresponding estimation for phase I trials. Existing phase I designs assign each subject to one dose level based on responses from previous subjects. Yet it is possible that subjects with neither toxicity nor efficacy responses can be treated at higher dose levels, and their subsequent responses to higher doses will provide more information. In addition, for some trials, it might be possible to obtain multiple responses (repeated measures) from a subject at different dose levels. In this article, a nonparametric estimation method is developed for such studies. We also explore how the designs of multiple doses per subject can be implemented to improve design efficiency. The gain of efficiency from "single dose per subject" to "multiple doses per subject" is evaluated for several scenarios. Our numerical study shows that using "multiple doses per subject" and the proposed estimation method together increases the efficiency substantially.
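One natural nonparametric ingredient for such repeated-measures dose data is an isotonic (monotone-in-dose) estimate of the toxicity curve via pool-adjacent-violators; the sketch below illustrates this on hypothetical records where subjects without toxicity were re-treated at higher doses. It is only an ingredient: the article's estimator and design logic are not reproduced, and within-subject correlation is ignored here.

```python
# Sketch: isotonic estimate of P(toxicity | dose) from multiple doses per
# subject. Records are hypothetical: (subject, dose level, toxicity 0/1).
import numpy as np
from sklearn.isotonic import IsotonicRegression

records = [(1, 1, 0), (1, 2, 0), (1, 3, 1),
           (2, 1, 0), (2, 2, 1),
           (3, 2, 0), (3, 3, 0), (3, 4, 1),
           (4, 1, 0), (4, 2, 0), (4, 3, 0), (4, 4, 1)]

doses = np.array([r[1] for r in records], dtype=float)
tox = np.array([r[2] for r in records], dtype=float)

# Pool-adjacent-violators smoothing: fitted rates are monotone in dose.
p_hat = IsotonicRegression(increasing=True).fit_transform(doses, tox)

for d in sorted(set(doses)):
    print(f"dose {int(d)}: estimated P(toxicity) = "
          f"{p_hat[doses == d].mean():.2f}")
```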