991 results for Sequential indicator simulation
Abstract:
In this paper, a hybrid simulation-based algorithm is proposed for the Stochastic Flow Shop Problem. The main idea of the methodology is to transform the stochastic problem into a deterministic one and then apply simulation to the latter. To achieve this goal, we rely on Monte Carlo Simulation and an adapted version of a deterministic heuristic. This approach aims to provide flexibility and simplicity, since it is not constrained by any prior assumption and relies on well-tested heuristics.
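The abstract does not spell out the algorithm itself, but the general pattern it describes can be illustrated with a minimal sketch: sample processing-time scenarios by Monte Carlo and evaluate each scenario with the deterministic permutation flow-shop makespan recursion. The lognormal distribution and all parameter values below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the general idea (not the authors' algorithm):
# estimate the expected makespan of a permutation flow-shop schedule
# by sampling processing times and evaluating each sampled scenario
# with the deterministic makespan recursion.
import numpy as np

def makespan(p):
    """Deterministic makespan; p: (n_jobs, n_machines) processing times in job order."""
    n, m = p.shape
    c = np.zeros((n, m))  # completion times
    for j in range(n):
        for k in range(m):
            c[j, k] = p[j, k] + max(c[j - 1, k] if j else 0.0,
                                    c[j, k - 1] if k else 0.0)
    return c[-1, -1]

def expected_makespan(mean_times, cv=0.2, n_samples=1000, seed=0):
    """Monte Carlo estimate under (assumed) lognormal processing times."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(1.0 + cv**2))          # lognormal shape from CV
    mu = np.log(mean_times) - sigma**2 / 2.0      # matches the given means
    samples = rng.lognormal(mu, sigma, size=(n_samples,) + mean_times.shape)
    return np.mean([makespan(s) for s in samples])
```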
Abstract:
The objective of this study was to improve the simulation of node number in soybean cultivars with determinate stem habits. A nonlinear model considering two approaches to input daily air temperature data (daily mean temperature and daily minimum/maximum air temperatures) was used. Node number on the main stem was collected for ten soybean cultivars in a three-year field experiment (from 2004/2005 to 2006/2007) at Santa Maria, RS, Brazil. Node number was simulated using the Soydev model, which has a nonlinear temperature response function [f(T)]. The f(T) was calculated using two methods: using daily mean air temperature, calculated as the arithmetic average of daily minimum and maximum air temperatures (Soydev tmean); and calculating one f(T) using minimum air temperature and another using maximum air temperature, and then averaging the two f(T)s (Soydev tmm). Root mean square error (RMSE) and deviations (simulated minus observed) were used as statistics to evaluate the performance of the two versions of Soydev. Simulations of node number in soybean were better with the Soydev tmm version, with an RMSE of 0.5 to 1.4 nodes. Node number can be simulated for several soybean cultivars using only one set of model coefficients, with an RMSE of 0.8 to 2.4 nodes.
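As a rough illustration of the two f(T) input approaches compared in the abstract, the sketch below uses a generic Wang-Engel-type beta response function; the functional form and the cardinal temperatures are assumptions for illustration, not Soydev's published coefficients.

```python
# Illustrative temperature response; cardinal temperatures are assumed,
# not Soydev's calibrated values.
import numpy as np

def f_T(T, Tmin=7.6, Topt=31.0, Tmax=40.0):
    """Beta-type nonlinear temperature response in [0, 1] (assumed form)."""
    T = np.asarray(T, dtype=float)
    alpha = np.log(2.0) / np.log((Tmax - Tmin) / (Topt - Tmin))
    u = np.clip((T - Tmin) / (Topt - Tmin), 0.0, (Tmax - Tmin) / (Topt - Tmin))
    resp = 2.0 * u**alpha - u**(2.0 * alpha)
    return np.where((T <= Tmin) | (T >= Tmax), 0.0, resp)

def ft_tmean(tmin_day, tmax_day):
    """Soydev tmean approach: f(T) of the daily mean temperature."""
    return f_T((tmin_day + tmax_day) / 2.0)

def ft_tmm(tmin_day, tmax_day):
    """Soydev tmm approach: average of f(Tmin) and f(Tmax)."""
    return (f_T(tmin_day) + f_T(tmax_day)) / 2.0
```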
Abstract:
We present a novel numerical algorithm for the simulation of seismic wave propagation in porous media, which is particularly suitable for the accurate modelling of surface wave-type phenomena. The differential equations of motion are based on Biot's theory of poro-elasticity and solved with a pseudospectral approach using Fourier and Chebyshev methods to compute the spatial derivatives along the horizontal and vertical directions, respectively. The time solver is a splitting algorithm that accounts for the stiffness of the differential equations. Due to the Chebyshev operator, the grid spacing in the vertical direction is non-uniform and characterized by a denser spatial sampling in the vicinity of interfaces, which allows for a numerically stable and accurate evaluation of higher-order surface wave modes. We stretch the grid in the vertical direction to increase the minimum grid spacing and reduce the computational cost. The free-surface boundary conditions are implemented with a characteristics approach, where the characteristic variables are evaluated at zero viscosity. The same procedure is used to model seismic wave propagation at the interface between a fluid and a porous medium. In this case, each medium is represented by a different grid and the two grids are combined through a domain-decomposition method. This wavefield-decomposition method accounts for the discontinuity of variables and is crucial for an accurate interface treatment. We simulate seismic wave propagation with open-pore and sealed-pore boundary conditions and verify the validity and accuracy of the algorithm by comparing the numerical simulations to analytical solutions based on zero viscosity obtained with the Cagniard-de Hoop method. Finally, we illustrate the suitability of our algorithm for more complex models of porous media involving viscous pore fluids and strongly heterogeneous distributions of the elastic and hydraulic material properties.
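Of the operators named here, only the horizontal (Fourier) derivative lends itself to a compact illustration. The sketch below shows a standard pseudospectral first derivative of a periodic field, the kind of operator the abstract applies along the horizontal direction; the Chebyshev vertical operator, the stiff time-splitting, and Biot's equations are beyond a short example.

```python
# Standard Fourier pseudospectral first derivative for a periodic,
# uniformly sampled 1-D field (generic technique, not the paper's code).
import numpy as np

def fourier_deriv(f, dx):
    """Spectral d/dx of a periodic field f with grid spacing dx."""
    n = f.size
    k = 2j * np.pi * np.fft.fftfreq(n, d=dx)  # i * wavenumbers
    return np.real(np.fft.ifft(k * np.fft.fft(f)))

# Quick check: d/dx sin(x) = cos(x) to machine precision on a periodic grid.
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
err = np.max(np.abs(fourier_deriv(np.sin(x), x[1] - x[0]) - np.cos(x)))
```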
Abstract:
The model plant Arabidopsis thaliana was studied in a search for new metabolites involved in wound signalling. Diverse LC approaches were evaluated in terms of efficiency and analysis time, and a 7-min gradient on a UPLC-TOF-MS system with a short column was chosen for metabolite fingerprinting. This screening step was designed to allow the comparison of a high number of samples over a wide range of time points after stress induction, in positive and negative ionisation modes. After data treatment, clear discrimination was obtained, providing lists of potential stress-induced ions. In a second step, the fingerprinting conditions were transferred to a longer column, providing a higher peak capacity able to demonstrate the presence of isomers among the highlighted compounds.
Abstract:
Visualization is a relatively recent tool available to engineers for enhancing transportation project design through improved communication, decision making, and stakeholder feedback. Current visualization techniques include image composites, video composites, 2D drawings, drive-through or fly-through animations, 3D rendering models, virtual reality, and 4D CAD. These methods are used mainly to communicate within the design and construction team and between the team and external stakeholders. Use of visualization improves understanding of design intent and project concepts and facilitates effective decision making. However, visualization tools are typically used for presentation only in large-scale urban projects. Visualization is not widely accepted due to a lack of demonstrated engineering benefits for typical agency projects, such as small- and medium-sized projects, rural projects, and projects where external stakeholder communication is not a major issue. Furthermore, there is a perceived high cost of investment of both financial and human capital in adopting visualization tools. The most advanced visualization technique of virtual reality has only been used in academic research settings, and 4D CAD has been used on a very limited basis for highly complicated specialty projects. However, there are a number of less intensive visualization methods available which may provide some benefit to many agency projects. In this paper, we present the results of a feasibility study examining the use of visualization and simulation applications for improving highway planning, design, construction, and safety and mobility.
Abstract:
To support the analysis of driver behavior at rural freeway work zone lane closure merge points, Center for Transportation Research and Education staff collected traffic data at merge areas using video image processing technology. The collection of the data and the calculation of the capacity of lane closures are reported in a companion report, "Traffic Management Strategies for Merge Areas in Rural Interstate Work Zones". These data are used in the work reported in this document to calibrate a microscopic simulation model of a typical Iowa rural freeway lane closure. The model developed is a high-fidelity computer simulation with an animation interface. It simulates traffic operations at a work zone lane closure. This model enables traffic engineers to visually demonstrate the forecasted delay that is likely to result when freeway reconstruction makes it necessary to close freeway lanes. Further, the model is sensitive to variations in driver behavior and is used to test the impact of slow-moving vehicles and other driver behaviors. This report consists of two parts. The first part describes the development of the work zone simulation model. The simulation analysis is calibrated and verified with data collected at a work zone on Interstate Highway 80 in Scott County, Iowa. The second part is a user's manual for the simulation model, provided to assist users with its setup and operation. No prior computer programming skills are required to use the simulation model.
Abstract:
The objective of this work was to parameterize, calibrate, and validate a new version of the soybean growth and yield model developed by Sinclair, under natural field conditions in the northeastern Amazon. The meteorological data and the values of soybean growth and leaf area were obtained from an agrometeorological experiment carried out in Paragominas, PA, Brazil, from 2006 to 2009. The climatic conditions during the experiment were very distinct, with a slight reduction in rainfall in 2007 due to the El Niño phenomenon. There was a reduction in the leaf area index (LAI) and in biomass production during this year, which was reproduced by the model. The simulation of the LAI had a root mean square error (RMSE) of 0.55 to 0.82 m² m⁻², from 2006 to 2009. The simulation of soybean yield for independent data showed an RMSE of 198 kg ha⁻¹, i.e., an overestimation of 3%. The model was calibrated and validated for Amazonian climatic conditions and can contribute to improving simulations of the impacts of land-use change in the Amazon region. The modified version of the Sinclair model is able to adequately simulate leaf area formation, total biomass, and soybean yield under the climatic conditions of the northeastern Amazon.
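The evaluation statistics quoted here (RMSE and percent overestimation) are standard model-validation measures; a generic sketch of how they are computed follows. The function names are illustrative, and no values from the study are embedded.

```python
# Generic model-evaluation statistics of the kind reported in the abstract.
import numpy as np

def rmse(simulated, observed):
    """Root mean square error of simulated vs observed values."""
    d = np.asarray(simulated, float) - np.asarray(observed, float)
    return np.sqrt(np.mean(d**2))

def pct_bias(simulated, observed):
    """Mean bias as a percentage; positive values indicate overestimation."""
    s = np.asarray(simulated, float)
    o = np.asarray(observed, float)
    return 100.0 * (s.mean() - o.mean()) / o.mean()
```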
Abstract:
Recognition by the T-cell receptor (TCR) of immunogenic peptides presented by class I major histocompatibility complexes (MHCs) is the determining event in the specific cellular immune response against virus-infected cells or tumor cells. It is of great interest, therefore, to elucidate the molecular principles upon which the selectivity of a TCR is based. These principles can in turn be used to design therapeutic approaches, such as peptide-based immunotherapies of cancer. In this study, free energy simulation methods are used to analyze the binding free energy difference of a particular TCR (A6) for a wild-type peptide (Tax) and a mutant peptide (Tax P6A), both presented in HLA A2. The computed free energy difference is 2.9 kcal/mol, in good agreement with the experimental value. This agreement makes it possible to use the simulation results to understand the origin of the free energy difference, which was not accessible from the experimental results. A free energy component analysis allows the decomposition of the free energy difference between the binding of the wild-type and mutant peptides into its components. Of particular interest is the fact that better solvation of the mutant peptide when bound to the MHC molecule is an important contribution to the greater affinity of the TCR for the latter. The results allow identification of the residues of the TCR that are important for the selectivity. This provides an understanding of the molecular principles that govern the recognition. The possibility of using free energy simulations in designing peptide derivatives for cancer immunotherapy is briefly discussed.
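Relative binding free energies of this kind are usually obtained from a thermodynamic cycle rather than by simulating binding directly. A standard formulation, which may differ in detail from the exact protocol of this paper, is

```latex
\Delta\Delta G_{\mathrm{bind}}
  = \Delta G_{\mathrm{bind}}^{\mathrm{mut}} - \Delta G_{\mathrm{bind}}^{\mathrm{wt}}
  = \Delta G_{\mathrm{mut}}^{\mathrm{bound}} - \Delta G_{\mathrm{mut}}^{\mathrm{free}},
```

where the right-hand terms are the alchemical free energies of mutating Tax into Tax P6A in the TCR-bound and unbound (peptide-MHC) states; the 2.9 kcal/mol quoted in the abstract corresponds to this ΔΔG.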
Abstract:
We present a computer-simulation study of the effect of the distribution of energy barriers in an anisotropic magnetic system on the relaxation behavior of the magnetization. While the relaxation law for the magnetization can be approximated in all cases by a time-logarithmic decay, the law for the dependence of the magnetic viscosity on temperature is found to be quite sensitive to the shape of the distribution of barriers. The low-temperature region of the magnetic viscosity never extrapolates to a non-zero value. Moreover, our computer-simulation results agree reasonably well with some recent relaxation experiments on highly anisotropic single-domain particles.
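The time-logarithmic decay referred to in the abstract is conventionally written as

```latex
M(t) \simeq M(t_0)\left[1 - S(T)\,\ln(t/t_0)\right],
```

where S(T) is the magnetic viscosity; the abstract's finding concerns how S(T) extrapolates as T approaches zero under different barrier distributions.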
Abstract:
BACKGROUND: Letrozole radiosensitises breast cancer cells in vitro. In clinical settings, no data exist for the combination of letrozole and radiotherapy. We assessed concurrent and sequential radiotherapy and letrozole in the adjuvant setting. METHODS: This phase 2 randomised trial was undertaken in two centres in France and one in Switzerland between Jan 12, 2005, and Feb 21, 2007. 150 postmenopausal women with early-stage breast cancer were randomly assigned after conserving surgery to either concurrent radiotherapy and letrozole (n=75) or sequential radiotherapy and letrozole (n=75). Randomisation was open label with a minimisation technique, stratified by investigational centre, chemotherapy (yes vs no), radiation boost (yes vs no), and value of radiation-induced lymphocyte apoptosis (≤16% vs >16%). The whole breast was irradiated to a total dose of 50 Gy in 25 fractions over 5 weeks. In the case of supraclavicular and internal mammary node irradiation, the dose was 44-50 Gy. Letrozole was administered orally once daily at a dose of 2.5 mg for 5 years (beginning 3 weeks pre-radiotherapy in the concomitant group, and 3 weeks post-radiotherapy in the sequential group). The primary endpoint was the occurrence of acute (during and within 6 weeks of radiotherapy) and late (within 2 years) radiation-induced grade 2 or worse toxic effects of the skin. Analyses were by intention to treat. This study is registered with ClinicalTrials.gov, number NCT00208273. FINDINGS: All patients were analysed apart from one in the concurrent group who withdrew consent before any treatment. During radiotherapy and within the first 12 weeks after radiotherapy, 31 patients in the concurrent group and 31 in the sequential group had grade 2 or worse skin-related toxicity. The most common skin-related adverse event was dermatitis: four patients in the concurrent group and six in the sequential group had grade 3 acute skin dermatitis during radiotherapy. At a median follow-up of 26 months (range 3-40), two patients in each group had grade 2 or worse late effects (both radiation-induced subcutaneous fibrosis). INTERPRETATION: Letrozole can be safely delivered shortly after surgery and concomitantly with radiotherapy. Long-term follow-up is needed to investigate cardiac side-effects and cancer-specific outcomes. FUNDING: Novartis Oncology France.
Abstract:
In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ our method to compute JPEG standard progressive operation mode definition scripts using a quantization approach. Therefore, it is no longer necessary to use a trial-and-error procedure to obtain a desired PSNR and/or definition script, reducing cost. First, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed using the JPEG standard under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Second, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship between the measured image quality at a given stage of the coding process and a quantization matrix is found. Thus, the definition-script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimate usually has an error smaller than 1 dB, and this error decreases for high PSNR values. Definition scripts may be generated avoiding an excessive number of stages and removing small stages that do not contribute a noticeable image-quality improvement during the decoding process.
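The paper's own matrix-design method is not reproduced in the abstract. As background, the sketch below shows the generic MSE-to-PSNR relationship for 8-bit images and the classical scaled-default-matrix baseline (IJG-style quality scaling) against which the authors compare; both are standard, neither is the proposed method.

```python
# Generic PSNR/MSE relationship and the classical IJG-style scaling of a
# default quantization matrix (the baseline, not the paper's method).
import numpy as np

def psnr(mse, peak=255.0):
    """PSNR in dB for a given MSE, assuming 8-bit pixel values by default."""
    return 10.0 * np.log10(peak**2 / mse)

def scaled_jpeg_matrix(Q50, quality):
    """Scale a quality-50 default matrix Q50 to a target quality in (0, 100)."""
    s = 5000.0 / quality if quality < 50 else 200.0 - 2.0 * quality
    return np.clip(np.floor((Q50 * s + 50.0) / 100.0), 1, 255)
```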
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concentrate on decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms used for modeling long-range spatial trends, followed by sequential simulations of the residuals. ML algorithms deliver non-linear solutions for spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of ML algorithms by analyzing the quality and quantity of the spatially structured information they extract from the data. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study of the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be used efficiently in the decision-making process.
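A minimal sketch of the hybrid decomposition described here: an ML model (SVR, one of the two algorithms named in the abstract) fits the long-range spatial trend, and the residuals are what the paper then feeds to geostatistical sequential simulation. The simulation step itself is not shown, and the hyperparameters below are illustrative assumptions.

```python
# Sketch of the trend/residual split behind the MLRSS idea; the
# geostatistical sequential-simulation step applied to the residuals
# in the paper is omitted here.
import numpy as np
from sklearn.svm import SVR

def ml_residuals(coords, values):
    """coords: (n, 2) sample locations; values: (n,) measurements."""
    trend_model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(coords, values)
    trend = trend_model.predict(coords)        # long-range spatial trend
    residuals = values - trend                 # input to sequential simulation
    return trend_model, residuals
```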