195 results for Simulation methods


Relevance:

20.00%

Publisher:

Abstract:

We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is by maximum likelihood based on the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonically increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonically increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters at all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright (C) 2003 John Wiley & Sons, Ltd.
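As a hedged illustration of the E-step in such an ECM scheme, the sketch below computes the posterior probability that a failure is of type 1, with the mixing probability following a logistic model in a covariate. The exponential component densities and all parameter values are stand-ins of ours, since the paper leaves the component-baseline hazards unspecified.

```python
import math

def posterior_type_probs(t, x, beta_logit, rate1, rate2):
    """E-step weight for a two-type mixture: P(type = 1 | T = t, x).

    Mixing probability pi_1(x) follows a logistic model; the component
    densities are exponential purely for illustration (hypothetical
    stand-ins for the unspecified baseline hazards).
    """
    pi1 = 1.0 / (1.0 + math.exp(-(beta_logit[0] + beta_logit[1] * x)))
    f1 = rate1 * math.exp(-rate1 * t)   # type-1 failure-time density
    f2 = rate2 * math.exp(-rate2 * t)   # type-2 failure-time density
    w1 = pi1 * f1
    w2 = (1.0 - pi1) * f2
    return w1 / (w1 + w2)               # posterior weight for type 1

w = posterior_type_probs(t=2.0, x=1.0, beta_logit=(0.0, 0.5),
                         rate1=1.0, rate2=0.2)
```

In the ECM algorithm these weights would feed the conditional M-steps for the logistic and regression coefficients in turn.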

Abstract:

A new modeling approach, multiple mapping conditioning (MMC), is introduced to treat mixing and reaction in turbulent flows. The model combines the advantages of the probability density function and conditional moment closure methods and is based on a generalization of the mapping closure concept. An equivalent stochastic formulation of the MMC model is given. The validity of the model's closure hypothesis is demonstrated by comparison with direct numerical simulation results for the three-stream mixing problem. (C) 2003 American Institute of Physics.
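The mapping concept underlying such closures can be illustrated by composing a Gaussian reference variable with an inverse target CDF, so that the mapped samples follow the desired scalar distribution. The exponential target below is purely illustrative, not the paper's scalar field.

```python
import math
import random

def gaussian_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def map_to_exponential(eta, lam=1.0):
    """Map a Gaussian reference sample to an exponential scalar via
    inverse-CDF composition -- the basic idea behind mapping closures."""
    u = gaussian_cdf(eta)
    u = min(u, 1.0 - 1e-12)   # guard against log(0)
    return -math.log(1.0 - u) / lam

random.seed(1)
samples = [map_to_exponential(random.gauss(0.0, 1.0)) for _ in range(20000)]
mean = sum(samples) / len(samples)   # should be close to 1/lam
```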

Abstract:

There has been a resurgence of interest in the mean trace length estimator of Pahl for window sampling of traces. The estimator has been dealt with by Mauldon and by Zhang and Einstein in recent publications. The estimator is very useful in that it is non-parametric. However, despite some discussion of the statistical distribution of the estimator, neither the recent works nor the original work by Pahl provide a rigorous basis for determining a confidence interval for the estimator, or a confidence region for the estimator together with the corresponding estimator of trace spatial intensity in the sampling window. This paper shows, by considering a simplified version of the problem but without loss of generality, that the estimator is in fact the maximum likelihood estimator (MLE) and that it can be considered essentially unbiased. As the MLE, it possesses the least variance of all estimators, and confidence intervals or regions should therefore be available through application of classical ML theory. It is shown that valid confidence intervals can in fact be determined. The results of the work and the calculations of the confidence intervals are illustrated by example. (C) 2003 Elsevier Science Ltd. All rights reserved.
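As a generic sketch of how classical ML theory yields intervals once an estimator is identified as the MLE: compute the observed information (negative second derivative of the log-likelihood at the MLE) and form a Wald interval. The exponential model and data below are stand-ins of ours, not the Pahl estimator itself.

```python
import math

def loglik(lam, data):
    """Log-likelihood of an exponential(rate=lam) sample."""
    return len(data) * math.log(lam) - lam * sum(data)

def wald_ci(data, z=1.96, h=1e-4):
    """Wald confidence interval for the exponential-rate MLE, using a
    central-difference estimate of the observed information."""
    lam_hat = len(data) / sum(data)                        # the MLE
    d2 = (loglik(lam_hat + h, data) - 2.0 * loglik(lam_hat, data)
          + loglik(lam_hat - h, data)) / h ** 2            # curvature at MLE
    se = 1.0 / math.sqrt(-d2)                              # asymptotic SE
    return lam_hat - z * se, lam_hat + z * se

data = [0.8, 1.3, 0.4, 2.1, 0.9, 1.7, 0.6, 1.2]   # hypothetical sample
lo, hi = wald_ci(data)
```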

Abstract:

A number of authors concerned with the analysis of rock jointing have used the idea that the joint areal or diametral distribution can be linked to the trace length distribution through a theorem attributed to Crofton. This brief paper demonstrates why Crofton's theorem need not be used to link moments of the trace length distribution, captured by scan line or areal mapping, to the moments of the diametral distribution of joints represented as disks, and that it is in fact incorrect to do so. The valid relationships, for both areal and scan line mapping, between the moments of the trace length distribution and those of the joint size distribution for joints modeled as disks are recalled and compared with those that would follow if Crofton's theorem were assumed to apply. For areal mapping the relationship is fortuitously correct, but for scan line mapping it is not.
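For orientation, the first-moment relation commonly quoted for areal (window) mapping of disk-shaped joints can be sketched as below; it assumes size-biased sampling proportional to disk diameter and a mean chord of (π/4)D for a disk of diameter D. The notation is ours, and the relation should be checked against the paper's full set of moment equations before use.

```latex
% Hedged sketch: mean trace length under areal mapping of disk-shaped
% joints with diameter distribution D (sampling bias \propto D assumed).
\mathbb{E}[\ell] \;=\; \frac{\pi}{4}\,\frac{\mathbb{E}[D^{2}]}{\mathbb{E}[D]}
```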

Abstract:

There are several competing methods commonly used to solve energy-grained master equations describing gas-phase reactive systems, but when it comes to selecting an appropriate method for a particular problem there is little guidance in the literature. In this paper we directly compare several variants of spectral and numerical integration methods in terms of the computer time required to calculate the solution and the range of temperature and pressure conditions under which the methods succeed. The test case used in the comparison is an important reaction in combustion chemistry and incorporates reversible and irreversible bimolecular reaction steps as well as isomerizations between multiple unimolecular species. While numerical integration of the ODE with a stiff integrator is not the fastest method overall, it is the fastest method applicable to all conditions.
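A minimal sketch of why stiff integrators apply at all conditions, using a toy two-state master equation (the real problem couples many energy grains). Backward Euler is used, so each step is a 2x2 linear solve in closed form and the step stays stable even for the large rate disparities that defeat explicit methods; the rate constants are invented.

```python
def implicit_euler_two_state(k12, k21, p0, dt, steps):
    """Backward Euler for the two-state master equation
        dp1/dt = -k12*p1 + k21*p2,   dp2/dt = k12*p1 - k21*p2.
    Each step solves (I - dt*K) p_new = p_old; for 2x2 the inverse
    is written out explicitly. Population is conserved exactly."""
    p1, p2 = p0
    a11 = 1.0 + dt * k12            # entries of (I - dt*K)
    a12 = -dt * k21
    a21 = -dt * k12
    a22 = 1.0 + dt * k21
    det = a11 * a22 - a12 * a21
    for _ in range(steps):
        p1, p2 = ((a22 * p1 - a12 * p2) / det,
                  (-a21 * p1 + a11 * p2) / det)
    return p1, p2

# Very stiff toy system: fast forward rate, slow reverse rate.
p1, p2 = implicit_euler_two_state(k12=1e4, k21=1.0,
                                  p0=(1.0, 0.0), dt=0.1, steps=200)
```

The long-time populations relax to the equilibrium ratio p1/p2 = k21/k12, which an explicit integrator could only reach with a step size smaller than about 1/k12.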

Abstract:

In modern magnetic resonance imaging (MRI), patients are exposed to strong, nonuniform static magnetic fields outside the central imaging region, where movement of the body can induce electric currents in tissue that could potentially be harmful. This paper presents theoretical investigations into the spatial distribution of the electric fields and currents induced in a patient moving into the MRI scanner, and for head motion at various positions in the magnet. The numerical calculations are based on an efficient, quasi-static, finite-difference scheme and an anatomically realistic, full-body male model. 3D field profiles from an actively shielded 4 T magnet system are used, and the body model is projected through the field profile at a range of velocities. The simulations show that it is possible to induce electric fields and currents approaching the level of physiological significance under some circumstances, and they provide insight into the spatial characteristics of the induced fields. The results are extrapolated to very high field strengths, and tabulated data show the expected induced currents and fields as functions of both movement velocity and field strength. (C) 2003 Elsevier Science (USA). All rights reserved.
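The leading motional contribution can be estimated from E = v × B. The velocity and field values below are hypothetical, and this is only an order-of-magnitude check, not the paper's quasi-static finite-difference solution.

```python
def motional_e_field(v, b):
    """Motional electric field E = v x B (V/m) seen by tissue moving with
    velocity v (m/s) through magnetic flux density b (T). Leading term
    only; the full induced field also satisfies boundary conditions in
    the conductive body."""
    ex = v[1] * b[2] - v[2] * b[1]
    ey = v[2] * b[0] - v[0] * b[2]
    ez = v[0] * b[1] - v[1] * b[0]
    return (ex, ey, ez)

# Hypothetical case: patient moving at 0.5 m/s along x through a 4 T
# axial field along z.
e = motional_e_field((0.5, 0.0, 0.0), (0.0, 0.0, 4.0))
magnitude = (e[0] ** 2 + e[1] ** 2 + e[2] ** 2) ** 0.5   # -> 2.0 V/m
```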

Abstract:

The aim of this study was to compare accumulated oxygen deficit data derived using two different exercise protocols, in order to produce a less time-consuming test specifically for use with athletes. Six road and four track male endurance cyclists performed two series of cycle ergometer tests. The first series involved five 10 min submaximal cycle exercise bouts, a V̇O2peak test and a 115% V̇O2peak test. Data from these tests were used to estimate the accumulated oxygen deficit according to the calculations of Medbo et al. (1988). In the second series of tests, participants performed a 15 min incremental cycle ergometer test followed, 2 min later, by a 2 min variable-resistance test in which they completed as much work as possible while pedalling at a constant rate. Analysis revealed that the accumulated oxygen deficit calculated from the first series of tests was higher (P < 0.02) than that calculated from the second series: 52.3 ± 11.7 and 43.9 ± 6.4 ml·kg⁻¹, respectively (mean ± s). Other significant differences between the two protocols were observed for V̇O2peak, total work and maximal heart rate; all were higher during the modified protocol.
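A hedged sketch of the Medbo-style calculation, assuming a linear submaximal V̇O2-power relationship: fit the line, extrapolate to the supramaximal power to get the predicted O2 demand, and subtract the O2 actually taken up. All numbers are invented for illustration and the units (W, L/min, min, L) are ours.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (submaximal VO2 vs. power)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def accumulated_o2_deficit(powers, vo2s, supra_power,
                           duration_min, measured_uptake):
    """Extrapolate the submaximal VO2-power line to the supramaximal
    power, then subtract the O2 actually taken up during the bout."""
    a, b = fit_line(powers, vo2s)
    demand = a + b * supra_power        # predicted O2 cost (L/min)
    return demand * duration_min - measured_uptake

# Hypothetical data: VO2 = 0.01*P + 0.5 exactly, 2 min bout at 400 W.
aod = accumulated_o2_deficit(powers=[100, 150, 200, 250, 300],
                             vo2s=[1.5, 2.0, 2.5, 3.0, 3.5],
                             supra_power=400, duration_min=2.0,
                             measured_uptake=7.0)
```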

Abstract:

For dynamic simulations to be credible, verification of the computer code must be an integral part of the modelling process. This two-part paper describes a novel approach to verification through program testing and debugging. In Part 1, a methodology is presented for detecting and isolating coding errors using back-to-back testing. Residuals are generated by comparing the outputs of two independent implementations in response to identical inputs. The key feature of the methodology is that a specially modified observer is created using one of the implementations, so as to impose an error-dependent structure on these residuals. Each error can be associated with a fixed and known subspace, permitting errors to be isolated to specific equations in the code. It is shown that the geometric properties extend to multiple errors in either of the two implementations. Copyright (C) 2003 John Wiley & Sons, Ltd.
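A minimal sketch of back-to-back residual generation: run two independent implementations of the same model equation on identical inputs and subtract. The toy function and the deliberately seeded error are ours; the paper's observer-based structuring of the residuals is not reproduced here.

```python
def impl_a(x):
    """Reference implementation of a toy model equation y = 2x + 1."""
    return 2.0 * x + 1.0

def impl_b(x):
    """Independent implementation with a deliberately seeded sign error
    on negative inputs, to show how residuals expose it."""
    if x < 0:
        return -2.0 * x + 1.0   # seeded coding error
    return 2.0 * x + 1.0

# Identical inputs to both implementations; nonzero residuals flag
# exactly the inputs on which the error fires.
inputs = [-2.0, -1.0, 0.0, 1.0, 2.0]
residuals = [impl_a(x) - impl_b(x) for x in inputs]
```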

Abstract:

In Part 1 of this paper a methodology for back-to-back testing of simulation software was described. Residuals with error-dependent geometric properties were generated. A set of potential coding errors was enumerated, along with a corresponding set of feature matrices, which describe the geometric properties imposed on the residuals by each of the errors. In this part of the paper, an algorithm is developed to isolate the coding errors present by analysing the residuals. A set of errors is isolated when the subspace spanned by their combined feature matrices corresponds to that of the residuals. Individual feature matrices are compared to the residuals and classified as 'definite', 'possible' or 'impossible'. The status of 'possible' errors is resolved using a dynamic subset testing algorithm. To demonstrate and validate the testing methodology presented in Part 1 and the isolation algorithm presented in Part 2, a case study is presented using a model for biological wastewater treatment. Both single and simultaneous errors deliberately introduced into the simulation code are correctly detected and isolated. Copyright (C) 2003 John Wiley & Sons, Ltd.
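The isolation step can be caricatured with one-dimensional feature "subspaces": classify each candidate error by whether the residual is collinear with its feature vector. The error names and vectors below are invented, and the sketch covers only the 'definite'/'impossible' dichotomy, not the paper's feature matrices or the subset testing that resolves 'possible' errors.

```python
def cosine(u, v):
    """Cosine similarity between two nonzero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def isolate_error(residual, features, tol=1e-6):
    """Classify each candidate error: 'definite' if the residual lies in
    the one-dimensional subspace spanned by its feature vector,
    'impossible' otherwise."""
    verdicts = {}
    for name, f in features.items():
        c = abs(cosine(residual, f))
        verdicts[name] = "definite" if c > 1.0 - tol else "impossible"
    return verdicts

# Hypothetical feature vectors for two candidate coding errors.
features = {"gain_error": (1.0, 0.0), "sign_error": (0.0, 1.0)}
verdicts = isolate_error((0.0, 2.5), features)
```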

Abstract:

Two methods were compared for determining the concentration of penetrative biomass during growth of Rhizopus oligosporus on an artificial solid substrate consisting of an inert gel and starch as the sole source of carbon and energy. The first method used a hand microtome to cut sections approximately 0.2 to 0.4 mm thick parallel to the substrate surface and determined the glucosamine content of each slice. Using glucosamine measurements to estimate biomass concentration proved problematic because of large variations in glucosamine content with mycelial age. The second, novel method used confocal scanning laser microscopy to estimate the fractional volume occupied by the biomass. Although it is not simple to translate fractional volumes into dry weights of hyphae, owing to the lack of experimentally determined conversion factors, measurement of the fractional volumes is in itself useful for characterizing fungal penetration into the substrate. Growth of penetrative biomass in the artificial model substrate took two forms: an indistinct mass in the region close to the substrate surface, and a few hyphae penetrating perpendicular to the surface in regions further from it. The biomass profiles against depth obtained from confocal microscopy showed two linear regions on log-linear plots, possibly related to different oxygen availability at different depths within the substrate. Confocal microscopy has the potential to be a powerful tool in the investigation of fungal growth mechanisms in solid-state fermentation. (C) 2003 Wiley Periodicals, Inc.
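Computing a fractional-volume profile from confocal data reduces to counting occupied voxels per depth slice. The binary stack below is a toy stand-in for thresholded confocal images, with occupancy decreasing with depth.

```python
def fractional_volume_by_depth(stack):
    """Fraction of voxels occupied by biomass in each confocal slice.
    `stack` is a list of 2-D binary slices ordered by depth; 1 = hypha."""
    profile = []
    for plane in stack:
        voxels = [v for row in plane for v in row]
        profile.append(sum(voxels) / len(voxels))
    return profile

# Toy 3-slice stack, 2x2 voxels per slice (real stacks are far larger):
stack = [
    [[1, 1], [1, 1]],   # surface slice: fully occupied
    [[1, 0], [1, 0]],   # mid depth: half occupied
    [[0, 0], [1, 0]],   # deepest slice: a single hypha
]
profile = fractional_volume_by_depth(stack)
```

Plotting such a profile on log-linear axes against depth is how the two linear regions described above would appear.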

Abstract:

In a 2-yr multiple-site field study conducted in western Nebraska during 1999 and 2000, optimum dryland corn (Zea mays L.) population varied from less than 1.7 to more than 5.6 plants m⁻², depending largely on available water resources. The objective of this study was to use a modeling approach to investigate corn population recommendations for a wide range of seasonal variation. A corn growth simulation model (APSIM-maize) was coupled to long-term sequences of historical climatic data from western Nebraska to provide probabilistic estimates of dryland yield for a range of corn populations. Simulated populations ranged from 2 to 5 plants m⁻². Simulations began with one of three levels of available soil water at planting: 80, 160, or 240 mm in the surface 1.5 m of a loam soil. Gross margins were maximized at 3 plants m⁻² when starting available water was 160 or 240 mm, and the expected probability of a financial loss at this population was reduced from about 10% at 160 mm to 0% at 240 mm. When starting available water was 80 mm, average gross margins were less than $15 ha⁻¹, and the risk of financial loss exceeded 40%. Median yields were greatest when starting available soil water was 240 mm. However, perhaps the greater benefit of additional soil water at planting was the reduction in the risk of making a financial loss. Dryland corn growers in western Nebraska are advised to use a population of 3 plants m⁻² as a base recommendation.
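The gross-margin and risk-of-loss summaries used in such probabilistic analyses can be sketched as below. The price and cost figures, and the simulated yields, are hypothetical placeholders, not the study's economic inputs.

```python
def margin_stats(yields, price=0.12, cost=95.0):
    """Mean gross margin ($/ha) and probability of a financial loss over
    simulated seasons. Price ($/kg) and fixed cost ($/ha) are
    hypothetical, not the paper's economics."""
    margins = [y * price - cost for y in yields]
    p_loss = sum(1 for m in margins if m < 0) / len(margins)
    return sum(margins) / len(margins), p_loss

# Hypothetical simulated dryland yields (kg/ha) across seasons:
yields = [400, 900, 1200, 650, 300, 1500, 800]
mean_margin, p_loss = margin_stats(yields)
```

Repeating this over long climate sequences, for each population and each starting soil-water level, yields the loss probabilities quoted above.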

Abstract:

This paper is concerned with methods for the refinement of specifications written using a combination of Object-Z and CSP. Such a combination has proved to be a suitable vehicle for specifying complex systems that involve both state and behaviour, and several proposals exist for integrating the two languages. The integration in this paper is based on a semantics in which Object-Z classes are identified with CSP processes, allowing classes specified in Object-Z to be combined using CSP operators. It has been shown that this semantic model allows state-based refinement relations to be used on the Object-Z components of an integrated Object-Z/CSP specification. However, the current refinement methodology does not allow the structure of a specification to be changed in a refinement, whereas a full methodology would, for example, allow concurrency to be introduced during the development life-cycle. In this paper we tackle these concerns and discuss refinements of specifications written using Object-Z and CSP in which the structure of the specification is changed during refinement. In particular, we develop a set of structural simulation rules which allow single components to be refined to more complex specifications involving CSP operators. The soundness of these rules is verified against the common semantic model, and they are illustrated via a number of examples.
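A finite-state caricature of checking the step condition of a simulation between an abstract and a concrete labelled transition system: every concrete step from a related state must be matched by an abstract step with the same label, ending in related states. The state names, labels, and relation are invented, and this omits the initialization and applicability conditions of the full simulation rules.

```python
def is_simulation(abs_trans, conc_trans, relation):
    """Check the step condition of a candidate simulation `relation`
    between an abstract and a concrete labelled transition system.
    Transitions are (source, label, destination) triples."""
    for (a, c) in relation:
        for (src, label, dst) in conc_trans:
            if src != c:
                continue
            matched = any((a2, dst) in relation
                          for (a1, lab, a2) in abs_trans
                          if a1 == a and lab == label)
            if not matched:
                return False
    return True

# Tiny hypothetical systems over labels 'a' and 'b':
abstract = {("s0", "a", "s1"), ("s1", "b", "s0")}
concrete = {("t0", "a", "t1"), ("t1", "b", "t0")}
rel = {("s0", "t0"), ("s1", "t1")}
ok = is_simulation(abstract, concrete, rel)
# An extra concrete step with a label the abstract system lacks fails:
bad = is_simulation(abstract, concrete | {("t0", "c", "t0")}, rel)
```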

Abstract:

Simulations of the transport of methane in cylindrical silica mesopores have been performed using equilibrium and nonequilibrium molecular dynamics (NEMD) as well as dual-control-volume grand canonical molecular dynamics. It is demonstrated that all three techniques yield the same transport coefficient, even in the presence of viscous flow. A modified locally averaged density model for viscous flow, combined with treatment of wall slip through a frictional condition, gives a convincing interpretation of the variation of the transport coefficient over a wide range of densities and for various pore sizes and temperatures. Wall friction coefficients extracted from the NEMD simulations are found to be consistent with momentum-transfer arguments, and the approach is shown to be more meaningful than the classical slip-length concept. (C) 2003 American Institute of Physics.
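Extracting a Fickian transport coefficient from a dual-control-volume style steady state reduces to D_t = J·L/Δρ, the measured flux divided by the imposed density gradient across the pore. All numbers below are illustrative reduced units of ours, not the paper's simulation results.

```python
def transport_coefficient(flux, rho_high, rho_low, length):
    """Fickian transport coefficient from a steady-state flux driven by
    a density difference between two control volumes a distance
    `length` apart: D_t = J * L / (rho_high - rho_low)."""
    return flux * length / (rho_high - rho_low)

# Hypothetical steady-state values in reduced units:
d_t = transport_coefficient(flux=0.02, rho_high=0.8, rho_low=0.4,
                            length=10.0)
```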