942 results for Stochastic simulation algorithm
Abstract:
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is by maximum likelihood, based on the full likelihood and implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright (C) 2003 John Wiley & Sons, Ltd.
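The mixture idea behind this approach can be illustrated with a much simpler, fully parametric analogue: a two-component exponential mixture of failure times fitted by EM, where the mixing proportion plays the role of the probability of each failure type. This sketch is hypothetical (no censoring, no covariates, constant hazards) and is not the paper's semi-parametric ECM method:

```python
import numpy as np

def em_exponential_mixture(t, n_iter=200):
    """EM for a two-component exponential mixture of failure times.

    pi is the probability of failure type 1; each component has a
    constant hazard (rate) lam1, lam2.  A toy parametric analogue of
    the competing-risks mixture; no censoring or covariates.
    """
    pi = 0.5
    lam1 = 1.5 / np.mean(t)          # deliberately asymmetric start
    lam2 = 0.5 / np.mean(t)
    for _ in range(n_iter):
        # E-step: posterior probability each failure came from type 1
        d1 = pi * lam1 * np.exp(-lam1 * t)
        d2 = (1 - pi) * lam2 * np.exp(-lam2 * t)
        w = d1 / (d1 + d2)
        # M-step: closed-form weighted MLE updates
        pi = w.mean()
        lam1 = w.sum() / (w * t).sum()
        lam2 = (1 - w).sum() / ((1 - w) * t).sum()
    return pi, lam1, lam2

# synthetic data: 70% type-1 failures (rate 2.0), 30% type-2 (rate 0.3)
rng = np.random.default_rng(1)
z = rng.random(5000) < 0.7
t = np.where(z, rng.exponential(1 / 2.0, 5000), rng.exponential(1 / 0.3, 5000))
pi_hat, l1, l2 = em_exponential_mixture(t)
```

The semi-parametric method in the abstract replaces the constant component hazards with unspecified baseline hazards and adds covariates through logistic and proportional hazards terms.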
Abstract:
The Lanczos algorithm is appreciated in many situations due to its speed and economy of storage. However, the advantage that the Lanczos basis vectors need not be kept is lost when the algorithm is used to compute the action of a matrix function on a vector: either the basis vectors must be kept, or the Lanczos process must be applied twice. In this study we describe an augmented Lanczos algorithm to compute a dot product involving a function of a large sparse symmetric matrix, without keeping the basis vectors.
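The key observation can be sketched as follows: to approximate u^T f(A) v, only the three-term recurrence vectors are needed at any moment, while the projections u^T q_j are accumulated as scalars on the fly. This is a generic illustration of the storage-free idea under that assumption, not the paper's exact augmented recurrence:

```python
import numpy as np

def lanczos_dot(A, u, v, k=30, f=np.exp):
    """Approximate u^T f(A) v for symmetric A without storing the basis.

    Only q_{j-1} and q_j are kept; the scalars u^T q_j are accumulated
    during the iteration, so the basis never needs to be stored.
    """
    n = len(v)
    beta0 = np.linalg.norm(v)
    q_prev, q = np.zeros(n), v / beta0
    alphas, betas, proj = [], [], []
    for j in range(k):
        proj.append(u @ q)                    # accumulate u^T q_j
        w = A @ q
        a = q @ w
        w = w - a * q - (betas[-1] if betas else 0.0) * q_prev
        b = np.linalg.norm(w)
        alphas.append(a)
        if b < 1e-12 or j == k - 1:           # breakdown or budget reached
            break
        betas.append(b)
        q_prev, q = q, w / b
    # f of the small tridiagonal matrix via its eigendecomposition
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    lam, S = np.linalg.eigh(T)
    fT_e1 = S @ (f(lam) * S[0])               # f(T) e_1
    return beta0 * np.array(proj) @ fT_e1     # beta0 * (u^T Q) f(T) e_1
```

For a well-conditioned spectrum and a smooth function such as the exponential, a few dozen iterations typically match the dense computation to many digits.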
Abstract:
A new modeling approach, multiple mapping conditioning (MMC), is introduced to treat mixing and reaction in turbulent flows. The model combines the advantages of the probability density function and conditional moment closure methods and is based on a certain generalization of the mapping closure concept. An equivalent stochastic formulation of the MMC model is given. The validity of the model's closure hypothesis is demonstrated by comparison with direct numerical simulation results for the three-stream mixing problem. (C) 2003 American Institute of Physics.
Abstract:
Frequency deviation is a common problem for power system signal processing. Many power system measurements are carried out at a fixed sampling rate, assuming the system operates at its nominal frequency (50 or 60 Hz). However, the actual frequency may deviate from the nominal value from time to time due to various causes such as disturbances and subsequent system transients. Measurement of signals based on a fixed sampling rate may introduce errors under such conditions. To achieve high-precision signal measurement, appropriate algorithms need to be employed to reduce the impact of frequency deviation in the power system data acquisition process. This paper proposes an advanced algorithm that enhances the Fourier transform for power system signal processing. The algorithm effectively corrects for frequency deviation under a fixed sampling rate. Accurate measurement of power system signals is essential for the secure and reliable operation of power systems, and the algorithm is readily applicable wherever signal processing is affected by frequency deviation. Both mathematical proof and numerical simulation are given in this paper to illustrate the robustness and effectiveness of the proposed algorithm. Crown Copyright (C) 2003 Published by Elsevier Science B.V. All rights reserved.
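The underlying problem can be reproduced numerically: sampling a 49.6 Hz signal at a rate fixed for 50 Hz makes the one-cycle DFT phasor rotate slowly, and that rotation itself reveals the true frequency, after which the amplitude can be re-fitted. This is a generic phase-difference plus least-squares sketch, not the paper's enhanced Fourier algorithm; all numbers are illustrative:

```python
import numpy as np

F_NOM = 50.0           # nominal system frequency (Hz)
N = 64                 # samples per nominal cycle
FS = F_NOM * N         # 3200 Hz, fixed: NOT adapted to the actual frequency

def fundamental_phasor(window):
    """One-nominal-cycle DFT phasor of the fundamental component."""
    n = np.arange(N)
    return (2.0 / N) * np.sum(window * np.exp(-2j * np.pi * n / N))

# off-nominal test signal: the system actually runs at 49.6 Hz
f_true = 49.6
t = np.arange(30 * N) / FS
x = np.cos(2 * np.pi * f_true * t)

# the fundamental phasor rotates by 2*pi*(f - F_NOM)/F_NOM per nominal
# cycle, so comparing windows K cycles apart yields the true frequency
# (valid while |f - F_NOM| * K / F_NOM < 0.5, to avoid phase wrapping)
K = 10
dphi = np.angle(fundamental_phasor(x[K * N:(K + 1) * N]) /
                fundamental_phasor(x[:N]))
f_est = F_NOM + dphi * F_NOM / (2 * np.pi * K)

# amplitude: re-fit sin/cos at the estimated frequency by least squares,
# removing most of the spectral-leakage bias of the raw fixed-rate DFT
tt = t[:4 * N]
M = np.column_stack([np.cos(2 * np.pi * f_est * tt),
                     np.sin(2 * np.pi * f_est * tt)])
amp = np.hypot(*np.linalg.lstsq(M, x[:4 * N], rcond=None)[0])
```

Here the frequency estimate lands within a few millihertz of 49.6 Hz and the fitted amplitude is essentially unbiased, despite the fixed sampling rate.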
Abstract:
A combined Genetic Algorithm and Method of Moments design method is presented for the design of unusual near-field antennas for use in Magnetic Resonance Imaging systems. The method is successfully applied to the design of an asymmetric coil structure for use at 190 MHz and demonstrates excellent radiofrequency field homogeneity.
Stability and simulation-based design of steel scaffolding without using the effective length method
Abstract:
For dynamic simulations to be credible, verification of the computer code must be an integral part of the modelling process. This two-part paper describes a novel approach to verification through program testing and debugging. In Part 1, a methodology is presented for detecting and isolating coding errors using back-to-back testing. Residuals are generated by comparing the output of two independent implementations, in response to identical inputs. The key feature of the methodology is that a specially modified observer is created using one of the implementations, so as to impose an error-dependent structure on these residuals. Each error can be associated with a fixed and known subspace, permitting errors to be isolated to specific equations in the code. It is shown that the geometric properties extend to multiple errors in either one of the two implementations. Copyright (C) 2003 John Wiley & Sons, Ltd.
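The basic back-to-back idea can be sketched with a toy two-state model coded twice, once in matrix form and once as scalar equations, with a coding error injected into one equation of the second implementation. The sketch omits the paper's modified observer; it only shows that the residual first appears in the faulty equation (model, coefficients, and the `bug` parameter are all hypothetical):

```python
import numpy as np

def step_a(x, u):
    """Implementation A: matrix form of the discrete-time update."""
    A = np.array([[0.9, 0.1],
                  [0.0, 0.8]])
    B = np.array([0.0, 1.0])
    return A @ x + B * u

def step_b(x, u, bug=0.0):
    """Implementation B: independent scalar re-coding of the same model.

    `bug` injects a coding error into the SECOND equation only.
    """
    x0 = 0.9 * x[0] + 0.1 * x[1]
    x1 = (0.8 + bug) * x[1] + u
    return np.array([x0, x1])

def run(n, bug=0.0):
    """Drive both implementations with identical inputs; return residuals."""
    xa, xb = np.zeros(2), np.zeros(2)
    res = []
    for _ in range(n):
        xa, xb = step_a(xa, 1.0), step_b(xb, 1.0, bug)
        res.append(xb - xa)
    return np.array(res)

r_clean = run(20)                 # identical implementations: zero residual
r_bug = run(20, bug=0.05)         # seeded error in equation 2
first = int(np.argmax(np.any(r_bug != 0.0, axis=1)))
```

At the first nonzero sample the residual is confined to the second component, pointing at the faulty equation; afterwards the error propagates through the state coupling, which is why the paper's observer is needed to keep the residual structure fixed over time.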
Abstract:
In a 2-yr multiple-site field study conducted in western Nebraska during 1999 and 2000, optimum dryland corn (Zea mays L.) population varied from less than 1.7 to more than 5.6 plants m(-2), depending largely on available water resources. The objective of this study was to use a modeling approach to investigate corn population recommendations for a wide range of seasonal variation. A corn growth simulation model (APSIM-maize) was coupled to long-term sequences of historical climatic data from western Nebraska to provide probabilistic estimates of dryland yield for a range of corn populations. Simulated populations ranged from 2 to 5 plants m(-2). Simulations began with one of three levels of available soil water at planting, either 80, 160, or 240 mm in the surface 1.5 m of a loam soil. Gross margins were maximized at 3 plants m(-2) when starting available water was 160 or 240 mm, and the expected probability of a financial loss at this population was reduced from about 10% at 160 mm to 0% at 240 mm. When starting available water was 80 mm, average gross margins were less than $15 ha(-1), and risk of financial loss exceeded 40%. Median yields were greatest when starting available soil water was 240 mm. However, perhaps the greater benefit of additional soil water at planting was reduction in the risk of making a financial loss. Dryland corn growers in western Nebraska are advised to use a population of 3 plants m(-2) as a base recommendation.
Abstract:
The use of a fitted parameter watershed model to address water quantity and quality management issues requires that it be calibrated under a wide range of hydrologic conditions. However, rarely does model calibration result in a unique parameter set. Parameter nonuniqueness can lead to predictive nonuniqueness. The extent of model predictive uncertainty should be investigated if management decisions are to be based on model projections. Using models built for four neighboring watersheds in the Neuse River Basin of North Carolina, the application of the automated parameter optimization software PEST in conjunction with the Hydrologic Simulation Program Fortran (HSPF) is demonstrated. Parameter nonuniqueness is illustrated, and a method is presented for calculating many different sets of parameters, all of which acceptably calibrate a watershed model. A regularization methodology is discussed in which models for similar watersheds can be calibrated simultaneously. Using this method, parameter differences between watershed models can be minimized while maintaining fit between model outputs and field observations. In recognition of the fact that parameter nonuniqueness and predictive uncertainty are inherent to the modeling process, PEST's nonlinear predictive analysis functionality is then used to explore the extent of model predictive uncertainty.
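The regularization idea, calibrating neighbouring watershed models simultaneously while penalizing differences between their parameter sets, can be sketched with two toy linear models solved as one stacked least-squares problem. This is a generic Tikhonov-style illustration with made-up data, not PEST's actual regularization machinery:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 40, 3
X1 = rng.standard_normal((n, m))       # "model" of watershed 1
X2 = rng.standard_normal((n, m))       # "model" of watershed 2
p_true = np.array([1.0, -0.5, 2.0])    # neighbouring watersheds share
y1 = X1 @ p_true + 0.1 * rng.standard_normal(n)          # nearly the
y2 = X2 @ (p_true + 0.05) + 0.1 * rng.standard_normal(n)  # same params

def calibrate(lam):
    """Joint calibration: fit both models, penalising p1 - p2.

    Stacked system:  [X1  0 ; 0  X2 ; sqrt(lam) I  -sqrt(lam) I]
    """
    Z = np.zeros((n, m))
    top = np.block([[X1, Z], [Z, X2]])
    reg = np.sqrt(lam) * np.hstack([np.eye(m), -np.eye(m)])
    A = np.vstack([top, reg])
    b = np.concatenate([y1, y2, np.zeros(m)])
    p = np.linalg.lstsq(A, b, rcond=None)[0]
    return p[:m], p[m:]

p1a, p2a = calibrate(0.0)      # independent calibration of each watershed
p1b, p2b = calibrate(100.0)    # parameter differences penalised
```

With the penalty active, the two parameter sets are pulled together while each model still fits its own observations acceptably, which is the behaviour the abstract describes.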
Abstract:
Experimental scratch resistance testing provides two numbers: the penetration depth Rp and the healing depth Rh. In molecular dynamics computer simulations, we create a material consisting of N statistical chain segments by polymerization; a reinforcing phase can be included. Then we simulate the movement of an indenter and response of the segments during X time steps. Each segment at each time step has three Cartesian coordinates of position and three of momentum. We describe methods of visualization of results based on a record of 6NX coordinates. We obtain a continuous dependence on time t of positions of each of the segments on the path of the indenter. Scratch resistance at a given location can be connected to spatial structures of individual polymeric chains.
Abstract:
A numeric model has been proposed to investigate the mechanical and electrical properties of a polymeric/carbon nanotube (CNT) composite material subjected to a deformation force. The reinforcing phase affects the behavior of the polymeric matrix and depends on the nanofiber aspect ratio and preferential orientation. The simulations show that the mechanical behavior of a computer generated material (CGM) depends on fiber length and initial orientation in the polymeric matrix. It is also shown how the conductivity of the polymer/CNT composite can be calculated for each time step of applied stress, effectively providing the ability to simulate and predict strain-dependent electrical behavior of CNT nanocomposites.
Abstract:
Many organisations need to extract useful information from huge amounts of movement data. One example is found in maritime transportation, where the automated identification of a diverse range of traffic routes is a key management issue for improving the maintenance of ports and ocean routes, and accelerating ship traffic. This paper addresses, as a first stage, the research challenge of developing an approach for the automated identification of traffic routes based on clustering motion vectors rather than reconstructed trajectories. The immediate benefit of the proposed approach is to avoid the reconstruction of trajectories in terms of the geometric shape of the path, the position in space, the life span, and changes of speed, direction and other attributes over time. For clustering the moving objects, an adapted version of the Shared Nearest Neighbour algorithm is used. The motion vectors, each with a position and a direction, are analysed in order to identify clusters of vectors moving in the same direction. These clusters represent traffic routes, and the preliminary results are promising for the automated identification of traffic routes with different shapes and densities, as well as for handling noisy data.
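A simplified version of the idea can be sketched as Jarvis-Patrick-style shared-nearest-neighbour clustering over motion vectors, where the distance mixes spatial separation with heading difference so that traffic moving in opposite directions along the same corridor separates into distinct routes. This is a generic sketch with synthetic data, not the paper's adapted SNN algorithm; all parameters are illustrative:

```python
import numpy as np

def snn_clusters(P, D, k=8, tau=4, w_dir=2.0):
    """Shared-nearest-neighbour clustering of motion vectors.

    P: (n, 2) positions; D: (n, 2) unit direction vectors.  The distance
    combines normalised spatial separation with heading difference.
    """
    n = len(P)
    pos = np.linalg.norm(P[:, None] - P[None], axis=2)
    ang = np.linalg.norm(D[:, None] - D[None], axis=2)    # in [0, 2]
    dist = pos / pos.max() + w_dir * ang / 2.0
    order = np.argsort(dist, axis=1)[:, 1:k + 1]          # k NN, self excluded
    nn = [set(row) for row in order]

    parent = list(range(n))                               # union-find
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in nn[i]:
            # link i-j when each is in the other's k-NN list and they
            # share at least tau nearest neighbours
            if i in nn[j] and len(nn[i] & nn[j]) >= tau:
                parent[find(i)] = find(j)
    return np.array([find(i) for i in range(n)])

# two traffic routes using the same corridor in opposite directions
rng = np.random.default_rng(0)
P = np.column_stack([rng.uniform(0, 10, 60), rng.normal(0, 0.2, 60)])
D = np.repeat([[1.0, 0.0], [-1.0, 0.0]], 30, axis=0)
labels = snn_clusters(P, D)
```

Because the heading term dominates for opposite directions, no cluster mixes the two flows even though they occupy the same corridor.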
Abstract:
Within the development of motor vehicles, crash safety (e.g. occupant protection, pedestrian protection, low speed damageability) is one of the most important attributes. In order to fulfill the increased requirements in the framework of shorter cycle times and rising pressure to reduce costs, car manufacturers keep intensifying the use of virtual development tools such as those in the domain of Computer Aided Engineering (CAE). For crash simulations, the explicit finite element method (FEM) is applied. The accuracy of the simulation process is highly dependent on the accuracy of the simulation model, including the midplane mesh. One of the roughest approximations typically made concerns the actual part thickness which, in reality, can vary locally. However, a constant thickness value is almost always defined throughout the entire part for reasons of complexity. On the other hand, for precise fracture analysis within FEM, correct thickness consideration is one key enabler. Thus, availability of per-element thickness information, which does not exist explicitly in the FEM model, can significantly contribute to an improved crash simulation quality, especially regarding fracture prediction. Even though the thickness is not explicitly available from the FEM model, it can be inferred from the original CAD geometric model through geometric calculations. This paper proposes and compares two thickness estimation algorithms based on ray tracing and nearest neighbour 3D range searches. A systematic quantitative analysis of the accuracy of both algorithms is presented, as well as a thorough identification of particular geometric arrangements under which their accuracy can be compared. These results enable the identification of each technique's weaknesses and hint towards a new, integrated approach to the problem that linearly combines the estimates produced by each algorithm.
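The ray-tracing strategy (the first of the two estimators compared) reduces to a standard operation: from a surface point, cast a ray along the inward normal and take the distance to the nearest intersection with the opposite surface as the local thickness. A minimal sketch using the Möller-Trumbore ray/triangle test on a toy 2 mm slab follows; the geometry and function names are illustrative, not the paper's implementation:

```python
import numpy as np

def ray_triangle(o, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection; distance t or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1 @ p
    if abs(det) < eps:          # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    s = o - v0
    u = (s @ p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = (d @ q) * inv
    if v < 0 or u + v > 1:
        return None
    t = (e2 @ q) * inv
    return t if t > eps else None

def thickness_by_ray(o, normal, tris):
    """Thickness at surface point o: nearest hit along the inward normal."""
    hits = [ray_triangle(o, -normal, *tri) for tri in tris]
    hits = [h for h in hits if h is not None]
    return min(hits) if hits else None

# toy part: the bottom face of a 2 mm slab, triangulated at z = 0
tris = [
    (np.array([-5., -5., 0.]), np.array([5., -5., 0.]), np.array([5., 5., 0.])),
    (np.array([-5., -5., 0.]), np.array([5., 5., 0.]), np.array([-5., 5., 0.])),
]
t_mid = thickness_by_ray(np.array([0., 0., 2.]), np.array([0., 0., 1.]), tris)
```

The nearest-neighbour variant discussed in the abstract instead queries a spatial index of the opposite surface for the closest point, which behaves differently near fillets and junctions, hence the proposed linear combination of the two estimates.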
Abstract:
Pectus excavatum is the most common congenital deformity of the anterior chest wall, in which an abnormal formation of the rib cage gives the chest a caved-in or sunken appearance. Today, the surgical correction of this deformity is carried out in children and adults through the Nuss technique, which consists of the placement of a prosthetic bar under the sternum and over the ribs. Although this technique has been shown to be safe and reliable, not all patients have achieved an adequate cosmetic outcome. This often leads to psychological problems and social stress, before and after the surgical correction. This paper targets this particular problem by presenting a method to predict the patient's surgical outcome based on pre-surgical imaging information and dynamic modelling of the chest skin. The proposed approach uses the patient's pre-surgical thoracic CT scan and anatomical-surgical references to perform a 3D segmentation of the left ribs, right ribs, sternum and skin. The technique encompasses three steps: a) approximation of the cartilages between the ribs and the sternum through B-spline interpolation; b) a volumetric mass-spring model that connects two layers, an inner skin layer based on the outer pleura contour and the outer skin surface; and c) displacement of the sternum according to the prosthetic bar position. A dynamic model of the skin around the chest wall region was generated, capable of simulating the effect of the movement of the prosthetic bar along the sternum. The results were compared and validated against the patient's post-surgical skin surface acquired with the Polhemus FastSCAN system.
Abstract:
In this paper, we present a method for estimating the local thickness distribution in finite element models, applied to injection molded and cast engineering parts. This method features considerably improved performance compared to two previously proposed approaches, and has been validated against thickness measured by different human operators. We also demonstrate that using this method to assign a distribution of local thickness in FEM crash simulations results in a much more accurate prediction of the real part performance, thus increasing the benefits of computer simulations in engineering design by enabling zero-prototyping and thereby reducing product development costs. The simulation results have been compared to experimental tests, evidencing the advantage of the proposed method. Thus, the proposed approach of considering the local thickness distribution in FEM crash simulations has high potential in the product development process of complex and highly demanding injection molded and cast parts, and is currently being used by Ford Motor Company.