946 results for Over-dispersion, Crash prediction, Bayesian method, Intersection safety
Abstract:
A new methodology for wind field simulation or forecasting over complex terrain is introduced. The idea is to use wind measurements or predictions from the HARMONIE mesoscale model as input data for an adaptive finite element mass-consistent wind model. The method has recently been implemented in the freely available Wind3D code. A description of the HARMONIE non-hydrostatic dynamics can be found in. HARMONIE provides wind predictions with a maximum resolution of about 1 km, which are refined by the finite element model at local scale (a few metres). An interface between the two models is implemented such that the initial wind field approximation is obtained by a suitable interpolation of the HARMONIE results…
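As a rough illustration of the interface idea described above, the sketch below interpolates coarse mesoscale wind components onto finer local nodes to build an initial wind field. The grid spacing, node coordinates and random fields are assumptions for demonstration only, not the Wind3D/HARMONIE implementation.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

rng = np.random.default_rng(0)
# Coarse mesoscale grid (~1 km spacing) with horizontal wind components (m/s).
x_coarse = np.arange(0.0, 20001.0, 1000.0)
y_coarse = np.arange(0.0, 20001.0, 1000.0)
u_coarse = rng.normal(5.0, 1.0, (x_coarse.size, y_coarse.size))
v_coarse = rng.normal(1.0, 0.5, (x_coarse.size, y_coarse.size))

interp_u = RegularGridInterpolator((x_coarse, y_coarse), u_coarse)
interp_v = RegularGridInterpolator((x_coarse, y_coarse), v_coarse)

# Fine-scale finite element nodes (metre-scale spacing) needing an initial wind value.
nodes = np.column_stack([rng.uniform(0.0, 20000.0, 500),
                         rng.uniform(0.0, 20000.0, 500)])
u0, v0 = interp_u(nodes), interp_v(nodes)
print(u0[:3], v0[:3])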
Abstract:
In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies that avoid the bias carried by aggregated analyses. Starting from collecting disease counts and calculating expected disease counts by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method focused on multiple testing control, without however leaving the preliminary-study perspective that an analysis of SMR indicators is asked to serve. We implement control of the FDR, a quantity widely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value based methods. Another peculiarity of the present work is to propose a hierarchical fully Bayesian model for FDR estimation in testing many null hypotheses of absence of risk. We will use concepts of Bayesian models for disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood typical of a hierarchical Bayesian model has the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just by means of the single observation. This improves test power in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model aims to estimate the FDR by means of the MCMC-estimated posterior probabilities $b_i$ of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data ($\widehat{\mathrm{FDR}}$) can be calculated for any set of $b_i$'s corresponding to areas declared at high risk (where the null hypothesis is rejected) by averaging the $b_i$'s themselves. The $\widehat{\mathrm{FDR}}$ can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the $\widehat{\mathrm{FDR}}$ does not exceed a prefixed value; we call these $\widehat{\mathrm{FDR}}$-based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model retains the interesting feature of providing an estimate of the relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true and the risk level in the latter areas. In summarizing the simulation results we always consider FDR estimation in sets formed by all $b_i$'s below a threshold t. We show graphs of $\widehat{\mathrm{FDR}}$ and of the true FDR (known by simulation) plotted against the threshold t to assess FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (through the closeness between $\widehat{\mathrm{FDR}}$ and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against $\widehat{\mathrm{FDR}}$ we can check the sensitivity and specificity of the corresponding $\widehat{\mathrm{FDR}}$-based decision rules. To investigate the over-smoothing of relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of $\widehat{\mathrm{FDR}}$-based decision rules is generally low, but specificity is high. In these scenarios the use of a selection rule based on $\widehat{\mathrm{FDR}} = 0.05$ or $\widehat{\mathrm{FDR}} = 0.10$ can be suggested. In cases where the number of true alternative hypotheses (number of true high-risk areas) is small, FDR values of 0.15 are also well estimated, and an $\widehat{\mathrm{FDR}} = 0.15$-based decision rule gains power while maintaining high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of an $\widehat{\mathrm{FDR}} = 0.05$-based decision rule. In such scenarios decision rules based on $\widehat{\mathrm{FDR}} = 0.05$ or, even worse, $\widehat{\mathrm{FDR}} = 0.1$ cannot be suggested because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
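To make the selection rule concrete, here is a minimal Python sketch (with invented posterior null probabilities $b_i$) of how the estimated FDR is obtained by averaging the $b_i$'s of the selected areas, and how the largest set whose estimated FDR does not exceed a prefixed value is chosen. It assumes the $b_i$'s have already been produced by the MCMC fit.

import numpy as np

def fdr_hat(b, selected):
    # Estimated FDR of a selected set: the average posterior null probability.
    return float(np.mean(b[selected])) if len(selected) else float("nan")

def select_areas(b, target=0.05):
    # Select as many areas as possible while the running FDR estimate stays <= target.
    order = np.argsort(b)                              # most promising areas first
    running = np.cumsum(b[order]) / np.arange(1, b.size + 1)
    k = int(np.sum(running <= target))                 # running mean is non-decreasing
    return order[:k]

b = np.array([0.01, 0.02, 0.40, 0.03, 0.80, 0.05])     # illustrative posterior null probabilities
selected = select_areas(b, target=0.05)
print(selected, fdr_hat(b, selected))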
Abstract:
Background and aims: Sorafenib is the reference therapy for advanced hepatocellular carcinoma (HCC). No method exists to predict an individual's subsequent response in the very early period. Starting from the clinical experience in humans that subcutaneous metastases may rapidly change consistency under sorafenib, and that elastosonography, a new ultrasound-based technique, allows assessment of tissue stiffness, we investigated the role of elastosonography in the very early prediction of tumor response to sorafenib in an HCC animal model. Methods: Subcutaneous xenografting of HCC (Huh7 cells) in mice was used. Mice were randomized to vehicle or treatment with sorafenib when tumor size was 5-10 mm. Elastosonography (Mylab 70XVG, Esaote, Genova, Italy) of the whole tumor mass on a sagittal plane with a 10 MHz linear transducer was performed at different time points from treatment start (day 0, +2, +4, +7 and +14) until mice were sacrificed (day +14), with the operator blind to treatment. In order to overcome variability in absolute elasticity measurements when assessing changes over time, values were expressed in arbitrary units as the relative stiffness of the tumor tissue compared with the stiffness of a standard reference stand-off pad lying on the skin over the tumor. Results: Sorafenib-treated mice showed a smaller tumor size increase at day +14 than vehicle-treated mice (tumor volume increase +192.76% vs +747.56%, p=0.06). Among sorafenib-treated tumors, 6 mice showed a better response to treatment than the other 4 (increase in volume +177% vs +553%, p=0.011). At day +2, median tumor elasticity increased in the sorafenib-treated group (+6.69%, range -30.17% to +58.51%), while it decreased in the vehicle group (-3.19%, range -53.32% to +37.94%), leading to a significant difference in absolute values (p=0.034). From this time point onward, elasticity decreased in both groups at similar speed over time, no longer being statistically different. In sorafenib-treated mice, all 6 best responders at day 14 showed an increase in elasticity at day +2 (ranging from +3.30% to +58.51%) compared with baseline, whereas 3 of the 4 poorer responders showed a decrease. Interestingly, these 3 tumours showed elasticity values higher than responder tumours at day 0. Conclusions: Elastosonography appears to be a promising non-invasive technique for the early prediction of HCC tumor response to sorafenib. Indeed, we proved that responder tumours are characterized by an early increase in elasticity. The possibility of distinguishing a priori between responders and non-responders based on the higher elasticity of the latter needs to be validated in ad-hoc experiments, and confirmation of our results in humans is warranted.
Abstract:
A highly dangerous situation for the tractor driver is lateral rollover in operating conditions. Several accidents involving tractor rollover have indeed occurred, requiring the design of a robust Roll-Over Protective Structure (ROPS). The aim of the thesis was to evaluate tractor behaviour in the rollover phase so as to calculate the energy absorbed by the ROPS to ensure driver safety. A mathematical model representing the behaviour of a generic tractor during a lateral rollover is proposed, with the possibility of modifying the geometry, the inertia of the tractor and the environmental boundary conditions. The purpose is to define a method allowing the prediction of the elasto-plastic behaviour of the successive impacts occurring in the rollover phase. A tyre impact model capable of analysing the influence of the wheels on the energy to be absorbed by the ROPS has also been developed. Different tractor design parameters affecting rollover behaviour, such as mass and dimensions, have been considered, allowing the evaluation of their influence on the amount of energy to be absorbed by the ROPS. The mathematical model was designed and calibrated against the results of actual lateral upset tests carried out on a narrow-track tractor. The dynamic behaviour of the tractor and the energy absorbed by the ROPS obtained from the actual tests were shown to match the results of the model developed. The proposed approach represents a valuable tool for understanding the dynamics (kinetic energy) and kinematics (position, velocity, angular velocity, etc.) of the tractor in the phases of lateral rollover and the factors mainly affecting the event. The amount of energy to be absorbed in some accident cases can be predicted with good accuracy. The model can therefore help in designing protective structures or active safety devices.
Abstract:
The instability of river banks can result in considerable human and land losses. The Po River, the most important river in Italy, is characterized by main banks of significant and constantly increasing height. This study presents multilayer perceptron artificial neural network (ANN) models for the stability analysis of river banks along the Po River under various river and groundwater boundary conditions. To this end, a number of networks of threshold logic units are tested using different combinations of the input parameters. The factor of safety (FS), as an index of slope stability, is formulated in terms of several influencing geometrical and geotechnical parameters. In order to obtain a comprehensive geotechnical database, several cone penetration tests from the study site have been interpreted. The proposed models are developed from stability analyses carried out with a finite element code on different representative sections of the river embankments. For validation, the ANN models are employed to predict the FS values of a part of the database beyond the calibration data domain. The results indicate that the proposed ANN models are effective tools for evaluating slope stability, and they notably outperform the derived multiple linear regression models.
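A minimal sketch of the kind of model described above: a multilayer perceptron regressor mapping a few geometrical and geotechnical inputs to a factor of safety. The feature set, synthetic data and network size are assumptions for illustration, not the configuration or data used in the study.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical inputs: bank height, slope angle, friction angle, cohesion,
# river level and groundwater level (units and ranges are illustrative).
X = rng.uniform(low=[5, 20, 25, 5, 0, 0], high=[25, 45, 40, 30, 10, 10], size=(300, 6))
# Synthetic factor of safety: loosely increasing with strength, decreasing with slope.
fs = 1.0 + 0.02 * X[:, 2] + 0.01 * X[:, 3] - 0.015 * X[:, 1] + rng.normal(0, 0.05, 300)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
model.fit(X[:250], fs[:250])
print("held-out R^2:", round(model.score(X[250:], fs[250:]), 3))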
Abstract:
Forest models are tools for explaining and predicting the dynamics of forest ecosystems. They simulate forest behavior by integrating information on the underlying processes in trees, soil and atmosphere. Bayesian calibration is the application of probability theory to parameter estimation. It is a method, applicable to all models, that quantifies output uncertainty and identifies key parameters and variables. This study aims at testing the Bayesian calibration procedure on different types of forest models, to evaluate their performances and the uncertainties associated with them. In particular, we aimed at 1) applying a Bayesian framework to calibrate forest models and test their performances in different biomes and different environmental conditions, 2) identifying and solving structure-related issues in simple models, and 3) identifying the advantages of the additional information made available when calibrating forest models with a Bayesian approach. In Chapter 2 we applied the Bayesian framework to calibrate the Prelued model on eight Italian eddy-covariance sites. The ability of Prelued to reproduce the estimated Gross Primary Productivity (GPP) was tested over contrasting natural vegetation types that represented a wide range of climatic and environmental conditions. The issues related to Prelued's multiplicative structure were the main topic of Chapter 3: several MCMC-based procedures were applied within a Bayesian framework to calibrate the model, and their performances were compared. A more complex model was applied in Chapter 4, focusing on the application of the physiology-based model HYDRALL to the forest ecosystem of Lavarone (IT) to evaluate the importance of additional information in the calibration procedure and its impact on model performance, model uncertainty, and parameter estimation. Overall, the Bayesian technique proved to be an excellent and versatile tool for successfully calibrating forest models of different structure and complexity, on different kinds and numbers of variables and with different numbers of parameters involved.
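For readers unfamiliar with the procedure, the following sketch shows Bayesian calibration in miniature: a random-walk Metropolis sampler calibrating a single parameter of a toy light-use-efficiency model against synthetic GPP data. The model, prior and data are placeholders; the Prelued and HYDRALL calibrations involve many more parameters and richer likelihoods.

import numpy as np

rng = np.random.default_rng(0)
par = rng.uniform(5.0, 30.0, 100)                    # driver variable (e.g. absorbed PAR)
gpp_obs = 1.8 * par + rng.normal(0.0, 3.0, 100)      # synthetic GPP "observations"

def log_posterior(eps, sigma_obs=3.0):
    # Gaussian likelihood of the GPP residuals plus a weak normal prior on eps.
    if eps <= 0.0:
        return -np.inf
    resid = gpp_obs - eps * par
    log_lik = -0.5 * np.sum((resid / sigma_obs) ** 2)
    log_prior = -0.5 * ((eps - 2.0) / 1.0) ** 2
    return log_lik + log_prior

eps, chain = 1.0, []
for _ in range(20000):                               # random-walk Metropolis
    proposal = eps + rng.normal(0.0, 0.05)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(eps):
        eps = proposal
    chain.append(eps)

print("posterior mean of eps:", round(float(np.mean(chain[5000:])), 3))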
Abstract:
BACKGROUND: Periodontitis has been associated with cardiovascular disease. We assessed whether the recurrence of acute coronary syndrome (ACS) can be predicted by preceding medical and periodontal conditions. METHODS: A total of 165 consecutive subjects with ACS and 159 medically healthy, matched control subjects were examined and followed for 3 years. Periodontitis was defined by alveolar bone loss. Subgingival microbial samples were studied by the checkerboard DNA-DNA hybridization method. RESULTS: Recurrence of ACS was found in 66 of 165 (40.0%) subjects, and a first ACS event occurred in seven of 159 (4.4%) of the baseline control subjects. Subjects who later had a second ACS event were older (P <0.001). Significantly higher serum levels of high-density lipoprotein (P <0.05) and creatinine (P <0.01), and higher white blood cell (WBC) counts (P <0.001), were found in subjects with future ACS. Periodontitis was associated with a first event of ACS (crude odds ratio [OR]: 10.3:1; 95% confidence interval [CI]: 6.1 to 17.4; P <0.001) and with the recurrence of ACS (crude OR: 3.6:1; 95% CI: 2.0 to 6.6; P <0.001). General linear modeling multivariate analysis, controlling for age, identified WBC counts (F = 20.6; P <0.001), periodontitis (F = 17.6; P <0.001), and serum creatinine levels (F = 4.5; P <0.05) as explanatory of a future ACS event. CONCLUSIONS: The results of this study indicate that recurrent ACS events are predicted by serum WBC counts, serum creatinine levels, and a diagnosis of periodontitis. Significantly higher counts of putative pathogens are found in subjects with ACS, but these counts do not predict future ACS events.
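As a reminder of how crude odds ratios such as those quoted above are computed, the snippet below derives an odds ratio and its 95% confidence interval from a 2x2 table; the counts are invented for illustration and are not the study data.

import math

# 2x2 table (invented counts): exposure = periodontitis, outcome = ACS event.
a, b, c, d = 40, 20, 30, 60      # exposed cases, exposed controls, unexposed cases, unexposed controls
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
low, high = (math.exp(math.log(odds_ratio) + z * se_log_or) for z in (-1.96, 1.96))
print(f"crude OR = {odds_ratio:.1f}, 95% CI {low:.1f} to {high:.1f}")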
Abstract:
The occupant impact velocity (OIV) and acceleration severity index (ASI) are competing measures of crash severity used to assess occupant injury risk in full-scale crash tests involving roadside safety hardware, e.g. guardrail. Delta-V, or the maximum change in vehicle velocity, is the traditional metric of crash severity for real world crashes. This study compares the ability of the OIV, ASI, and delta-V to discriminate between serious and non-serious occupant injury in real world frontal collisions. Vehicle kinematics data from event data recorders (EDRs) were matched with detailed occupant injury information for 180 real world crashes. Cumulative probability of injury risk curves were generated using binary logistic regression for belted and unbelted data subsets. By comparing the available fit statistics and performing a separate ROC curve analysis, the more computationally intensive OIV and ASI were found to offer no significant predictive advantage over the simpler delta-V.
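The comparison described above can be sketched as follows: an injury-risk logistic curve is fitted for each severity metric and discrimination is compared via ROC AUC. The arrays below are synthetic stand-ins, not the matched EDR/injury dataset.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 180
delta_v = rng.uniform(5, 60, n)                        # km/h, synthetic
oiv = delta_v * rng.normal(0.9, 0.1, n)                # correlated surrogate metric
serious = (rng.uniform(size=n) < 1 / (1 + np.exp(-(delta_v - 35) / 6))).astype(int)

for name, metric in [("delta-V", delta_v), ("OIV-like", oiv)]:
    clf = LogisticRegression(max_iter=1000).fit(metric.reshape(-1, 1), serious)
    p = clf.predict_proba(metric.reshape(-1, 1))[:, 1]  # cumulative probability of serious injury
    print(name, "AUC =", round(roc_auc_score(serious, p), 3))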
Abstract:
Previous research conducted in the late 1980’s suggested that vehicle impacts following an initial barrier collision increase severe occupant injury risk. Now over twenty-five years old, the data used in the previous research is no longer representative of the currently installed barriers or US vehicle fleet. The purpose of this study is to provide a present-day assessment of secondary collisions and to determine if full-scale barrier crash testing criteria provide an indication of secondary collision risk for real-world barrier crashes. The analysis included 1,383 (596,331 weighted) real-world barrier midsection impacts selected from thirteen years (1997-2009) of in-depth crash data available through the National Automotive Sampling System (NASS) / Crashworthiness Data System (CDS). For each suitable case, the scene diagram and available scene photographs were used to determine roadside and barrier specific variables not available in NASS/CDS. Binary logistic regression models were developed for second event occurrence and resulting driver injury. Barrier lateral stiffness, post-impact vehicle trajectory, vehicle type, and pre-impact tracking conditions were found to be statistically significant contributors toward secondary event occurrence. The presence of a second event was found to increase the likelihood of a serious driver injury by a factor of seven compared to cases with no second event present. Twenty-four full-scale crash test reports were obtained for common non-proprietary US barriers, and the risk of secondary collisions was determined using recommended evaluation criteria from NCHRP Report 350. It was found that the NCHRP Report 350 exit angle criterion alone was not sufficient to predict second collision occurrence for real-world barrier crashes.
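A hedged sketch of a weighted binary logistic model of the kind used for second-event occurrence, with case weights standing in for NASS/CDS sampling weights; the predictors, weights and data are invented placeholders, not the study variables or estimates.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1383
rigid_barrier = rng.integers(0, 2, n)          # placeholder: 1 = stiffer barrier
non_tracking = rng.integers(0, 2, n)           # placeholder: pre-impact tracking condition
weights = rng.integers(50, 1000, n)            # stand-ins for case weights
p = 1.0 / (1.0 + np.exp(-(-1.5 + 0.8 * rigid_barrier + 0.5 * non_tracking)))
second_event = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([rigid_barrier, non_tracking]))
model = sm.GLM(second_event, X, family=sm.families.Binomial(), freq_weights=weights)
result = model.fit()
print(np.exp(result.params))                   # odds ratios for the invented predictors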
Abstract:
Genome predictions based on selected genes would be a very welcome approach for taxonomic studies, including DNA-DNA similarity, G+C content and representative phylogeny of bacteria. At present, DNA-DNA hybridizations are still considered the gold standard in species descriptions. However, this method is time-consuming and troublesome, and datasets can vary significantly between experiments as well as between laboratories. For the same reasons, full matrix hybridizations are rarely performed, weakening the significance of the results obtained. The authors established a universal sequencing approach for the three genes recN, rpoA and thdF for the Pasteurellaceae, and determined if the sequences could be used for predicting DNA-DNA relatedness within the family. The sequence-based similarity values calculated using a previously published formula proved most useful for species and genus separation, indicating that this method provides better resolution and no experimental variation compared to hybridization. By this method, cross-comparisons within the family over species and genus borders easily become possible. The three genes also serve as an indicator of the genome G+C content of a species. A mean divergence of around 1 % was observed from the classical method, which in itself has poor reproducibility. Finally, the three genes can be used alone or in combination with already-established 16S rRNA, rpoB and infB gene-sequencing strategies in a multisequence-based phylogeny for the family Pasteurellaceae. It is proposed to use the three sequences as a taxonomic tool, replacing DNA-DNA hybridization.
Abstract:
In a 5-year study involving 119 postmenopausal women, zoledronic acid 4 mg given once yearly for 2, 3 or 5 years was well tolerated, with no evidence of excessive bone turnover reduction or any safety signals. BMD increased significantly. Bone turnover markers decreased from baseline and were maintained within premenopausal reference ranges. INTRODUCTION: After completion of the core study, two consecutive 2-year, open-label extensions investigated the efficacy and safety of zoledronic acid 4 mg over 5 years in postmenopausal osteoporosis. METHODS: In the core study, patients received 1 to 4 mg zoledronic acid or placebo. In the first extension, most patients received 4 mg per year; patients then entered the second extension and received 4 mg per year or calcium only. Patients were divided into three subgroups according to the years of active treatment received (2, 3 or 5 years). Changes in BMD and bone turnover markers (bone ALP and CTX-I) were assessed. RESULTS: All subgroups showed substantial increases in BMD and decreases in bone markers. By the end of the core study, 37.5% of patients showed a suboptimal reduction (< 30%) of bone ALP levels. After subsequent study drug administration during the extensions, there was no evidence of progressive reduction of bone turnover markers. Furthermore, increased marker levels after treatment discontinuation demonstrate preservation of bone remodelling capacity. CONCLUSIONS: This study showed that zoledronic acid 4 mg once yearly was well tolerated and effective in reducing biomarkers over 5 years. Detailed analysis of bone marker changes, however, suggests that this drug regimen causes insufficient reduction of remodelling activity in one third of patients.
Abstract:
The Pacaya volcanic complex is part of the Central American volcanic arc, which is associated with the subduction of the Cocos tectonic plate under the Caribbean plate. Located 30 km south of Guatemala City, Pacaya sits on the southern rim of the Amatitlan Caldera. It is the largest post-caldera volcano and has been one of Central America's most active volcanoes over the last 500 years. Between 400 and 2000 years B.P., the Pacaya volcano experienced a huge collapse, which resulted in the formation of a horseshoe-shaped scarp that is still visible. In recent years, several smaller collapses (in 1961 and 2010) have been associated with the activity of the volcano, affecting its northwestern flank; these are likely induced by local and regional stress changes. The similar orientation of dry and volcanic fissures and the distribution of new vents would likely explain the reactivation of the pre-existing stress configuration responsible for the old collapse. This paper presents the first stability analysis of the Pacaya volcanic flank. The inputs for the geological and geotechnical models were defined based on stratigraphical, lithological and structural data and on material properties obtained from field surveys and laboratory tests. According to their mechanical characteristics, three lithotechnical units were defined: Lava, Lava-Breccia and Breccia-Lava. The Hoek-Brown failure criterion was applied to each lithotechnical unit, and the rock mass friction angle, apparent cohesion, and strength and deformation characteristics were computed in a specified stress range. The stability of the volcano was then evaluated by two-dimensional analyses performed with the Limit Equilibrium Method (LEM, ROCSCIENCE) and the Finite Element Method (FEM, PHASE 2 7.0). The stability analysis mainly focused on the modern Pacaya volcano built inside the collapse amphitheatre of "Old Pacaya". The volcanic instability was assessed based on the variability of the safety factor using deterministic, sensitivity and probabilistic analyses, considering gravitational instability and the effects of external forces such as magma pressure and seismicity as potential triggering mechanisms of lateral collapse. The preliminary results from the analysis provide two insights: first, the least stable sector is on the south-western flank of the volcano; second, the lowest safety factor value suggests that the edifice is stable under gravity alone, and that external triggering mechanisms can represent likely destabilizing factors.
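For reference, the generalized Hoek-Brown criterion applied to each lithotechnical unit can be written down in a few lines; the GSI, intact strength and m_i values below are placeholders, not the Pacaya laboratory results.

import numpy as np

def hoek_brown_sigma1(sigma3, sigma_ci, gsi, m_i, D=0.0):
    # Major principal stress at failure (MPa) for a given confinement sigma3 (MPa),
    # using the generalized Hoek-Brown parameters m_b, s and a.
    m_b = m_i * np.exp((gsi - 100.0) / (28.0 - 14.0 * D))
    s = np.exp((gsi - 100.0) / (9.0 - 3.0 * D))
    a = 0.5 + (np.exp(-gsi / 15.0) - np.exp(-20.0 / 3.0)) / 6.0
    return sigma3 + sigma_ci * (m_b * sigma3 / sigma_ci + s) ** a

sigma3 = np.linspace(0.0, 2.0, 5)             # MPa, an illustrative stress range
print(hoek_brown_sigma1(sigma3, sigma_ci=50.0, gsi=55, m_i=19))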
Abstract:
This study examines passenger air bag (PAB) performance in a fixed vehicle environment using Partial Low Risk Deployment (PLRD) as a strategy. The development follows test methods against actual baseline vehicle data and Federal Motor Vehicle Safety Standard 208 (FMVSS 208). FMVSS 208 states that PAB compliance in vehicle crash testing can be met using one of three deployment methods. The primary method suppresses PAB deployment, with the use of a seat weight sensor or occupant classification sensor (OCS), for three-year-old and six-year-old occupants, including the presence of a child seat. A second method, PLRD, allows deployment for all occupant sizes, suppressing only for the presence of a child seat. A third method, Low Risk Deployment (LRD), allows PAB deployment in all conditions, for all statures, including any and all child seats. This study outlines a PLRD development solution for achieving FMVSS 208 performance. The results should provide an option for system implementation, including opportunities for system efficiency and other considerations. The objective is to achieve performance levels similar to, or incrementally better than, the baseline vehicle's New Car Assessment Program (NCAP) star rating, and in addition to define systemic flexibility whereby restraint features can be added or removed while improving the consistency of occupant performance relative to the baseline. A certified vehicle's air bag system will typically remain in production until the vehicle platform is redesigned. The strategy to test the PLRD hypothesis was, first, to match the baseline out-of-position occupant performance (OOP) for the three- and six-year-old requirements; second, to improve the 35 mph belted 5th percentile female NCAP star rating over the baseline vehicle; and third, to establish an equivalent FMVSS 208 certification for the 25 mph unbelted 50th percentile male. The FMVSS 208 high-speed requirement defines the federal minimum crash performance required for meeting frontal vehicle crash-test compliance. The intent of the NCAP 5-star rating is to provide the consumer with information about crash protection beyond what is required by federal law. In this study, two vehicle segments were used for testing to compare and contrast with their baseline vehicles' performance. Case Study 1 (CS1) used a crossover vehicle platform and Case Study 2 (CS2) used a small vehicle segment platform as their baselines. In each case study, the restraint systems came from different restraint supplier manufacturers, and each case reflected that supplier's approach to PLRD. CS1 incorporated a downsized twin-shaped bag, a carry-over inflator, standard vents, and a strategically positioned bag diffuser to help disperse the flow of gas and improve OOP performance. The twin-shaped bag, with two segregated sections (lobes), enabled high-speed baseline performance correlation on the HYGE sled. CS2 used an asymmetric (square-shaped) PAB with standard-size vents, including a passive vent, to obtain OOP performance similar to the baseline. The asymmetric bag shape also helped enable high-speed baseline performance improvements in HYGE sled testing in CS2. The anticipated CS1 baseline vehicle pulse index (VPI) target was in the range of 65-67. However, actual dynamic vehicle (barrier) testing produced the highest crash pulse of the previously tested vehicles, with a VPI of 71. The result of the 35 mph NCAP barrier test was a solid 4-star (4.7-star) rating.
In CS2, the HYGE sled development VPI range from the baseline was 61-62. The actual NCAP test produced a chest deflection of 26 mm versus the anticipated baseline target of 12 mm. This was initially attributed to the vehicle's significant VPI increase to 67. A subsequent root cause investigation confirmed a data integrity issue due to the instrumentation. In an effort to establish a true vehicle test data point, a second NCAP test was performed but faced similar instrumentation issues. As a result, the chest deflection hit the target of 12.1 mm; however, a femur load spike, similar to the baseline, now skewed the results. Given the noted improvement in chest deflection, the NCAP rating was assessed as directionally capable of 5-star performance. While the actual rating was 3-star due to instrumentation issues, data extrapolation raised the rating to 5-star. In both cases, no structural changes were made to the surrogate vehicle, and the results in each case matched their respective baseline vehicle platforms. These results proved that PLRD is viable for further development and production implementation.
Abstract:
Monte Carlo simulation was used to evaluate properties of a simple Bayesian MCMC analysis of the random effects model for single-group Cormack-Jolly-Seber capture-recapture data. The MCMC method is applied to the model via a logit link, so the parameters p and S are on a logit scale, where logit(S) is assumed to have, and is generated from, a normal distribution with mean μ and variance σ². Marginal prior distributions on logit(p) and μ were independent normal with mean zero and standard deviation 1.75 for logit(p) and 100 for μ, hence minimally informative. The marginal prior distribution on σ² was placed on τ² = 1/σ² as a gamma distribution with α = β = 0.001. The study design has 432 points spread over 5 factors: occasions (t), new releases per occasion (u), p, μ, and σ. At each design point 100 independent trials were completed (hence 43,200 trials in total), each with sample size n = 10,000 from the parameter posterior distribution. At 128 of these design points, comparisons are made to previously reported results from a method-of-moments procedure. We looked at properties of point and interval inference on μ and σ based on the posterior mean, median, and mode and the equal-tailed 95% credibility interval. Bayesian inference did very well for the parameter μ, but under the conditions used here, MCMC inference performance for σ was mixed: poor for sparse data (i.e., only 7 occasions) or σ = 0, but good when there were sufficient data and σ was not small.
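A small simulation of the hierarchical prior structure described above, purely to illustrate the random-effects parameterization (it does not implement the CJS likelihood or the sampler): logit(S) is drawn from a normal with mean μ and standard deviation σ, where μ has a Normal(0, 100²) prior and the precision 1/σ² has a Gamma(0.001, 0.001) prior.

import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)

def draw_from_prior(n_occasions=10):
    mu = rng.normal(0.0, 100.0)                     # diffuse prior on mu (sd = 100)
    tau = rng.gamma(shape=0.001, scale=1000.0)      # Gamma(0.001, 0.001) on the precision
    sigma = 1.0 / np.sqrt(max(tau, 1e-300))         # guard against underflow of this very diffuse prior
    logit_s = rng.normal(mu, sigma, size=n_occasions - 1)
    return expit(logit_s)                           # occasion-specific survival probabilities S

print(draw_from_prior())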