948 results for best estimate method


Abstract:

Over the past 7 years, the enediyne anticancer antibiotics have been widely studied due to their DNA-cleaving ability. Interest in these antibiotics, represented by kedarcidin chromophore, neocarzinostatin chromophore, calicheamicin, esperamicin A, and dynemicin A, centers on the enediyne moiety contained within each of them. In its inactive form, the moiety is benign to its environment. Upon suitable activation, the system undergoes a Bergman cycloaromatization proceeding through a 1,4-dehydrobenzene diradical intermediate. It is this diradical intermediate that is thought to cleave double-stranded DNA through hydrogen atom abstraction. Semiempirical, semiempirical CI, Hartree–Fock ab initio, and MP2 electron-correlation methods have been used to investigate the inactive hex-3-ene-1,5-diyne reactant, the 1,4-dehydrobenzene diradical, and a transition state structure of the Bergman reaction. Geometries calculated with different basis sets and by semiempirical methods have been used for single-point calculations using electron correlation methods. These results are compared with the best experimental and theoretical results reported in the literature. Implications of these results for computational studies of the enediyne anticancer antibiotics are discussed.
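
To make the workflow concrete, here is a minimal present-day sketch, assuming PySCF as the quantum chemistry package (the study does not name its software), of a Hartree–Fock single point followed by an MP2 correction on the enediyne reactant. The geometry is an idealized planar (Z)-hex-3-ene-1,5-diyne built from textbook bond lengths, not an optimized structure from the study.

```python
# Illustrative sketch only: RHF and MP2 single-point energies for the
# enediyne reactant, in the spirit of the workflow described above.
# Geometry is an idealized planar (Z)-hex-3-ene-1,5-diyne built from
# rough textbook bond lengths, NOT an optimized structure.
from pyscf import gto, scf, mp

enediyne = gto.M(
    atom="""
    C   0.000   0.000  0.0
    C   1.340   0.000  0.0
    C  -0.715   1.238  0.0
    C   2.055   1.238  0.0
    C  -1.315   2.277  0.0
    C   2.655   2.277  0.0
    H  -0.540  -0.935  0.0
    H   1.880  -0.935  0.0
    H  -1.845   3.195  0.0
    H   3.185   3.195  0.0
    """,
    basis="6-31g",   # a modest basis of the kind used in such studies
    unit="Angstrom",
)

mf = scf.RHF(enediyne).run()   # Hartree-Fock reference
emp2 = mp.MP2(mf).run()        # MP2 electron-correlation correction
print(f"E(RHF) = {mf.e_tot:.6f} Ha")
print(f"E(MP2) = {emp2.e_tot:.6f} Ha")
```

The 1,4-dehydrobenzene diradical would require an unrestricted or multireference treatment, which this closed-shell sketch does not attempt.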

Abstract:

Postmortem imaging has gained prominence in the field of forensic pathology. Even with experience in this procedure, difficulties arise in evaluating pathologies of the postmortem lung. The effect of postmortem ventilation at applied pressures of 10, 20, 30, and 40 mbar was evaluated in 10 corpses using simultaneous postmortem computed tomography (pmCT) scans. Ventilation was performed via a continuous positive airway pressure mask (n=5), an endotracheal tube (n=4), and a laryngeal mask (n=1) using a portable home care ventilator. Lung volumes were measured and evaluated by a segmentation technique based on reconstructed CT data, and the resulting changes to the lungs were analyzed. Postmortem ventilation at 40 mbar induced a significant (p<0.05) unfolding of the lungs, with a mean volume increase of 1.32 L. Small pathologies of the lung, such as scarring and pulmonary nodules, as well as emphysema were revealed, while inner livores were reduced. Even though lower ventilation pressures also resulted in a significant (p<0.05) volume increase, pathologies were best evaluated when a pressure of 40 mbar was applied, due to the greater reduction of the inner livores. With the ventilation-induced expansion of the lungs, a decrease in heart diameter and gaseous distension of the stomach were recognized. In conclusion, postmortem ventilation is a feasible method for improving evaluation of the lungs and detection of small lung pathologies, because of the volume increase in the air-filled portions of the lung and the reduced appearance of inner livores.
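
The abstract does not detail the segmentation pipeline, but the underlying volume computation is straightforward: count the voxels classified as aerated lung and multiply by the voxel volume. A minimal sketch under that assumption (simple Hounsfield-unit thresholding in NumPy; the HU window is illustrative, not taken from the study):

```python
import numpy as np

def lung_volume_liters(ct_hu: np.ndarray, voxel_mm3: float,
                       lo: float = -950.0, hi: float = -200.0) -> float:
    """Estimate aerated-lung volume from a CT volume in Hounsfield units.

    ct_hu     -- 3D array of HU values (reconstructed CT data)
    voxel_mm3 -- volume of one voxel in cubic millimetres
    lo, hi    -- illustrative HU window for air-filled lung tissue
    """
    mask = (ct_hu > lo) & (ct_hu < hi)    # crude aerated-lung mask
    return mask.sum() * voxel_mm3 * 1e-6  # mm^3 -> litres

# Example: volume change induced by ventilation at 40 mbar
# delta = lung_volume_liters(ct_40mbar, vox) - lung_volume_liters(ct_0mbar, vox)
```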

Abstract:

The objective of this study was to estimate the potential of method restriction as a public health strategy in suicide prevention. Data from the Swiss Federal Statistical Office and the Swiss Institutes of Forensic Medicine from 2004 were gathered and categorized into suicide submethods according to their accessibility to restriction of means. Of suicides in Switzerland, 39.2% are accessible to method restriction. The highest proportions were found for private weapons (13.2%), army weapons (10.4%), and jumps from hot spots (4.6%). The presented method permits the estimation of a country's suicide prevention potential through method restriction and the comparison of restriction potentials between suicide methods. In Switzerland, reduction of firearm suicides has the highest potential to reduce the total number of suicides.
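
The estimate itself is simple arithmetic: classify each suicide into a submethod and sum the proportions of the submethods judged restrictable. A toy sketch using the proportions quoted above, with the remaining restrictable submethods lumped into a single placeholder entry:

```python
# Proportions of all suicides per restrictable submethod (from the
# abstract); "other restrictable" is a placeholder for the remaining
# submethods so the total matches the reported 39.2%.
restrictable = {"private weapons": 13.2, "army weapons": 10.4,
                "jumps from hot spots": 4.6, "other restrictable": 11.0}
potential = sum(restrictable.values())
print(f"Restriction potential: {potential:.1f}% of all suicides")  # 39.2%
```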

Abstract:

OBJECTIVE: To compare four different implantation modalities for the repair of superficial osteochondral defects in a caprine model using autologous, scaffold-free, engineered cartilage constructs, and to describe the short-term outcome of successfully implanted constructs. METHODS: Scaffold-free, autologous cartilage constructs were implanted within superficial osteochondral defects created in the stifle joints of nine adult goats. The implants were distributed among four 6-mm-diameter superficial osteochondral defects created in the trochlea femoris and secured in the defect using a covering periosteal flap (PF) alone or in combination with adhesives (platelet-rich plasma (PRP) or fibrin), or using PRP alone. Eight weeks after implantation surgery, the animals were killed. The defect sites were excised and subjected to macroscopic and histopathologic analyses. RESULTS: At 8 weeks, implants that had been held in place exclusively with a PF were well integrated both laterally and basally. The repair tissue manifested an architecture similar to that of hyaline articular cartilage. However, most of the implants that had been glued in place in the absence of a PF were lost during the initial 4-week phase of restricted joint movement. The use of human fibrin glue (FG) led to massive cell infiltration of the subchondral bone. CONCLUSIONS: The implantation of autologous, scaffold-free, engineered cartilage constructs might best be performed beneath a PF without the use of tissue adhesives. Successfully implanted constructs showed hyaline-like characteristics in adult goats within 2 months. Long-term animal studies and pilot clinical trials are now needed to evaluate the efficacy of this treatment strategy.

Abstract:

Estimation of the number of mixture components (k) is an unsolved problem. Available methods for estimating k include bootstrapping the likelihood ratio test statistic and optimizing a variety of validity functionals such as AIC, BIC/MDL, and ICOMP. We investigate the minimization of the distance between the fitted mixture model and the true density as a method for estimating k. The distances considered are Kullback-Leibler (KL) and L2. We estimate these distances using cross validation. A reliable estimate of k is obtained by voting over B estimates of k corresponding to B cross validation estimates of distance. This estimation method with the KL distance is very similar to the Monte Carlo cross-validated likelihood methods discussed by Smyth (2000). With a focus on univariate normal mixtures, we present simulation studies that compare the cross-validated distance method with AIC, BIC/MDL, and ICOMP. We also apply the cross-validated distance approach, along with the AIC, BIC/MDL, and ICOMP approaches, to data from an osteoporosis drug trial in order to find groups that respond differentially to treatment.
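
A minimal sketch of the voting scheme, under stated assumptions (scikit-learn Gaussian mixtures, held-out log-likelihood as the cross-validated surrogate for the KL distance, and B random half splits; the L2 variant is omitted):

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

def estimate_k(x, k_max=6, B=20, seed=0):
    """Vote over B cross-validation splits for the k maximizing held-out
    log-likelihood (equivalent, up to a constant, to minimizing the
    cross-validated KL distance to the true density)."""
    rng = np.random.RandomState(seed)
    votes = np.zeros(k_max + 1, dtype=int)
    for b in range(B):
        train, test = train_test_split(
            x, test_size=0.5, random_state=rng.randint(1 << 30))
        scores = [GaussianMixture(n_components=k, n_init=3, random_state=b)
                  .fit(train).score(test) for k in range(1, k_max + 1)]
        votes[1 + int(np.argmax(scores))] += 1
    return int(np.argmax(votes)), votes

# Usage: x is univariate data shaped (n, 1), e.g. x = data[:, None]
# k_hat, votes = estimate_k(x)
```

Voting over the B splits, rather than averaging the distance estimates, keeps the estimate of k from being dominated by any single unlucky split.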

Abstract:

When comparing a new treatment with a control in a randomized clinical study, the treatment effect is generally assessed by evaluating a summary measure over a specific study population. The success of the trial heavily depends on the choice of such a population. In this paper, we show a systematic, effective way to identify a promising population, for which the new treatment is expected to have a desired benefit, using the data from a current study involving similar comparator treatments. Specifically, with the existing data we first create a parametric scoring system using multiple covariates to estimate subject-specific treatment differences. Using this system, we specify a desired level of treatment difference and create a subgroup of patients, defined as those whose estimated scores exceed this threshold. An empirically calibrated group-specific treatment difference curve across a range of threshold values is constructed. The population of patients with any desired level of treatment benefit can then be identified accordingly. To avoid any "self-serving" bias, we utilize a cross-training-evaluation method for implementing the above two-step procedure. Lastly, we show how to select the best scoring system among all competing models. The proposals are illustrated with data from two clinical trials, in the treatment of AIDS and of cardiovascular disease. Note that even if we are not interested in designing a new study for comparing similar treatments, the procedure can be quite useful for the management of future patients, identifying those whose expected benefit is large enough to compensate for the risk or cost of the new treatment.
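
A hedged sketch of the two-step procedure (all names are illustrative, and a simple linear model stands in for whatever parametric scoring system is chosen): a regression with treatment-by-covariate interactions is fitted, and the estimated subject-specific treatment differences are then thresholded to define the subgroup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_scoring_system(X, treat, y):
    """Step 1: parametric scoring system. Regress the outcome on
    covariates, treatment, and treatment-by-covariate interactions."""
    Z = np.hstack([X, treat[:, None], X * treat[:, None]])
    return LinearRegression().fit(Z, y)

def treatment_difference(model, X):
    """Estimated subject-specific treatment difference:
    prediction with treatment minus prediction without."""
    n = len(X)
    Z1 = np.hstack([X, np.ones((n, 1)), X])         # everyone treated
    Z0 = np.hstack([X, np.zeros((n, 1)), 0.0 * X])  # everyone control
    return model.predict(Z1) - model.predict(Z0)

def benefit_subgroup(scores, threshold):
    """Step 2: the promising population is the subgroup whose estimated
    treatment difference exceeds the chosen threshold."""
    return scores > threshold

# Cross-training-evaluation: fit the scoring system on one part of the
# data and evaluate the subgroup / threshold curve on the other part,
# which is what guards against the "self-serving" bias noted above.
```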

Abstract:

The molecular interactions between the host molecule, perthiolated beta-cyclodextrin (CD), and the guest molecules, adamantaneacetic acid (AD) and ferroceneacetic acid (FC), have been investigated theoretically in both the gas and aqueous phases. The major computations were carried out at the RHF/6-31G and B3LYP/6-31G levels of theory. MP2 electronic energies were also computed at the geometries optimized by both the RHF and B3LYP methods in the gas phase to establish a better estimate of the correlation effect. The solvent-phase computations were completed at the RHF/6-31G and B3LYP/6-31G levels using the PCM model, starting from the most stable gas-phase structures optimized by each method. A method to systematically manipulate the relative position and orientation between the interacting molecules is proposed. In the gas phase, six trials with different host-guest relative positions and orientations were completed successfully with the B3LYP method for both the CD-AD and CD-FC complexes; only four trials were completed with the RHF method. In the gas phase, the best RHF results give association Gibbs free energies (ΔG°) of -32.21 kJ/mol for CD-AD and -25.73 kJ/mol for CD-FC, while the best B3LYP results give ΔG° of -47.57 kJ/mol for CD-AD and -41.09 kJ/mol for CD-FC. The MP2 correction significantly lowers ΔG° for the geometries from both methods: for the RHF structures, MP2 lowered ΔG° to -60.64 kJ/mol for CD-AD and -54.10 kJ/mol for CD-FC; for the B3LYP structures, it was reduced to -59.87 kJ/mol for CD-AD and -54.84 kJ/mol for CD-FC. The RHF solvent-phase calculations yielded ΔG°(aq) of 107.2 kJ/mol for CD-AD and 111.4 kJ/mol for CD-FC. Compared with the RHF results, the B3LYP method provided clearly better solvent-phase results, with ΔG°(aq) of 38.64 kJ/mol for CD-AD and 39.61 kJ/mol for CD-FC. These results qualitatively explain the experimental observations; quantitatively, however, they are in poor agreement with the experimental values available in the literature and those recently published by Liu et al., and the reason is believed to be the omission of the hydrophobic contribution to the association. Determining the global geometrical minima for these very large systems was difficult and computationally time consuming, but after a very thorough search they were identified. A relevant result of this search is that when the CD-AD and CD-FC complexes are formed, the AD and FC molecules are only partially embedded inside the CD cavity; the totally embedded complexes were found to have significantly higher energies. The semiempirical method ZINDO was employed to investigate the effect of complexation on the first electronic excitation of CD anchored to a metal nanoparticle. The computational results revealed that after complexation with FC, the transition intensity declines to about 25% of its original value, and after complexation with AD, the intensity drops by almost 50%. The tighter binding and transition intensity of CD-AD qualitatively agree with the experimental observation that the addition of AD to a solution of CD and FC restores the fluorescence of CD that was quenched by the addition of FC. A method to evaluate the "hydrophobic force" effect is proposed for future work.
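
For a sense of the machinery involved, here is a minimal sketch, assuming PySCF, of a B3LYP/6-31G single point in the gas phase and in implicit solvent. A tiny stand-in molecule replaces the actual host-guest complexes (which are far too large for a short example), and PySCF's ddCOSMO continuum model stands in for the PCM model named above.

```python
# Illustrative only: B3LYP/6-31G single points in gas phase and in an
# implicit solvent, on a tiny stand-in molecule. ddCOSMO is used here
# as a stand-in for the PCM model referenced in the abstract.
from pyscf import gto, dft, solvent

mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",
            basis="6-31g")

mf = dft.RKS(mol)
mf.xc = "b3lyp"
e_gas = mf.kernel()               # gas-phase B3LYP energy

mf_sol = solvent.ddCOSMO(mf)      # wrap the SCF object with the solvent model
e_sol = mf_sol.kernel()           # solvent-phase B3LYP energy

print(f"Solvation energy estimate: {(e_sol - e_gas) * 2625.5:.2f} kJ/mol")
```

For the association free energies above, the analogous quantity would be ΔG°(complex) minus ΔG°(host) minus ΔG°(guest), each evaluated at the appropriate level in the appropriate phase.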

Abstract:

Since the introduction of the rope-pump in Nicaragua in the 1990s, the dependence on wells in rural areas has grown steadily. However, little or no attention is paid to rope-pump well performance after installation. Due to financial constraints, groundwater resource monitoring using conventional testing methods is too costly and out of reach of rural municipalities. Nonetheless, there is widespread agreement that without a way to quantify changes in well performance over time, prioritizing regulatory actions is impossible. A manual pumping test method is presented which, at a fraction of the cost of a conventional pumping test, measures the specific capacity of rope-pump wells. The method requires only slight modifications to the well and reasonable limitations on well usage prior to testing. The pumping test was performed a minimum of 33 times in three wells over an eight-month period in a small rural community in Chontales, Nicaragua. The data were used to measure seasonal variations in specific capacity for three rope-pump wells completed in fractured crystalline basalt. Data collected from the tests were analyzed using four methods (equilibrium approximation, time-drawdown during pumping, time-drawdown during recovery, and time-drawdown during late-time recovery) to determine the best data-analysis method, and one conventional pumping test was performed to aid in evaluating the manual method. The equilibrium approximation can be performed in the field with only a calculator and is the most technologically appropriate method for analyzing the data; results from this method overestimate specific capacity by 41% when compared with results from the conventional pumping test. The other analysis methods, requiring more sophisticated tools and higher-level interpretation skills, yielded results that agree to within 14% (pumping phase), 31% (recovery phase), and 133% (late-time recovery) of the conventional test productivity value. The wide variability in accuracy results principally from difficulties in achieving an equilibrated pumping level and from casing storage effects in the pumping/recovery data. Decreases in well productivity resulting from naturally occurring seasonal water-table drops varied from insignificant in two wells to 80% in the third. Despite practical and theoretical limitations of the method, the collected data may be useful for municipal institutions to track changes in well behavior, eventually developing a database for planning future groundwater development projects. Furthermore, the data could improve well users' ability to self-regulate well usage without expensive aquifer characterization.
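
The equilibrium approximation reduces to a one-line calculation, which is why it can be done in the field with a calculator: specific capacity is the pumping rate divided by the stabilized drawdown. A sketch with illustrative numbers (not data from the study):

```python
def specific_capacity(q_lpm: float, static_m: float, pumping_m: float) -> float:
    """Specific capacity (L/min per metre of drawdown) by the
    equilibrium approximation: pumping rate / stabilized drawdown.
    Water levels are depths below ground, in metres."""
    drawdown = pumping_m - static_m
    return q_lpm / drawdown

# Illustrative field reading: pumping at 20 L/min, the water level
# drops from 6.0 m to 9.5 m below ground and stabilizes.
print(specific_capacity(20.0, 6.0, 9.5))  # ~5.7 L/min per metre
```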

Abstract:

Regional flood frequency techniques are commonly used to estimate flood quantiles when flood data are unavailable or the record length at an individual gauging station is insufficient for reliable analyses. These methods compensate for limited or unavailable data by pooling data from nearby gauged sites. This requires the delineation of hydrologically homogeneous regions in which the flood regime is sufficiently similar to allow the spatial transfer of information. It is generally accepted that hydrologic similarity results from similar physiographic characteristics, and thus these characteristics can be used to delineate regions and classify ungauged sites. However, as currently practiced, the delineation is highly subjective and dependent on the similarity measures and classification techniques employed. A standardized procedure for delineation of hydrologically homogeneous regions is presented herein. Key aspects are a new statistical metric to identify physically discordant sites, and the identification of an appropriate set of physically based measures of extreme hydrological similarity. A combination of multivariate statistical techniques applied to multiple flood statistics and basin characteristics for gauging stations in the Southeastern U.S. revealed that basin slope, elevation, and soil drainage largely determine the extreme hydrological behavior of a watershed. Use of these characteristics as similarity measures in the standardized approach for region delineation yields regions that are more homogeneous and more efficient for quantile estimation at ungauged sites than those delineated using the alternative physically based procedures typically employed in practice. The proposed methods and key physical characteristics are also shown to be efficient for region delineation and quantile development in alternative areas composed of watersheds with statistically different physical composition. In addition, the use of aggregated values of key watershed characteristics was found to be sufficient for the regionalization of flood data; the added time and computational effort required to derive spatially distributed watershed variables does not increase the accuracy of quantile estimators for ungauged sites. This dissertation also presents a methodology by which flood quantile estimates in Haiti can be derived using relationships developed for data-rich regions of the U.S. As currently practiced, regional flood frequency techniques can only be applied within the predefined area used for model development. However, results presented herein demonstrate that the regional flood distribution can successfully be extrapolated to areas of similar physical composition located beyond the extent of that used for model development, provided differences in precipitation are accounted for and the site in question can be appropriately classified within a delineated region.
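
A hedged sketch of the delineation step (a generic stand-in, not the dissertation's standardized procedure): standardize the key basin characteristics identified above, group gauged sites by Ward clustering, and classify an ungauged site into the region with the nearest centroid.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

def delineate_regions(X, n_regions=4):
    """X: one row per gauged basin; columns = slope, elevation,
    soil drainage (the key similarity measures identified above)."""
    scaler = StandardScaler().fit(X)
    Z = scaler.transform(X)
    labels = AgglomerativeClustering(n_clusters=n_regions,
                                     linkage="ward").fit_predict(Z)
    centroids = np.vstack([Z[labels == r].mean(axis=0)
                           for r in range(n_regions)])
    return labels, centroids, scaler

def classify_ungauged(x_new, centroids, scaler):
    """Assign an ungauged basin to the region with the nearest
    centroid in standardized characteristic space."""
    z = scaler.transform(np.atleast_2d(x_new))
    return int(np.argmin(((centroids - z) ** 2).sum(axis=1)))
```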

Abstract:

To estimate a parameter in an elliptic boundary value problem, the method of equation error chooses the value that minimizes the error in the PDE and boundary condition (the solution of the BVP having been replaced by a measurement). The estimated parameter converges to the exact value as the measured data converge to the exact value, provided Tikhonov regularization is used to control the instability inherent in the problem. The error in the estimated solution can be bounded in an appropriate quotient norm; estimates can be derived for both the underlying (infinite-dimensional) problem and a finite-element discretization that can be implemented in a practical algorithm. Numerical experiments demonstrate the efficacy and limitations of the method.
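
For a concrete instance, consider the elliptic problem $-\nabla\cdot(q\nabla u)=f$ with measured solution $u_m$. A sketch of the general idea (not necessarily the paper's exact formulation): the Tikhonov-regularized equation error estimate is

$$q_\alpha \;=\; \operatorname*{arg\,min}_{q}\; \big\|\nabla\cdot(q\,\nabla u_m)+f\big\|_{H^{-1}(\Omega)}^{2} \;+\; \alpha\,\|q\|^{2},$$

where the first term is the residual of the PDE with the true solution replaced by the measurement, and the regularization weight $\alpha>0$ controls the instability; as the data error and $\alpha$ tend to zero in a coordinated way, $q_\alpha$ converges to the exact parameter in the quotient norm.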

Abstract:

This research was conducted in August 2011 in the villages of Kigisu and Rubona in rural Uganda while the author was serving as a community health volunteer with the U.S. Peace Corps. The study used the contingent valuation method (CVM) to estimate the population's willingness to pay (WTP) for the operation and maintenance of an improved water source. The survey was administered to 122 of the 400 households in the community, gathering demographic information and health and water behaviors, and using an iterative bidding process to estimate WTP. Households indicated a mean WTP of 286 Ugandan Shillings (UGX) per 20 liters for a public tap and 202 UGX per 20 liters for a private tap. The data were also analyzed using an ordered probit model, which determined that the number of children in the home and the distance from the existing source were the primary variables influencing households' WTP.
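
A minimal sketch of the ordered probit analysis, assuming statsmodels' OrderedModel and illustrative column names (the ordinal outcome is the WTP interval reached in the iterative bidding game):

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Column names are illustrative, not the survey's actual variables.
# "wtp_bracket" is the ordered WTP interval from the bidding game.
df = pd.read_csv("survey.csv")
df["wtp_bracket"] = pd.Categorical(df["wtp_bracket"], ordered=True)

model = OrderedModel(
    df["wtp_bracket"],
    df[["n_children", "distance_to_source_m"]],  # key covariates found above
    distr="probit",
)
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```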

Abstract:

This study develops an automated analysis tool by combining total internal reflection fluorescence microscopy (TIRFM), an evanescent-wave microscopic imaging technique used to capture time-sequential images, with corresponding image-processing Matlab code that identifies the movements of single individual particles. The developed code will enable us to examine two-dimensional hindered tangential Brownian motion of nanoparticles with sub-pixel (nanoscale) resolution. The measured mean square displacements of the nanoparticles are compared with theoretical predictions to estimate particle diameters and fluid viscosity using a nonlinear regression technique. These estimated values will be checked against the diameters and viscosities given by the manufacturers to validate the analysis tool. The nanoparticles used in these experiments are yellow-green polystyrene fluorescent nanospheres (200 nm, 500 nm, and 1000 nm in nominal diameter; 505 nm excitation and 515 nm emission wavelengths). The solutions used are de-ionized (DI) water, 10% d-glucose, and 10% glycerol. Mean square displacements obtained near the surface show significant deviation from theoretical predictions, which is attributed to DLVO forces in that region, but they conform to theoretical predictions from ~125 nm onward. The proposed automated analysis tool can be powerfully employed in bio-application fields such as single-protein (DNA and/or vesicle) tracking, drug delivery, and cytotoxicity studies, unlike traditional measurement techniques that require fixing the cells. Furthermore, this tool can also be usefully applied in the microfluidic areas of non-invasive thermometry, particle tracking velocimetry (PTV), and non-invasive viscometry.
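
A hedged sketch of the estimation step, rendered in Python rather than the authors' Matlab: compute the mean square displacement (MSD) from tracked positions, fit MSD = 4DΔt for two-dimensional motion (a linear fit here, simplifying the nonlinear regression mentioned above), and invert the Stokes-Einstein relation D = kT/(3πηd) for the diameter or viscosity.

```python
import numpy as np

KB, T = 1.380649e-23, 298.15  # Boltzmann constant (J/K), room temperature (K)

def msd(xy, max_lag):
    """Ensemble 2D mean square displacement for one trajectory.
    xy: (n_frames, 2) array of particle positions in metres."""
    return np.array([np.mean(np.sum((xy[lag:] - xy[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

def diffusion_coefficient(msd_vals, dt):
    """Least-squares slope of MSD versus lag time; MSD = 4*D*t in 2D."""
    t = dt * np.arange(1, len(msd_vals) + 1)
    return np.polyfit(t, msd_vals, 1)[0] / 4.0

def viscosity_from_d(D, diameter):
    """Stokes-Einstein: D = kT / (3*pi*eta*d), solved for eta."""
    return KB * T / (3.0 * np.pi * D * diameter)

# Example: a 500 nm sphere in water should yield eta ~ 1e-3 Pa*s
# D = diffusion_coefficient(msd(track_xy, 50), dt=1/30)
# print(viscosity_from_d(D, 500e-9))
```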

Abstract:

Data on the strength of Earth's magnetic field (paleointensity) in the geological past are crucial for understanding the geodynamo. Conventional paleointensity determination methods require heating a sample to a high temperature in one or more steps; consequently, many rocks are unsuitable for these methods due to heating-induced experimental alteration. Alternative non-heating paleointensity methods are investigated here to assess their effectiveness and reliability, using both natural samples from Lemptégy Volcano, France, and synthetic samples. Paleointensity was measured from the natural and synthetic samples using the Pseudo-Thellier, ARM, REM, REMc, REM', and Preisach methods. For the natural samples, only the Pseudo-Thellier method produced a reasonable paleointensity estimate consistent with previous paleointensity data. The synthetic samples yielded more successful estimates with all the methods, with the Pseudo-Thellier and ARM methods producing the most accurate results. The Pseudo-Thellier method appears to be the best alternative to the heating-based paleointensity methods.
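
A minimal sketch of the Pseudo-Thellier estimate under stated assumptions: the paleofield is proportional to the slope of NRM lost (during stepwise alternating-field demagnetization) against ARM gained (during ARM acquisition at matched AF levels), scaled by the ARM bias field and an empirical ARM-to-TRM calibration factor.

```python
import numpy as np

def pseudo_thellier_estimate(nrm_lost, arm_gained, b_lab_uT, calib=1.0):
    """Pseudo-Thellier paleointensity sketch: least-squares slope of
    NRM lost versus ARM gained at matched AF levels, scaled by the ARM
    bias field. `calib` is an empirical, sample-dependent ARM-to-TRM
    calibration factor; 1.0 is a placeholder, not a recommended value."""
    slope = np.polyfit(arm_gained, nrm_lost, 1)[0]
    return abs(slope) * b_lab_uT / calib

# b_anc = pseudo_thellier_estimate(nrm, arm, b_lab_uT=40.0)
```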

Abstract:

This paper treats the problem of setting the inventory level and optimizing the buffer allocation of closed-loop flow lines operating under the constant-work-in-process (CONWIP) protocol. We solve a very large but simple linear program that models an entire simulation run of a closed-loop flow line in discrete time to determine a production rate estimate of the system. This approach introduced in Helber, Schimmelpfeng, Stolletz, and Lagershausen (2011) for open flow lines with limited buffer capacities is extended to closed-loop CONWIP flow lines. Via this method, both the CONWIP level and the buffer allocation can be optimized simultaneously. The first part of a numerical study deals with the accuracy of the method. In the second part, we focus on the relationship between the CONWIP inventory level and the short-term profit. The accuracy of the method turns out to be best for such configurations that maximize production rate and/or short-term profit.
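
A simplified sketch of such a discrete-time LP (illustrative, not the exact formulation of the paper): let $Y_{m,t}$ be the cumulative output of station $m=1,\dots,M$ up to period $t$, $c_{m,t}$ the capacities sampled from the processing-time distributions for one simulation run, $b_m$ the buffer sizes, and $W$ the CONWIP level. Then, with boundary handling omitted,

$$\max\; Y_{M,T}\quad\text{s.t.}\quad Y_{m,t}-Y_{m,t-1}\le c_{m,t},\qquad Y_{m,t}\le Y_{m-1,t},\qquad Y_{m-1,t}-Y_{m,t}\le b_m,\qquad Y_{1,t}\le Y_{M,t}+W.$$

The first constraint encodes the sampled processing times of the simulation run, the middle two enforce material availability and finite buffers, and the last closes the loop: a job may enter the first station only when one of the $W$ CONWIP cards is released by the last station. Because $W$ and the $b_m$ enter the constraints linearly, they can themselves be made decision variables, which is what allows the CONWIP level and the buffer allocation to be optimized simultaneously.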

Abstract:

Calcium levels in spines play a significant role in determining the sign and magnitude of synaptic plasticity. The magnitude of calcium influx into spines is highly dependent on influx through N-methyl D-aspartate (NMDA) receptors, and therefore depends on the number of postsynaptic NMDA receptors in each spine. We have calculated previously how the number of postsynaptic NMDA receptors determines the mean and variance of calcium transients in the postsynaptic density, and how this alters the shape of plasticity curves. However, the number of postsynaptic NMDA receptors in the postsynaptic density is not well known. Anatomical methods for estimating the number of NMDA receptors produce estimates that are very different from those produced by physiological techniques. The physiological techniques are based on the statistics of synaptic transmission, and it is difficult to estimate their precision experimentally. In this paper we use stochastic simulations to test the validity of a physiological estimation technique based on failure analysis. We find that the method is likely to underestimate the number of postsynaptic NMDA receptors, explain the source of the error, and re-derive a more precise estimation technique. We also show that the original failure analysis, as well as our improved formulas, are not robust to small estimation errors in key parameters.
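
A minimal sketch of the failure-analysis logic being tested (our reconstruction, not the paper's simulation code): if each of N receptors opens independently with probability p, the probability of a failure (no openings) is (1-p)^N, which can be inverted to estimate N from an observed failure rate; simulating this exposes the estimator's finite-sample behavior and its sensitivity to errors in p.

```python
import numpy as np

rng = np.random.default_rng(1)

def failure_analysis_estimate(n_receptors=20, p_open=0.2, n_trials=500):
    """Simulate trials in which each of N receptors opens independently
    with probability p; a 'failure' is a trial with zero openings.
    Failure analysis inverts P(failure) = (1 - p)**N to estimate N."""
    openings = rng.binomial(n_receptors, p_open, size=n_trials)
    failure_rate = np.mean(openings == 0)
    if failure_rate == 0.0:  # no failures observed: N is not identifiable
        return np.nan
    return np.log(failure_rate) / np.log(1.0 - p_open)

estimates = [failure_analysis_estimate() for _ in range(200)]
print(f"true N = 20, mean estimate = {np.nanmean(estimates):.1f}")
```

Rerunning with a slightly misestimated p_open passed to the inversion (while simulating with the true value) illustrates the sensitivity to key parameters noted above.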