928 results for Shrinkage Estimators
Abstract:
Timing effects of radioimmunotherapy (RIT) combined with external-beam radiotherapy (RT) were assessed in human colon carcinoma xenografts. Initially, dose effects of fractionated RT and RIT were evaluated separately. Then, 30 Gy of RT (10 fractions over 12 days) was combined with three weekly i.v. injections of 200 microCi of 131I-labeled anti-carcinoembryonic antigen monoclonal antibodies in four different treatment schedules. RIT was given prior to, concurrently with, immediately after, or 2 weeks after RT administration. The longest regrowth delay (RD) of 105 days was observed in mice treated by concurrent administration of RT and RIT, whereas the RDs of RT and RIT alone were 34 and 20 days, respectively. The three sequential combination treatments produced significantly shorter RDs, ranging from 62 to 70 days. The tumor response represented by the minimal volume (MV) also showed that concurrent administration of RT and RIT gave the best result, with a mean MV of 4.5%, as compared to MVs of 26 to 53% for the three sequential treatments. These results were confirmed in a second experiment, in which an RT dose of 40 Gy was combined with an identical RIT regimen (three injections of 200 microCi of 131I-labeled monoclonal antibodies). At comparable toxicity levels, the maximum tolerated RT or RIT alone gave shorter RDs and less tumor shrinkage than simultaneous RT + RIT. These results may be useful for designing clinical protocols of combined RIT and RT.
Abstract:
Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimation of the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem stems from the variance process, which is not observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared to be particularly interesting: the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function, which, in contrast to the other methods, requires neither discretization nor simulation of the process. However, the procedure had been derived only for stochastic volatility models without jumps, and thus it became the subject of my research. This thesis consists of three parts. Each one is written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of the stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is what jump process to use to model returns of the S&P500. The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential, and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates coincide with the true parameters of the models.
The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of recovering the true parameters, and the third chapter proves that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question naturally appears: whether the computational effort can be reduced without affecting the efficiency of the estimator, or whether the efficiency of the estimator can be improved without dramatically increasing the computational burden. The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward because of the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
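As an aside, the general idea behind characteristic-function-based estimation can be illustrated with a minimal Python sketch (not the thesis's own implementation): the parameters of a toy Gaussian return model are estimated by minimizing a weighted integrated squared distance between the empirical characteristic function of the observed returns and the model characteristic function. The weight function, integration grid, starting values, and the Gaussian stand-in model are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def empirical_cf(u, x):
    """Empirical characteristic function of the sample x evaluated at points u."""
    return np.exp(1j * np.outer(u, x)).mean(axis=1)

def model_cf(u, mu, sigma):
    """Characteristic function of a Gaussian toy model (illustrative stand-in for
    the closed-form CF of a stochastic volatility jump-diffusion model)."""
    return np.exp(1j * u * mu - 0.5 * sigma**2 * u**2)

def cecf_objective(theta, u, x, weights):
    """Weighted integrated squared distance between empirical and model CF."""
    mu, log_sigma = theta
    diff = empirical_cf(u, x) - model_cf(u, mu, np.exp(log_sigma))
    return np.sum(weights * np.abs(diff) ** 2)

# Illustrative usage on simulated (hypothetical) daily returns.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0005, scale=0.01, size=5000)

u = np.linspace(-200.0, 200.0, 401)        # transform arguments (assumed grid)
weights = np.exp(-(u * 0.01) ** 2)         # assumed exponential weight function

res = minimize(cecf_objective, x0=np.array([0.0, np.log(0.02)]),
               args=(u, x, weights), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"estimated mu = {mu_hat:.5f}, sigma = {sigma_hat:.5f}")
```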
Abstract:
We analyze the emergence of synchronization in a population of moving integrate-and-fire oscillators. While moving on a plane, oscillators interact with their nearest neighbor at firing times. We discover a nonmonotonic dependence of the synchronization time on the velocity of the agents. Moreover, we find that the mechanisms that drive synchronization differ between dynamical regimes. We report the extreme situation in which an interplay between the time scales involved in the dynamical processes completely inhibits the achievement of a coherent state. We also provide estimators for the transitions between the different regimes.
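As an aside, the kind of model described here can be sketched in a few lines of Python (not the authors' exact implementation): oscillators perform a random walk on a periodic square, their phases grow linearly, and a firing unit resets and advances the phase of its current nearest neighbor by a coupling strength eps. All parameter values and the order-parameter diagnostic are illustrative assumptions.

```python
import numpy as np

def simulate(n=50, v=0.05, eps=0.1, box=1.0, dt=0.01, steps=20000, seed=1):
    """Minimal moving integrate-and-fire sketch: random-walk motion, linear phase
    growth, and a phase kick of eps delivered to the nearest neighbor on firing.
    Returns the Kuramoto-style order parameter over time as a crude synchrony measure."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, box, size=(n, 2))
    phase = rng.uniform(0, 1, size=n)
    order = []

    for _ in range(steps):
        # random-walk motion with periodic boundaries
        angles = rng.uniform(0, 2 * np.pi, size=n)
        pos = (pos + v * dt * np.column_stack((np.cos(angles), np.sin(angles)))) % box

        # linear phase growth and firing events
        phase += dt
        for i in np.where(phase >= 1.0)[0]:
            phase[i] = 0.0
            d = np.linalg.norm((pos - pos[i] + box / 2) % box - box / 2, axis=1)
            d[i] = np.inf
            j = np.argmin(d)                      # nearest neighbor of the firing unit
            phase[j] = min(phase[j] + eps, 1.0)   # kicked unit fires on the next step

        order.append(abs(np.mean(np.exp(2j * np.pi * phase))))
    return np.array(order)

r = simulate()
print("final order parameter:", r[-1])
```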
Abstract:
The present research project was designed to identify the typical Iowa material input values that are required by the Mechanistic-Empirical Pavement Design Guide (MEPDG) for Level 3 concrete pavement design. It was also designed to investigate the existing equations that might be used to predict Iowa pavement concrete properties for Level 2 pavement design. In this project, over 20,000 data points were collected from the Iowa Department of Transportation (DOT) and other sources. These data, most of which were concrete compressive strength, slump, air content, and unit weight data, were synthesized and their statistical parameters (such as mean values and standard deviations) were analyzed. Based on the analyses, the typical input values of Iowa pavement concrete, such as 28-day compressive strength (f’c), splitting tensile strength (fsp), elastic modulus (Ec), and modulus of rupture (MOR), were evaluated. The study indicates that the 28-day MOR of Iowa concrete is 646 ± 51 psi, very close to the MEPDG default value (650 psi). The 28-day Ec of Iowa concrete (based on only two available data points from the Iowa Curling and Warping project) is 4.82 ± 0.28 × 10^6 psi, which is quite different from the MEPDG default value (3.93 × 10^6 psi); therefore, the researchers recommend re-evaluating this value after more Iowa test data become available. The drying shrinkage (εc) of a typical Iowa concrete (C-3WR-C20 mix) was tested at the Concrete Technology Laboratory (CTL). The test results show that the ultimate shrinkage of the concrete is about 454 microstrain and that the time for the concrete to reach 50% of ultimate shrinkage is 32 days; both of these values are very close to the MEPDG default values. The comparison of the Iowa test data with the MEPDG default values, as well as the recommendations on the input values to be used in the MEPDG for Iowa PCC pavement design, are summarized in Table 20 of this report. The available equations for predicting the above-mentioned concrete properties were also assembled, and their validity for Iowa concrete materials was examined. Multiple-parameter nonlinear regression analyses, along with the artificial neural network (ANN) method, were employed to investigate the relationships among Iowa concrete material properties and to modify the existing equations so as to make them suitable for Iowa concrete materials. However, due to the lack of necessary data sets, the relationships between Iowa concrete properties were established based on the limited data from CP Tech Center’s projects and ISU classes only. The researchers suggest that the resulting relationships be used by Iowa pavement design engineers as references only. The present study furthermore indicates that appropriately documenting concrete properties, including flexural strength, elastic modulus, and information on concrete mix design, is essential for updating the typical Iowa material input values and providing rational prediction equations for concrete pavement design in the future.
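As an aside, the kind of nonlinear regression mentioned above can be illustrated with a minimal Python sketch (not the report's actual equations or data): a power-law relation MOR = a·f'c^b is fitted with scipy.optimize.curve_fit to hypothetical placeholder measurements. The functional form, placeholder values, and starting guess are all assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def mor_model(fc, a, b):
    """Assumed power-law relation between compressive strength f'c (psi)
    and modulus of rupture MOR (psi): MOR = a * fc**b."""
    return a * fc ** b

# Hypothetical placeholder measurements (psi); not data from the report.
fc  = np.array([4500.0, 5000.0, 5500.0, 6000.0, 6500.0])
mor = np.array([600.0, 640.0, 660.0, 700.0, 720.0])

params, cov = curve_fit(mor_model, fc, mor, p0=(7.5, 0.5))
a_hat, b_hat = params
print(f"fitted MOR = {a_hat:.2f} * f'c^{b_hat:.2f}")
```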
Abstract:
The goal of the project was to develop a new type of self-consolidating concrete (SCC) for slip-form paving to simplify construction and make smoother pavements. Developing the new SCC involved two phases: a feasibility study (Phase I, sponsored by TPF-5[098] and the concrete admixtures industry) and an in-depth mix proportioning and performance study with field applications (Phase II). The Phase I study demonstrated that the new type of SCC needs to possess not only excellent self-consolidating ability before a pavement slab is extruded, but also sufficient “green” strength (the strength of the concrete in a plastic state) after extrusion. To meet these performance criteria, the new type of SCC mixtures should not be as fluid as conventional SCC, but just flowable enough to be self-consolidating. That is, this new type of SCC should be a semi-flowable self-consolidating concrete (SFSCC). In the Phase II study, the effects of different materials and admixtures on the rheology, especially the thixotropy, and the green strength of fresh SFSCC were further investigated. The results indicate that SFSCC can be designed to (1) be workable enough for machine placement, (2) be self-consolidating without segregation, (3) hold its shape after extrusion from a paver, and (4) have performance properties (strength and durability) comparable with current pavement concrete. Due to the combined flowability (for self-consolidation) and shape-holding ability (for slip-forming) requirements, SFSCC demands a higher cementitious content than conventional pavement concrete. Generally, high cementitious content is associated with high drying shrinkage potential of the concrete. However, a well-proportioned and well-constructed SFSCC bike path constructed in Ames, IA, has not shown any shrinkage cracks after approximately 3 years of field service. On the other hand, another SFSCC pavement with different mix proportions and construction conditions showed random cracking. The results from the field SFSCC performance monitoring imply that not only the mix proportioning method but also the construction practice is important for producing durable SFSCC pavements. A carbon footprint, energy consumption, and cost analysis conducted in this study suggests that SFSCC is economically comparable to conventional pavement concrete in fixed-form paving construction, with the benefit of faster, quieter, and easier construction.
Abstract:
The purpose of this study was to investigate the effect of cement paste quality on concrete performance, particularly fresh properties, by changing the water-to-cementitious materials ratio (w/cm), the type and dosage of supplementary cementitious materials (SCM), and the air-void system in binary and ternary mixtures. In this experimental program, a total matrix of 54 mixtures was prepared with w/cm of 0.40 and 0.45; target air contents of 2%, 4%, and 8%; a fixed cementitious content of 600 pounds per cubic yard (pcy); and the incorporation of three types of SCMs at different dosages. The fine aggregate-to-total aggregate ratio was fixed at 0.42. Workability, rheology, air-void system, setting time, strength, Wenner probe surface resistivity, and shrinkage were determined. The effects of paste variables on workability are more marked at the higher w/cm. The compressive strength is strongly influenced by the paste quality, dominated by w/cm and air content. Surface resistivity is improved by the inclusion of Class F fly ash and slag cement, especially at later ages. Ternary mixtures performed in accordance with their ingredients. The data collected will be used to develop models that will be part of an innovative mix proportioning procedure.
Abstract:
Supplementary cementitious materials (SCM) have become a common part of modern concrete practice. The blending of two or three cementitious materials to optimize durability, strength, or economics provides owners, engineers, materials suppliers, and contractors with substantial advantages over mixtures containing only portland cement. However, these advances in concrete technology and engineering have not always been adequately captured in specifications for concrete. Users need specific guidance to assist them in defining the performance requirements for a concrete application and in selecting the optimal proportions of the cementitious materials needed to produce the required durable concrete. The fact that blended cements are currently available in many regions increases the options for mixtures and thus can complicate the selection process. Both portland and blended cements have already been optimized by the manufacturer to provide specific properties (such as setting time, shrinkage, and strength gain). The addition of SCMs (as binary, ternary, or even more complex mixtures) can alter these properties and therefore has the potential to affect the overall performance and applications of the concrete. This report is the final one in a series of publications describing a project aimed at addressing the effective use of ternary systems. The work was conducted in several stages, and individual reports have been published at the end of each stage.
Abstract:
Robust estimators for accelerated failure time models with asymmetric (or symmetric) error distributions and censored observations are proposed. It is assumed that the error model belongs to a log-location-scale family of distributions and that the mean response is the parameter of interest. Since scale is a main component of the mean, scale is not treated as a nuisance parameter. A three-step procedure is proposed. In the first step, an initial high-breakdown-point S-estimate is computed. In the second step, observations that are unlikely under the estimated model are rejected or downweighted. Finally, a weighted maximum likelihood estimate is computed. To define the estimates, functions of censored residuals are replaced by their estimated conditional expectation given that the response is larger than the observed censored value. The rejection rule in the second step is based on an adaptive cut-off that, asymptotically, does not reject any observation when the data are generated according to the model. Therefore, the final estimate attains full efficiency at the model, with respect to the maximum likelihood estimate, while maintaining the breakdown point of the initial estimator. Asymptotic results are provided. The new procedure is evaluated with the help of Monte Carlo simulations. Two examples with real data are discussed.
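As an aside, the three-step structure (robust initial estimate, rejection of unlikely observations, weighted maximum likelihood) can be illustrated with a heavily simplified Python sketch. It is not the proposed estimator: censoring is ignored, the model is reduced to a Gaussian location-scale family, the S-estimate is replaced by a median/MAD initial fit, and the adaptive cut-off is replaced by a fixed threshold.

```python
import numpy as np

def robust_three_step(y, cutoff=2.5):
    """Simplified three-step robust fit for a location-scale model (no censoring).

    Step 1: high-breakdown initial estimates via median and normalized MAD
            (a simple stand-in for the S-estimate used in the paper).
    Step 2: hard-reject observations whose standardized residuals exceed `cutoff`
            (a fixed threshold standing in for the paper's adaptive cut-off).
    Step 3: maximum likelihood for the Gaussian model on the retained
            observations, i.e. their mean and standard deviation.
    """
    # Step 1: robust initial location and scale
    mu0 = np.median(y)
    s0 = 1.4826 * np.median(np.abs(y - mu0))     # MAD scaled for normal consistency

    # Step 2: 0/1 rejection weights from standardized residuals
    r = (y - mu0) / s0
    keep = np.abs(r) <= cutoff

    # Step 3: weighted (here hard-weighted) maximum likelihood estimate
    mu_hat = np.mean(y[keep])
    sigma_hat = np.std(y[keep], ddof=1)
    return mu_hat, sigma_hat, keep

# Illustrative use on contaminated Gaussian data (hypothetical).
rng = np.random.default_rng(42)
y = np.concatenate([rng.normal(10.0, 2.0, 95), rng.normal(40.0, 1.0, 5)])
mu_hat, sigma_hat, keep = robust_three_step(y)
print(f"mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}, rejected = {np.sum(~keep)}")
```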
Abstract:
Ethmoidal regions from six human heads were prepared and dissected to demonstrate regional sinus anatomy and endoscopic surgery approaches. After preparation, the specimens were plastinated using the standard S10 technique. A CT scan of each ethmoidal block was performed before and after preparation of the block to assess shrinkage. The plastinated specimens were successfully introduced into clinical teaching of sinus anatomy and surgery. One advantage of using these specimens is their long-lasting preservation without deterioration of the tissue. The specimens were well suited for comparative radiographic and endoscopic studies, and the CT scans allowed an exact measurement of tissue shrinkage due to plastination. Increased tissue rigidity and shrinkage due to plastination have to be taken into account for subsequent endoscopic observation.
Abstract:
In a recent paper, Komaki studied the second-order asymptotic properties of predictive distributions, using the Kullback-Leibler divergence as a loss function. He showed that estimative distributions with asymptotically efficient estimators can be improved by predictive distributions that do not belong to the model. The model is assumed to be a multidimensional curved exponential family. In this paper we generalize the result, assuming as a loss function any f-divergence. A relationship arises between alpha-connections and optimal predictive distributions. In particular, using an alpha-divergence to measure the goodness of a predictive distribution, the optimal shift of the estimative distribution is related to alpha-covariant derivatives. The expression that we obtain for the asymptotic risk is also useful for studying the higher-order asymptotic properties of an estimator in the mentioned class of loss functions.
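For orientation, one common convention for the alpha-divergence between densities p and q is shown below; the exact normalization used in the paper may differ.

$$
D_\alpha(p \,\|\, q) \;=\; \frac{4}{1-\alpha^2}\left(1 - \int p(x)^{\frac{1-\alpha}{2}}\, q(x)^{\frac{1+\alpha}{2}}\, dx\right), \qquad \alpha \neq \pm 1,
$$

with Kullback-Leibler divergences recovered in the limits $\alpha \to \pm 1$.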
Abstract:
This paper focused on four alternatives for the analysis of experiments in square lattices as far as the estimation of variance components and some genetic parameters is concerned: 1) intra-block analysis with adjusted treatments and blocks within unadjusted replications; 2) lattice analysis as complete randomized blocks; 3) intra-block analysis with unadjusted treatments and blocks within adjusted replications; 4) lattice analysis as complete randomized blocks, using the adjusted treatment means obtained from the analysis with recovery of inter-block information and taking as the mean square of the error the mean effective variance of that same analysis. For the four alternatives of analysis, estimators and estimates were obtained for the variance components and heritability coefficients. The classification of the material was also studied. The present study suggests that, for each experiment and depending on the objectives of the analysis, one should determine which alternative of analysis is preferable, mainly in cases where a negative estimate is obtained for the variance component due to the effects of blocks within adjusted replications.
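For orientation, a heritability coefficient on an entry-mean basis is commonly computed from the variance-component estimates as shown below; the exact estimator depends on the design and on the analysis alternative adopted.

$$
\hat{h}^2 \;=\; \frac{\hat{\sigma}^2_g}{\hat{\sigma}^2_g + \hat{\sigma}^2_e / r},
$$

where $\hat{\sigma}^2_g$ is the genotypic variance component, $\hat{\sigma}^2_e$ the residual variance component, and $r$ the number of replications.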
Abstract:
In the first part of the study, nine estimators of the first-order autoregressive parameter are reviewed and a new estimator is proposed. The relationships and discrepancies between the estimators are discussed in order to achieve a clear differentiation. In the second part of the study, the precision in the estimation of autocorrelation is studied. The performance of the ten lag-one autocorrelation estimators is compared in terms of Mean Square Error (combining bias and variance) using data series generated by Monte Carlo simulation. The results show that there is no single optimal estimator for all conditions, suggesting that the estimator ought to be chosen according to sample size and to the information available on the possible direction of the serial dependence. Additionally, the probability of labelling an actually existing autocorrelation as statistically significant is explored using Monte Carlo sampling. The power estimates obtained are quite similar among the tests associated with the different estimators. These estimates highlight the small probability of detecting autocorrelation in series with fewer than 20 measurement times.
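As an aside, the Monte Carlo MSE comparison described here can be sketched in Python for just two estimators (not the ten from the study): the conventional lag-one estimator and a simple first-order bias-corrected version, under an AR(1) data-generating process. The bias approximation, sample sizes, and autocorrelation value are illustrative assumptions.

```python
import numpy as np

def ar1_series(n, rho, rng):
    """Generate a stationary AR(1) series of length n with autocorrelation rho."""
    x = np.empty(n)
    x[0] = rng.normal(scale=1.0 / np.sqrt(1.0 - rho**2))
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
    return x

def r1_conventional(x):
    """Conventional lag-one autocorrelation estimator."""
    d = x - x.mean()
    return np.sum(d[:-1] * d[1:]) / np.sum(d * d)

def r1_bias_corrected(x):
    """Simple correction based on the approximation E[r1] ~ rho - (1 + 3*rho)/n."""
    n = len(x)
    r1 = r1_conventional(x)
    return r1 + (1.0 + 3.0 * r1) / n

def mse(estimator, n, rho, reps=5000, seed=0):
    """Monte Carlo mean square error of an estimator for a given n and rho."""
    rng = np.random.default_rng(seed)
    est = np.array([estimator(ar1_series(n, rho, rng)) for _ in range(reps)])
    return np.mean((est - rho) ** 2)

for n in (10, 20, 50):
    print(n, mse(r1_conventional, n, 0.3), mse(r1_bias_corrected, n, 0.3))
```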
Abstract:
The current study proposes a new procedure for separately estimating slope change and level change between two adjacent phases in single-case designs. The procedure eliminates baseline trend from the whole data series prior to assessing treatment effectiveness. The steps necessary to obtain the estimates are presented in detail, explained, and illustrated. A simulation study is carried out to explore the bias and precision of the estimators and compare them to an analytical procedure matching the data simulation model. The experimental conditions include two data generation models, several degrees of serial dependence, trend, level and/or slope change. The results suggest that the level and slope change estimates provided by the procedure are unbiased for all levels of serial dependence tested and trend is effectively controlled for. The efficiency of the slope change estimator is acceptable, whereas the variance of the level change estimator may be problematic for highly negatively autocorrelated data series.
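As an aside, the general idea of removing baseline trend before quantifying treatment effects can be illustrated with a minimal Python sketch. This is one simple formulation, not necessarily the authors' estimator: the baseline (phase A) trend is fitted by ordinary least squares and subtracted from the whole series, slope change is taken as the difference between the phase-specific slopes of the detrended data, and level change as the gap between the two phase fits at the phase-change point. The example data are hypothetical.

```python
import numpy as np

def slope_level_change(y, n_a):
    """Separate slope change and level change between phase A (first n_a points)
    and phase B, after removing the baseline trend from the whole series."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y))
    t_a, y_a = t[:n_a], y[:n_a]
    t_b = t[n_a:]

    # Step 1: baseline trend fitted by ordinary least squares on phase A
    b1, b0 = np.polyfit(t_a, y_a, 1)

    # Step 2: remove the baseline trend from the whole series
    yd = y - (b0 + b1 * t)
    yd_a, yd_b = yd[:n_a], yd[n_a:]

    # Step 3: phase-specific slopes of the detrended series
    sa, ia = np.polyfit(t_a, yd_a, 1)
    sb, ib = np.polyfit(t_b, yd_b, 1)
    slope_change = sb - sa

    # Step 4: level change as the gap between the two phase fits
    # evaluated at the phase-change point
    t_change = t_b[0]
    level_change = (ib + sb * t_change) - (ia + sa * t_change)
    return slope_change, level_change

# Illustrative series: 10 baseline points with trend, then 10 treatment points
# with an added level shift of 3 and a slope increase of 0.5 (hypothetical data).
t = np.arange(20)
y = 2 + 0.4 * t + np.where(t >= 10, 3 + 0.5 * (t - 10), 0.0)
print(slope_level_change(y, n_a=10))
```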
Abstract:
The large volume of traffic on the interstate system makes it difficult to make pavement repairs. The maintenance crew needs 4-5 hours to break out the concrete to be replaced and prepare the hole for placing new concrete. Because of this, it is usually noon before the patch can be placed. Since it is desirable to remove the barricades before dark, there are only 7-8 hours for the concrete to reach the required strength. There exists a need for a concrete that can reach the necessary strength (modulus of rupture = 500 psi) in 7-8 hours. The purpose of this study is to determine if Type III cement and/or an accelerator can be used in an M-4 mix to yield a fast-setting patch with very little shrinkage. It is recognized that calcium chloride is a corrosive material and may therefore have detrimental effects on the reinforcing steel. The study of these effects, however, is beyond the scope of this investigation.
Abstract:
Conventional concrete is typically cured using external methods. External curing prevents drying of the surface, allows the mixture to stay warm and moist, and results in continued cement hydration (Taylor 2014). Internal curing is a relatively recent technique that has been developed to prolong cement hydration by providing internal water reservoirs in a concrete mixture that do not adversely affect the concrete mixture’s fresh or hardened physical properties. Internal curing grew out of the need for more durable structural concretes that were resistant to shrinkage cracking. Joint spacing for concrete overlays can be increased if slab warping is reduced or eliminated. One of the most promising potential benefits from using internal curing for concrete overlays, then, is the reduced number of joints due to increased joint spacing (Wei and Hansen 2008).