931 results for Parameter Representation of Harmonic
Abstract:
In this paper, the fractional Fourier transform (FrFT) is applied to the spectral bands of a two-component mixture containing oxfendazole and oxyclozanide to provide multicomponent quantitative prediction of the related substances. With this aim in mind, the moduli of the FrFT spectral bands are processed by the continuous Mexican Hat family of wavelets, the combination being denoted MEXH-CWT-MOFrFT. Four modulus sets are obtained for values of the FrFT parameter a ranging from 0.6 up to 0.9 in order to compare their effects upon the spectral and quantitative resolutions. Four linear regression plots for each substance were obtained by measuring the MEXH-CWT-MOFrFT amplitudes in the application of the MEXH family to the modulus of the FrFT. This powerful new combined tool is validated by analyzing artificial samples of the related drugs, and it is applied to the quality control of commercial veterinary samples.
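A minimal sketch of the wavelet step described above, under stated assumptions: no FrFT routine ships with NumPy/SciPy, so the ordinary FFT (the a = 1 special case of the FrFT) stands in for the fractional transform, and the synthetic band, scale range and amplitude-picking rule are illustrative only; PyWavelets' 'mexh' wavelet provides the continuous Mexican Hat transform.

```python
import numpy as np
import pywt  # PyWavelets provides the continuous Mexican Hat ('mexh') wavelet

def mexh_cwt_of_modulus(spectrum, scales):
    """Apply the continuous Mexican Hat wavelet transform to the modulus of a
    (fractional) Fourier spectrum and return the coefficient matrix."""
    modulus = np.abs(spectrum)
    coeffs, _ = pywt.cwt(modulus, scales, 'mexh')
    return coeffs

# Synthetic two-component absorption band used as a stand-in for measured spectra
wavelengths = np.linspace(200, 400, 512)
band = np.exp(-((wavelengths - 290) / 8) ** 2) + 0.6 * np.exp(-((wavelengths - 310) / 10) ** 2)

spectrum = np.fft.fft(band)                    # FrFT with a = 1 (plain FFT) as a stand-in
coeffs = mexh_cwt_of_modulus(spectrum, np.arange(1, 64))

# Calibration would then regress amplitudes read at selected points of `coeffs`
# against known concentrations, one regression line per analyte.
```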
Abstract:
AIM: To confirm the accuracy of sentinel node biopsy (SNB) procedure and its morbidity, and to investigate predictive factors for SN status and prognostic factors for disease-free survival (DFS) and disease-specific survival (DSS). MATERIALS AND METHODS: Between October 1997 and December 2004, 327 consecutive patients in one centre with clinically node-negative primary skin melanoma underwent an SNB by the triple technique, i.e. lymphoscintigraphy, blue-dye and gamma-probe. Multivariate logistic regression analyses as well as Kaplan-Meier analyses were performed. RESULTS: Twenty-three percent of the patients had at least one metastatic SN, which was significantly associated with Breslow thickness (p<0.001). The success rate of SNB was 99.1% and its morbidity was 7.6%. With a median follow-up of 33 months, the 5-year DFS/DSS were 43%/49% for patients with positive SN and 83.5%/87.4% for patients with negative SN, respectively. The false-negative rate of SNB was 8.6% and its sensitivity 91.4%. On multivariate analysis, DFS was significantly worsened by Breslow thickness (RR=5.6, p<0.001), positive SN (RR=5.0, p<0.001) and male sex (RR=2.9, p=0.001). The presence of a metastatic SN (RR=8.4, p<0.001), male sex (RR=6.1, p<0.001), Breslow thickness (RR=3.2, p=0.013) and ulceration (RR=2.6, p=0.015) were significantly associated with a poorer DSS. CONCLUSION: SNB is a reliable procedure with high sensitivity (91.4%) and low morbidity. Breslow thickness was the only statistically significant parameter predictive of SN status. DFS was worsened, in decreasing order, by Breslow thickness, metastatic SN and male gender. Similarly, DSS was significantly worsened by a metastatic SN, male gender, Breslow thickness and ulceration. These data reinforce the SN status as a powerful staging procedure.
Abstract:
In this paper several additional GMM specification tests are studied. A first test is a Chow-type test for structural parameter stability of GMM estimates. The test is inspired by the fact that "taste and technology" parameters are uncovered. The second set of specification tests are VAR encompassing tests. It is assumed that the DGP has a finite VAR representation. The moment restrictions which are suggested by economic theory and exploited in the GMM procedure represent one possible characterization of the DGP. The VAR is a different but compatible characterization of the same DGP. The idea of the VAR encompassing tests is to compare parameter estimates of the Euler conditions and VAR representations of the DGP obtained separately with parameter estimates of the Euler conditions and VAR representations obtained jointly. There are several ways to construct joint systems, which are discussed in the paper. Several applications are also discussed.
Abstract:
McCausland (2004a) describes a new theory of random consumer demand. Theoretically consistent random demand can be represented by a "regular" "L-utility" function on the consumption set X. The present paper is about Bayesian inference for regular L-utility functions. We express prior and posterior uncertainty in terms of distributions over the infinite-dimensional parameter set of a flexible functional form. We propose a class of proper priors on the parameter set. The priors are flexible, in the sense that they put positive probability in the neighborhood of any L-utility function that is regular on a large subset bar(X) of X; and regular, in the sense that they assign zero probability to the set of L-utility functions that are irregular on bar(X). We propose methods of Bayesian inference for an environment with indivisible goods, leaving the more difficult case of infinitely divisible goods for another paper. We analyse individual choice data from a consumer experiment described in Harbaugh et al. (2001).
Abstract:
We introduce and axiomatize a one-parameter class of individual deprivation measures. Motivated by a suggestion of Runciman, we modify Yitzhaki’s index by multiplying it by a function that is interpreted as measuring the part of deprivation generated by an agent’s observation that others in its reference group move on to a higher level of income than itself. The parameter reflects the relative weight given to these dynamic considerations, and the standard Yitzhaki index is obtained as a special case. In addition, we characterize more general classes of measures that pay attention to this important dynamic aspect of deprivation.
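For reference, Yitzhaki's individual deprivation index, recovered as a special case of the class described above, can be written as below; the multiplicative factor carrying the dynamic component is only indicated schematically, since its exact form is defined in the paper.

```latex
% Yitzhaki's deprivation index for agent i with income y_i in a reference group of n agents:
D_i \;=\; \frac{1}{n} \sum_{j \,:\, y_j > y_i} \bigl( y_j - y_i \bigr)
% Schematic form of the one-parameter class described above (f_\theta not reproduced here):
D_i(\theta) \;=\; f_\theta(\cdot)\, D_i , \qquad f_0(\cdot) = 1 .
```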
Abstract:
The thesis has covered various aspects of modeling and analysis of finite-mean time series with symmetric stable distributed innovations. Time series analysis based on Box-Jenkins methods is the most popular approach, where the models are linear and the errors are Gaussian. We highlighted the limitations of classical time series analysis tools, explored some generalized tools, and organized the approach parallel to the classical set-up. In the present thesis we mainly studied the estimation and prediction of the signal-plus-noise model. Here we assumed that the signal and the noise follow models with symmetric stable innovations.

We start the thesis with some motivating examples and application areas of alpha-stable time series models. Classical time series analysis and the corresponding theories based on finite variance models are extensively discussed in the second chapter. We also survey the existing theories and methods corresponding to infinite variance models in the same chapter.

In the third chapter we present a linear filtering method for computing the filter weights assigned to the observations for estimating an unobserved signal under a general noisy environment. Here we consider both the signal and the noise as stationary processes with infinite variance innovations. We derive semi-infinite, doubly infinite and asymmetric signal extraction filters based on a minimum dispersion criterion. Finite-length filters based on Kalman-Levy filters are developed and the pattern of the filter weights is identified. Simulation studies show that the proposed methods are competent enough in signal extraction for processes with infinite variance.

Parameter estimation of autoregressive signals observed in a symmetric stable noise environment is discussed in the fourth chapter. Here we use higher-order Yule-Walker type estimation based on the auto-covariation function and exemplify the methods by simulation and an application to sea surface temperature data. We increase the number of Yule-Walker equations and propose an ordinary least squares estimate of the autoregressive parameters. The singularity problem of the auto-covariation matrix is addressed, and a modified version of the generalized Yule-Walker method is derived using singular value decomposition.

In the fifth chapter of the thesis we introduce the partial covariation function as a tool for stable time series analysis where covariance or partial covariance is ill defined. Asymptotic results for the partial auto-covariation are studied and its application in model identification of stable autoregressive models is discussed. We generalize the Durbin-Levinson algorithm to include infinite variance models in terms of the partial auto-covariation function and introduce a new information criterion for consistent order estimation of stable autoregressive models.

In chapter six we explore the application of the techniques discussed in the previous chapter in signal processing. Frequency estimation of a sinusoidal signal observed in a symmetric stable noisy environment is discussed in this context. Here we introduce a parametric spectrum analysis and a frequency estimate using the power transfer function. An estimate of the power transfer function is obtained using the modified generalized Yule-Walker approach. Another important problem in statistical signal processing is to identify the number of sinusoidal components in an observed signal. We use a modified version of the proposed information criterion for this purpose.
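A rough illustration of the overdetermined Yule-Walker idea mentioned above, under stated assumptions: the auto-covariation estimator used in the thesis is not reproduced here, so a simple sign-covariance (fractional lower-order moment, p = 1) estimator stands in for it, and the equations are solved by ordinary least squares.

```python
import numpy as np

def flom_autocovariation(x, lag, p_mom=1.0):
    """Crude fractional-lower-order analogue of the autocovariance at a non-negative lag
    (a stand-in for the auto-covariation of a symmetric alpha-stable process)."""
    x = np.asarray(x, dtype=float)
    lead, lagged = x[lag:], x[:len(x) - lag]
    return np.mean(lead * np.sign(lagged) * np.abs(lagged) ** (p_mom - 1.0))

def higher_order_yule_walker(x, p, m):
    """Solve m >= p Yule-Walker-type equations for AR(p) coefficients by ordinary least squares."""
    r = np.array([flom_autocovariation(x, k) for k in range(m + 1)])
    A = np.array([[r[abs(k - j)] for j in range(1, p + 1)] for k in range(1, m + 1)])
    b = r[1:m + 1]
    phi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return phi
```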
Abstract:
This report examines how to estimate the parameters of a chaotic system given noisy observations of the state behavior of the system. Investigating parameter estimation for chaotic systems is interesting because of possible applications for high-precision measurement and for use in other signal processing, communication, and control applications involving chaotic systems. In this report, we examine theoretical issues regarding parameter estimation in chaotic systems and develop an efficient algorithm to perform parameter estimation. We discover two properties that are helpful for performing parameter estimation on non-structurally stable systems. First, it turns out that most data in a time series of state observations contribute very little information about the underlying parameters of a system, while a few sections of data may be extraordinarily sensitive to parameter changes. Second, for one-parameter families of systems, we demonstrate that there is often a preferred direction in parameter space governing how easily trajectories of one system can "shadow" trajectories of nearby systems. This asymmetry of shadowing behavior in parameter space is proved for certain families of maps of the interval. Numerical evidence indicates that similar results may be true for a wide variety of other systems. Using the two properties cited above, we devise an algorithm for performing parameter estimation. Standard parameter estimation techniques such as the extended Kalman filter perform poorly on chaotic systems because of divergence problems. The proposed algorithm achieves accuracies several orders of magnitude better than the Kalman filter and has good convergence properties for large data sets.
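The following is not the report's algorithm, only a hedged illustration of the underlying estimation problem: fitting the parameter of a noisy logistic map by minimizing one-step prediction error, and evaluating the fit on short windows to show that some data segments constrain the parameter much more tightly than others.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def logistic(x, r):
    return r * x * (1.0 - x)

def simulate(r, x0=0.3, n=500, noise=1e-3, seed=0):
    """Generate a noisy observation sequence from the logistic map."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        x[k] = logistic(x[k - 1], r)
    return x + noise * rng.standard_normal(n)

def estimate_r(y):
    """Fit r by minimizing the one-step-ahead squared prediction error."""
    cost = lambda r: np.mean((y[1:] - logistic(y[:-1], r)) ** 2)
    return minimize_scalar(cost, bounds=(3.5, 4.0), method='bounded').x

y = simulate(r=3.9)
print(estimate_r(y))           # close to 3.9 for small observation noise
print(estimate_r(y[200:220]))  # short windows: accuracy varies strongly with the segment
```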
Abstract:
This paper develops a simple model to investigate how resource-driven economic booms shape the equilibrium political institutions of resource-rich societies and influence the likelihood of experiencing civil war. In our model a strong government apparatus favors property rights protection but also makes the state more powerful and hence may induce predatory autocratic regimes over democracy. We characterize the parameter space of each political outcome in terms of the type of the available natural resources. Economic booms based on resources that are privately exploited empower the citizens and tend to ease democratic transitions. In contrast, booms based on resources exploited by the state tend to favor more dictatorial regimes. Finally, economic booms based on resources that can be exploited either by the state or by private citizens incite preemptive actions by both parties that may result in civil war. We discuss the predictions of the model using historical and contemporary examples.
Abstract:
A traditional method of validating the performance of a flood model when remotely sensed data of the flood extent are available is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LIDAR) has made the synoptic measurement of water surface elevations along flood waterlines more straightforward, and this has emphasised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels. The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1-in-5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map. As a result, there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters. The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights varied slowly along the reach. The technique allows for the decomposition of the reach into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach. However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may place an increased onus on the model developer to produce a valid model.
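A sketch of the two kinds of performance measure being compared, using common textbook definitions rather than the paper's exact formulations: an areal wet/dry overlap statistic and an r.m.s. height difference at matched waterline points.

```python
import numpy as np

def areal_fit(observed_wet, modelled_wet):
    """Areal fit = intersection / union of observed and modelled flooded pixels (0..1)."""
    obs, mod = np.asarray(observed_wet, bool), np.asarray(modelled_wet, bool)
    return np.logical_and(obs, mod).sum() / np.logical_or(obs, mod).sum()

def waterline_rmse(observed_heights, modelled_heights):
    """r.m.s. difference of water-surface elevations at matched waterline points."""
    d = np.asarray(observed_heights) - np.asarray(modelled_heights)
    return np.sqrt(np.mean(d ** 2))
```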
Abstract:
The conceptual and parameter uncertainty of the semi-distributed INCA-N (Integrated Nutrients in Catchments-Nitrogen) model was studied using the GLUE (Generalized Likelihood Uncertainty Estimation) methodology combined with quantitative experimental knowledge, a concept known as 'soft data'. Cumulative inorganic N leaching, annual plant N uptake and annual mineralization proved to be useful soft data to constrain the parameter space. The INCA-N model was able to simulate the seasonal and inter-annual variations in the stream-water nitrate concentrations, although the lowest concentrations during the growing season were not reproduced. This suggested that there were some retention processes or losses, either in peatland/wetland areas or in the river, which were not included in the INCA-N model. The results of the study suggested that soft data are a way to reduce parameter equifinality, and that the calibration and testing of distributed hydrological and nutrient leaching models should be based both on runoff and/or nutrient concentration data and on the qualitative knowledge of experimentalists.
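A generic GLUE-style sketch (not the INCA-N configuration): parameter sets are sampled, runs are kept as behavioural only if a likelihood measure exceeds a threshold and the run is consistent with the soft data, and predictions are weighted by the likelihood; `run_model` and `soft_data_ok` are user-supplied placeholders.

```python
import numpy as np

def glue(sample_params, run_model, observed, soft_data_ok, threshold=0.5):
    """Keep behavioural runs and return them with normalised likelihood weights.

    observed: numpy array of observations; run_model(theta) must return an array
    of simulated values of the same length; soft_data_ok(theta, sim) returns a bool.
    """
    behavioural, weights, predictions = [], [], []
    for theta in sample_params:
        sim = run_model(theta)
        # Nash-Sutcliffe-like efficiency as the (generalised) likelihood measure
        eff = 1.0 - np.sum((observed - sim) ** 2) / np.sum((observed - observed.mean()) ** 2)
        if eff > threshold and soft_data_ok(theta, sim):   # soft data further constrain the space
            behavioural.append(theta)
            weights.append(eff)
            predictions.append(sim)
    w = np.asarray(weights) / np.sum(weights)
    return behavioural, w, np.asarray(predictions)
```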
Abstract:
Mostly because of a lack of observations, fundamental aspects of the St. Lawrence Estuary's wintertime response to forcing remain poorly understood. The results of a field campaign over the winter of 2002/03 in the estuary are presented. The response of the system to tidal forcing is assessed through the use of harmonic analyses of temperature, salinity, sea level, and current observations. The analyses confirm previous evidence for the presence of semidiurnal internal tides, albeit at greater depths than previously observed for ice-free months. The low-frequency tidal streams were found to be mostly baroclinic in character and to produce an important neap tide intensification of the estuarine circulation. Despite stronger atmospheric momentum forcing in winter, the response is found to be less coherent with the winds than seen in previous studies of ice-free months. The tidal residuals show the cold intermediate layer in the estuary is renewed rapidly (~14 days) in late March by the advection of a wedge of near-freezing waters from the Gulf of St. Lawrence. In situ processes appeared to play a lesser role in the renewal of this layer. In particular, significant wintertime deepening of the estuarine surface mixed layer was prevented by surface stability, which remained high throughout the winter. The observations also suggest that the bottom circulation was intensified during winter, with the intrusion in the deep layer of relatively warm Atlantic waters, such that the 3°C isotherm rose from below 150 m to near 60 m.
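A sketch of the kind of harmonic analysis referred to above, in its simplest generic form (not the authors' processing chain): a least-squares fit of sinusoids at known tidal frequencies, here the semidiurnal M2 and S2 constituents, to an observed record.

```python
import numpy as np

def harmonic_fit(t_hours, y, periods_hours=(12.4206, 12.0)):
    """Least-squares tidal harmonic fit; periods default to the M2 and S2 constituents.

    Returns the regression coefficients and the amplitude of each constituent."""
    t = np.asarray(t_hours, dtype=float)
    cols = [np.ones_like(t)]                      # mean level
    for T in periods_hours:
        w = 2 * np.pi / T
        cols += [np.cos(w * t), np.sin(w * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, float), rcond=None)
    amps = [np.hypot(coef[1 + 2 * i], coef[2 + 2 * i]) for i in range(len(periods_hours))]
    return coef, amps
```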
Abstract:
The theory of harmonic force constant refinement calculations is reviewed, and a general-purpose program for force constant and normal coordinate calculations is described. The program, called ASYM20, is available through the Quantum Chemistry Program Exchange. It will work on molecules of any symmetry containing up to 20 atoms and will produce results for a series of isotopomers as desired. The vibrational secular equations are solved in either nonredundant valence internal coordinates or symmetry coordinates. As well as calculating the (harmonic) vibrational wavenumbers and normal coordinates, the program will calculate centrifugal distortion constants, Coriolis zeta constants, harmonic contributions to the α's, root-mean-square amplitudes of vibration, and other quantities related to gas electron-diffraction studies and thermodynamic properties. The program will work in either a predict mode, in which it calculates results from an input force field, or a refine mode, in which it refines an input force field by least squares to fit observed data on the quantities mentioned above. Predicate values of the force constants may be included in the data set for a least-squares refinement. The program is written in FORTRAN for use on a PC or a mainframe computer. Operation is mainly controlled by steering indices in the input data file, but some interactive control is also implemented.
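A sketch of the core computation such a program performs, namely the Wilson GF secular equation; the unit handling is schematic (G and F must be supplied in mutually consistent units so that the eigenvalues have dimensions of s^-2), and the least-squares refinement step is not shown.

```python
import numpy as np

def harmonic_wavenumbers(G, F):
    """Return harmonic wavenumbers (cm^-1) from the GF eigenvalue problem.

    G: inverse kinetic-energy matrix (from geometry and atomic masses)
    F: harmonic force-constant matrix in the same internal coordinates
    The eigenvalues lambda_k of GF satisfy lambda_k = (2*pi*c*nu_k)^2."""
    lam = np.linalg.eigvals(G @ F)
    lam = np.sort(np.real(lam))[::-1]
    c = 2.99792458e10                 # speed of light in cm/s
    return np.sqrt(np.clip(lam, 0.0, None)) / (2 * np.pi * c)
```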
Abstract:
Increased atmospheric concentrations of carbon dioxide (CO2) will benefit the yield of most crops. Two free air CO2 enrichment (FACE) meta-analyses have shown increases in yield of between 0 and 73% for C3 crops. Despite this large range, few crop modelling studies quantify the uncertainty inherent in the parameterisation of crop growth and development. We present a novel perturbed-parameter method of crop model simulation, which uses some constraints from observations, that does this. The model used is the groundnut (i.e. peanut; Arachis hypogaea L.) version of the general large-area model for annual crops (GLAM). The conclusions are of relevance to C3 crops in general. The increases in yield simulated by GLAM for doubled CO2 were between 16 and 62%. The difference in mean percentage increase between well-watered and water-stressed simulations was 6.8. These results were compared to FACE and controlled environment studies, and to sensitivity tests on two other crop models of differing levels of complexity: CROPGRO, and the groundnut model of Hammer et al. [Hammer, G.L., Sinclair, T.R., Boote, K.J., Wright, G.C., Meinke, H., Bell, M.J., 1995. A peanut simulation model. I. Model development and testing. Agron. J. 87, 1085-1093]. The relationship between CO2 and water stress in the experiments and in the models was examined. From a physiological perspective, water-stressed crops are expected to show greater CO2 stimulation than well-watered crops. This expectation has been cited in the literature. However, this result is not seen consistently in either the FACE studies or in the crop models. In contrast, leaf-level models of assimilation do consistently show this result. An analysis of the evidence from these models and from the data suggests that scale (canopy versus leaf), model calibration, and model complexity are factors in determining the sign and magnitude of the interaction between CO2 and water stress. We conclude from our study that the statement that 'water-stressed crops show greater CO2 stimulation than well-watered crops' cannot be held to be universally true. We also conclude, preliminarily, that the relationship between water stress and assimilation varies with scale. Accordingly, we provide some suggestions on how studies of a similar nature, using crop models of a range of complexity, could contribute further to understanding the roles of model calibration, model complexity and scale.
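A generic perturbed-parameter ensemble sketch, not GLAM itself: parameters are perturbed within plausible ranges, members inconsistent with an observational yield constraint are discarded, and the CO2 response of the remaining members is summarised; `run_crop_model`, the parameter ranges and the tolerance are placeholders.

```python
import numpy as np

def perturbed_parameter_ensemble(run_crop_model, ranges, obs_yield, tol, n=200, seed=0):
    """Return the % yield change under doubled CO2 across observation-consistent members.

    run_crop_model(theta, co2): user-supplied callable returning a yield.
    ranges: dict of parameter -> (low, high) bounds used for uniform perturbation."""
    rng = np.random.default_rng(seed)
    responses = []
    for _ in range(n):
        theta = {k: rng.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
        baseline = run_crop_model(theta, co2=360.0)
        if abs(baseline - obs_yield) <= tol:           # constraint from observations
            doubled = run_crop_model(theta, co2=720.0)
            responses.append(100.0 * (doubled - baseline) / baseline)
    return np.asarray(responses)
```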
Abstract:
Fixed transactions costs that prohibit exchange engender bias in supply analysis due to censoring of the sample observations. The associated bias in conventional regression procedures applied to censored data and the construction of robust methods for mitigating bias have been preoccupations of applied economists since Tobin [Econometrica 26 (1958) 24]. This literature assumes that the true point of censoring in the data is zero and, when this is not the case, imparts a bias to parameter estimates of the censored regression model. We conjecture that this bias can be significant; affirm this from experiments; and suggest techniques for mitigating this bias using Bayesian procedures. The bias-mitigating procedures are based on modifications of the key step that facilitates Bayesian estimation of the censored regression model; are easy to implement; work well in both small and large samples; and lead to significantly improved inference in the censored regression model. These findings are important in light of the widespread use of the zero-censored Tobit regression, and we investigate their consequences using data on milk-market participation in the Ethiopian highlands.
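For context, a sketch of the standard data-augmentation step that facilitates Bayesian estimation of the zero-censored Tobit model; the paper's bias-mitigating modification for an unknown, non-zero censoring point is not reproduced here.

```python
import numpy as np
from scipy.stats import truncnorm

def augment_censored(y, X, beta, sigma, censor_point=0.0):
    """Replace censored observations with draws of the latent y* below the censoring point.

    y: observed responses (censored at censor_point), X: design matrix,
    beta, sigma: current draws of the regression coefficients and error s.d."""
    y_star = np.asarray(y, dtype=float).copy()
    mu = X @ beta
    cens = y <= censor_point
    # One-sided truncated-normal draw on (-inf, censor_point] for each censored observation
    b = (censor_point - mu[cens]) / sigma
    y_star[cens] = truncnorm.rvs(-np.inf, b, loc=mu[cens], scale=sigma)
    return y_star
```

Within a Gibbs sampler, this augmentation step would alternate with standard conjugate draws of beta and sigma given the completed data y*.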