936 results for Modified barrier function
Abstract:
We examine the effect of subdividing the potential barrier along the reaction coordinate on Kramers' escape rate for a model potential. Using the known supersymmetric potential approach, we show the existence of an optimal number of subdivisions that maximizes the rate.
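For context, the quantity being optimized is Kramers' escape rate for diffusive barrier crossing. In the high-friction (Smoluchowski) limit it takes the standard textbook form (quoted here for orientation; this is not the paper's derivation):

```latex
% Kramers escape rate, high-friction limit (standard form)
k \;\approx\; \frac{\omega_0\,\omega_b}{2\pi\gamma}\,
  \exp\!\left(-\frac{\Delta E}{k_B T}\right)
```

where \omega_0 and \omega_b are the angular frequencies at the well bottom and barrier top, \gamma is the friction coefficient, and \Delta E the barrier height. Because the rate depends exponentially on \Delta E, subdividing the barrier into lower sub-barriers can raise the rate, which is why an optimal number of subdivisions can exist.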
Abstract:
We propose an exactly solvable model for the two-state curve-crossing problem. Our model assumes the coupling to be a delta function. It is used to calculate the effect of curve crossing on the electronic absorption spectrum and the resonance Raman excitation profile.
Abstract:
The theoretical optimization of the design parameters N_A, N_D and W_P has been carried out for efficient operation of an Au-p-n Si solar cell, including thermionic field emission, the dependence of lifetime and mobility on impurity concentrations, the dependence of absorption coefficient on wavelength, and the variation of barrier height (and hence the optimum thickness of the p region) with illumination. The optimized design parameters N_D = 5×10^20 m^−3, N_A = 3×10^24 m^−3 and W_P = 11.8 nm yield efficiency η = 17.1% (AM0) and η = 19.6% (AM1). These are reduced to 14.9% and 17.1% respectively if the metal layer series resistance and transmittance with a ZnS antireflection coating are included. A practical value of W_P = 97.0 nm gives an efficiency of 12.2% (AM1).
Abstract:
This letter presents a modified version of the grain boundary barrier model for polycrystalline semiconductors which takes into account the carrier transport in the bulk of the grain and the dynamic process of capture and release of free carriers by the grain boundary traps.
Abstract:
A modeling paradigm is proposed for covariate, variance and working correlation structure selection for longitudinal data analysis. Appropriate selection of covariates is pertinent to correct variance modeling, and selecting the appropriate covariates and variance function is vital to correlation structure selection. This leads to a stepwise model selection procedure that deploys a combination of different model selection criteria. Although these criteria share a common theoretical root based on approximating the Kullback-Leibler distance, they are designed to address different aspects of model selection and have different merits and limitations. For example, the extended quasi-likelihood information criterion (EQIC) with a covariance penalty performs well for covariate selection even when the working variance function is misspecified, but EQIC contains little information on correlation structures. The proposed model selection strategies are outlined and a Monte Carlo assessment of their finite sample properties is reported. Two longitudinal studies are used for illustration.
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and renders the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps:
- (i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
- (ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates are not collected;
- (iii) establish a predictive model for the concentration data, which incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
- (iv) obtain the sum of all the products of the predicted flow and the predicted concentration over the regular time intervals to represent an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall), and cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in model errors which result from intensive sampling during floods.
Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. This method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxide (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2 to 10 times, indicating severe biases. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when sampling bias is taken into account.
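The final step (iv) of the procedure reduces to a discrete flow-weighted sum. A minimal Python sketch of that step, with made-up flow and concentration predictions (the time series and rating-curve models themselves are not reproduced here):

```python
# Step (iv): approximate the pollutant load as the sum of predicted flow
# times predicted concentration over regular time intervals.
# Units are illustrative: flow in m^3/s, concentration in mg/L, interval in s.

def estimate_load(flows_m3s, concs_mgL, dt_s):
    """Load in kilograms: sum Q_i * C_i * dt.
    1 m^3/s * 1 mg/L = 1 g/s, so divide by 1000 to get kg."""
    if len(flows_m3s) != len(concs_mgL):
        raise ValueError("flow and concentration series must align in time")
    grams = sum(q * c * dt_s for q, c in zip(flows_m3s, concs_mgL))
    return grams / 1000.0

# Hypothetical 10-minute predictions over one hour of a flood event.
flows = [50.0, 120.0, 300.0, 260.0, 180.0, 90.0]   # m^3/s
concs = [20.0, 85.0, 140.0, 110.0, 70.0, 40.0]     # mg/L TSS
load_kg = estimate_load(flows, concs, dt_s=600)
```

In practice both series come from the fitted predictive models of steps (i) and (iii), evaluated on the same regular time grid.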
Abstract:
We had earlier proposed a hypothesis to explain the mechanism of perpetuation of immunological memory based on the operation of an idiotypic network in the complete absence of antigen. Experimental evidence was provided for memory maintenance through an anti-idiotypic antibody (Ab(2)) carrying the internal image of the antigen. In the present work, we describe a structural basis for such memory perpetuation through molecular modeling and structural analysis studies. A three-dimensional model of Ab(2) was generated, and the structure of the antigenic site on the hemagglutinin protein H of Rinderpest virus was modeled using the structural template of the hemagglutinin protein of Measles virus. Our results show that a large portion of the heavy chain containing the CDR regions of Ab(2) resembles the domain of the hemagglutinin housing the epitope regions. This similarity demonstrates that an internal image of the H antigen is formed in Ab(2), which provides a structural basis for the functional mimicry demonstrated earlier. This work brings out the importance of the structural similarity between a domain of the hemagglutinin protein and its corresponding Ab(2). It provides evidence that Ab(2) is indeed capable of functioning as a surrogate antigen and supports the earlier proposed relay hypothesis, which provides a mechanism for the maintenance of immunological memory.
Abstract:
Lead acid batteries are used in hybrid vehicles and telecommunications power supplies. For reliable operation of these systems, an indication of the battery's state of charge is essential. To determine the state of charge, a current integration method combined with open circuit voltage is implemented. To reduce the error in the current integration method, the dependence of available capacity on discharge current is determined, and the current integration method is modified to incorporate this factor. The experimental setup built to obtain the discharge characteristics of the battery is presented.
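A minimal sketch of the modified coulomb-counting idea. The abstract does not give the functional form of the capacity-current dependence, so a Peukert-style relation is assumed here purely for illustration, with hypothetical parameter values:

```python
# Modified current-integration (coulomb counting) state-of-charge estimate.
# Assumption: available capacity follows a Peukert-style relation
#   C(I) = C_ref * (I_ref / I) ** (k - 1)
# (the paper's exact empirical form is not given in the abstract).

def available_capacity_ah(i_amps, c_ref_ah=100.0, i_ref=5.0, k=1.15):
    """Effective capacity (Ah) at discharge current i_amps."""
    return c_ref_ah * (i_ref / i_amps) ** (k - 1)

def soc_after_discharge(soc0, i_amps, hours):
    """Integrate the discharge current against the current-dependent capacity."""
    c_eff = available_capacity_ah(i_amps)
    return soc0 - (i_amps * hours) / c_eff

# At the reference current the correction vanishes: C(5 A) = 100 Ah,
# so 10 h at 5 A consumes half the charge.
soc = soc_after_discharge(soc0=1.0, i_amps=5.0, hours=10.0)
```

At higher discharge currents the effective capacity shrinks, so the same integrated charge removes a larger fraction of the state of charge, which is the correction the abstract describes.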
Abstract:
Robust estimation often relies on a dispersion function that is more slowly varying at large values than the square function. However, the choice of tuning constant in dispersion functions may impact the estimation efficiency to a great extent. For a given family of dispersion functions such as the Huber family, we suggest obtaining the "best" tuning constant from the data so that the asymptotic efficiency is maximized. This data-driven approach can automatically adjust the value of the tuning constant to provide the necessary resistance against outliers. Simulation studies show that substantial efficiency can be gained by this data-dependent approach compared with the traditional approach in which the tuning constant is fixed. We briefly illustrate the proposed method using two datasets.
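The data-driven idea can be sketched for the Huber family in the location case: over a grid of candidate tuning constants c, estimate the asymptotic variance of the M-estimator, V(c) = E[psi_c(r)^2] / (E[psi_c'(r)])^2, from the residuals, and keep the c that minimizes it (i.e. maximizes asymptotic efficiency). This is an illustrative sketch of the principle, not the authors' exact procedure:

```python
import random

# Huber psi function: identity inside [-c, c], clipped outside.
def huber_psi(r, c):
    return max(-c, min(c, r))

def asymptotic_variance(residuals, c):
    """Empirical estimate of V(c) = E[psi^2] / (E[psi'])^2."""
    n = len(residuals)
    num = sum(huber_psi(r, c) ** 2 for r in residuals) / n
    den = sum(1.0 for r in residuals if abs(r) <= c) / n  # psi' is 0 or 1
    return num / den ** 2 if den > 0 else float("inf")

def best_tuning_constant(residuals, grid=None):
    """Pick the grid value of c with the smallest estimated variance."""
    grid = grid or [0.5 + 0.1 * i for i in range(30)]  # c in 0.5 .. 3.4
    return min(grid, key=lambda c: asymptotic_variance(residuals, c))

random.seed(1)
# Contaminated sample: mostly N(0, 1) plus a few gross outliers.
res = [random.gauss(0, 1) for _ in range(500)] + [25.0] * 10
c_star = best_tuning_constant(res)
```

With heavier contamination the selected c shrinks, giving more resistance against outliers; with clean Gaussian data it grows toward the efficient least-squares end of the family, mirroring the automatic adjustment described above.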
Abstract:
The approach of generalized estimating equations (GEE) is based on the framework of generalized linear models but allows for specification of a working correlation matrix for modeling within-subject correlations. The variance is often assumed to be a known function of the mean. This article investigates the impacts of misspecifying the variance function on estimators of the mean parameters for quantitative responses. Our numerical studies indicate that (1) correct specification of the variance function can improve the estimation efficiency even if the correlation structure is misspecified; (2) misspecification of the variance function impacts estimators for within-cluster covariates much more than those for cluster-level covariates; and (3) if the variance function is misspecified, correct choice of the correlation structure may not necessarily improve estimation efficiency. We illustrate the impacts of different variance functions using a real data set on cow growth.
Abstract:
Prostate cancer is a leading contributor to male cancer-related deaths worldwide. Kallikrein-related peptidases (KLKs) are serine proteases that exhibit deregulated expression in prostate cancer, with KLK3, or prostate specific antigen (PSA), being the widely-employed clinical biomarker for prostate cancer. Other KLKs, such as KLK2, show promise as prostate cancer biomarkers and, additionally, their altered expression has been utilised for the design of KLK-targeted therapies. There is also a large body of in vitro and in vivo evidence supporting their role in cancer-related processes. Here, we review the literature on studies to date investigating the potential of other KLKs, in addition to PSA, as biomarkers and in therapeutic options, as well as their current known functional roles in cancer progression. Increased knowledge of these KLK-mediated functions, including degradation of the extracellular matrix, local invasion, cancer cell proliferation, interactions with fibroblasts, angiogenesis, migration, bone metastasis and tumour growth in vivo, may help define new roles as prognostic biomarkers and novel therapeutic targets for this cancer.
Abstract:
Background: Sensorimotor function is degraded in patients after lower limb arthroplasty. Sensorimotor training is thought to improve sensorimotor skills; however, the optimal training stimulus with regard to volume, frequency, duration, and intensity is still unknown. The aim of this study, therefore, was firstly to quantify the progression of sensorimotor function after total hip (THA) or knee (TKA) arthroplasty and, as a second step, to evaluate the effects of different sensorimotor training volumes. Methods: 58 in-patients participated in this prospective cohort study during their rehabilitation after THA or TKA. Sensorimotor function was assessed using a test battery including measures of stabilization capacity, static balance, proprioception, and gait, along with self-reported pain and function. All participants were randomly assigned to one of three intervention groups performing sensorimotor training two, four, or six times per week. Outcome measures were taken at three instances: at baseline (pre), after 1.5 weeks (mid), and at the conclusion of the 3-week program (post). Results: All measurements showed significant improvements over time, with the exception of proprioception and static balance during quiet bipedal stance, which showed no significant main effects for time or intervention. There was no significant effect of sensorimotor training volume on any of the outcome measures. Conclusion: We were able to quantify improvements in measures of dynamic, but not static, sensorimotor function during the initial three weeks of rehabilitation following TKA/THA. Although sensorimotor improvements were independent of the training volume applied in the current study, long-term effects of sensorimotor training volume need to be investigated to optimize training stimulus recommendations.
Abstract:
In the analysis of tagging data, it has been found that the least-squares method, based on the increment function known as the Fabens method, produces biased estimates because individual variability in growth is not allowed for. This paper modifies the Fabens method to account for individual variability in the length asymptote. Significance tests using t-statistics or log-likelihood ratio statistics may be applied to show the level of individual variability. Simulation results indicate that the modified method reduces the biases in the estimates to negligible proportions. Tagging data from tiger prawns (Penaeus esculentus and Penaeus semisulcatus) and rock lobster (Panulirus ornatus) are analysed as an illustration.
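The baseline Fabens step fits the von Bertalanffy parameters (L_inf, K) to tagging increments by least squares, with expected increment E[dL] = (L_inf - L1)(1 - exp(-K dt)). A minimal sketch of that baseline on synthetic data follows; the paper's modification, replacing the fixed L_inf with an individually varying asymptote, is not reproduced here:

```python
import math

def fabens_sse(data, l_inf, k):
    """Sum of squared errors of predicted vs observed growth increments.
    data: list of (length_at_release, observed_increment, years_at_liberty)."""
    sse = 0.0
    for l1, dl, dt in data:
        pred = (l_inf - l1) * (1.0 - math.exp(-k * dt))
        sse += (dl - pred) ** 2
    return sse

def fabens_fit(data):
    """Crude grid search over (L_inf, K); a real fit would use Gauss-Newton."""
    grid = [(l, k / 100.0)
            for l in range(30, 81)          # L_inf candidates, 30..80
            for k in range(10, 201, 5)]     # K candidates, 0.10..2.00 per year
    return min(grid, key=lambda p: fabens_sse(data, p[0], p[1]))

# Synthetic noise-free increments generated with L_inf = 60, K = 0.50.
truth = [(l1, (60.0 - l1) * (1.0 - math.exp(-0.5 * dt)), dt)
         for l1, dt in [(20, 0.5), (30, 1.0), (40, 1.5), (50, 2.0)]]
l_inf_hat, k_hat = fabens_fit(truth)
```

Because every animal here shares the same L_inf, the fit recovers the true parameters; with real tagging data, individual variability in L_inf biases this estimator, which is the problem the modified method addresses.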
Abstract:
Recent decreases in costs, and improvements in performance, of silicon array detectors open a range of potential applications of relevance to plant physiologists, associated with spectral analysis in the visible and short-wave near infra-red (far-red) spectrum. The performance characteristics of three commercially available ‘miniature’ spectrometers based on silicon array detectors operating in the 650–1050-nm spectral region (MMS1 from Zeiss, S2000 from Ocean Optics, and FICS from Oriel, operated with a Larry detector) were compared with respect to the application of non-invasive prediction of sugar content of fruit using near infra-red spectroscopy (NIRS). The FICS–Larry gave the best wavelength resolution; however, the narrow slit and small pixel size of the charge-coupled device detector resulted in a very low sensitivity, and this instrumentation was not considered further. Wavelength resolution was poorer with the MMS1 than with the S2000 (e.g. full width at half maximum of the 912 nm Hg peak: 13 and 2 nm for the MMS1 and S2000, respectively), but the large pixel height of the array used in the MMS1 gave it sensitivity comparable to the S2000. The signal-to-signal standard error ratio of spectra was greater by an order of magnitude with the MMS1, relative to the S2000, at both near-saturation and low light levels. Calibrations were developed using reflectance spectra of filter paper soaked in a range of concentrations (0–20% w/v) of sucrose, using a modified partial least squares procedure. Calibrations developed with the MMS1 were superior to those developed using the S2000 (e.g. coefficient of correlation of 0.90 and 0.62, and standard error of cross-validation of 1.9 and 5.4%, respectively), indicating the importance of a high signal-to-noise ratio over wavelength resolution for calibration accuracy.
The design of a bench top assembly using the MMS1 for the non-invasive assessment of mesocarp sugar content of (intact) melon fruit is reported in terms of light source and angle between detector and light source, and optimisation of math treatment (derivative condition and smoothing function).
Abstract:
A tethered remote instrument package (TRIP) has been developed for biological surveys over Queensland's continental shelf and slope. The present system, evolved from an earlier sled configuration, is suspended above the sea bed and towed at low speeds. Survey information is collected through video and film cameras while instrument and environmental variables are handled by a minicomputer. The operator was able to "fly" the instrument package above the substrate by using an altitude echosounder, forward-looking sonar and real-time television viewing. Unwanted movements of the viewing system were stabilized through a gyro-controlled camera-head panning system. The hydrodynamic drag of the umbilical presented a major control problem, which could be overcome only by a reduction in towing speed. Despite the constraints of towing a device such as this through the coral reef environment, the package performed well during a recent biological survey, where it operated at 50% of its 350 m design depth.