889 results for Rectified bias
Abstract:
The tip of a scanning tunneling microscope (STM) can be used to dehydrogenate freely-diffusing tetrathienoanthracene (TTA) molecules on Cu(111), trapping the molecules into metal-coordinated oligomeric structures. The process proceeds at bias voltages above ∼3 V and produces organometallic structures identical to those resulting from the thermally-activated cross-coupling of a halogenated analogue. The process appears to be substrate dependent: no oligomerization was observed on Ag(111) or HOPG. This approach demonstrates the possibility of controlled synthesis and nanoscale patterning of 2D oligomer structures on selected surfaces.
Abstract:
A fuzzy waste-load allocation model, FWLAM, is developed for water quality management of a river system using fuzzy multiple-objective optimization. An important feature of this model is its capability to incorporate the aspirations and conflicting objectives of the pollution control agency and dischargers. The vagueness associated with specifying the water quality criteria and fraction removal levels is modeled in a fuzzy framework. The goals related to the pollution control agency and dischargers are expressed as fuzzy sets. The membership functions of these fuzzy sets are considered to represent the variation of satisfaction levels of the pollution control agency and dischargers in attaining their respective goals. Two formulations—namely, the MAX-MIN and MAX-BIAS formulations—are proposed for FWLAM. The MAX-MIN formulation maximizes the minimum satisfaction level in the system. The MAX-BIAS formulation maximizes a bias measure, giving a solution that favors the dischargers. Maximization of the bias measure attempts to keep the satisfaction levels of the dischargers away from the minimum satisfaction level and that of the pollution control agency close to the minimum satisfaction level. Most of the conventional water quality management models use waste treatment cost curves that are uncertain and nonlinear. Unlike such models, FWLAM avoids the use of cost curves. Further, the model provides the flexibility for the pollution control agency and dischargers to specify their aspirations independently.
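The MAX-MIN formulation described above can be sketched in a few lines: each party's goal becomes a membership function mapping a decision variable to a satisfaction level in [0, 1], and the chosen decision maximizes the minimum satisfaction. The linear goal shapes, the fraction-removal decision variable, and all numbers below are illustrative assumptions, not the paper's formulation or data.

```python
def linear_membership(x, worst, best):
    """Satisfaction rises linearly from 0 at `worst` to 1 at `best` (clamped)."""
    m = (x - worst) / (best - worst)
    return max(0.0, min(1.0, m))

# Hypothetical goals over a fraction-removal level x in [0, 1]:
# the pollution control agency prefers high removal, the discharger low.
agency = lambda x: linear_membership(x, worst=0.30, best=0.90)
discharger = lambda x: linear_membership(x, worst=0.90, best=0.35)

def max_min(goals, candidates):
    """MAX-MIN formulation: pick the candidate maximizing the minimum satisfaction."""
    return max(candidates, key=lambda x: min(g(x) for g in goals))

removal = max_min([agency, discharger], [i / 100 for i in range(101)])
```

With these two linear goals the MAX-MIN solution sits where the rising and falling membership functions cross, balancing the two parties' satisfaction levels.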
Abstract:
Manganite-like double perovskite Sr2TiMnO6 (STMO) ceramics, fabricated from powders synthesized via the solid-state reaction route, exhibited dielectric constants as high as ∼10^5 in the low-frequency range (100 Hz-10 kHz) at room temperature. The Maxwell-Wagner type of relaxation mechanism was found to be the most appropriate to rationalize such high dielectric constant values, akin to those observed in materials such as KxTiyNi(1-x-y)O and CaCu3Ti4O12. Dielectric measurements carried out on samples with different thicknesses and electrode materials reflected the influence of extrinsic effects. Impedance studies (100 Hz-10 MHz) in the 180-300 K temperature range revealed the presence of two dielectric relaxations, corresponding to the grain boundary and the electrode. The dielectric response of the grain boundary was found to be weakly dependent on the dc bias field (up to 11 V/cm). However, owing to electrode polarization, the applied ac/dc field had a significant effect on the low-frequency dielectric response. At low temperatures (100-180 K), the dc conductivity of STMO followed variable-range hopping behavior. Above 180 K, it followed Arrhenius behavior owing to a thermally activated conduction process. The bulk conductivity relaxation, owing to the localized hopping of charge carriers, obeyed the typical universal dielectric response.
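The two dc-conduction regimes named in the abstract have standard textbook closed forms: Arrhenius activation above 180 K and Mott variable-range hopping below. A minimal sketch of those generic forms follows; the parameter values used in the assertions are illustrative, not the paper's fitted values.

```python
import math

KB_EV = 8.617333e-5  # Boltzmann constant in eV/K

def arrhenius(T, sigma0, Ea):
    """Thermally activated conduction: sigma(T) = sigma0 * exp(-Ea / (kB*T))."""
    return sigma0 * math.exp(-Ea / (KB_EV * T))

def mott_vrh_3d(T, sigma0, T0):
    """3-D Mott variable-range hopping: sigma(T) = sigma0 * exp(-(T0/T)**(1/4))."""
    return sigma0 * math.exp(-((T0 / T) ** 0.25))
```

Both forms give conductivity increasing with temperature; fitting log-conductivity against 1/T versus T^(-1/4) is the usual way to distinguish the two regimes.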
Abstract:
Low-temperature electroluminescence (EL) is observed in n-type modulation-doped AlGaAs/InGaAs/GaAs quantum well samples by applying a positive voltage between the semitransparent Au gate and alloyed Au–Ge Ohmic contacts made on the top surface of the samples. We attribute the observed EL from the samples to impact ionization in the InGaAs QW. A redshift in the EL spectra is observed with increasing gate bias, and is attributed to band gap renormalization due to many-body effects and the quantum-confined Stark effect.
Abstract:
Background The potential effect of ginger on platelet aggregation is a widely-cited concern both within the published literature and to clinicians; however, there has been no systematic appraisal of the evidence to date. Methods Using the PRISMA guidelines, we systematically reviewed the results of clinical and observational trials regarding the effect of ginger on platelet aggregation in adults compared to either placebo or baseline data. Studies included in this review stipulated the independent variable was a ginger preparation or isolated ginger compound, and used measures of platelet aggregation as the primary outcome. Results Ten studies were included, comprising eight clinical trials and two observational studies. Of the eight clinical trials, four reported that ginger reduced platelet aggregation, while the remaining four reported no effect. The two observational studies also reported mixed findings. Discussion Many of the studies appraised for this review had moderate risks of bias. Methodology varied considerably between studies, notably the timeframe studied, dose of ginger used, and the characteristics of subjects recruited (e.g. healthy vs. patients with chronic diseases). Conclusion The evidence that ginger affects platelet aggregation and coagulation is equivocal and further study is needed to definitively address this question.
Abstract:
Polymerized carbon nanotubes (CNTs) are promising materials for polymer-based electronics and electro-mechanical sensors. A polymer nanolayer on CNTs widens the scope for functionalizing them in various ways for polymer electronic devices. In this paper, we show experimentally, for the first time, that a resistive polymer layer with carbon nanoparticle inclusions on polymerized carbon nanotubes gives rise to interesting dynamics that can be exploited. We first show analytically that the relative change in the resistance of a single isolated semiconductive nanotube is directly proportional to the axial and torsional dynamic strains when the strains are small, whereas, in polymerized CNTs, the viscoelasticity of the polymer and its effective electrical polarization give rise to nonlinear effects as a function of frequency and bias voltage. A simplified formula is derived to account for these effects and validated against experimental results. CNT–polymer-based channels have been fabricated on a PZT substrate, and the strain-sensing performance of this one-dimensional channel structure is reported. For a single frequency-modulated sine pulse as input, which is common in elastic- and acoustic-wave-based diagnostics, imaging, microwave devices, energy harvesting, etc., the performance of the fabricated channel has been found to be promising.
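The small-strain limit stated above, where the relative resistance change of a single nanotube is linear in the axial and torsional strains, can be written as a one-line model. The gauge coefficients g_axial and g_torsion below are hypothetical placeholders, and the nonlinear polymer effects discussed in the paper are deliberately omitted.

```python
def relative_resistance_change(eps_axial, gamma_torsion,
                               g_axial=2.0, g_torsion=0.5):
    """Small-strain linear model: dR/R = g_axial*eps + g_torsion*gamma.

    g_axial and g_torsion are illustrative gauge coefficients,
    not values from the paper.
    """
    return g_axial * eps_axial + g_torsion * gamma_torsion
```

The linearity means doubling both strains doubles dR/R, which is exactly the property the paper reports breaking down once the viscoelastic polymer layer is added.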
Abstract:
Purpose Melanopsin-expressing retinal ganglion cells (mRGCs) have non-image-forming functions including mediation of the pupil light reflex (PLR). There is limited knowledge about mRGC function in retinal disease. Initial retinal changes in age-related macular degeneration (AMD) occur in the paracentral region where mRGCs have their highest density, making them vulnerable during disease onset. In this cross-sectional clinical study, we measured the PLR to determine if mRGC function is altered in early stages of macular degeneration. Methods Pupil responses were measured in 8 early AMD patients (AREDS 2001 classification; mean age 72.6 ± 7.2 years, 5M and 3F) and 12 healthy control participants (mean age 66.6 ± 6.1 years, 8M and 4F) using a custom-built Maxwellian-view pupillometer. Stimuli were 0.5 Hz sinewaves (10 s duration, 35.6° diameter) of short-wavelength light (464 nm, blue; retinal irradiance = 14.5 log quanta·cm⁻²·s⁻¹) to produce high melanopsin excitation, and of long-wavelength light (638 nm, red; retinal irradiance = 14.9 log quanta·cm⁻²·s⁻¹) to bias activation to the outer retina and provide a control. Baseline pupil diameter was determined during a 10 s pre-stimulus period. The post-illumination pupil response (PIPR) was recorded for 40 s. The 6 s PIPR and maximum pupil constriction were expressed as percentage baseline (M ± SD). Results The blue PIPR was significantly less sustained (p<0.01) in the early AMD group (75.49 ± 7.88%) than in the control group (58.28 ± 9.05%). The red PIPR was not significantly different (p>0.05) between the early AMD (84.79 ± 4.03%) and control (82.01 ± 5.86%) groups. Maximum constriction amplitudes in the early AMD group for blue (43.67 ± 6.35%) and red (48.64 ± 6.49%) stimuli were not significantly different from those of the control group for blue (39.94 ± 3.66%) and red (44.98 ± 3.15%) stimuli (p>0.05). Conclusions These results are suggestive of inner retinal mRGC deficits in early AMD.
This non-invasive, objective measure of pupil responses may provide a new method for quantifying mRGC function and monitoring AMD progression.
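The pupil metrics in this abstract reduce to simple ratios against the pre-stimulus baseline. A sketch of how the 6-s PIPR and the maximum constriction amplitude could be computed from a diameter trace; the function names and the synthetic trace in the assertions are illustrative, not the study's protocol or data.

```python
def percent_baseline(diameter, baseline):
    """Express a pupil diameter as a percentage of the pre-stimulus baseline."""
    return 100.0 * diameter / baseline

def pipr_6s(times, diameters, baseline, t_offset=0.0):
    """6-s PIPR: the diameter sampled closest to 6 s after stimulus
    offset (at time t_offset), expressed as % baseline."""
    t, d = min(zip(times, diameters), key=lambda p: abs(p[0] - (t_offset + 6.0)))
    return percent_baseline(d, baseline)

def max_constriction(diameters, baseline):
    """Maximum constriction amplitude: the smallest diameter as % baseline."""
    return percent_baseline(min(diameters), baseline)
```

A more sustained response keeps the pupil constricted longer, so a lower 6-s PIPR percentage (further below 100%) indicates a more sustained melanopsin-driven response.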
Abstract:
The shape of tracheal cartilage has been widely treated as symmetric in analytical and numerical models. However, according to both histological images and in vivo medical images, tracheal cartilage has a highly asymmetric shape. Treating the cartilage as a symmetric structure induces bias in the calculation of collapse behavior, as well as of compliance and muscular stress, yet this has rarely been discussed. In this paper, tracheal collapse is modeled with the asymmetric shape taken into account. For comparison, a symmetric shape, reconstructed from half of the cartilage, is also presented. Cross-sectional area, airway compliance and stress in the muscular membrane, as determined from the asymmetric and symmetric shapes, are compared. The results indicate that the symmetric assumption introduces a small error, around 5%, in predicting the cross-sectional area under loading conditions. The relative error in compliance exceeds 10%; in particular, when the pressure is close to zero, the error can exceed 50%. The symmetric-shape model also differs significantly in predicting stress in the muscular membrane, either under- or over-estimating it. In conclusion, tracheal cartilage should not be treated as a symmetric structure. The results obtained in this study are helpful in evaluating the error induced by this geometric assumption.
Abstract:
Commercial environments may receive only a fraction of the genetic gains for growth rate predicted from the selection environment. This fraction is the result of undesirable genotype-by-environment interactions (G x E) and is measured by the genetic correlation (r(g)) of growth between environments. Rapid estimates of genetic correlation achieved in one generation are notoriously difficult to obtain with precision. A new design is proposed in which genetic correlations can be estimated using artificial mating from cryopreserved semen and unfertilised eggs stripped from a single female. We compare a traditional phenotype analysis of growth to a threshold model in which only the largest fish are genotyped for sire identification. The threshold model was robust to differences in family mortality of up to 30%. The design is unique in that it negates potential re-ranking of families caused by an interaction between common maternal environmental effects and the growing environment. The design is suitable for rapid assessment of G x E over one generation, with a true genetic correlation of 0.70 yielding standard errors as low as 0.07. Different design scenarios were tested for bias and accuracy across a range of heritability values, numbers of half-sib families created, numbers of progeny within each full-sib family, numbers of fish genotyped, numbers of fish stocked, differing family survival rates, and various simulated genetic correlation levels.
Abstract:
We consider estimating the total load from frequent flow data but less frequent concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. This method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One data set, from the Burdekin River, consists of total suspended sediment (TSS), nitrogen oxides (NOx) and gauged flow for 1997. The other, from the Tully River, covers the period July 2000 to June 2008.
For NOx in the Burdekin, the new estimates are very similar to the ratio estimates even when there is no relationship between the concentration and the flow. However, for the Tully data set, incorporating the additional predictive variables, namely the discounted flow and the flow phases (rising or recessing), substantially improved the model fit, and thus the certainty with which the load is estimated.
Abstract:
Environmental data usually include measurements, such as water quality data, that fall below detection limits because of limitations of the instruments or of certain analytical methods used. The fact that some responses are not detected needs to be properly taken into account in the statistical analysis of such data. However, it is well known that analyzing a data set with detection limits is challenging, and we often have to rely on traditional parametric methods or simple imputation methods. Distributional assumptions can lead to biased inference, and justification of distributions is often not possible when the data are correlated and a large proportion of the data is below detection limits. The extent of bias is usually unknown. To draw valid conclusions, and hence provide useful advice for environmental management authorities, it is essential to develop and apply an appropriate statistical methodology. This paper proposes rank-based procedures for analyzing non-normally distributed data collected at different sites over a period of time in the presence of multiple detection limits. To account for temporal correlations within each site, we propose an optimal linear combination of estimating functions and apply the induced smoothing method to reduce the computational burden. Finally, we apply the proposed method to water quality data collected in the Susquehanna River Basin in the United States of America, which clearly demonstrates the advantages of the rank regression models.
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized in four steps:
(i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
(ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates were not collected;
(iii) establish a predictive model for the concentration data that incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
(iv) sum the products of the predicted flow and the predicted concentration over the regular time intervals to obtain an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in model errors, which results from intensive sampling during floods.
Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. This method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2- to 10-fold, indicating severe bias. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when the sampling bias is taken into account.
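The four-step load estimation procedure lends itself to a compact sketch: fit a rating curve for concentration against flow, predict concentration at every regular flow interval, and sum flow times concentration. This minimal version uses only a basic log-log rating curve (the paper's additional predictors, such as discounted flow and hydrograph phase, are omitted), and all data in the assertions are synthetic.

```python
import math

def fit_rating_curve(flows, concs):
    """Least-squares fit of log(C) = a + b*log(Q) (basic rating curve)."""
    xs = [math.log(q) for q in flows]
    ys = [math.log(c) for c in concs]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return ybar - b * xbar, b  # intercept a, slope b

def estimate_load(flow_series, a, b, dt):
    """Steps (iii)-(iv): predict C at each regular interval, sum Q*C*dt."""
    return sum(q * math.exp(a + b * math.log(q)) * dt for q in flow_series)
```

Because the load is a flow-weighted sum, errors in the rating curve at high flows dominate the load error, which is why the paper's flood-specific predictors matter.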
Abstract:
In analysis of longitudinal data, the variance matrix of the parameter estimates is usually estimated by the 'sandwich' method, in which the variance for each subject is estimated by its residual products. We propose smooth bootstrap methods by perturbing the estimating functions to obtain 'bootstrapped' realizations of the parameter estimates for statistical inference. Our extensive simulation studies indicate that the variance estimators by our proposed methods can not only correct the bias of the sandwich estimator but also improve the confidence interval coverage. We applied the proposed method to a data set from a clinical trial of antibiotics for leprosy.
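The idea of perturbing the estimating functions can be illustrated on the simplest estimating equation, sum_i w_i*(x_i - mu) = 0: each bootstrap replicate draws positive random weights, re-solves for mu, and the spread of the replicates estimates the standard error. This toy sketch (ordinary sample mean, exponential weights) illustrates only the perturbation idea, not the paper's longitudinal-data method.

```python
import random
import statistics

def perturbation_bootstrap_se(x, reps=2000, seed=42):
    """Standard error of the mean via perturbed estimating functions.

    Each replicate solves sum_i w_i*(x_i - mu) = 0 with random positive
    weights w_i ~ Exp(1), giving the weighted mean sum(w*x)/sum(w).
    """
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        w = [rng.expovariate(1.0) for _ in x]
        estimates.append(sum(wi * xi for wi, xi in zip(w, x)) / sum(w))
    return statistics.stdev(estimates)
```

For the plain mean this reproduces the familiar sd/sqrt(n) standard error; the appeal of the approach is that the same perturbation recipe applies to estimating equations with no closed-form variance.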
Abstract:
The Fabens method is commonly used to estimate the growth parameters k and l∞ in the von Bertalanffy model from tag-recapture data. However, the Fabens method has an inherent bias when individual growth is variable. This paper presents an asymptotically unbiased method using a maximum likelihood approach that takes account of individual variability in both maximum length and age-at-tagging. It is assumed that each individual's growth follows a von Bertalanffy curve with its own maximum length and age-at-tagging. The parameter k is assumed to be constant, to ensure that the mean growth follows a von Bertalanffy curve and to avoid overparameterization. Our method also makes more efficient use of the measurements at tagging and recapture and includes diagnostic techniques for checking distributional assumptions. The method is reasonably robust and performs better than the Fabens method when individual growth differs from the von Bertalanffy relationship. When measurement error is negligible, the estimation involves maximizing the profile likelihood of one parameter only. The method is applied to tag-recapture data for the grooved tiger prawn (Penaeus semisulcatus) from the Gulf of Carpentaria, Australia.
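For reference, the mean-growth relations behind this discussion are the von Bertalanffy curve, L(t) = L_inf*(1 - exp(-k*(t - t0))), and the Fabens expected increment between tagging and recapture, (L_inf - L1)*(1 - exp(-k*dt)). A minimal sketch of both, with illustrative parameter values in the assertions:

```python
import math

def vb_length(t, l_inf, k, t0=0.0):
    """von Bertalanffy mean length at age t: L_inf*(1 - exp(-k*(t - t0)))."""
    return l_inf * (1.0 - math.exp(-k * (t - t0)))

def fabens_increment(l1, dt, l_inf, k):
    """Expected Fabens growth increment over time-at-liberty dt,
    starting from length-at-tagging l1 (age drops out of the formula)."""
    return (l_inf - l1) * (1.0 - math.exp(-k * dt))
```

The increment formula depends only on length-at-tagging and time-at-liberty, not on age; the Fabens bias discussed above arises when the individual L_inf generating the increment is treated as independent of the L_inf that generated the initial length.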
Abstract:
Estimation of von Bertalanffy growth parameters has received considerable attention in fisheries research. Since Sainsbury (1980, Can. J. Fish. Aquat. Sci. 37: 241-247) much of this research effort has centered on accounting for individual variability in the growth parameters. In this paper we demonstrate that, in analysis of tagging data, Sainsbury's method and its derivatives do not, in general, satisfactorily account for individual variability in growth, leading to inconsistent parameter estimates (the bias does not tend to zero as sample size increases to infinity). The bias arises because these methods do not use appropriate conditional expectations as a basis for estimation. This bias is found to be similar to that of the Fabens method. Such methods would be appropriate only under the assumption that the individual growth parameters that generate the growth increment were independent of the growth parameters that generated the initial length. However, such an assumption would be unrealistic. The results are derived analytically, and illustrated with a simulation study. Until techniques that take full account of the appropriate conditioning have been developed, the effect of individual variability on growth has yet to be fully understood.