959 results for Delphi method
Abstract:
A new method has been developed for quantifying 2-hydroxyethylated cysteine, an adduct formed in blood proteins after human exposure to ethylene oxide, by reversed-phase HPLC with fluorometric detection. The specific adduct is analysed in albumin and in globin. After isolation of albumin and globin from blood, acid hydrolysis of the protein and precolumn derivatisation of the digest with 9-fluorenylmethoxycarbonyl chloride, the levels of derivatised S-hydroxyethylcysteine are determined by RP-HPLC with fluorescence detection, with a detection limit of 8 nmol/g protein. Background levels of S-hydroxyethylcysteine were quantified in both albumin and globin, with special consideration of the glutathione S-transferase GSTT1 and GSTM1 polymorphisms. The GSTT1 polymorphism had a marked influence on the physiological background alkylation of cysteine: while S-hydroxyethylcysteine levels in "non-conjugators" were between 15 and 50 nmol/g albumin, "low conjugators" displayed levels between 8 and 21 nmol/g albumin, and "high conjugators" did not show levels above the detection limit. The human GSTM1 polymorphism had no apparent effect on background levels of blood protein 2-hydroxyethylation.
Abstract:
This study demonstrates a novel technique for preparing drug colloid probes to determine the adhesion force between a model drug, salbutamol sulphate (SS), and the surfaces of polymer microparticles intended as carriers for the dispersion of drug particles from dry powder inhaler (DPI) formulations. Model silica probes of approximately 4 µm in size, similar to a drug particle used in DPI formulations, were coated with a saturated SS solution with the aid of capillary forces acting between the silica probe and the drug solution. The developed method for ensuring a smooth and uniform layer of SS on the silica probe was validated using X-ray Photoelectron Spectroscopy (XPS) and Scanning Electron Microscopy (SEM). Using the same technique, silica microspheres pre-attached to the AFM cantilever were coated with SS. The adhesion forces between the probes (the uncoated silica probe and the drug-coated silica, i.e. the drug probe) and polymer surfaces (hydrophilic and hydrophobic) were determined. Our experimental results showed that the technique for preparing the drug probe is robust and can be used to determine the adhesion force between hydrophilic/hydrophobic drug probes and carrier surfaces, giving a better understanding of drug-carrier adhesion forces in DPI formulations.
Abstract:
The properties of CdS nanoparticles incorporated onto mesoporous TiO2 films by a successive ionic layer adsorption and reaction (SILAR) method were investigated by Raman spectroscopy, UV-visible spectroscopy, transmission electron microscopy (TEM) and X-ray photoelectron spectroscopy (XPS). High resolution TEM indicated that the synthesized CdS particles were hexagonal phase and that the particle sizes were less than 5 nm when fewer than 9 SILAR cycles were used. A quantum size effect was observed for the CdS-sensitized TiO2 films prepared with up to 9 SILAR cycles. The band gap of the CdS nanoparticles decreased from 2.65 eV to 2.37 eV as the number of SILAR cycles increased from 1 to 11. Investigation of the stability of the CdS/TiO2 films in air under illumination (440.6 µW/cm²) showed that the photodegradation rate was up to 85% per day for the sample prepared with 3 SILAR cycles. XPS analysis indicated that the photodegradation was due to the oxidation of CdS, leading to the transformation from sulphide to sulphate (CdSO4). Furthermore, the degradation rate was strongly dependent upon the particle size of CdS: smaller particles degraded faster. This size-dependent photo-induced oxidation was rationalized by the size-dependent distribution of surface atoms of the CdS particles. Molecular dynamics (MD) simulation indicated that the surface sulphide anions of a large CdS particle, such as the one made with 11 cycles (CdS11, particle size = 5.6 nm), account for 9.6% of the material, whereas this value increases to 19.2% for the smaller particles made with 3 cycles (CdS3, particle size: 2.7 nm). Nevertheless, CdS nanoparticles coated with ZnS showed significantly enhanced stability under illumination in air. A ZnS coating layer prepared using four SILAR cycles gave nearly 100% protection of CdS from photo-induced oxidation, suggesting the formation of a nearly complete coating layer on the CdS nanoparticles.
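The size dependence of the surface-atom fraction can be illustrated with a simple geometric shell model (an approximation of our own, not the MD simulation used in the study): treating a particle as a sphere of diameter d, the fraction of material within a surface shell of thickness t is 1 - ((d - 2t)/d)^3. A fitted shell thickness of roughly 0.093 nm happens to reproduce both reported values.

```python
def surface_fraction(diameter_nm: float, shell_nm: float) -> float:
    """Fraction of a sphere's volume (a proxy for atom count) lying
    within a surface shell of the given thickness."""
    core = max(diameter_nm - 2.0 * shell_nm, 0.0) / diameter_nm
    return 1.0 - core ** 3

# A shell thickness of ~0.093 nm (a fitted value, not from the paper)
# roughly reproduces the reported MD surface-sulphide fractions:
print(round(surface_fraction(5.6, 0.093), 3))   # CdS11, ~0.096
print(round(surface_fraction(2.7, 0.093), 3))   # CdS3,  ~0.193
```

The cube law makes clear why the smaller particle's surface fraction roughly doubles even though its diameter only halves.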
Abstract:
BACKGROUND: The use of salivary diagnostics is increasing because of its noninvasiveness, ease of sampling, and the relatively low risk of contracting infectious organisms. Saliva has been used as a biological fluid to identify and validate RNA targets in head and neck cancer patients. The goal of this study was to develop a robust, easy, and cost-effective method for isolating high yields of total RNA from saliva for downstream expression studies. METHODS: Oral whole saliva (200 µL) was collected from healthy controls (n = 6) and from patients with head and neck cancer (n = 8). The method developed in-house used QIAzol lysis reagent (Qiagen) to extract RNA from saliva (both cell-free supernatants and cell pellets), followed by isopropyl alcohol precipitation, cDNA synthesis, and real-time PCR analyses for the genes encoding beta-actin ("housekeeping" gene) and histatin (a salivary gland-specific gene). RESULTS: The in-house QIAzol lysis reagent produced a high yield of total RNA (0.89–7.1 µg) from saliva (cell-free saliva and cell pellet) after DNase treatment. The ratio of the absorbance measured at 260 nm to that at 280 nm ranged from 1.6 to 1.9. The commercial kit produced a 10-fold lower RNA yield. Using our method with the QIAzol lysis reagent, we were also able to isolate RNA from archived saliva samples that had been stored without RNase inhibitors at −80 °C for >2 years. CONCLUSIONS: Our in-house QIAzol method is robust, is simple, provides RNA at high yields, and can be implemented to allow saliva transcriptomic studies to be translated into a clinical setting.
Abstract:
Measurements of plasma natriuretic peptides (NT-proBNP, proBNP and BNP) are used to diagnose heart failure, but these peptides are expensive to produce. We describe rapid, cheap and facile production of proteins for heart failure immunoassays, with the aim of obtaining high yields cost-effectively. DNA encoding N-terminally His-tagged NT-proBNP and proBNP was cloned into the pJexpress404 vector. ProBNP and NT-proBNP peptides were expressed in Escherichia coli, purified and refolded in vitro. The analytical performance of these peptides was comparable with that of commercial analytes: the NT-proBNP EC50 was 2.6 ng/ml for the recombinant material and 5.3 ng/ml for the commercial material, and the EC50 values for recombinant and commercial proBNP were 3.6 and 5.7 ng/ml, respectively. The total yield of purified refolded peptide was 1.75 mg/l for NT-proBNP and 0.088 mg/l for proBNP. This approach may also be useful for expressing other protein analytes for immunoassay applications.
Abstract:
In a tag-based recommender system, the multi-dimensional
Abstract:
Demand response can be used to provide regulation services in electricity markets. Retailers can bid in a day-ahead market and respond to the real-time regulation signal by load control. This paper proposes a new stochastic ranking method to provide regulation services via demand response. A pool of thermostatically controllable appliances (TCAs), such as air conditioners and water heaters, is adjusted using a direct load control method. The selection of appliances is based on a probabilistic ranking technique utilising attributes such as temperature variation and the statuses of the TCAs. These attributes are stochastically forecast for the next time step using day-ahead information. System performance is analysed with a sample regulation signal, and the network's capability to provide regulation services across the seasons is assessed. The effect of network size on the regulation services is also investigated.
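The ranking-and-selection step can be sketched as follows. This is a toy illustration under assumed attribute names (`headroom_C`, `on`, `power_kw` are all hypothetical); the paper's actual probabilistic forecasting and ranking model is not reproduced here. Each TCA gets a score from its forecast temperature headroom and on/off status, and the highest-ranked units are curtailed until the requested regulation power is covered.

```python
def rank_tcas(tcas):
    """Rank TCAs by a simple illustrative score: units that are ON and
    far from their comfort limit are the best curtailment candidates.
    (Field names and scoring are assumptions, not the paper's model.)"""
    return sorted(tcas, key=lambda a: a["headroom_C"] * a["on"], reverse=True)

def select_for_regulation(tcas, target_kw):
    """Greedily pick ranked TCAs until the regulation target is covered."""
    chosen, total = [], 0.0
    for a in rank_tcas(tcas):
        if total >= target_kw:
            break
        if a["on"]:
            chosen.append(a["name"])
            total += a["power_kw"]
    return chosen, total

pool = [
    {"name": "AC-1", "on": 1, "headroom_C": 2.5, "power_kw": 3.0},
    {"name": "WH-1", "on": 1, "headroom_C": 0.5, "power_kw": 4.0},
    {"name": "AC-2", "on": 0, "headroom_C": 3.0, "power_kw": 3.0},
    {"name": "WH-2", "on": 1, "headroom_C": 1.5, "power_kw": 4.0},
]
print(select_for_regulation(pool, 6.0))  # → (['AC-1', 'WH-2'], 7.0)
```

In the paper the attributes are themselves stochastic forecasts, so the score would be an expectation rather than a point value as above.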
Abstract:
The ambiguity acceptance test is an important quality control procedure in high precision GNSS data processing. Although ambiguity acceptance test methods have been extensively investigated, their threshold determination is still not well understood. Currently, the threshold is determined with either the empirical approach or the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis, while the FF-approach is theoretically rigorous but computationally demanding. Hence, the key to the threshold determination problem is how to determine the threshold efficiently and in a reasonable way. In this study, a new threshold determination method, named the threshold function method, is proposed to reduce the complexity of the FF-approach. The threshold function method simplifies the FF-approach through a modelling procedure and an approximation procedure. The modelling procedure uses a rational function model to describe the relationship between the FF-difference test threshold and the integer least-squares (ILS) success rate. The approximation procedure replaces the ILS success rate with the easy-to-calculate integer bootstrapping (IB) success rate. The corresponding modelling error and approximation error are analysed with simulated data to avoid nuisance biases and the impact of an unrealistic stochastic model. The results indicate that the proposed method can greatly simplify the FF-approach without introducing significant modelling error. The threshold function method makes fixed failure rate threshold determination feasible for real-time applications.
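The shape of such a threshold function can be sketched as below. The rational-function form and its coefficients here are placeholders chosen only to show the mechanics (a higher success rate permits a smaller threshold); the fitted coefficients from the study itself are not reproduced.

```python
def threshold_function(p_success: float,
                       a0: float = 1.0, a1: float = -0.9,
                       b1: float = -0.8) -> float:
    """Illustrative rational-function threshold model
        mu(Ps) = (a0 + a1 * Ps) / (1 + b1 * Ps).
    Coefficients are hypothetical placeholders, not the paper's fit;
    Ps would in practice be the (IB-approximated) success rate."""
    if not 0.0 <= p_success < 1.0:
        raise ValueError("success rate must lie in [0, 1)")
    return (a0 + a1 * p_success) / (1.0 + b1 * p_success)
```

With these placeholder coefficients the threshold decreases as the success rate rises, which matches the intuition that a strong model needs a less conservative acceptance test; only the functional form, not the numbers, is meaningful here.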
Abstract:
The phenomenon which dialogism addresses is human interaction. It enables us to conceptualise human interaction as intersubjective, symbolic, cultural, transformative and conflictual, in short, as complex. The complexity of human interaction is evident in all domains of human life, for example, in therapy, education, health intervention, communication, and coordination at all levels. A dialogical approach starts by acknowledging that the social world is perspectival, that people and groups inhabit different social realities. This book stands apart from the proliferation of recent books on dialogism, because rather than applying dialogism to this or that domain, the present volume focuses on dialogicality itself to interrogate the concepts and methods which are taken for granted in the burgeoning literature.
Abstract:
Low voltage distribution networks feature a high degree of load unbalance, and the addition of rooftop photovoltaics is driving further unbalance in the network. Single-phase consumers are distributed across the phases, but even if the consumer distribution was well balanced when the network was constructed, changes will occur over time. Distribution transformer losses are increased by unbalanced loadings. The estimation of transformer losses is a necessary part of the routine upgrading and replacement of transformers, and identifying the phase connections of households allows a precise estimation of the phase loadings and the total transformer loss. This paper presents a new technique, with preliminary test results, for automatically identifying the phase of each customer by correlating voltage information from the utility's transformer system with voltage information from customer smart meters. The technique is novel in that it is based purely on time series of electrical voltage measurements taken at the household and at the distribution transformer. Experimental results using a combination of electrical power and current measurements from real smart meter datasets demonstrate the performance of our technique.
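The core of such a correlation-based phase identification can be sketched as follows (a minimal illustration with synthetic data; the paper's full technique is not reproduced): the customer's voltage time series is compared against each phase's voltage series at the transformer, and the phase with the highest Pearson correlation is assigned.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def identify_phase(customer_v, transformer_v_by_phase):
    """Assign the customer to the phase whose transformer voltage
    series correlates best with the customer's smart-meter series."""
    return max(transformer_v_by_phase,
               key=lambda ph: pearson(customer_v, transformer_v_by_phase[ph]))

# Synthetic per-phase transformer voltages and one customer on phase B;
# the customer sees the phase-B profile minus a local voltage drop:
phases = {
    "A": [230.1, 229.8, 230.4, 229.9, 230.2, 229.7],
    "B": [229.5, 230.6, 229.9, 230.8, 229.4, 230.5],
    "C": [230.3, 229.6, 230.0, 229.8, 230.4, 229.9],
}
customer = [v - 1.2 for v in phases["B"]]
print(identify_phase(customer, phases))  # → B
```

Because correlation is invariant to offset and scale, the local voltage drop between transformer and household does not affect the match, which is what makes a pure voltage time-series approach workable.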
Abstract:
A fractional FitzHugh–Nagumo monodomain model with zero Dirichlet boundary conditions is presented, generalising the standard monodomain model that describes the propagation of the electrical potential in heterogeneous cardiac tissue. The model consists of a coupled fractional Riesz space nonlinear reaction-diffusion model and a system of ordinary differential equations describing the ionic fluxes as a function of the membrane potential. We solve this model by decoupling the space-fractional partial differential equation from the system of ordinary differential equations at each time step; this amounts to treating the fractional Riesz space nonlinear reaction-diffusion model as if the nonlinear source term were only locally Lipschitz. The fractional Riesz space nonlinear reaction-diffusion model is solved using an implicit numerical method with the shifted Grünwald–Letnikov approximation, and the stability and convergence are discussed in detail in the context of the local Lipschitz property. Numerical examples are given to show the consistency of our computational approach.
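The shifted Grünwald–Letnikov approximation is built from the weights g_k = (-1)^k C(alpha, k), which satisfy a simple recurrence. A small sketch of the weight computation only (not the full implicit solver described above):

```python
def gl_weights(alpha: float, n: int):
    """Grunwald-Letnikov weights g_k = (-1)^k * binom(alpha, k),
    generated via the recurrence g_k = g_{k-1} * (k - 1 - alpha) / k."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

# Sanity check: for alpha = 2 the weights reduce to the classical
# second-difference stencil [1, -2, 1, 0, ...]:
print(gl_weights(2.0, 3))  # → [1.0, -2.0, 1.0, 0.0]
```

In the implicit scheme these weights fill the (dense) discretisation matrix of the Riesz fractional derivative, with the shift moving the stencil one node to keep the scheme stable for 1 < alpha <= 2.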
Abstract:
The application of robotics to protein crystallization trials has resulted in the production of millions of images. Manual inspection of these images to find crystals and other interesting outcomes is a major rate-limiting step. As a result there has been intense activity in developing automated algorithms to analyse these images. The very first step for most systems described in the literature is to delineate each droplet. Here, a novel approach that reaches a success rate of over 97% with subsecond processing times is presented. This will form the seed of a new high-throughput system to scrutinize massive crystallization campaigns automatically. © 2010 International Union of Crystallography. Printed in Singapore. All rights reserved.
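As an illustration of the kind of droplet-delineation step described (a toy sketch of our own, not the published algorithm): threshold the image, find the largest connected bright region, and return its bounding box.

```python
from collections import deque

def delineate_droplet(img, thresh):
    """Return the bounding box (rmin, cmin, rmax, cmax) of the largest
    4-connected region of pixels above `thresh`. Toy flood-fill sketch;
    the paper's actual delineation method is not reproduced here."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    best, best_size = None, 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] <= thresh or seen[r][c]:
                continue
            # BFS over the connected component containing (r, c)
            q, comp = deque([(r, c)]), []
            seen[r][c] = True
            while q:
                y, x = q.popleft()
                comp.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx] and img[ny][nx] > thresh):
                        seen[ny][nx] = True
                        q.append((ny, nx))
            if len(comp) > best_size:
                best_size = len(comp)
                ys = [p[0] for p in comp]
                xs = [p[1] for p in comp]
                best = (min(ys), min(xs), max(ys), max(xs))
    return best

image = [
    [0, 0, 0, 0, 0, 0],
    [0, 0, 9, 9, 9, 0],
    [0, 0, 9, 9, 9, 0],
    [0, 0, 0, 9, 9, 0],
    [0, 7, 0, 0, 0, 0],
]
print(delineate_droplet(image, 5))  # → (1, 2, 3, 4)
```

Taking the largest component, as here, also discards small bright specks (such as the lone pixel at row 4), which is one reason delineation is usually done before crystal scoring.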
Abstract:
Ambiguity validation, an important procedure of integer ambiguity resolution, tests the correctness of the fixed integer ambiguity of phase measurements before it is used for positioning computation. Most existing investigations of ambiguity validation focus on the test statistic. How to determine the threshold more reasonably is less well understood, although it is one of the most important topics in ambiguity validation. Currently, there are two threshold determination methods in the ambiguity validation procedure: the empirical approach and the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis. The fixed failure rate approach has a rigorous basis in probability theory, but it employs a more complicated procedure. This paper focuses on how to determine the threshold easily and reasonably. Both the FF-ratio test and the FF-difference test are investigated in this research, and the extensive simulation results show that the FF-difference test can achieve comparable or even better performance than the well-known FF-ratio test. Another benefit of adopting the FF-difference test is that its threshold can be expressed as a function of the integer least-squares (ILS) success rate with a specified failure rate tolerance. Thus, a new threshold determination method, named the threshold function for the FF-difference test, is proposed. The threshold function method preserves the fixed failure rate characteristic and is also easy to apply. The performance of the threshold function is validated with simulated data. The validation results show that with the threshold function method, the impact of the modelling error on the failure rate is less than 0.08%. Overall, the threshold function for the FF-difference test is a very promising threshold determination method, and it makes the FF-approach applicable to real-time GNSS positioning applications.
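The decision rule of the difference test discussed above can be sketched as follows. The threshold value used in the example is a placeholder; in the FF-difference approach it would be supplied by the threshold function of the (ILS or IB) success rate rather than chosen by hand.

```python
def difference_test(best_residual: float, second_residual: float,
                    threshold: float) -> bool:
    """Accept the fixed integer ambiguity if the second-best candidate's
    squared-norm residual exceeds the best candidate's by at least
    `threshold`. Sketch of the generic difference test only; in the
    FF-approach `threshold` comes from a function of the success rate."""
    return (second_residual - best_residual) >= threshold

# With a hypothetical threshold of 3.0:
print(difference_test(1.2, 5.0, 3.0))  # → True  (gap 3.8 >= 3.0)
print(difference_test(1.2, 3.5, 3.0))  # → False (gap 2.3 <  3.0)
```

The appeal of this form, as the abstract notes, is that tying `threshold` to the success rate fixes the failure rate while keeping the per-epoch test itself a single subtraction and comparison, cheap enough for real-time use.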