612 results for approximation method
Abstract:
This study demonstrates a novel technique of preparing drug colloid probes to determine the adhesion force between a model drug, salbutamol sulphate (SS), and the surfaces of polymer microparticles to be used as carriers for the dispersion of drug particles from dry powder inhaler (DPI) formulations. Model silica probes of approximately 4 µm in size, similar to a drug particle used in DPI formulations, were coated with a saturated SS solution with the aid of capillary forces acting between the silica probe and the drug solution. The developed method of ensuring a smooth and uniform layer of SS on the silica probe was validated using X-ray Photoelectron Spectroscopy (XPS) and Scanning Electron Microscopy (SEM). Using the same technique, silica microspheres pre-attached to the AFM cantilever were coated with SS. The adhesion forces between the silica probe, the drug-coated silica (drug probe) and polymer surfaces (hydrophilic and hydrophobic) were determined. Our experimental results showed that the technique for preparing the drug probe is robust and can be used to determine the adhesion force between hydrophilic/hydrophobic drug probes and carrier surfaces to gain a better understanding of drug-carrier adhesion forces in DPI formulations.
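At the heart of any AFM pull-off measurement is the conversion of cantilever deflection to force via Hooke's law. As a minimal sketch (the spring constant and deflection values below are illustrative, not values from the study):

```python
# Hooke's-law conversion of AFM cantilever deflection to adhesion force.
# Spring constant and deflection are illustrative, not from the study.

def adhesion_force(spring_constant_n_per_m: float, deflection_m: float) -> float:
    """Return the pull-off force in newtons: F = k * d."""
    return spring_constant_n_per_m * deflection_m

# Example: a 0.2 N/m cantilever deflected 50 nm at pull-off -> 10 nN.
force = adhesion_force(0.2, 50e-9)
print(f"adhesion force: {force * 1e9:.1f} nN")
```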
Abstract:
The properties of CdS nanoparticles incorporated onto mesoporous TiO2 films by a successive ionic layer adsorption and reaction (SILAR) method were investigated by Raman spectroscopy, UV-visible spectroscopy, transmission electron microscopy (TEM) and X-ray photoelectron spectroscopy (XPS). High resolution TEM indicated that the synthesized CdS particles were hexagonal phase and the particle sizes were less than 5 nm when SILAR cycles were fewer than 9. A quantum size effect was found in the CdS-sensitized TiO2 films prepared with up to 9 SILAR cycles. The band gap of the CdS nanoparticles decreased from 2.65 eV to 2.37 eV as the number of SILAR cycles increased from 1 to 11. The investigation of the stability of the CdS/TiO2 films in air under illumination (440.6 µW/cm²) showed that the photodegradation rate was up to 85% per day for the sample prepared with 3 SILAR cycles. XPS analysis indicated that the photodegradation was due to the oxidation of CdS, leading to the transformation from sulphide to sulphate (CdSO4). Furthermore, the degradation rate was strongly dependent upon the particle size of CdS: smaller particles showed faster degradation rates. The size-dependent photo-induced oxidation was rationalized by the size-dependent distribution of surface atoms of the CdS particles. Molecular Dynamics (MD) simulation indicated that the surface sulphide anions of a large CdS particle, such as CdS made with 11 cycles (CdS11, particle size = 5.6 nm), account for 9.6% of the material, whereas this value increases to 19.2% for the smaller CdS3 particles (particle size: 2.7 nm). Nevertheless, CdS nanoparticles coated with ZnS showed a significantly enhanced stability under illumination in air. A nearly 100% protection of CdS from photo-induced oxidation was achieved with a ZnS coating layer prepared using four SILAR cycles, suggesting the formation of a nearly complete coating layer on the CdS nanoparticles.
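The link between particle size and the share of surface atoms can be illustrated with a simple geometric shell model; this is a back-of-the-envelope sketch under an assumed atomic-layer thickness, not the MD simulation used in the study:

```python
# Rough geometric estimate of the fraction of a spherical nanoparticle's
# volume lying in its outermost shell: f = 1 - ((r - t) / r)**3, where t
# is an assumed atomic-layer thickness. Illustrative only.

def surface_fraction(diameter_nm: float, layer_nm: float = 0.3) -> float:
    r = diameter_nm / 2.0
    return 1.0 - ((r - layer_nm) / r) ** 3

# Smaller particles expose a larger share of their atoms at the surface.
for d in (5.6, 2.7):  # CdS11 and CdS3 particle sizes from the abstract
    print(f"{d} nm -> surface fraction {surface_fraction(d):.2f}")
```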
Abstract:
Ab initio DFT calculations for the phonon dispersion (PD) and the phonon density of states (PDOS) of the two isotopic forms (10B and 11B) of MgB2 demonstrate that the use of a reduced-symmetry super-lattice provides an improved approximation to the dynamical, phonon-distorted P6/mmm crystal structure. Construction of phonon frequency plots using calculated values for these isotopic forms gives linear trends with integer multiples of a base frequency that change in slope in a manner consistent with the isotope effect (IE). Spectral parameters inferred from this method are similar to those determined experimentally for the pure isotopic forms of MgB2. Comparison with AlB2 demonstrates that a coherent phonon decay down to acoustic modes is not possible for this metal. Coherent acoustic phonon decay may be an important contributor to superconductivity in MgB2.
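The isotope effect underlying these trends follows from the harmonic scaling of phonon frequency with atomic mass, omega proportional to M^(-1/2). A quick sketch of the expected 10B/11B frequency ratio:

```python
import math

# Harmonic isotope shift: phonon frequency scales as M**-0.5, so the
# Mg10B2 boron modes should sit roughly sqrt(11/10), i.e. about 4.9%,
# above the Mg11B2 ones. Illustrative sketch, not the DFT calculation.

def isotope_frequency_ratio(m_light: float, m_heavy: float) -> float:
    return math.sqrt(m_heavy / m_light)

ratio = isotope_frequency_ratio(10.0, 11.0)
print(f"omega(10B)/omega(11B) = {ratio:.4f}")
```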
Abstract:
BACKGROUND: The use of salivary diagnostics is increasing because of its noninvasiveness, ease of sampling, and the relatively low risk of contracting infectious organisms. Saliva has been used as a biological fluid to identify and validate RNA targets in head and neck cancer patients. The goal of this study was to develop a robust, easy, and cost-effective method for isolating high yields of total RNA from saliva for downstream expression studies. METHODS: Oral whole saliva (200 µL) was collected from healthy controls (n = 6) and from patients with head and neck cancer (n = 8). The method developed in-house used QIAzol lysis reagent (Qiagen) to extract RNA from saliva (both cell-free supernatants and cell pellets), followed by isopropyl alcohol precipitation, cDNA synthesis, and real-time PCR analyses for the genes encoding beta-actin ("housekeeping" gene) and histatin (a salivary gland-specific gene). RESULTS: The in-house QIAzol method produced a high yield of total RNA (0.89-7.1 µg) from saliva (cell-free saliva and cell pellet) after DNase treatment. The ratio of the absorbance measured at 260 nm to that at 280 nm ranged from 1.6 to 1.9. The commercial kit produced a 10-fold lower RNA yield. Using our method with the QIAzol lysis reagent, we were also able to isolate RNA from archived saliva samples that had been stored without RNase inhibitors at -80 °C for >2 years. CONCLUSIONS: Our in-house QIAzol method is robust and simple, provides high yields of RNA, and can be implemented to allow saliva transcriptomic studies to be translated into a clinical setting.
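RNA yield and purity figures like those above are typically derived from UV absorbance; a minimal sketch using the standard conversion (an A260 of 1.0 corresponds to roughly 40 µg/mL of RNA), with illustrative readings:

```python
# Spectrophotometric RNA quantification sketch. The conversion factor is
# the standard one for RNA; the absorbance readings are illustrative.

RNA_FACTOR_UG_PER_ML = 40.0  # ug/mL of RNA per unit of A260

def rna_concentration(a260: float, dilution: float = 1.0) -> float:
    """Estimate RNA concentration in ug/mL from an A260 reading."""
    return a260 * RNA_FACTOR_UG_PER_ML * dilution

def purity_ratio(a260: float, a280: float) -> float:
    """A260/A280 ratio; ~1.8-2.0 indicates reasonably pure RNA."""
    return a260 / a280

conc = rna_concentration(0.05)
ratio = purity_ratio(0.05, 0.028)
print(f"{conc:.1f} ug/mL, A260/A280 = {ratio:.2f}")
```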
Abstract:
Measurements of plasma natriuretic peptides (NT-proBNP, proBNP and BNP) are used to diagnose heart failure, but the peptide analytes are expensive to produce. We describe a rapid, cheap and facile method of producing proteins for immunoassays of heart failure. DNA encoding N-terminally His-tagged NT-proBNP and proBNP was cloned into the pJexpress404 vector. ProBNP and NT-proBNP peptides were expressed in Escherichia coli, purified and refolded in vitro. The analytical performance of these peptides was comparable with that of commercial analytes (the NT-proBNP EC50 is 2.6 ng/ml for the recombinant peptide and 5.3 ng/ml for the commercial material, and the EC50 values for recombinant and commercial proBNP are 3.6 and 5.7 ng/ml, respectively). The total yields of purified refolded NT-proBNP and proBNP peptides were 1.75 mg/l and 0.088 mg/l, respectively. This approach may also be useful in expressing other protein analytes for immunoassay applications, and provides a cost-effective method of expressing NT-proBNP and proBNP peptides in E. coli for immunoassay use.
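EC50 values such as these are conventionally obtained by fitting a four-parameter logistic (4PL) dose-response curve; the abstract does not state the fitting procedure, so the model and parameter values below are illustrative assumptions:

```python
# Four-parameter logistic (4PL) dose-response model commonly used to
# report immunoassay EC50 values. Parameters are illustrative.

def four_pl(x: float, bottom: float, top: float, ec50: float, hill: float) -> float:
    """Response at concentration x under a 4PL curve."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

# Defining property: at x = EC50 the response is exactly halfway
# between the top and bottom plateaus.
mid = four_pl(2.6, bottom=0.0, top=1.0, ec50=2.6, hill=1.5)
print(f"response at EC50: {mid:.2f}")
```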
Abstract:
In a tag-based recommender system, the multi-dimensional
A tag-based personalized item recommendation system using tensor modeling and topic model approaches
Abstract:
This research falls in the area of enhancing the quality of tag-based item recommendation systems. It aims to achieve this by employing a multi-dimensional user profile approach and by analyzing the semantic aspects of tags. Tag-based recommender systems have two characteristics that need to be carefully studied in order to build a reliable system. Firstly, the multi-dimensional correlation, known as the tag assignment
Abstract:
Demand response can be used to provide regulation services in the electricity markets. Retailers can bid in a day-ahead market and respond to the real-time regulation signal by load control. This paper proposes a new stochastic ranking method to provide regulation services via demand response. A pool of thermostatically controllable appliances (TCAs) such as air conditioners and water heaters is adjusted using a direct load control method. The selection of appliances is based on a probabilistic ranking technique utilizing attributes such as temperature variation and the statuses of TCAs. These attributes are stochastically forecasted for the next time step using day-ahead information. System performance is analyzed with a sample regulation signal. The network's capability to provide regulation services across different seasons is analyzed, and the effect of network size on the regulation services is also investigated.
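A ranking-based selection of TCAs might look like the following sketch, which scores appliances by an assumed temperature-headroom rule; the attributes and stochastic forecasting in the paper are more elaborate:

```python
# Sketch of ranking TCAs for direct load control: score each appliance
# by how far it sits from its temperature limit and shed the units with
# the most headroom first. The scoring rule is an illustrative
# assumption, not the paper's probabilistic ranking technique.

def rank_appliances(appliances):
    """appliances: dicts with 'name', 'temp', 'setpoint', 'deadband', 'on'."""
    def headroom(a):
        # Larger headroom => switching off costs less comfort.
        return (a["setpoint"] + a["deadband"] - a["temp"]) if a["on"] else float("-inf")
    return sorted(appliances, key=headroom, reverse=True)

pool = [
    {"name": "AC-1", "temp": 23.0, "setpoint": 24.0, "deadband": 1.0, "on": True},
    {"name": "WH-1", "temp": 58.0, "setpoint": 60.0, "deadband": 2.0, "on": True},
    {"name": "AC-2", "temp": 24.9, "setpoint": 24.0, "deadband": 1.0, "on": True},
]
print([a["name"] for a in rank_appliances(pool)])
```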
Abstract:
This thesis is a study of control methods for six-legged robots, based on mathematical modeling and simulation. A new joint controller that uses joint angles and leg reaction forces as inputs to generate a torque is proposed and tested in simulation, and a method to optimise this controller is formulated and validated. Simulation shows that a hexapod can walk on flat ground using PID controllers with just four target configurations and a set of leg coordination rules, which provided the basis for the design of the new controller.
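The PID controllers that form the baseline can be sketched as a standard discrete-time loop; the gains, setpoint, and crude joint response model below are illustrative assumptions, not the thesis's tuned values:

```python
# Minimal discrete PID controller of the kind the thesis builds on.

class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target: float, measured: float) -> float:
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a crude first-order joint model toward a 0.5 rad target angle.
pid = PID(kp=8.0, ki=0.5, kd=0.2, dt=0.01)
angle = 0.0
for _ in range(1000):
    torque = pid.update(0.5, angle)
    angle += torque * 0.01  # toy joint response, not a hexapod dynamics model
print(f"final angle: {angle:.3f} rad")
```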
Abstract:
The phenomenon which dialogism addresses is human interaction. It enables us to conceptualise human interaction as intersubjective, symbolic, cultural, transformative and conflictual, in short, as complex. The complexity of human interaction is evident in all domains of human life, for example, in therapy, education, health intervention, communication, and coordination at all levels. A dialogical approach starts by acknowledging that the social world is perspectival, that people and groups inhabit different social realities. This book stands apart from the proliferation of recent books on dialogism, because rather than applying dialogism to this or that domain, the present volume focuses on dialogicality itself to interrogate the concepts and methods which are taken for granted in the burgeoning literature.
Abstract:
Low voltage distribution networks feature a high degree of load unbalance, and the addition of rooftop photovoltaics is driving further unbalance in the network. Single-phase consumers are distributed across the phases, but even if the consumer distribution was well balanced when the network was constructed, changes will occur over time. Distribution transformer losses are increased by unbalanced loadings. The estimation of transformer losses is a necessary part of the routine upgrading and replacement of transformers, and the identification of the phase connections of households allows a precise estimation of the phase loadings and the total transformer loss. This paper presents a new technique, and preliminary test results, for automatically identifying the phase of each customer by correlating voltage information from the utility's transformer system with voltage information from customer smart meters. The technique is novel in that it is based purely on time series of electrical voltage measurements taken at the household and at the distribution transformer. Experimental results using real smart meter datasets demonstrate the performance of our techniques.
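The correlation idea can be sketched as follows: match each customer's voltage time series against the three per-phase series measured at the transformer and assign the phase with the highest Pearson correlation. The data below are synthetic and the matching rule is a simplified stand-in for the paper's method:

```python
# Phase identification by voltage correlation (simplified sketch).

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Synthetic per-phase voltage profiles measured at the transformer.
transformer = {
    "A": [240.1, 239.5, 240.8, 238.9, 241.0, 239.2],
    "B": [238.0, 240.9, 239.1, 240.5, 238.7, 240.2],
    "C": [239.6, 238.4, 240.0, 239.9, 238.1, 240.7],
}

def identify_phase(customer_v, transformer_v):
    """Assign the phase whose profile best correlates with the meter's."""
    return max(transformer_v, key=lambda ph: pearson(customer_v, transformer_v[ph]))

# A customer on phase B: the meter sees phase B's profile minus a small drop.
customer = [v - 2.0 for v in transformer["B"]]
print(identify_phase(customer, transformer))
```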
Abstract:
In the Bayesian framework, a standard approach to model criticism is to compare some function of the observed data to a reference predictive distribution. The result of the comparison can be summarized in the form of a p-value, and it is well known that computation of some kinds of Bayesian predictive p-values can be challenging. The use of regression adjustment approximate Bayesian computation (ABC) methods is explored for this task. Two problems are considered. The first is the calibration of posterior predictive p-values so that they are uniformly distributed under some reference distribution for the data. Computation is difficult because the calibration process requires repeated approximation of the posterior for different data sets under the reference distribution. The second problem considered is the approximation of distributions of prior predictive p-values for the purpose of choosing weakly informative priors in the case where the model checking statistic is expensive to compute. Here the computation is difficult because of the need to repeatedly sample from a prior predictive distribution for different values of a prior hyperparameter. In both problems we argue that high accuracy in the computations is not required, which makes fast approximations such as regression adjustment ABC very useful. We illustrate our methods with several examples.
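The core regression adjustment step in ABC shifts each accepted parameter draw along a fitted linear relation between parameters and summary statistics, theta_i' = theta_i - beta * (s_i - s_obs). A minimal one-dimensional sketch (toy data, not the paper's calibration pipeline):

```python
# Linear regression adjustment for ABC: fit theta on the summary
# statistic s by least squares, then shift each draw toward what it
# would have been had its statistic equalled the observed one.

def regression_adjust(thetas, stats, s_obs):
    n = len(thetas)
    mean_t = sum(thetas) / n
    mean_s = sum(stats) / n
    cov = sum((t - mean_t) * (s - mean_s) for t, s in zip(thetas, stats)) / n
    var = sum((s - mean_s) ** 2 for s in stats) / n
    beta = cov / var  # least-squares slope of theta on s
    return [t - beta * (s - s_obs) for t, s in zip(thetas, stats)]

# Toy example: the statistic is roughly 2 * theta, and s_obs = 4.0,
# so adjusted draws should cluster near theta = 2.
thetas = [0.0, 1.0, 2.0, 3.0, 4.0]
stats = [0.1, 2.0, 4.2, 5.9, 8.1]
adjusted = regression_adjust(thetas, stats, s_obs=4.0)
print([round(t, 2) for t in adjusted])
```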
Abstract:
The application of robotics to protein crystallization trials has resulted in the production of millions of images. Manual inspection of these images to find crystals and other interesting outcomes is a major rate-limiting step. As a result there has been intense activity in developing automated algorithms to analyse these images. The very first step for most systems that have been described in the literature is to delineate each droplet. Here, a novel approach that reaches over 97% success rate and subsecond processing times is presented. This will form the seed of a new high-throughput system to scrutinize massive crystallization campaigns automatically.
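As a toy illustration of droplet delineation (the published method is far more sophisticated), thresholding followed by a bounding box over the above-threshold pixels:

```python
# Toy droplet delineation: threshold a grayscale image (list of rows)
# and return the bounding box of above-threshold pixels as
# (min_row, min_col, max_row, max_col).

def bounding_box(image, threshold):
    rows = [i for i, row in enumerate(image) if any(p > threshold for p in row)]
    cols = [j for j in range(len(image[0]))
            if any(row[j] > threshold for row in image)]
    return (min(rows), min(cols), max(rows), max(cols))

# A bright "droplet" on a dark background.
image = [
    [0, 0, 0, 0, 0],
    [0, 9, 8, 0, 0],
    [0, 7, 9, 9, 0],
    [0, 0, 8, 0, 0],
    [0, 0, 0, 0, 0],
]
print(bounding_box(image, threshold=5))
```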
Abstract:
Ambiguity validation, an important procedure in integer ambiguity resolution, tests the correctness of the fixed integer ambiguities of phase measurements before they are used in positioning computation. Most existing investigations of ambiguity validation focus on the test statistic; how to determine the threshold more reasonably is less well understood, although it is one of the most important topics in ambiguity validation. Currently, there are two threshold determination methods in the ambiguity validation procedure: the empirical approach and the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis, while the fixed failure rate approach has a rigorous probability-theory basis but employs a more complicated procedure. This paper focuses on how to determine the threshold easily and reasonably. Both the FF-ratio test and the FF-difference test are investigated in this research, and the extensive simulation results show that the FF-difference test can achieve comparable or even better performance than the well-known FF-ratio test. Another benefit of adopting the FF-difference test is that its threshold can be expressed as a function of the integer least-squares (ILS) success rate with a specified failure rate tolerance. Thus, a new threshold determination method for the FF-difference test, named the threshold function, is proposed. The threshold function method preserves the fixed failure rate characteristic and is also easy to apply. The performance of the threshold function is validated with simulated data. The validation results show that with the threshold function method, the impact of the modelling error on the failure rate is less than 0.08%. Overall, the threshold function for the FF-difference test is a very promising threshold validation method, and it makes the FF-approach applicable to real-time GNSS positioning applications.
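The two acceptance tests being compared can be sketched directly in terms of the two smallest integer least-squares residuals q1 <= q2; the threshold values below are illustrative, not the fixed-failure-rate values derived in the paper:

```python
# Ratio test vs difference test for ambiguity validation, applied to the
# squared-norm residuals of the best (q1) and second-best (q2) integer
# candidates. Thresholds here are illustrative.

def ratio_test(q1: float, q2: float, threshold: float) -> bool:
    """Accept the fixed solution if q2 / q1 meets the threshold."""
    return q2 / q1 >= threshold

def difference_test(q1: float, q2: float, threshold: float) -> bool:
    """Accept the fixed solution if q2 - q1 meets the threshold."""
    return q2 - q1 >= threshold

# A well-separated candidate pair passes both tests; a close pair fails.
q1, q2 = 1.2, 4.5
print(ratio_test(q1, q2, threshold=3.0), difference_test(q1, q2, threshold=2.0))
```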
Abstract:
This paper describes our participation in the Chinese word segmentation task of CIPS-SIGHAN 2010. We implemented an n-gram mutual information (NGMI) based segmentation algorithm with mixed features from unsupervised, supervised and dictionary-based segmentation methods. This algorithm is also combined with a simple strategy for out-of-vocabulary (OOV) word recognition. The evaluation for both open and closed training shows encouraging results for our system; however, the results for OOV word recognition in the closed training evaluation were unsatisfactory.
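The quantity at the heart of NGMI-style segmentation is mutual information between adjacent strings; a minimal sketch computing pointwise mutual information for a character bigram on a toy corpus (no smoothing, illustrative only):

```python
import math
from collections import Counter

# Pointwise mutual information of a character bigram:
# PMI(x, y) = log( p(xy) / (p(x) * p(y)) ).
# High PMI suggests x and y belong in the same word; low PMI suggests a
# segmentation boundary. Toy corpus, no smoothing.

def bigram_pmi(text: str, x: str, y: str) -> float:
    unigrams = Counter(text)
    bigrams = Counter(text[i:i + 2] for i in range(len(text) - 1))
    p_x = unigrams[x] / len(text)
    p_y = unigrams[y] / len(text)
    p_xy = bigrams[x + y] / (len(text) - 1)
    return math.log(p_xy / (p_x * p_y))

corpus = "abababab"
print(round(bigram_pmi(corpus, "a", "b"), 3))
```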