979 results for ranking method


Relevance: 20.00%

Abstract:

A new method has been developed for the quantification of 2-hydroxyethylated cysteine, formed as an adduct in blood proteins after human exposure to ethylene oxide, by reversed-phase HPLC with fluorometric detection. The specific adduct is analysed in albumin and in globin. After isolation of albumin and globin from blood, acid hydrolysis of the protein and precolumn derivatisation of the digest with 9-fluorenylmethoxycarbonyl chloride, the levels of derivatised S-hydroxyethylcysteine are determined by RP-HPLC with fluorescence detection, with a detection limit of 8 nmol/g protein. Background levels of S-hydroxyethylcysteine were quantified in both albumin and globin, with special consideration of the glutathione transferase GSTT1 and GSTM1 polymorphisms. The GSTT1 polymorphism had a marked influence on the physiological background alkylation of cysteine: while S-hydroxyethylcysteine levels in "non-conjugators" were between 15 and 50 nmol/g albumin, "low conjugators" displayed levels between 8 and 21 nmol/g albumin, and "high conjugators" did not show levels above the detection limit. The human GSTM1 polymorphism had no apparent effect on background levels of blood protein 2-hydroxyethylation.

Relevance: 20.00%

Abstract:

This study demonstrates a novel technique for preparing drug colloid probes to determine the adhesion force between a model drug, salbutamol sulphate (SS), and the surfaces of polymer microparticles intended as carriers for the dispersion of drug particles from dry powder inhaler (DPI) formulations. Model silica probes of approximately 4 µm in size, similar to a drug particle used in DPI formulations, were coated with a saturated SS solution with the aid of capillary forces acting between the silica probe and the drug solution. The developed method for ensuring a smooth and uniform layer of SS on the silica probe was validated using X-ray Photoelectron Spectroscopy (XPS) and Scanning Electron Microscopy (SEM). Using the same technique, silica microspheres pre-attached to the AFM cantilever were coated with SS. The adhesion forces between the silica probe or the drug-coated silica probe (drug probe) and the polymer surfaces (hydrophilic and hydrophobic) were determined. Our experimental results showed that the technique for preparing the drug probe is robust and can be used to determine the adhesion force between hydrophilic/hydrophobic drug probes and carrier surfaces, giving a better understanding of drug-carrier adhesion forces in DPI formulations.

Relevance: 20.00%

Abstract:

The properties of CdS nanoparticles incorporated onto mesoporous TiO2 films by a successive ionic layer adsorption and reaction (SILAR) method were investigated by Raman spectroscopy, UV-visible spectroscopy, transmission electron microscopy (TEM) and X-ray photoelectron spectroscopy (XPS). High-resolution TEM indicated that the synthesized CdS particles were hexagonal phase and that the particle sizes were less than 5 nm when fewer than 9 SILAR cycles were used. A quantum size effect was found in the CdS-sensitized TiO2 films prepared with up to 9 SILAR cycles. The band gap of the CdS nanoparticles decreased from 2.65 eV to 2.37 eV as the number of SILAR cycles increased from 1 to 11. Investigation of the stability of the CdS/TiO2 films in air under illumination (440.6 µW/cm2) showed that the photodegradation rate was up to 85% per day for the sample prepared with 3 SILAR cycles. XPS analysis indicated that the photodegradation was due to the oxidation of CdS, leading to the transformation from sulphide to sulphate (CdSO4). Furthermore, the degradation rate was strongly dependent upon the particle size of CdS, with smaller particles degrading faster. This size-dependent photo-induced oxidation was rationalized by the size-dependent distribution of surface atoms of the CdS particles: Molecular Dynamics (MD) simulation indicated that surface sulphide anions account for 9.6% of the material in a large CdS particle such as that made with 11 cycles (CdS11, particle size 5.6 nm), whereas this value increases to 19.2% for the smaller particles made with 3 cycles (CdS3, particle size 2.7 nm). In contrast, CdS nanoparticles coated with ZnS showed significantly enhanced stability under illumination in air: a ZnS coating layer prepared using four SILAR cycles gave nearly 100% protection of CdS from photo-induced oxidation, suggesting the formation of a nearly complete coating layer on the CdS nanoparticles.

Relevance: 20.00%

Abstract:

BACKGROUND: The use of salivary diagnostics is increasing because of its noninvasiveness, ease of sampling, and the relatively low risk of contracting infectious organisms. Saliva has been used as a biological fluid to identify and validate RNA targets in head and neck cancer patients. The goal of this study was to develop a robust, easy, and cost-effective method for isolating high yields of total RNA from saliva for downstream expression studies. METHODS: Oral whole saliva (200 µL) was collected from healthy controls (n = 6) and from patients with head and neck cancer (n = 8). The method developed in-house used QIAzol lysis reagent (Qiagen) to extract RNA from saliva (both cell-free supernatants and cell pellets), followed by isopropyl alcohol precipitation, cDNA synthesis, and real-time PCR analyses for the genes encoding beta-actin (a "housekeeping" gene) and histatin (a salivary gland-specific gene). RESULTS: The in-house QIAzol method produced a high yield of total RNA (0.89-7.1 µg) from saliva (cell-free saliva and cell pellet) after DNase treatment. The ratio of the absorbance measured at 260 nm to that at 280 nm ranged from 1.6 to 1.9. The commercial kit produced a 10-fold lower RNA yield. Using our method with the QIAzol lysis reagent, we were also able to isolate RNA from archived saliva samples that had been stored without RNase inhibitors at -80 °C for more than 2 years. CONCLUSIONS: Our in-house QIAzol method is robust and simple, provides high RNA yields, and can be implemented to allow saliva transcriptomic studies to be translated into a clinical setting.

Relevance: 20.00%

Abstract:

Measurements of plasma natriuretic peptides (NT-proBNP, proBNP and BNP) are used to diagnose heart failure, but these peptides are expensive to produce. We describe a rapid, cheap and facile production of proteins for heart failure immunoassays. DNA encoding N-terminally His-tagged NT-proBNP and proBNP was cloned into the pJexpress404 vector. The proBNP and NT-proBNP peptides were expressed in Escherichia coli, purified and refolded in vitro. The analytical performance of these peptides was comparable with that of commercial analytes (the NT-proBNP EC50 was 2.6 ng/ml for the recombinant peptide and 5.3 ng/ml for the commercial material; the EC50 values for recombinant and commercial proBNP were 3.6 and 5.7 ng/ml, respectively). The total yield of purified refolded peptide was 1.75 mg/l for NT-proBNP and 0.088 mg/l for proBNP. The aim was to develop a cost-effective protein expression method in E. coli to obtain high yields of NT-proBNP and proBNP peptides for immunoassay use, and this approach may also be useful for expressing other protein analytes for immunoassay applications.

Relevance: 20.00%

Abstract:

This research falls in the area of enhancing the quality of tag-based item recommendation systems. It aims to achieve this by employing a multi-dimensional user profile approach and by analysing the semantic aspects of tags. Tag-based recommender systems have two characteristics that need to be carefully studied in order to build a reliable system. Firstly, the multi-dimensional correlation, called tag assignment, should be appropriately modelled in order to create the user profiles [1]. Secondly, the semantics behind the tags should be considered properly, as the flexibility of their design can cause semantic problems such as synonymy and polysemy [2]. This research proposes to address these two challenges in building a tag-based item recommendation system by employing tensor modelling as the multi-dimensional user profile approach, and the topic model as the semantic analysis approach. The first objective is to optimize the tensor model reconstruction and to improve the model's performance in generating quality recommendations. A novel Tensor-based Recommendation using Probabilistic Ranking (TRPR) method [3] has been developed; results show this method to be scalable for large datasets and to outperform the benchmarking methods in terms of accuracy. A memory-efficient loop implements the n-mode block-striped (matrix) product for tensor reconstruction as an approximation of the initial tensor. The probabilistic ranking calculates the probability that users will select candidate items using their tag preference list, based on the entries generated from the reconstructed tensor. The second objective is to analyse the tag semantics and utilize the outcome in building the tensor model. This research proposes to investigate the problem using a topic model approach, to preserve the nature of tags as a "social vocabulary" [4]. For the tag assignment data, topics can be generated from the occurrences of tags given for an item. However, only a limited number of tags is available to represent items as collections of topics, since an item might have been tagged with only a few tags. Consequently, the generated topics might not be able to represent the items appropriately. Furthermore, given that each tag can belong to any topic with a different probability score, the occurrence of tags cannot simply be mapped to topics to build the tensor model. A standard weighting technique will not appropriately calculate the value of tagging activity, since it would define the context of an item using a tag instead of a topic.
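The probabilistic ranking step lends itself to a small illustration. The sketch below is not the authors' TRPR implementation; it assumes a small dense reconstructed (user × item × tag) tensor built with NumPy and shows how candidate items could be scored from a user's tag preference list and then ranked. All names and data are hypothetical.

```python
import numpy as np

# Hypothetical reconstructed (user x item x tag) tensor; in TRPR this would come
# from an n-mode product reconstruction of the original tag-assignment tensor.
rng = np.random.default_rng(0)
R = rng.random((3, 5, 4))  # 3 users, 5 items, 4 tags

def rank_items_for_user(R, user, tag_prefs, seen=()):
    """Score candidate items for `user` by accumulating reconstructed-tensor
    entries over the user's preferred tags, then rank by descending score."""
    scores = R[user][:, tag_prefs].sum(axis=1)   # sum entries over preferred tags
    probs = scores / scores.sum()                # normalise to ranking probabilities
    ranking = [int(i) for i in np.argsort(-probs) if i not in seen]
    return ranking, probs

ranking, probs = rank_items_for_user(R, user=0, tag_prefs=[1, 3], seen={2})
print(ranking, probs.round(3))
```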

Relevance: 20.00%

Abstract:

The ambiguity acceptance test is an important quality control procedure in high-precision GNSS data processing. Although ambiguity acceptance test methods have been extensively investigated, the method for determining their threshold is still not well understood. Currently, the threshold is determined with either the empirical approach or the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis, while the FF-approach is theoretically rigorous but computationally demanding. Hence, the key to the threshold determination problem is how to determine the threshold efficiently and in a well-founded way. In this study, a new threshold determination method, named the threshold function method, is proposed to reduce the complexity of the FF-approach. The threshold function method simplifies the FF-approach through a modelling procedure and an approximation procedure. The modelling procedure uses a rational function model to describe the relationship between the FF-difference test threshold and the integer least-squares (ILS) success rate. The approximation procedure replaces the ILS success rate with the easy-to-calculate integer bootstrapping (IB) success rate. The corresponding modelling and approximation errors are analysed with simulated data to avoid nuisance biases and the impact of an unrealistic stochastic model. The results indicate that the proposed method can greatly simplify the FF-approach without introducing significant modelling error. The threshold function method makes fixed failure rate threshold determination feasible for real-time applications.
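As a rough illustration of the two procedures, the sketch below computes the integer bootstrapping success rate from hypothetical conditional standard deviations (the standard closed-form expression) and then evaluates a rational-function threshold model. The model coefficients are placeholders only, since the paper's fitted function is not given in the abstract.

```python
import math

def ib_success_rate(cond_std):
    """Integer bootstrapping success rate from the conditional standard deviations
    of the (decorrelated) ambiguities: P = prod(2*Phi(1/(2*sigma_i)) - 1)."""
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    p = 1.0
    for s in cond_std:
        p *= 2.0 * phi(1.0 / (2.0 * s)) - 1.0
    return p

def threshold_from_success_rate(p_s, coeffs=(0.5, -0.45, -0.9)):
    """Illustrative rational-function threshold model t(P) = (a0 + a1*P) / (1 + b1*P).
    The coefficients are placeholders; in the actual method they would be fitted
    offline against fixed-failure-rate thresholds for a chosen failure tolerance."""
    a0, a1, b1 = coeffs
    return (a0 + a1 * p_s) / (1.0 + b1 * p_s)

p_s = ib_success_rate([0.05, 0.08, 0.12])   # hypothetical conditional std devs (cycles)
print(round(p_s, 5), round(threshold_from_success_rate(p_s), 3))
```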

Relevance: 20.00%

Abstract:

The phenomenon which dialogism addresses is human interaction. It enables us to conceptualise human interaction as intersubjective, symbolic, cultural, transformative and conflictual, in short, as complex. The complexity of human interaction is evident in all domains of human life, for example, in therapy, education, health intervention, communication, and coordination at all levels. A dialogical approach starts by acknowledging that the social world is perspectival, that people and groups inhabit different social realities. This book stands apart from the proliferation of recent books on dialogism, because rather than applying dialogism to this or that domain, the present volume focuses on dialogicality itself to interrogate the concepts and methods which are taken for granted in the burgeoning literature.

Relevance: 20.00%

Abstract:

Low-voltage distribution networks feature a high degree of load unbalance, and the addition of rooftop photovoltaics is driving further unbalance in the network. Single-phase consumers are distributed across the phases, but even if the consumer distribution was well balanced when the network was constructed, changes will occur over time. Distribution transformer losses are increased by unbalanced loadings. The estimation of transformer losses is a necessary part of the routine upgrading and replacement of transformers, and identification of the phase connections of households allows a precise estimation of the phase loadings and total transformer loss. This paper presents a new technique, and preliminary test results, for automatically identifying the phase of each customer by correlating voltage information from the utility's transformer with voltage information from customer smart meters. The techniques are novel in that they are based purely upon time series of electrical voltage measurements taken at the household and at the distribution transformer. Experimental results using a combination of electrical power and current measurements from real smart meter datasets demonstrate the performance of our techniques.
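A minimal sketch of the correlation idea, assuming simultaneous voltage time series are available from the transformer's three phases and from each customer's smart meter; the pairing rule used here (Pearson correlation, assign to the best-matching phase) and all data are illustrative, not the paper's exact procedure.

```python
import numpy as np

def identify_phase(customer_v, transformer_v):
    """Assign each customer to the transformer phase whose voltage time series
    correlates most strongly with the customer's smart-meter voltage series.

    customer_v    : (n_customers, n_samples) array of household voltage readings
    transformer_v : (3, n_samples) array of per-phase voltages at the transformer
    """
    assignments = []
    for v in customer_v:
        corr = [np.corrcoef(v, phase_v)[0, 1] for phase_v in transformer_v]
        assignments.append(int(np.argmax(corr)))  # 0 = phase A, 1 = B, 2 = C
    return assignments

# Hypothetical example: two customers, three transformer phases, 1000 samples
rng = np.random.default_rng(1)
tv = 240 + rng.normal(0, 2, size=(3, 1000))
cv = np.vstack([tv[2] + rng.normal(0, 0.5, 1000),   # customer on phase C
                tv[0] + rng.normal(0, 0.5, 1000)])  # customer on phase A
print(identify_phase(cv, tv))  # expected [2, 0]
```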

Relevance: 20.00%

Abstract:

BACKGROUND: As engineering schools adopt outcomes-focused learning approaches in response to government expectations and industry requirements for graduates capable of learning and applying knowledge in different contexts, university academics must be capable of developing and delivering programs that meet these requirements. Those academics increasingly face challenges in progressing their research while also acquiring different skill sets to meet the learning and teaching requirements. PURPOSE: The goal of this study was to identify the types of development and support structures in place for academic staff, especially early career ones, and to examine how the type of institution and the rank or role of the staff member affects these structures. DESIGN/METHOD: We conducted semi-structured interviews with 21 individuals in a range of positions pertaining to teaching and learning in engineering education. Open coding was used to identify main themes from the guiding questions raised in the interviews, refined to address themes relevant to the development of institutional staff. The interview data were then analysed based on the type of institution and the rank/role of the participant. RESULTS: While development programs that focus on improving teaching and learning are available, the approach to using these types of programs differed based on staff perspective. Fewer academics, regardless of rank/role, had knowledge of support structures related to other areas of scholarship, e.g. disciplinary research, educational research, or learning the institutional culture. The type of institution also affected how institutions weighted and encouraged multiple forms of scholarship. We found that academic staff holding higher-ranking positions, e.g. dean or associate dean, were concerned not only with the success of their respective programs, but also with how to promote other academic staff's participation throughout the process. CONCLUSIONS: The findings from this study extend the premise that developing effective academic staff ultimately leads to more effective institutions and successful graduates, and that accomplishing this requires staff buy-in at multiple stages of instructional and program development. Staff and administration developing approaches for educational innovation together (Besterfield-Sacre et al., 2014) and getting buy-in from all academic staff to invest in engineering education development will ultimately lead to more successful engineering graduates.

Relevance: 20.00%

Abstract:

A fractional FitzHugh–Nagumo monodomain model with zero Dirichlet boundary conditions is presented, generalising the standard monodomain model that describes the propagation of the electrical potential in heterogeneous cardiac tissue. The model consists of a coupled fractional Riesz space nonlinear reaction-diffusion model and a system of ordinary differential equations describing the ionic fluxes as a function of the membrane potential. We solve this model by decoupling the space-fractional partial differential equation and the system of ordinary differential equations at each time step. This amounts to treating the fractional Riesz space nonlinear reaction-diffusion model as if its nonlinear source term were only locally Lipschitz. The fractional Riesz space nonlinear reaction-diffusion model is solved using an implicit numerical method with the shifted Grünwald–Letnikov approximation, and the stability and convergence are discussed in detail in the context of the local Lipschitz property. Some numerical examples are given to show the consistency of our computational approach.
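For context, the standard shifted Grünwald–Letnikov discretisation of a Riesz space-fractional operator of order α ∈ (1, 2] has the form below; this is the generic textbook form that such implicit schemes build on, and the paper's exact scheme may differ in detail.

```latex
% Riesz operator split into left- and right-sided fractional derivatives on a
% grid x_i = a + i h, with the usual alternating binomial weights g_k.
\[
  \frac{\partial^{\alpha} u}{\partial |x|^{\alpha}}(x_i)
  \;=\; -\frac{1}{2\cos(\pi\alpha/2)}
        \Bigl( {}_{a}D_{x}^{\alpha} u(x_i) + {}_{x}D_{b}^{\alpha} u(x_i) \Bigr),
  \qquad
  g_k^{(\alpha)} = (-1)^k \binom{\alpha}{k},
\]
\[
  {}_{a}D_{x}^{\alpha} u(x_i) \approx \frac{1}{h^{\alpha}}
      \sum_{k=0}^{i+1} g_k^{(\alpha)}\, u(x_{i-k+1}),
  \qquad
  {}_{x}D_{b}^{\alpha} u(x_i) \approx \frac{1}{h^{\alpha}}
      \sum_{k=0}^{N-i+1} g_k^{(\alpha)}\, u(x_{i+k-1}).
\]
```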

Relevance: 20.00%

Abstract:

The application of robotics to protein crystallization trials has resulted in the production of millions of images. Manual inspection of these images to find crystals and other interesting outcomes is a major rate-limiting step. As a result, there has been intense activity in developing automated algorithms to analyse these images. For most systems described in the literature, the very first step is to delineate each droplet. Here, a novel approach that achieves a success rate of over 97% with subsecond processing times is presented. This will form the seed of a new high-throughput system to scrutinize massive crystallization campaigns automatically.

Relevance: 20.00%

Abstract:

Ambiguity validation, an important procedure in integer ambiguity resolution, tests the correctness of the fixed integer ambiguities of phase measurements before they are used in the positioning computation. Most existing investigations on ambiguity validation focus on the test statistic; how to determine the threshold more reasonably is less well understood, although it is one of the most important topics in ambiguity validation. Currently, there are two threshold determination methods in the ambiguity validation procedure: the empirical approach and the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis. The fixed failure rate approach has a rigorous basis in probability theory, but it requires a more complicated procedure. This paper focuses on how to determine the threshold easily and reasonably. Both the FF-ratio test and the FF-difference test are investigated in this research, and extensive simulation results show that the FF-difference test can achieve comparable or even better performance than the well-known FF-ratio test. Another benefit of adopting the FF-difference test is that its threshold can be expressed as a function of the integer least-squares (ILS) success rate with a specified failure rate tolerance. Thus, a new threshold determination method, named the threshold function for the FF-difference test, is proposed. The threshold function method preserves the fixed failure rate characteristic and is also easy to apply. The performance of the threshold function is validated with simulated data. The validation results show that with the threshold function method, the impact of the modelling error on the failure rate is less than 0.08%. Overall, the threshold function for the FF-difference test is a very promising threshold determination method, and it makes the FF-approach applicable to real-time GNSS positioning applications.
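For reference, the textbook forms of the two acceptance tests discussed here are sketched below; the notation (float solution, best and second-best integer candidates, ambiguity covariance matrix) is standard and not specific to this paper.

```latex
% \hat a: float ambiguity vector; \check a_1, \check a_2: best and second-best
% integer candidates; Q_{\hat a}: ambiguity covariance matrix.
\[
  \text{ratio test:}\quad
  \frac{\|\hat a - \check a_2\|^2_{Q_{\hat a}}}
       {\|\hat a - \check a_1\|^2_{Q_{\hat a}}} \;\ge\; c,
  \qquad
  \text{difference test:}\quad
  \|\hat a - \check a_2\|^2_{Q_{\hat a}} - \|\hat a - \check a_1\|^2_{Q_{\hat a}} \;\ge\; \delta .
\]
% In the fixed failure rate setting, the threshold (c or \delta) is chosen so that
% the failure rate stays within a prescribed tolerance; the proposed threshold
% function expresses \delta directly as a function of the ILS success rate.
```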

Relevance: 20.00%

Abstract:

This paper describes our participation in the Chinese word segmentation task of CIPS-SIGHAN 2010. We implemented an n-gram mutual information (NGMI) based segmentation algorithm with mixed features from unsupervised, supervised and dictionary-based segmentation methods. This algorithm is also combined with a simple strategy for out-of-vocabulary (OOV) word recognition. The evaluation for both open and closed training shows encouraging results for our system. The results for OOV word recognition in the closed training evaluation were, however, found to be unsatisfactory.
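As a simplified illustration of the mutual-information idea behind NGMI, the sketch below computes pointwise mutual information between adjacent characters over a toy corpus; the actual NGMI algorithm works on character n-grams and combines further supervised and dictionary-based features, so this is only a stand-in with hypothetical data.

```python
import math
from collections import Counter

def char_pmi(corpus):
    """Pointwise mutual information between adjacent characters: a low score for
    a pair suggests the two characters are less likely to belong to one word,
    hinting at a segmentation boundary between them."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        unigrams.update(sent)
        bigrams.update(sent[i:i + 2] for i in range(len(sent) - 1))
    n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())

    def pmi(a, b):
        p_ab = bigrams[a + b] / n_bi
        p_a, p_b = unigrams[a] / n_uni, unigrams[b] / n_uni
        return math.log(p_ab / (p_a * p_b)) if p_ab > 0 else float("-inf")

    return pmi

pmi = char_pmi(["我喜欢自然语言处理", "自然语言很有趣"])
# PMI for an inside-word pair and for a cross-word pair (toy numbers only)
print(round(pmi("自", "然"), 3), round(pmi("言", "很"), 3))
```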

Relevance: 20.00%

Abstract:

Nowadays, the integration of small-scale electricity generators, known as Distributed Generation (DG), into distribution networks has become increasingly popular. This tendency, together with the falling price of DG units, has great potential to give DG a better chance to participate in the voltage regulation process, in parallel with other regulating devices already available in distribution systems. The voltage control issue turns out to be a very challenging problem for distribution engineers, since existing control coordination schemes need to be reconsidered to take DG operation into account. In this paper, a control coordination approach is proposed that is able to utilize the ability of DG to act as a voltage regulator while minimizing the interaction of DG with other DG units or other active devices, such as the On-load Tap Changing transformer (OLTC). The proposed technique has been developed based on protection principles (magnitude grading and time grading) for coordinating the responses of DG and other regulating devices, and uses Advanced Line Drop Compensators (ALDCs) for implementation. A distribution feeder with a tap-changing transformer and DG units has been extracted from a practical system to test the proposed control technique. The results show that the proposed method provides an effective solution for coordinating DG with other DG units or voltage regulating devices, and that the integration of protection principles considerably reduces the control interaction needed to achieve the desired voltage correction.
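A highly simplified sketch of the magnitude/time grading idea, assuming just two hypothetical devices with made-up deadbands and delays; it illustrates the protection-style coordination concept only and is not the paper's ALDC-based scheme.

```python
from dataclasses import dataclass

@dataclass
class Regulator:
    name: str
    deadband: float   # magnitude grading: per-unit voltage deviation it tolerates
    delay: float      # time grading: seconds it waits before acting

# Hypothetical grading: the DG unit responds first to small local deviations,
# while the OLTC acts later for larger, sustained ones.
devices = [Regulator("DG unit", deadband=0.02, delay=5.0),
           Regulator("OLTC", deadband=0.04, delay=30.0)]

def response_sequence(deviation_pu, elapsed_s):
    """Devices that would operate on a deviation persisting for `elapsed_s`
    seconds, in the order given by their time grading."""
    return [d.name for d in sorted(devices, key=lambda d: d.delay)
            if abs(deviation_pu) > d.deadband and elapsed_s >= d.delay]

print(response_sequence(0.03, 10.0))   # small deviation: DG alone -> ['DG unit']
print(response_sequence(0.05, 60.0))   # large, persistent deviation -> ['DG unit', 'OLTC']
```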