967 results for Average method


Relevance:

20.00%

Publisher:

Abstract:

The global financial crisis has underscored the need for sovereign risk management to take account of the contingent government liabilities that can arise from bank failures. This paper proposes a simple method to construct a contingent liability index (CLI) for a banking sector that takes into account the size and concentration of the banking system, market expectations of bank defaults, and perceptions of government support for each bank. This method allows us to track potential government liabilities related to bank failures for 32 advanced and emerging economies on a monthly basis from 2006 to 2013. Furthermore, we find that the CLI is a significant determinant of sovereign CDS spreads. Our results suggest that a 1 percentage point increase in the CLI is associated with an average increase in sovereign CDS spreads of 24 basis points for advanced economies and 75 basis points for emerging markets.
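
The abstract does not spell out the exact index construction; as a rough illustration, a CLI in this spirit might weight each bank's liabilities by its market-implied default probability and the perceived likelihood of government support, scaled by GDP. The sketch below is a hypothetical reading, not the authors' specification.

```python
# Hypothetical sketch of a contingent liability index (CLI) for one country:
# expected government exposure from bank failures as a share of GDP.
# The functional form and all numbers are illustrative assumptions.

def contingent_liability_index(banks, gdp):
    """banks: list of dicts with liabilities, market-implied default
    probability, and perceived probability of government support."""
    expected_liability = sum(
        b["liabilities"] * b["default_prob"] * b["support_prob"]
        for b in banks
    )
    return 100.0 * expected_liability / gdp  # index in percent of GDP

banks = [
    {"liabilities": 800.0, "default_prob": 0.02, "support_prob": 0.9},  # large bank
    {"liabilities": 300.0, "default_prob": 0.05, "support_prob": 0.6},  # mid-size bank
]
print(contingent_liability_index(banks, gdp=1500.0))  # ~1.56 (% of GDP)
```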

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: The use of salivary diagnostics is increasing because of its noninvasiveness, ease of sampling, and the relatively low risk of contracting infectious organisms. Saliva has been used as a biological fluid to identify and validate RNA targets in head and neck cancer patients. The goal of this study was to develop a robust, easy, and cost-effective method for isolating high yields of total RNA from saliva for downstream expression studies. METHODS: Oral whole saliva (200 μL) was collected from healthy controls (n = 6) and from patients with head and neck cancer (n = 8). The method developed in-house used QIAzol lysis reagent (Qiagen) to extract RNA from saliva (both cell-free supernatants and cell pellets), followed by isopropyl alcohol precipitation, cDNA synthesis, and real-time PCR analyses for the genes encoding beta-actin (a "housekeeping" gene) and histatin (a salivary gland-specific gene). RESULTS: The in-house QIAzol method produced a high yield of total RNA (0.89–7.1 μg) from saliva (cell-free saliva and cell pellet) after DNase treatment. The ratio of the absorbance measured at 260 nm to that at 280 nm ranged from 1.6 to 1.9. In comparison, a commercial kit produced a 10-fold lower RNA yield. Using our method with the QIAzol lysis reagent, we were also able to isolate RNA from archived saliva samples that had been stored without RNase inhibitors at −80 °C for >2 years. CONCLUSIONS: Our in-house QIAzol method is robust and simple, provides high yields of RNA, and can be implemented to allow saliva transcriptomic studies to be translated into a clinical setting.

Relevance:

20.00%

Publisher:

Abstract:

Background: MicroRNAs (miRNAs) are known to play an important role in cancer development by post-transcriptionally affecting the expression of critical genes. The aims of this study were two-fold: (i) to develop a robust method to isolate miRNAs from small volumes of saliva and (ii) to develop a panel of saliva-based diagnostic biomarkers for the detection of head and neck squamous cell carcinoma (HNSCC). Methods: Five differentially expressed miRNAs were selected from miScript™ miRNA microarray data generated using saliva from five HNSCC patients and five healthy controls. Their differential expression was subsequently confirmed by RT-qPCR using saliva samples from healthy controls (n = 56) and HNSCC patients (n = 56). These samples were divided into two cohorts, i.e., a first confirmatory cohort (n = 21) and a second independent validation cohort (n = 35), to narrow the miRNA diagnostic panel down to three miRNAs: miR-9, miR-134 and miR-191. This diagnostic panel was independently validated using HNSCC miRNA expression data from The Cancer Genome Atlas (TCGA), encompassing 334 tumours and 39 adjacent normal tissues. Receiver operating characteristic (ROC) curve analysis was performed to assess the diagnostic capacity of the panel. Results: On average, 60 ng/μL of miRNA was isolated from 200 μL of saliva. Overall, a good correlation was observed between the microarray data and the RT-qPCR data. We found that miR-9 (P < 0.0001), miR-134 (P < 0.0001) and miR-191 (P < 0.001) were differentially expressed between saliva from HNSCC patients and healthy controls, and that these miRNAs provided a good discriminative capacity, with area under the curve (AUC) values of 0.85 (P < 0.0001), 0.74 (P < 0.001) and 0.98 (P < 0.0001), respectively. In addition, we found that the salivary miRNA data showed a good correlation with the TCGA miRNA data, thereby providing an independent validation. Conclusions: We show that we have developed a reliable method to isolate miRNAs from small volumes of saliva, and that the saliva-derived miRNAs miR-9, miR-134 and miR-191 may serve as novel biomarkers to reliably detect HNSCC. © 2014 International Society for Cellular Oncology.
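
As a pointer to how the discriminative capacity of such a marker is quantified, the snippet below computes an AUC for a single miRNA with scikit-learn; the expression values and class labels are invented for illustration, not taken from the study.

```python
# Illustrative ROC/AUC computation for one salivary miRNA marker.
# Expression values and class labels are made up for demonstration.
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 0 = healthy control, 1 = HNSCC
expression = np.array([1.2, 0.8, 1.0, 1.5, 2.9, 3.4, 2.2, 4.1])  # normalised expression

auc = roc_auc_score(labels, expression)
print(f"AUC = {auc:.2f}")
```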

Relevance:

20.00%

Publisher:

Abstract:

Measurements of plasma natriuretic peptides (NT-proBNP, proBNP and BNP) are used to diagnose heart failure, but the peptide analytes are expensive to produce. We describe the rapid, cheap and facile production of proteins for heart failure immunoassays. DNA encoding N-terminally His-tagged NT-proBNP and proBNP was cloned into the pJexpress404 vector. The proBNP and NT-proBNP peptides were expressed in Escherichia coli, purified and refolded in vitro. The analytical performance of these peptides was comparable with that of commercial analytes: the NT-proBNP EC50 was 2.6 ng/ml for the recombinant peptide versus 5.3 ng/ml for the commercial material, and the EC50 values for recombinant and commercial proBNP were 3.6 and 5.7 ng/ml, respectively. The total yield of purified, refolded peptide was 1.75 mg/l for NT-proBNP and 0.088 mg/l for proBNP. This cost-effective expression approach may also be useful for producing other protein analytes for immunoassay applications.
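
EC50 values like those quoted above are typically obtained by fitting a sigmoidal dose-response model to assay data. The sketch below fits a four-parameter logistic curve with SciPy; the concentrations and responses are invented, and the 4PL form is a common convention rather than the authors' stated method.

```python
# Sketch: estimating EC50 by fitting a four-parameter logistic (4PL) curve
# to immunoassay dose-response data. All data points here are invented.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Standard 4PL dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # analyte, ng/ml
resp = np.array([0.05, 0.12, 0.35, 0.68, 0.91, 0.98])  # normalised signal

params, _ = curve_fit(four_pl, conc, resp, p0=[0.0, 1.0, 2.0, 1.0])
print(f"estimated EC50 = {params[2]:.2f} ng/ml")
```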

Relevance:

20.00%

Publisher:

Abstract:

In a tag-based recommender system, multi-dimensional correlations must be modelled effectively to find quality recommendations. Recently, a few researchers have used tensor models in recommendation to represent and analyse the latent relationships inherent in multi-dimensional data. A common approach is to build the tensor model, decompose it and then directly use the reconstructed tensor to generate recommendations based on the maximum values of the tensor elements. To improve accuracy and scalability, we propose an implementation of the n-mode block-striped (matrix) product for scalable tensor reconstruction, together with probabilistic ranking of the candidate items generated from the reconstructed tensor. Testing on real-world datasets demonstrates that the proposed method outperforms the benchmark methods in terms of recommendation accuracy and scalability.
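
The scalable block-striped variant is the paper's contribution and is not reproduced here; the sketch below only illustrates the underlying mode-n (tensor by matrix) product used to reconstruct a tensor from a Tucker-style decomposition, with random factors standing in for learned ones.

```python
# Minimal sketch of the mode-n (tensor x matrix) product behind tensor
# reconstruction, T_hat = G x1 U1 x2 U2 x3 U3. Random core/factors stand
# in for a learned decomposition; the block-striped variant is not shown.
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Multiply `tensor` by `matrix` along axis `mode`."""
    t = np.moveaxis(tensor, mode, 0)
    shape = t.shape
    t = matrix @ t.reshape(shape[0], -1)
    return np.moveaxis(t.reshape((matrix.shape[0],) + shape[1:]), 0, mode)

rng = np.random.default_rng(0)
core = rng.random((2, 2, 2))                                  # Tucker core G
users, items, tags = (rng.random((n, 2)) for n in (4, 5, 3))  # factor matrices

t_hat = core
for mode, factor in enumerate((users, items, tags)):
    t_hat = mode_n_product(t_hat, factor, mode)
print(t_hat.shape)  # (4, 5, 3): rank candidate items by these entries
```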

Relevance:

20.00%

Publisher:

Abstract:

Demand response can be used to provide regulation services in electricity markets. Retailers can bid in a day-ahead market and respond to the real-time regulation signal by load control. This paper proposes a new stochastic ranking method to provide regulation services via demand response. A pool of thermostatically controllable appliances (TCAs), such as air conditioners and water heaters, is adjusted using a direct load control method. The selection of appliances is based on a probabilistic ranking technique utilizing attributes such as temperature variation and the statuses of the TCAs. These attributes are stochastically forecasted for the next time step using day-ahead information. System performance is analyzed with a sample regulation signal. The network's capability to provide regulation services in different seasons is analyzed, and the effect of network size on the regulation services is also investigated.
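
As a concrete illustration of ranking TCAs for direct load control, the sketch below orders appliances by how far their forecast temperature sits from the comfort limit; the attributes and the scoring rule are assumptions made for illustration, not the paper's exact ranking statistic.

```python
# Illustrative ranking of thermostatically controllable appliances (TCAs)
# for direct load control. The scoring rule (thermal slack = distance of
# the forecast temperature from the comfort limit) is an assumption made
# for illustration, not the paper's exact ranking statistic.

def rank_tcas(tcas):
    """Rank 'on' appliances by forecast thermal slack, largest first."""
    def slack(t):
        if not t["is_on"]:
            return float("-inf")  # appliances already off cannot be shed
        return t["limit_temp"] - t["forecast_temp"]
    return sorted(tcas, key=slack, reverse=True)

tcas = [
    {"id": "AC-1", "forecast_temp": 22.5, "limit_temp": 25.0, "is_on": True},
    {"id": "AC-2", "forecast_temp": 24.6, "limit_temp": 25.0, "is_on": True},
    {"id": "AC-3", "forecast_temp": 23.8, "limit_temp": 25.0, "is_on": False},
]
print([t["id"] for t in rank_tcas(tcas)])  # AC-1 first: most room to shed
```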

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND In a process engineering setting, graduates are frequently assigned by their supervisors to review existing operations or to scope new production processes, with a view to improving or expanding operations and overall productivity. These tasks may be carried out in teams and in consultation with the process engineer’s immediate line manager or a more experienced engineer, such as the Production or Maintenance Manager, ultimately reporting to senior management, who are frequently non-engineers. Although professional skills development is part of engineering curricula, the ‘professional conduct’ and ‘accountability’ required for dealing with peers and superiors in industry are not well addressed at university. Consequently, upon graduation, many students lack the knowledge and experience in this area to work effectively in industry settings. PURPOSE The purpose of this study was to develop and implement a role-play scenario within a core 2nd year process engineering unit, so that students could gain knowledge, skills and experience in different aspects (and nuances) of professional conduct and accountability. DESIGN/METHOD In the role-play scenario, students worked in ‘engineering production teams’ to design a process for an iconic Queensland fruitcake and to present their solution and recommendations (culminating in a poster presentation) to an assessment panel of staff role-playing as ‘production and plant managers’. Students were assessed on several areas, including professionalism, using a criterion-referenced assessment guide by a three-member cross-disciplinary staff panel consisting of a Business Faculty lecturer, an engineer from industry and the lecturer of the Process Engineering unit. Professional conduct and accountability were gauged through direct questioning by the panel. Feedback was also sought from students on various aspects through a survey questionnaire administered after the role-play activity at the end of semester. RESULTS Overall, the role play was performed very well, with students achieving an average score of 79.3/100 (distinction grade). Professional conduct, as assessed by the panel, scored on average better than professional accountability (4.0 compared with 3.6 out of 5). Feedback from students indicated that the learning activities had contributed to their overall understanding of the content and the role of process engineers. Industry involvement was rated very highly as contributing to their learning, at 4.8 (on a Likert scale from 1 to 5), and the poster presentation was rated at 3.6. CONCLUSIONS This pilot study succeeded in implementing a new assessment task for modelling professional conduct and accountability within a 2nd year core unit. The task incorporated a role-play activity, and there was evidence to suggest that this and the associated learning tasks were successful in broadening students’ understanding and skills in this area, as required for engineering practice. Following feedback from students and staff, improvements will be made to the nature of the problem, how it is defined, its assessment, and the approach taken in the role-play scenario when the unit is offered in 2014.

Relevance:

20.00%

Publisher:

Abstract:

The ambiguity acceptance test is an important quality control procedure in high precision GNSS data processing. Although ambiguity acceptance test methods have been extensively investigated, the determination of their thresholds is still not well understood. Currently, the threshold is determined with either the empirical approach or the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis, while the FF-approach is theoretically rigorous but computationally demanding. Hence, the key to the threshold determination problem is how to determine the threshold efficiently and in a reasonable way. In this study, a new threshold determination method, named the threshold function method, is proposed to reduce the complexity of the FF-approach. The threshold function method simplifies the FF-approach through a modeling procedure and an approximation procedure. The modeling procedure uses a rational function model to describe the relationship between the FF-difference test threshold and the integer least-squares (ILS) success rate. The approximation procedure replaces the ILS success rate with the easy-to-calculate integer bootstrapping (IB) success rate. The corresponding modeling and approximation errors are analyzed with simulation data to avoid nuisance biases and unrealistic stochastic model effects. The results indicate that the proposed method greatly simplifies the FF-approach without introducing significant modeling error, making fixed failure rate threshold determination feasible for real-time applications.
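
A minimal sketch of the modelling step, under the assumption that a low-order rational function relates the IB success rate to the FF-difference test threshold; the rational-function order and the sample points below are invented for illustration.

```python
# Sketch of the modelling step: fit a low-order rational function mapping
# the integer bootstrapping (IB) success rate to the FF-difference test
# threshold. The (1,1) rational form and all sample points are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def rational(p, a0, a1, b1):
    return (a0 + a1 * p) / (1.0 + b1 * p)

success_rate = np.array([0.80, 0.85, 0.90, 0.95, 0.99])  # IB success rates
threshold = np.array([0.65, 0.55, 0.42, 0.28, 0.10])     # FF thresholds (made up)

params, _ = curve_fit(rational, success_rate, threshold, p0=[1.0, -1.0, 0.0])
print(rational(0.93, *params))  # cheap threshold lookup for a new success rate
```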

Relevance:

20.00%

Publisher:

Abstract:

The phenomenon which dialogism addresses is human interaction. It enables us to conceptualise human interaction as intersubjective, symbolic, cultural, transformative and conflictual, in short, as complex. The complexity of human interaction is evident in all domains of human life, for example, in therapy, education, health intervention, communication, and coordination at all levels. A dialogical approach starts by acknowledging that the social world is perspectival, that people and groups inhabit different social realities. This book stands apart from the proliferation of recent books on dialogism, because rather than applying dialogism to this or that domain, the present volume focuses on dialogicality itself to interrogate the concepts and methods which are taken for granted in the burgeoning literature.

Relevance:

20.00%

Publisher:

Abstract:

Purpose: We term the visual field position from which the pupil appears most nearly circular the pupillary circular axis (PCAx). The aim was to determine and compare the horizontal and vertical coordinates of the PCAx and the optical axis, using pupil shape and refraction information from only the horizontal meridian of the visual field. Method: The PCAx was determined from the changes, with visual field angle, in the ellipticity and orientation of pupil images out to ±90° from fixation along the horizontal meridian for the right eyes of 30 people. This axis was compared with the optical axis determined from the changes in the astigmatic components of the refractions for field angles out to ±35° in the same meridian. Results: The mean estimated horizontal and vertical field coordinates of the PCAx were (−5.3 ± 1.9°, −3.2 ± 1.5°), compared with (−4.8 ± 5.1°, −1.5 ± 3.4°) for the optical axis. The vertical coordinates of the two axes were just significantly different (p = 0.03), but there was no significant correlation between them. Only the horizontal coordinate of the PCAx was significantly related to refraction in the group. Conclusion: On average, the PCAx is displaced from the line-of-sight by about the same angle as the optical axis, but there is more inter-subject variation in the position of the optical axis. When modelling the optical performance of the eye, it appears reasonable to assume that the pupil is circular when viewed along the line-of-sight.
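
One generic way to estimate the PCAx coordinate from such data is to fit a smooth curve to the measured pupil ellipticity as a function of field angle and take the angle at which it peaks; the quadratic form and the numbers below are illustrative assumptions, not the study's exact procedure.

```python
# Generic sketch: estimate the horizontal PCAx coordinate as the field angle
# at which pupil ellipticity (minor/major axis ratio) peaks, via a quadratic
# fit. The fitting form and the sample data are assumptions for illustration.
import numpy as np

angles = np.array([-60, -40, -20, 0, 20, 40, 60], dtype=float)  # field angle, deg
ellipticity = np.array([0.55, 0.76, 0.92, 0.99, 0.90, 0.74, 0.52])

a, b, c = np.polyfit(angles, ellipticity, 2)  # e(theta) ~ a*theta^2 + b*theta + c
pcax_horizontal = -b / (2.0 * a)              # vertex of the fitted parabola
print(f"PCAx horizontal coordinate ~ {pcax_horizontal:.1f} deg")
```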

Relevance:

20.00%

Publisher:

Abstract:

Low voltage distribution networks feature a high degree of load unbalance, and the addition of rooftop photovoltaics is driving further unbalance in the network. Single-phase consumers are distributed across the phases, but even if the consumer distribution was well balanced when the network was constructed, changes will occur over time. Distribution transformer losses are increased by unbalanced loadings. Estimating transformer losses is a necessary part of the routine upgrading and replacement of transformers, and identifying the phase connections of households allows a precise estimation of the phase loadings and the total transformer loss. This paper presents a new technique, and preliminary test results, for automatically identifying the phase of each customer by correlating voltage information from the utility's transformer system with voltage information from customer smart meters. The technique is novel in that it is based purely on time series of electrical voltage measurements taken at the household and at the distribution transformer. Experimental results using electrical power and current from real smart meter datasets demonstrate the performance of our techniques.
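
A minimal sketch of the correlation idea with synthetic data: each customer's voltage time series is compared against the transformer's three per-phase series, and the customer is assigned the phase with the highest correlation.

```python
# Sketch of phase identification by voltage correlation: assign each
# customer to the transformer phase whose voltage time series correlates
# most strongly with the customer's. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 1440  # one day of minute-resolution voltage samples

# Synthetic per-phase transformer voltages: 240 V plus a slow random walk.
transformer = {p: 240.0 + 0.01 * rng.normal(0, 1.5, n).cumsum() for p in "ABC"}

# A customer actually on phase B: phase-B voltage minus a small drop, plus noise.
customer = transformer["B"] - 2.0 + rng.normal(0, 0.05, n)

best = max("ABC", key=lambda p: np.corrcoef(customer, transformer[p])[0, 1])
print(f"identified phase: {best}")  # expected: B
```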

Relevance:

20.00%

Publisher:

Abstract:

A numerical study is carried out to investigate the transition from laminar flow to chaos in mixed convection heat transfer inside a lid-driven trapezoidal enclosure. The top wall is considered an isothermal cold surface moving in its own plane at a constant speed, and a constant high temperature is imposed at the bottom surface. The enclosure is assumed to be filled with a water–Al2O3 nanofluid. The governing Navier–Stokes and thermal energy equations are expressed in non-dimensional form and solved using the Galerkin finite element method. Attention is focused on the pure mixed convection regime at Richardson number Ri = 1. The numerical simulations are carried out over a wide range of Reynolds (0.1 ≤ Re ≤ 10³) and Grashof (0.01 ≤ Gr ≤ 10⁶) numbers. The effects of the presence of the nanofluid on the characteristics of mixed convection heat transfer are also explored. The average Nusselt numbers of the heated wall are computed to demonstrate the influence of flow parameter variations on heat transfer. The corresponding changes in the flow and thermal fields are visualized through streamline and isotherm contour plots.
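
For reference, the average Nusselt number on the heated wall is obtained from the wall-normal gradient of the non-dimensional temperature, Nu_avg = -(1/L) ∫ ∂θ/∂y |_{y=0} dx. The sketch below evaluates this on a placeholder temperature field standing in for the finite element solution.

```python
# Sketch: average Nusselt number on the heated bottom wall from a
# non-dimensional temperature field, Nu_avg = -mean(dtheta/dy at y = 0).
# The analytic field below is a placeholder for the FEM solution.
import numpy as np

nx, ny = 101, 101
x = np.linspace(0.0, 1.0, nx)
y = np.linspace(0.0, 1.0, ny)
X, Y = np.meshgrid(x, y, indexing="ij")
theta = (1.0 - Y) * np.exp(-2.0 * Y)  # hot wall (theta = 1) at y = 0

# One-sided difference for the wall-normal gradient along the bottom wall.
dtheta_dy = (theta[:, 1] - theta[:, 0]) / (y[1] - y[0])
nu_avg = -dtheta_dy.mean()  # average over the unit-length wall
print(f"average Nusselt number ~ {nu_avg:.2f}")  # ~3 for this field
```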

Relevance:

20.00%

Publisher:

Abstract:

Most of the existing algorithms for approximate Bayesian computation (ABC) assume that it is feasible to simulate pseudo-data from the model at each iteration. However, the computational cost of these simulations can be prohibitive for high dimensional data. An important example is the Potts model, which is commonly used in image analysis. Images encountered in real world applications can have millions of pixels, therefore scalability is a major concern. We apply ABC with a synthetic likelihood to the hidden Potts model with additive Gaussian noise. Using a pre-processing step, we fit a binding function to model the relationship between the model parameters and the synthetic likelihood parameters. Our numerical experiments demonstrate that the precomputed binding function dramatically improves the scalability of ABC, reducing the average runtime required for model fitting from 71 hours to only 7 minutes. We also illustrate the method by estimating the smoothing parameter for remotely sensed satellite imagery. Without precomputation, Bayesian inference is impractical for datasets of that scale.
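
A toy sketch of the precomputation idea, with a cheap stand-in simulator in place of the Potts model: simulate summary statistics on a parameter grid once, fit polynomial binding functions for the synthetic-likelihood mean and standard deviation, then evaluate the Gaussian synthetic likelihood without any further simulation.

```python
# Toy sketch of the precomputed binding function. A cheap simulator stands
# in for the expensive Potts model; the polynomial binding functions and
# all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)

def simulate_summary(beta, n=50):
    """Stand-in for an expensive simulation of the summary statistic."""
    return rng.normal(3.0 * beta**2, 0.2 + 0.1 * beta, size=n)

# Pre-processing: fit binding functions mu(beta) and sigma(beta) on a grid.
grid = np.linspace(0.1, 2.0, 20)
stats = [simulate_summary(b) for b in grid]
mu_fit = np.polyfit(grid, [s.mean() for s in stats], 3)
sd_fit = np.polyfit(grid, [s.std() for s in stats], 3)

def synthetic_loglik(beta, observed):
    """Gaussian synthetic log-likelihood using only the precomputed fits."""
    mu = np.polyval(mu_fit, beta)
    sd = max(np.polyval(sd_fit, beta), 1e-6)
    return -0.5 * ((observed - mu) / sd) ** 2 - np.log(sd)

observed = 3.0 * 1.2**2  # pretend the data were generated at beta = 1.2
betas = np.linspace(0.1, 2.0, 400)
print(betas[np.argmax([synthetic_loglik(b, observed) for b in betas])])  # ~1.2
```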

Relevance:

20.00%

Publisher:

Abstract:

A fractional FitzHugh–Nagumo monodomain model with zero Dirichlet boundary conditions is presented, generalising the standard monodomain model that describes the propagation of the electrical potential in heterogeneous cardiac tissue. The model consists of a coupled fractional Riesz space nonlinear reaction-diffusion model and a system of ordinary differential equations describing the ionic fluxes as a function of the membrane potential. We solve this model by decoupling the space-fractional partial differential equation from the system of ordinary differential equations at each time step, which amounts to treating the fractional Riesz space nonlinear reaction-diffusion model as if the nonlinear source term were only locally Lipschitz. The fractional Riesz space nonlinear reaction-diffusion model is solved using an implicit numerical method with the shifted Grünwald–Letnikov approximation, and stability and convergence are discussed in detail in the context of the local Lipschitz property. Some numerical examples are given to show the consistency of our computational approach.
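
For concreteness, the shifted Grünwald–Letnikov weights g_k = (-1)^k C(α, k) satisfy the recurrence g_0 = 1, g_k = g_{k-1}(1 - (α+1)/k). The sketch below builds these weights and a dense matrix approximating the Riesz fractional derivative with zero Dirichlet boundaries; the boundary handling follows the usual textbook form and is not necessarily the paper's exact scheme.

```python
# Sketch: shifted Grunwald-Letnikov weights and a dense matrix approximating
# the Riesz fractional derivative of order 1 < alpha < 2 on a uniform grid
# with zero Dirichlet boundaries. Textbook form, not the paper's exact scheme.
import numpy as np

def gl_weights(alpha, n):
    """g_k = (-1)^k * binom(alpha, k) via the standard recurrence."""
    g = np.empty(n)
    g[0] = 1.0
    for k in range(1, n):
        g[k] = g[k - 1] * (1.0 - (alpha + 1.0) / k)
    return g

def riesz_matrix(alpha, n, h):
    """A @ u approximates d^alpha u / d|x|^alpha at the n interior nodes."""
    g = gl_weights(alpha, n + 2)
    c = -1.0 / (2.0 * np.cos(np.pi * alpha / 2.0) * h**alpha)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i - j + 1 >= 0:
                A[i, j] += c * g[i - j + 1]  # left-sided part, shifted by 1
            if j - i + 1 >= 0:
                A[i, j] += c * g[j - i + 1]  # right-sided part, shifted by 1
    return A

print(np.round(riesz_matrix(alpha=1.8, n=5, h=0.1), 2))
```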

Relevance:

20.00%

Publisher:

Abstract:

Meta-analysis is a method for obtaining a weighted average of results from various studies. In addition to pooling effect sizes, meta-analysis can also be used to estimate disease frequencies, such as incidence and prevalence. In this article we present methods for the meta-analysis of prevalence. We discuss the logit and double arcsine transformations used to stabilise the variance. We note the special situation of multiple-category prevalence and propose solutions to the problems that arise. We describe the implementation of these methods in the MetaXL software, and present a simulation study and an example on multiple sclerosis from the Global Burden of Disease 2010 project. We conclude that the double arcsine transformation is preferred over the logit, and that the MetaXL implementation of multiple-category prevalence is an improvement in the methodology of the meta-analysis of prevalence.
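
A minimal sketch of the double arcsine approach: apply the Freeman-Tukey transformation to each study's prevalence, pool with inverse-variance weights, and back-transform with Miller's formula. The counts are invented and only fixed-effect pooling is shown.

```python
# Sketch: Freeman-Tukey double arcsine transformation for pooling prevalence,
# with fixed-effect inverse-variance weights and Miller's back-transformation.
# Study counts below are invented; random-effects pooling is omitted.
import numpy as np

cases = np.array([12, 30, 7])   # events per study
n = np.array([100, 250, 80])    # sample sizes

# Double arcsine transform and its (approximate) variance 1 / (n + 0.5).
t = np.arcsin(np.sqrt(cases / (n + 1))) + np.arcsin(np.sqrt((cases + 1) / (n + 1)))
var = 1.0 / (n + 0.5)

w = 1.0 / var
t_pooled = np.sum(w * t) / np.sum(w)

# Miller's back-transformation, using the harmonic mean of the sample sizes.
n_harm = len(n) / np.sum(1.0 / n)
s = np.sin(t_pooled)
p = 0.5 * (1.0 - np.sign(np.cos(t_pooled))
           * np.sqrt(1.0 - (s + (s - 1.0 / s) / n_harm) ** 2))
print(f"pooled prevalence ~ {p:.3f}")
```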