826 results for Population set-based methods
Abstract:
Background: The fact that Tannerella forsythia, an important periopathogen, is difficult to cultivate from mixed infections has impeded precise estimates of its distribution within a given population. In order to discern T. forsythia alone from the mixed infection of plaque, the use of sensitive 16S ribosomal RNA-based polymerase chain reaction (PCR) detection is necessary. Objectives: The aim of the present study was to determine the distribution of T. forsythia in an adult and in an adolescent population. Materials and methods: Subgingival plaque samples were obtained from 498 Australian adults and from 228 adolescent subjects from Manchester, UK. Tannerella forsythia was detected using PCR and confirmed by restriction analysis. Semi-quantitation of the organisms was carried out using two specific primers of differing sensitivities. Results: In the adolescent population, 25% were found to carry T. forsythia, albeit in relatively low numbers. In the adult population, a total of 37.8% and 11% were found to carry the organism with primer 2 and primer 1, respectively, suggesting that around 27% had between 10³ and 10⁷ organisms. Although there was an apparent increased proportion of T. forsythia-positive subjects in those aged ≥ 50 years, this was not statistically significant. However, T. forsythia-positive male smokers showed increased disease severity compared with T. forsythia-negative subjects. Conclusion: This study has shown that at least 25% of the adolescent population carry low numbers of T. forsythia, whereas at least 37% of adults carry the organism, with some 11% having relatively high numbers. The relationship between T. forsythia and disease progression in these populations, however, remains to be determined.
Abstract:
The present scarcity of operational knowledge-based systems (KBS) has been attributed, in part, to inadequate consideration of user interface design during development. From a human factors perspective the problem has stemmed from an overall lack of user-centred design principles. Consequently the integration of human factors principles and techniques is seen as a necessary and important precursor to ensuring the implementation of KBS which are useful to, and usable by, the end-users for whom they are intended. Focussing upon KBS work taking place within commercial and industrial environments, this research set out to assess both the extent to which human factors support was presently being utilised within development, and the future path for human factors integration. The assessment consisted of interviews conducted with a number of commercial and industrial organisations involved in KBS development; and a set of three detailed case studies of individual KBS projects. Two of the studies were carried out within a collaborative Alvey project, involving the Interdisciplinary Higher Degrees Scheme (IHD) at the University of Aston in Birmingham, BIS Applied Systems Ltd (BIS), and the British Steel Corporation. This project, which had provided the initial basis and funding for the research, was concerned with the application of KBS to the design of commercial data processing (DP) systems. The third study stemmed from involvement in a KBS project being carried out by the Technology Division of the Trustee Savings Bank Group plc. The preliminary research highlighted poor human factors integration. In particular, there was a lack of early consideration of end-user requirements definition and user-centred evaluation. Instead, concentration was given to the construction of the knowledge base and prototype evaluation with the expert(s). In response to this identified problem, a set of methods was developed that was aimed at encouraging developers to consider user interface requirements early on in a project. These methods were then applied in the two further projects, and their uptake within the overall development process was monitored. Experience from the two studies demonstrated that early consideration of user interface requirements was both feasible and instructive for guiding future development work. In particular, it was shown that a user interface prototype could be used as a basis for capturing requirements at the functional (task) level and at the interface dialogue level. Extrapolating from this experience, a KBS life-cycle model is proposed which incorporates user interface design (and, within that, user evaluation) as a largely parallel, rather than subsequent, activity to knowledge base construction. Further to this, there is a discussion of several key elements which can be seen as inhibiting the integration of human factors within KBS development. These elements stem from characteristics of present KBS development practice; from constraints within the commercial and industrial development environments; and from the state of existing human factors support.
Abstract:
Gene-based tests of association are frequently applied to common SNPs (MAF>5%) as an alternative to single-marker tests. In this analysis we conduct a variety of simulation studies applied to five popular gene-based tests, investigating general trends related to their performance in realistic situations. In particular, we focus on the impact of non-causal SNPs and a variety of LD structures on the behavior of these tests. Ultimately, we find that non-causal SNPs can significantly impact the power of all gene-based tests. On average, we find that the “noise” from 6–12 non-causal SNPs will cancel out the “signal” of one causal SNP across five popular gene-based tests. Furthermore, we find complex and differing behavior of the methods in the presence of LD within and between non-causal and causal SNPs. Overall, better approaches for a priori prioritization of potentially causal SNPs (e.g., predicting functionality of non-synonymous SNPs), application of these methods to sequenced or fully imputed datasets, and limited use of window-based methods for assigning inter-genic SNPs to genes will improve power. However, significant power loss from non-causal SNPs may remain unless alternative statistical approaches robust to the inclusion of non-causal SNPs are developed.
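A minimal simulation sketch (Python, not the authors' code) of the dilution effect described above: the power of a simple sum-of-chi-squares gene-based test as non-causal SNPs are added alongside a single causal SNP. The effect size, MAF, sample size, and the form of the test are illustrative assumptions.

```python
# Hypothetical illustration: power of a simple sum-of-chi-squares gene-based
# test as non-causal SNPs are added alongside one causal SNP.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_power(n_noncausal, n_subjects=2000, maf=0.3, beta=0.15,
                   n_reps=500, alpha=0.05):
    """Estimate power of a gene-based test that sums per-SNP chi-square statistics."""
    n_snps = 1 + n_noncausal
    hits = 0
    for _ in range(n_reps):
        geno = rng.binomial(2, maf, size=(n_subjects, n_snps)).astype(float)
        y = beta * geno[:, 0] + rng.normal(size=n_subjects)   # only SNP 0 is causal
        chi2 = 0.0
        for j in range(n_snps):
            r = np.corrcoef(geno[:, j], y)[0, 1]
            chi2 += n_subjects * r ** 2                        # 1-df score-like statistic
        pval = stats.chi2.sf(chi2, df=n_snps)                  # assumes independent SNPs
        hits += pval < alpha
    return hits / n_reps

for k in (0, 3, 6, 12):
    print(f"{k:2d} non-causal SNPs -> power ~ {simulate_power(k):.2f}")
```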
Abstract:
The primary objective is to investigate the main factors contributing to GMS expenditure on pharmaceutical prescribing and to project this expenditure to 2026. This study is located in the area of pharmacoeconomic cost containment and projections literature. The thesis has five main aims: 1. To determine the main factors contributing to GMS expenditure on pharmaceutical prescribing. 2. To develop a model to project GMS prescribing expenditure in five-year intervals to 2026, using 2006 Central Statistics Office (CSO) Census data and 2007 Health Service Executive-Primary Care Reimbursement Service (HSE-PCRS) sample data. 3. To develop a model to project GMS prescribing expenditure in five-year intervals to 2026, using 2012 HSE-PCRS population data, incorporating cost containment measures, and 2011 CSO Census data. 4. To investigate the impact of demographic factors and the pharmacology of drugs (Anatomical Therapeutic Chemical (ATC)) on GMS expenditure. 5. To explore the consequences of GMS policy changes on prescribing expenditure and behaviour between 2008 and 2014. The thesis is centered around three published articles and is located between the end of a booming Irish economy in 2007, a recession from 2008–2013, and the beginning of a recovery in 2014. The literature identified a number of factors influencing pharmaceutical expenditure, including population growth, population ageing, changes in drug utilisation and drug therapies, age, gender and location. The literature identified the methods previously used in predictive modelling and consequently, the Monte Carlo Simulation (MCS) model was used to simulate projected expenditures to 2026. Also, the literature guided the use of Ordinary Least Squares (OLS) regression in determining demographic and pharmacology factors influencing prescribing expenditure. The study commences against a backdrop of growing GMS prescribing costs, which had risen from €250 million in 1998 to over €1 billion by 2007. Using sample 2007 HSE-PCRS prescribing data (n=192,000) and CSO population data from 2008, (Conway et al., 2014) estimated that GMS prescribing expenditure could rise to €2 billion by 2026. The cogency of these findings was impacted by the global economic crisis of 2008, which resulted in a sharp contraction in the Irish economy and mounting fiscal deficits, leading to Ireland's entry into a bailout programme. The sustainability of funding community drug schemes, such as the GMS, came under the spotlight of the EU, IMF and ECB (Troika), who set stringent targets for reducing drug costs as conditions of the bailout programme. Cost containment measures included: the introduction of income eligibility limits for GP visit cards and medical cards for those aged 70 and over, the introduction of co-payments for prescription items, and reductions in wholesale mark-up and pharmacy dispensing fees. Projections for GMS expenditure were re-evaluated using 2012 HSE-PCRS prescribing population data and CSO population data based on Census 2011. Taking into account both cost containment measures and revised population predictions, GMS expenditure is estimated to increase by 64%, from €1.1 billion in 2016 to €1.8 billion by 2026 (Conway-Lenihan and Woods, 2015). In the final paper, a cross-sectional study was carried out on the HSE-PCRS population prescribing database (n=1.63 million claimants) to investigate the impact of demographic factors, and the pharmacology of the drugs, on GMS prescribing expenditure.
Those aged over 75 (β = 1.195) and cardiovascular prescribing (β = 1.193) were the greatest contributors to annual GMS prescribing costs. Respiratory drugs (Montelukast) recorded the highest proportion and expenditure for GMS claimants under the age of 15. Drugs prescribed for the nervous system (Escitalopram, Olanzapine and Pregabalin) were highest for those between 16 and 64 years, while cardiovascular drugs (statins) were highest for those aged over 65. Females were more expensive than males and were prescribed more items across the four ATC groups, except among children under 11 (Conway-Lenihan et al., 2016). This research indicates that growth in the proportion of elderly claimants and associated levels of cardiovascular prescribing, particularly for statins, will present difficulties for Ireland in terms of cost containment. Whilst policies aimed at cost containment (co-payment charges, generic substitution, reference pricing, adjustments to GMS eligibility) can be used to curtail expenditure, health promotion programmes and educational interventions should be given equal emphasis. Also, policies intended to affect physicians' prescribing behaviour, including guidelines, information (about price and less expensive alternatives) and feedback, together with the use of budgetary restrictions, could yield savings.
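A minimal Monte Carlo sketch of the kind of age-group expenditure projection described above; the claimant numbers, per-claimant costs, growth rates, and spreads below are invented placeholders, not HSE-PCRS or CSO figures, and the model form is far simpler than the published MCS model.

```python
# Hypothetical Monte Carlo sketch of projecting prescribing expenditure by age
# group; all inputs below are invented placeholders, not HSE-PCRS or CSO values.
import numpy as np

rng = np.random.default_rng(1)

claimants  = np.array([300.0, 900.0, 250.0, 180.0])    # thousands, by age group (assumed)
mean_cost  = np.array([150.0, 450.0, 1200.0, 1800.0])  # EUR per claimant per year (assumed)
cost_sd    = 0.10 * mean_cost                           # assumed spread in unit cost
pop_growth = np.array([0.002, 0.005, 0.020, 0.030])     # assumed annual growth rates

def project_expenditure(years=10, n_sims=10_000):
    """Simulate total expenditure (EUR millions) `years` years ahead."""
    totals = np.empty(n_sims)
    grown = claimants * (1 + pop_growth) ** years        # deterministic population growth
    for s in range(n_sims):
        cost = rng.normal(mean_cost, cost_sd)             # sampled unit costs
        totals[s] = np.sum(grown * 1_000 * cost) / 1e6    # EUR millions
    return totals

sims = project_expenditure()
print(f"median projection: EUR {np.median(sims):,.0f}m "
      f"(5th-95th percentile: {np.percentile(sims, 5):,.0f}-{np.percentile(sims, 95):,.0f}m)")
```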
Abstract:
Many studies have shown the considerable potential for the application of remote-sensing-based methods for deriving estimates of lake water quality. However, the reliable application of these methods across time and space is complicated by the diversity of lake types, sensor configuration, and the multitude of different algorithms proposed. This study tested one operational and 46 empirical algorithms sourced from the peer-reviewed literature that have individually shown potential for estimating lake water quality properties in the form of chlorophyll-a (algal biomass) and Secchi disc depth (SDD) (water transparency) in independent studies. Nearly half (19) of the algorithms were unsuitable for use with the remote-sensing data available for this study. The remaining 28 were assessed using the Terra/Aqua satellite archive to identify the best performing algorithms in terms of accuracy and transferability within the period 2001–2004 in four test lakes, namely Vänern, Vättern, Geneva, and Balaton. These lakes represent the broad continuum of large European lake types, varying in terms of eco-region (latitude/longitude and altitude), morphology, mixing regime, and trophic status. All algorithms were tested for each lake separately and combined to assess the degree of their applicability in ecologically different sites. None of the algorithms assessed in this study exhibited promise when all four lakes were combined into a single data set and most algorithms performed poorly even for specific lake types. A chlorophyll-a retrieval algorithm originally developed for eutrophic lakes showed the most promising results (R2 = 0.59) in oligotrophic lakes. Two SDD retrieval algorithms, one originally developed for turbid lakes and the other for lakes with various characteristics, exhibited promising results in relatively less turbid lakes (R2 = 0.62 and 0.76, respectively). The results presented here highlight the complexity associated with remotely sensed lake water quality estimates and the high degree of uncertainty due to various limitations, including the lake water optical properties and the choice of methods.
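For illustration, a generic band-ratio style chlorophyll-a retrieval together with the R² assessment used to compare algorithms against in-situ data; the coefficients, bands, and match-up values are placeholders and do not correspond to any of the 47 algorithms tested here.

```python
# Illustrative band-ratio chlorophyll-a retrieval and R^2 assessment.
# Coefficients a, b, the bands used, and the match-up values are placeholders.
import numpy as np

def chl_band_ratio(r_green, r_red, a=25.0, b=-5.0):
    """Hypothetical empirical model: chl-a = a * (green/red reflectance) + b."""
    return a * (np.asarray(r_green) / np.asarray(r_red)) + b

def r_squared(observed, predicted):
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# In-situ chl-a (ug/L) and matching satellite reflectances (made-up numbers).
in_situ = [2.1, 3.4, 5.0, 8.2, 12.5]
green   = [0.020, 0.024, 0.028, 0.035, 0.045]
red     = [0.015, 0.014, 0.013, 0.012, 0.011]

pred = chl_band_ratio(green, red)
print("R^2 =", round(r_squared(in_situ, pred), 2))
```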
Abstract:
The use of remote sensing for monitoring of submerged aquatic vegetation (SAV) in fluvial environments has been limited by the spatial and spectral resolution of available image data. The absorption of light in water also complicates the use of common image analysis methods. This paper presents the results of a study that uses very high resolution (VHR) image data, collected with a near-infrared-sensitive DSLR camera, to map the distribution of SAV species for three sites along the Desselse Nete, a lowland river in Flanders, Belgium. Plant species, including Ranunculus aquatilis L., Callitriche obtusangula Le Gall, Potamogeton natans L., Sparganium emersum L. and Potamogeton crispus L., were classified from the data using Object-Based Image Analysis (OBIA) and expert knowledge. A classification rule set based on a combination of both spectral and structural image variation (e.g. texture and shape) was developed for images from two sites. A comparison of the classifications with manually delineated ground truth maps resulted in 61% overall accuracy for both sites. Application of the rule set to a third validation image resulted in 53% overall accuracy. These consistent results show promise for species-level mapping in such biodiverse environments, but also prompt a discussion on the assessment of classification accuracy.
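A small sketch of the accuracy assessment step: overall accuracy (and per-class producer's accuracy) derived from a confusion matrix between the classified map and the manually delineated ground truth. The class names and counts are invented for illustration and are not the study's results.

```python
# Overall classification accuracy from a confusion matrix (rows = ground truth,
# columns = classified); the counts below are invented for illustration.
import numpy as np

classes = ["R. aquatilis", "C. obtusangula", "P. natans", "background"]
confusion = np.array([
    [120,  15,  10,   5],
    [ 20,  90,  12,   8],
    [ 10,  14, 100,   6],
    [  8,   7,   9, 150],
])

overall_accuracy = np.trace(confusion) / confusion.sum()
print(f"overall accuracy: {overall_accuracy:.0%}")

# Per-class producer's accuracy (diagonal count / ground-truth total).
for i, name in enumerate(classes):
    print(f"{name}: {confusion[i, i] / confusion[i].sum():.0%}")
```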
Abstract:
Reliability and dependability modeling can be employed during many stages of analysis of a computing system to gain insights into its critical behaviors. To provide useful results, realistic models of systems are often necessarily large and complex. Numerical analysis of these models presents a formidable challenge because the sizes of their state-space descriptions grow exponentially with the sizes of the models. On the other hand, simulation of the models requires analysis of many trajectories in order to compute statistically correct solutions. This dissertation presents a novel framework for performing both numerical analysis and simulation. The new numerical approach computes bounds on the solutions of transient measures in large continuous-time Markov chains (CTMCs). It extends existing path-based and uniformization-based methods by identifying sets of paths that are equivalent with respect to a reward measure and related to one another via a simple structural relationship. This relationship makes it possible for the approach to explore multiple paths at the same time, thus significantly increasing the number of paths that can be explored in a given amount of time. Furthermore, the use of a structured representation for the state space and the direct computation of the desired reward measure (without ever storing the solution vector) allow it to analyze very large models using a very small amount of storage. Often, path-based techniques must compute many paths to obtain tight bounds. In addition to presenting the basic path-based approach, we also present algorithms for computing more paths and tighter bounds quickly. One resulting approach is based on the concept of path composition, whereby precomputed subpaths are composed to compute whole paths efficiently. Another approach is based on selecting important paths (among a set of many paths) for evaluation. Many path-based techniques suffer from having to evaluate many (unimportant) paths; evaluating only the important ones helps to compute tight bounds efficiently.
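As background, a sketch of standard uniformization for a transient measure of a small CTMC, the textbook method that the path-based bounding framework extends; the three-state generator below is a toy example, not a model from the dissertation.

```python
# Standard uniformization for the transient state distribution of a small CTMC.
# The 3-state generator below is a toy example.
import numpy as np
from scipy import stats

Q = np.array([[-0.5,  0.4,  0.1],    # generator matrix (rows sum to zero)
              [ 0.3, -0.6,  0.3],
              [ 0.0,  0.2, -0.2]])

def transient_distribution(Q, pi0, t, tol=1e-10):
    """pi(t) = sum_k Poisson(k; Lambda*t) * pi0 * P^k, with P = I + Q/Lambda."""
    lam = max(-np.diag(Q))                 # uniformization rate
    P = np.eye(len(Q)) + Q / lam           # DTMC subordinated to a Poisson process
    term = np.asarray(pi0, dtype=float)    # pi0 * P^k, starting at k = 0
    result = np.zeros_like(term)
    k, accumulated = 0, 0.0
    while accumulated < 1.0 - tol:         # stop once the Poisson weights are exhausted
        w = stats.poisson.pmf(k, lam * t)
        result += w * term
        accumulated += w
        term = term @ P
        k += 1
    return result

print(transient_distribution(Q, [1.0, 0.0, 0.0], t=5.0))
```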
Abstract:
Predicting accurate bond length alternations (BLAs) in long conjugated oligomers has been a significant challenge for electronic-structure methods for many decades, made particularly important by the close relationships between BLA and the rich optoelectronic properties of π-delocalized systems. Here, we test the accuracy of recently developed, and increasingly popular, double hybrid (DH) functionals, positioned at the top of Jacob's Ladder of DFT methods of increasing sophistication, computational cost, and accuracy, due to their incorporation of MP2 correlation energy. Our test systems comprise oligomeric series of polyacetylene, polymethineimine, and polysilaacetylene up to six units long. MP2 calculations reveal a pronounced shift in BLAs between the 6-31G(d) basis set used in many studies of BLA to date and the larger cc-pVTZ basis set, but only modest shifts between cc-pVTZ and aug-cc-pVQZ results. We hence perform new reference CCSD(T)/cc-pVTZ calculations for all three series of oligomers, against which we assess the performance of several families of DH functionals based on BLYP, PBE, and TPSS, along with lower-rung relatives including global and range-separated hybrids. Our results show that DH functionals systematically improve the accuracy of BLAs relative to single hybrid functionals. xDH-PBE0 (N⁴ scaling using SOS-MP2) emerges as a DH functional rivaling the BLA accuracy of SCS-MP2 (N⁵ scaling), which was found to offer the best compromise between computational cost and accuracy the last time the BLA accuracy of DFT- and wave function-based methods was systematically investigated. Interestingly, xDH-PBE0 (XYG3), which differs from other DHs in that its MP2 term uses PBE0 (B3LYP) orbitals that are not self-consistent with the DH functional, is an outlier from the trends of decreasing average BLA errors with increasing fractions of MP2 correlation and HF exchange.
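For reference, a short sketch of how BLA itself can be computed from a chain of alternating bond lengths, under one common convention (mean single-bond length minus mean double-bond length); the lengths below are placeholders, not values from the calculations reported here.

```python
# Bond length alternation (BLA) from an alternating chain of C-C bond lengths,
# taken here as mean(single-bond lengths) - mean(double-bond lengths).
# The lengths (Angstrom) are placeholders, not results from the paper.
import numpy as np

bond_lengths = np.array([1.368, 1.426, 1.372, 1.421, 1.375, 1.418, 1.377])

double_bonds = bond_lengths[0::2]   # assumes the chain starts with a double bond
single_bonds = bond_lengths[1::2]

bla = single_bonds.mean() - double_bonds.mean()
print(f"BLA = {bla:.3f} Angstrom")
```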
Abstract:
Previous investigations have shown that the modal strain energy correlation (MSEC) method can successfully identify damage in truss bridge structures. However, it has to incorporate the sensitivity matrix to estimate damage and is not reliable in certain damage detection cases. This paper presents an improved MSEC method in which the modal strain energy change vector is instead predicted by running the eigensolutions on-line within the optimisation iterations. The particular trial damage treatment group that maximises the fitness function, driving it closest to unity, is identified as the detected damage location. This improvement is then compared with the original MSEC method, along with other typical correlation-based methods, on the finite element model of a simple truss bridge. The contribution of each considered mode to damage detection accuracy is also weighted and discussed. The iterative searching process is operated using a genetic algorithm. The results demonstrate that the improved MSEC method is adequate for detecting damage in truss bridge structures, even when noisy measurements are considered.
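A schematic genetic-algorithm search over candidate damage patterns maximising a correlation-based fitness, of the kind the improved method relies on; the fitness function below is a generic stand-in, not the paper's MSEC formulation, and the damage scenario is invented.

```python
# Schematic GA search for the damage pattern maximising a correlation-based
# fitness; `fitness` is a stand-in for the MSEC-based measure, not its actual form.
import numpy as np

rng = np.random.default_rng(2)
N_ELEMENTS = 20                                  # candidate damage locations (truss members)
true_damage = np.zeros(N_ELEMENTS)
true_damage[7] = 1.0                             # assumed damage scenario
measured = true_damage + rng.normal(0, 0.05, N_ELEMENTS)   # noisy "measured" signature

def fitness(candidate):
    """Correlation between a candidate damage pattern and the measured signature."""
    c = np.corrcoef(candidate, measured)[0, 1]
    return 0.0 if np.isnan(c) else c             # constant candidates get zero fitness

def ga_search(pop_size=40, generations=60, mutation_rate=0.05):
    pop = rng.integers(0, 2, size=(pop_size, N_ELEMENTS)).astype(float)
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]          # keep the fitter half
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))].copy()
        flips = rng.random(children.shape) < mutation_rate           # bit-flip mutation
        children[flips] = 1.0 - children[flips]
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(ind) for ind in pop])]

print("detected damage at members:", np.flatnonzero(ga_search()))
```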
Abstract:
Road agencies require comprehensive, relevant and high-quality data describing their road assets to support their investment decisions. An investment decision support system for road maintenance and rehabilitation mainly comprises three important supporting elements, namely: road asset data, decision support tools and criteria for decision-making. Probability-based methods have played a crucial role in helping decision makers understand the relationship among road-related data, asset performance and the uncertainties in estimating budgets/costs for road management investment. This paper presents applications of the probability-based method to road asset management.
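A minimal sketch of the probability-based idea: per-segment treatment costs drawn from assumed triangular distributions, with the programme budget read off at a chosen confidence level. All figures are invented.

```python
# Minimal probability-based budget estimate for road rehabilitation: each segment's
# treatment cost follows an assumed triangular distribution and the programme
# budget is read off at a chosen confidence level.  All figures are invented.
import numpy as np

rng = np.random.default_rng(3)

# (low, most likely, high) cost per segment in $ thousands - assumed inputs.
segment_costs = [(80, 120, 200), (40, 60, 110), (150, 220, 400), (30, 45, 90)]

def budget_at_confidence(level=0.9, n_sims=20_000):
    draws = np.zeros(n_sims)
    for low, mode, high in segment_costs:
        draws += rng.triangular(low, mode, high, n_sims)
    return np.percentile(draws, level * 100)

print(f"90th-percentile programme budget: ${budget_at_confidence():,.0f}k")
```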
Abstract:
World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, which are generally characterised as decaying sinusoids. For a power system operating ideally these transient responses would have a “ring-down” time of 10–15 seconds. Sometimes equipment failures disturb the ideal operation of power systems and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure system stability and security of large power systems, the potentially dangerous oscillating modes generated from disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies. In power system monitoring there exist two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable, operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. One of the key limitations of all existing parameter estimation methods is the fact that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping. One simply cannot afford to wait long enough to collect the large amounts of data required for existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below. The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model.
The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode. Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for all modes within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test. The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple Chi-Squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM discussed next can monitor frequency changes and so can provide some discrimination in this regard. The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
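A simplified sketch of the energy-based detection idea described above: short-window disturbance energy compared against a threshold fitted to normal operation. The synthetic signal, window length, and 3-sigma threshold rule are stand-ins, not the statistical model developed in the thesis.

```python
# Simplified sketch of energy-based detection: compare windowed signal energy
# against a threshold fitted to normal operation.  The synthetic signal and the
# 3-sigma threshold rule are stand-ins, not the thesis's statistical model.
import numpy as np

rng = np.random.default_rng(4)
fs = 10.0                                   # samples per second (assumed)
t = np.arange(0, 300, 1 / fs)
half = t.size // 2

signal = rng.normal(0, 0.2, t.size)         # random load disturbances
# Normal operation: a well-damped 0.5 Hz mode rings down quickly.
signal[:half] += np.exp(-0.5 * t[:half]) * np.sin(2 * np.pi * 0.5 * t[:half])
# After t = 150 s: damping deteriorates, so each window holds more modal energy.
signal[half:] += np.exp(-0.05 * (t[half:] - 150)) * np.sin(2 * np.pi * 0.5 * t[half:])

window = int(30 * fs)                        # 30 s analysis windows
energies = np.array([np.sum(signal[i:i + window] ** 2)
                     for i in range(0, t.size - window + 1, window)])

baseline = energies[:3]                      # windows known to reflect normal operation
threshold = baseline.mean() + 3 * baseline.std()
print("alarm raised in windows:", np.flatnonzero(energies > threshold))
```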
Abstract:
In this paper, we present a microphone array beamforming approach to blind speech separation. Unlike previous beamforming approaches, our system does not require a priori knowledge of the microphone placement and speaker location, making the system directly comparable to other blind source separation methods which require no prior knowledge of the recording conditions. Microphone locations are automatically estimated using an assumed noise field model, and speaker locations are estimated using cross-correlation based methods. The system is evaluated on the data provided for the PASCAL Speech Separation Challenge 2 (SSC2), achieving a word error rate of 58% on the evaluation set.
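A minimal sketch of the cross-correlation cue used to estimate inter-microphone delays (and hence speaker locations); the synthetic signals and two-microphone setup are assumptions for illustration, not the SSC2 recording conditions.

```python
# Cross-correlation based time-delay estimation between two microphones, the kind
# of cue used to localise speakers; the signals are synthetic and the geometry is
# an assumption, not the SSC2 setup.
import numpy as np

rng = np.random.default_rng(5)
fs = 16_000                               # sample rate (Hz)
true_delay = 12                           # samples by which mic 2 lags mic 1

source = rng.normal(size=fs)              # 1 s of speech-like noise
mic1 = source + 0.05 * rng.normal(size=fs)
mic2 = np.roll(source, true_delay) + 0.05 * rng.normal(size=fs)

# Full cross-correlation; the lag of the peak gives the relative delay.
xcorr = np.correlate(mic2, mic1, mode="full")
lags = np.arange(-len(mic1) + 1, len(mic1))
estimated_delay = lags[np.argmax(xcorr)]

print(f"estimated delay: {estimated_delay} samples "
      f"({estimated_delay / fs * 1e3:.2f} ms)")
```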
Abstract:
Background: The residue-wise contact order (RWCO) describes the sequence separations between the residues of interest and their contacting residues in a protein sequence. It is a new kind of one-dimensional protein structure descriptor that represents the extent of long-range contacts and is considered a generalization of contact order. Together with secondary structure, accessible surface area, the B factor, and contact number, RWCO provides comprehensive and indispensable information for reconstructing the protein three-dimensional structure from a set of one-dimensional structural properties. Accurately predicting RWCO values could have many important applications in protein three-dimensional structure prediction and protein folding rate prediction, and give deep insights into protein sequence-structure relationships. Results: We developed a novel approach to predict residue-wise contact order values in proteins based on support vector regression (SVR), starting from primary amino acid sequences. We explored seven different sequence encoding schemes to examine their effects on the prediction performance, including local sequence in the form of PSI-BLAST profiles, local sequence plus amino acid composition, local sequence plus molecular weight, local sequence plus secondary structure predicted by PSIPRED, local sequence plus molecular weight and amino acid composition, local sequence plus molecular weight and predicted secondary structure, and local sequence plus molecular weight, amino acid composition and predicted secondary structure. When using local sequences with multiple sequence alignments in the form of PSI-BLAST profiles, we could predict the RWCO distribution with a Pearson correlation coefficient (CC) between the predicted and observed RWCO values of 0.55, and a root mean square error (RMSE) of 0.82, based on a well-defined dataset with 680 protein sequences. Moreover, by incorporating global features such as molecular weight and amino acid composition we could further improve the prediction performance, raising the CC to 0.57 and lowering the RMSE to 0.79. In addition, combining the secondary structure predicted by PSIPRED was found to significantly improve the prediction performance and could yield the best prediction accuracy, with a CC of 0.60 and an RMSE of 0.78, providing performance at least comparable with other existing methods. Conclusion: The SVR method shows a prediction performance competitive with, or at least comparable to, the previously developed linear regression-based methods for predicting RWCO values. In contrast to support vector classification (SVC), SVR is very good at estimating the raw value profiles of the samples. The successful application of the SVR approach in this study reinforces the fact that support vector regression is a powerful tool in extracting the protein sequence-structure relationship and in estimating the protein structural profiles from amino acid sequences.
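A sketch of window-based SVR for a per-residue property such as RWCO, using scikit-learn with a simple one-hot sequence encoding in place of the PSI-BLAST profile features used in the paper; the sequences and target values are random placeholders, so the printed CC and RMSE only demonstrate the shape of the pipeline.

```python
# Window-based SVR for a per-residue property such as RWCO, with one-hot encoding
# standing in for PSI-BLAST profiles; sequences and targets are random placeholders,
# so the reported CC and RMSE are meaningless beyond showing the pipeline.
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import SVR

rng = np.random.default_rng(6)
AA = "ACDEFGHIKLMNPQRSTVWY"
WINDOW = 7                                   # residues centred on the target (assumed)

def encode(seq, centre, window=WINDOW):
    """One-hot encode a window of residues centred at `centre` (zero-padded at ends)."""
    half = window // 2
    vec = np.zeros(window * len(AA))
    for w in range(window):
        pos = centre - half + w
        if 0 <= pos < len(seq):
            vec[w * len(AA) + AA.index(seq[pos])] = 1.0
    return vec

seqs = ["".join(rng.choice(list(AA), 60)) for _ in range(30)]   # placeholder "proteins"
X = np.array([encode(s, i) for s in seqs for i in range(len(s))])
y = rng.normal(20, 5, size=len(X))           # stand-in for observed RWCO values

split = int(0.8 * len(X))
model = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X[:split], y[:split])
pred = model.predict(X[split:])

cc, _ = pearsonr(y[split:], pred)
rmse = np.sqrt(np.mean((y[split:] - pred) ** 2))
print(f"CC = {cc:.2f}, RMSE = {rmse:.2f}")
```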
Abstract:
PURPOSE/OBJECTIVES: To determine the prevalence of malnutrition and chemotherapy-induced nausea and vomiting (CINV) limiting dietary intake in a chemotherapy unit. DESIGN: Cross-sectional descriptive audit. SETTING: Chemotherapy ambulatory care unit in an Australian teaching hospital. SAMPLE: 121 patients receiving chemotherapy for malignancies, ≥18 years and able to provide verbal consent. METHODS: An Accredited Practising Dietitian collected all data. Chi-square tests were used to determine the relationship of malnutrition with variables and demographic data. MAIN RESEARCH VARIABLES: Nutritional status, weight change, BMI, prior dietetic input, CINV and CINV that limited dietary intake. FINDINGS: Thirty-one (26%) participants were malnourished, 12 (10%) had intake-limiting CINV, 22 (20%) reported significant weight loss and 20 (18%) required improved nutrition symptom management. High nutrition risk diagnoses, CINV, BMI and weight loss were significantly associated with malnutrition. Thirteen (35%) participants with malnutrition, significant weight loss, intake-limiting CINV and/or critically requiring improved symptom management reported no dietetic input, the majority of whom were overweight or obese. CONCLUSIONS: This audit determined that over one quarter of patients receiving chemotherapy in this ambulatory setting were malnourished and that the majority of patients reporting intake-limiting CINV were malnourished. IMPLICATIONS FOR NURSING: Patients with malnutrition and/or intake-limiting CINV and in need of improved nutrition symptom management may be overlooked, especially patients who are overweight or obese - an increasing proportion of the Australian population. Evidence-based practice guidelines recommend implementing validated nutrition screening tools, such as the Malnutrition Screening Tool, in patients undergoing chemotherapy to identify those at risk of malnutrition requiring dietitian referral.
Abstract:
Collaborative question answering (cQA) portals such as Yahoo! Answers allow users, as askers or answer authors, to communicate and exchange information through the asking and answering of questions in the network. In their current set-up, answers to a question are arranged in chronological order. For effective information retrieval, it will be advantageous to have the users’ answers ranked according to their quality. This paper proposes a novel approach of evaluating and ranking the users’ answers and recommending the top-n quality answers to information seekers. The proposed approach is based on a user-reputation method which assigns a score to an answer reflecting its answer author’s reputation level in the network. The proposed approach is evaluated on a dataset collected from a live cQA, namely Yahoo! Answers. To compare the results obtained by the non-content-based user-reputation method, experiments were also conducted with several content-based methods that assign a score to an answer reflecting its content quality. Various combinations of non-content and content-based scores were also used in comparing results. Empirical analysis shows that the proposed method is able to rank the users’ answers and recommend the top-n answers with good accuracy. Results of the proposed method outperform the content-based methods, the various combinations, and the results obtained by the popular link analysis method, HITS.
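A minimal sketch of reputation-weighted ranking and top-n recommendation; the smoothed best-answer-ratio reputation score is a generic stand-in, not the scoring method proposed in the paper.

```python
# Rank answers by author reputation and recommend the top-n.  The reputation
# formula (best-answer ratio smoothed towards a prior) is a generic stand-in,
# not the method proposed in the paper.
from dataclasses import dataclass

@dataclass
class Answer:
    author: str
    text: str

# Hypothetical author history: (best answers, total answers posted).
history = {"alice": (40, 100), "bob": (5, 10), "carol": (1, 50)}

def reputation(author, prior=0.25, weight=10):
    """Smoothed best-answer ratio so authors with little history start near the prior."""
    best, total = history.get(author, (0, 0))
    return (best + prior * weight) / (total + weight)

def top_n(answers, n=2):
    return sorted(answers, key=lambda a: reputation(a.author), reverse=True)[:n]

answers = [Answer("carol", "Try rebooting."),
           Answer("alice", "Check the service logs first."),
           Answer("bob", "Reinstall the driver.")]

for a in top_n(answers):
    print(f"{a.author} ({reputation(a.author):.2f}): {a.text}")
```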