896 results for in-depth analysis


Relevance:

100.00%

Publisher:

Abstract:

Background: Spatial analysis is increasingly important for identifying modifiable geographic risk factors for disease. However, spatial health data from surveys are often incomplete, ranging from missing data for only a few variables to missing data for many variables. For spatial analyses of health outcomes, selection of an appropriate imputation method is critical in order to produce the most accurate inferences. Methods: We present a cross-validation approach for selecting between three imputation methods for health survey data with correlated lifestyle covariates, using as a case study type II diabetes mellitus (DM II) risk across 71 Queensland Local Government Areas (LGAs). We compare the accuracy of mean imputation with imputation using multivariate normal and conditional autoregressive prior distributions. Results: The most accurate imputation method depends on the application and is not necessarily the most complex one. Mean imputation was selected as the most accurate method in this application. Conclusions: Selecting an appropriate imputation method for health survey data, after accounting for spatial correlation and correlation between covariates, allows more complete analysis of geographic risk factors for disease, with more confidence in the results to inform public policy decision-making.
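The selection principle described here (hold out observed values, impute them with each candidate method, and score the imputation error) can be sketched in Python. This is an illustrative sketch with invented data showing only the mean-imputation arm; the study's multivariate normal and conditional autoregressive models are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey covariate for 71 areas (stand-in data only).
true_vals = rng.normal(loc=10.0, scale=2.0, size=71)

def cv_rmse_mean_imputation(values, n_folds=10):
    """Hold out each fold in turn, impute it with the mean of the
    remaining values, and return the root-mean-square imputation error."""
    idx = np.arange(len(values))
    rng.shuffle(idx)
    folds = np.array_split(idx, n_folds)
    sq_errors = []
    for fold in folds:
        mask = np.ones(len(values), dtype=bool)
        mask[fold] = False
        imputed = values[mask].mean()          # mean imputation
        sq_errors.extend((values[fold] - imputed) ** 2)
    return float(np.sqrt(np.mean(sq_errors)))

rmse = cv_rmse_mean_imputation(true_vals)
print(rmse)
```

The same cross-validated RMSE would be computed for each candidate imputation model, and the method with the lowest score selected.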

Relevance:

100.00%

Publisher:

Abstract:

The power system network is assumed to be in steady state even during low-frequency transients. However, depending on generator dynamics and on load and control characteristics, the system model and the nature of the power flow equations can vary. The nature of the power flow equations describing the system during a contingency is investigated in detail. It is shown that under some mild assumptions on load-voltage characteristics, the power flow equations can be decoupled in an exact manner. When the generator dynamics are considered, the solutions for the load voltages are exact if load nodes are not directly connected to each other.
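For reference, the textbook AC power-flow equations behind this discussion (standard notation, not taken from the paper) are, for bus i with voltage magnitudes |V_i| and angle differences θ_ij:

```latex
P_i = \sum_{j}|V_i||V_j|\,\bigl(G_{ij}\cos\theta_{ij} + B_{ij}\sin\theta_{ij}\bigr),
\qquad
Q_i = \sum_{j}|V_i||V_j|\,\bigl(G_{ij}\sin\theta_{ij} - B_{ij}\cos\theta_{ij}\bigr).
```

Exact decoupling in the sense of the abstract means the equations governing the load voltages separate from the rest of the system under the stated load-voltage assumptions, as opposed to the approximate P-θ/Q-V decoupling used in fast-decoupled load flow.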

Relevance:

100.00%

Publisher:

Abstract:

QUT (Queensland University of Technology) is a leading university based in the city of Brisbane, Queensland, Australia. It is a selectively research-intensive university with 2,500 higher degree research students and an overall student population of 45,000. The transition from print to online resources is largely complete, and the library now provides access to 450,000 print books, 1,000 print journals, 600,000 ebooks, 120,000 ejournals and 100,000 online videos. The ebook collection is now used three times as much as the print book collection. This paper focuses on QUT Library's ebook strategy and the challenges of building and managing a rapidly growing collection of ebooks across a range of publishers, platforms, and business and financial models. The paper provides an account of QUT Library's experiences in using Patron Driven Acquisition (PDA) through eBook Library (EBL); the strategic procurement of publisher and subject collections under lease and outright purchase models; the more recent transition to Evidence Based Selection (EBS) options provided by some publishers; and its piloting of etextbook models. The paper provides an in-depth analysis of each of these business models at QUT, focusing on access versus collection development, usage, cost per use, and value for money.
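The cost-per-use comparison underpinning such an analysis is simple arithmetic; a minimal sketch in Python, with figures invented for illustration (the paper's actual cost and usage data are not reproduced here):

```python
# Illustrative cost-per-use comparison across acquisition models.
# All figures below are invented for this sketch.
collections = {
    "PDA (EBL)":            {"cost": 120000.0, "uses": 48000},
    "leased collection":    {"cost": 30000.0,  "uses": 6000},
    "purchased collection": {"cost": 90000.0,  "uses": 10000},
}

# Rank models from best to worst value for money.
for name, c in sorted(collections.items(),
                      key=lambda kv: kv[1]["cost"] / kv[1]["uses"]):
    print(f"{name}: ${c['cost'] / c['uses']:.2f} per use")
```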

Relevance:

100.00%

Publisher:

Abstract:

In the simple theory of flexure of beams, the slope, bending moment, shearing force, load and other quantities are expressed as derivatives of the deflection y with respect to x. It is shown that the elastic curve of a transversely loaded beam can be represented by a Maclaurin series. Substitution of the values of the derivatives gives a direct solution of beam problems. In this paper the method is applied to derive the theorem of three moments and the slope-deflection equations. The method is extended to the solution of a rigid portal frame. Finally, the method is applied to deduce the results on which the moment distribution method of analyzing rigid frames is based.
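In one widely used sign convention (conventions differ between texts), the bending moment M, shear V and load intensity w are related to derivatives of the deflection y, so the Maclaurin expansion of the elastic curve about the origin takes the form:

```latex
EI\,y''(x) = M(x), \qquad EI\,y'''(x) = V(x), \qquad EI\,y''''(x) = -w(x)

y(x) = y_0 + \theta_0\,x + \frac{M_0}{2!\,EI}\,x^2 + \frac{V_0}{3!\,EI}\,x^3
       - \frac{w_0}{4!\,EI}\,x^4 + \cdots
```

Here y_0, θ_0, M_0, V_0 and w_0 are the deflection, slope, moment, shear and load intensity at x = 0. For a uniformly distributed load the higher derivatives vanish and the series terminates at the quartic term, which is what makes the direct substitution practical.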

Relevance:

100.00%

Publisher:

Abstract:

Chromosomal alterations in leukemia have been shown to have prognostic and predictive significance and are also important minimal residual disease (MRD) markers in the follow-up of leukemia patients. Although specific oncogenes and tumor suppressors have been discovered in some of the chromosomal alterations, the role and target genes of many alterations in leukemia remain unknown. In addition, a number of leukemia patients have a normal karyotype by standard cytogenetics but show variability in clinical course and are often molecularly heterogeneous. Cytogenetic methods traditionally used in leukemia analysis and diagnostics, namely G-banding, various fluorescence in situ hybridization (FISH) techniques, and chromosomal comparative genomic hybridization (cCGH), have enormously increased knowledge about the leukemia genome, but have limitations in resolution or in genomic coverage. In the last decade, the development of microarray comparative genomic hybridization (array-CGH, aCGH) for DNA copy number analysis and the SNP microarray (SNP-array) method for simultaneous copy number and loss of heterozygosity (LOH) analysis has enabled investigation of chromosomal and gene alterations genome-wide with high resolution and high throughput. In these studies, genetic alterations were analyzed in acute myeloid leukemia (AML) and chronic lymphocytic leukemia (CLL). The aim was to screen and characterize genomic alterations that could play a role in leukemia pathogenesis using aCGH and SNP-arrays. One of the most important goals was to screen cryptic alterations in karyotypically normal leukemia patients. In addition, chromosomal changes were evaluated to narrow the target regions, to find new markers, and to obtain tumor suppressor and oncogene candidates.
The work presented here shows the capability of aCGH to detect submicroscopic copy number alterations in leukemia, with information about breakpoints and genes involved in the alterations, and that genome-wide microarray analyses with aCGH and SNP-array are advantageous methods in the research and diagnosis of leukemia. The most important findings were the cryptic changes detected with aCGH in karyotypically normal AML and CLL, characterization of amplified genes in 11q marker chromosomes, detection of deletion-based mechanisms of MLL-ARHGEF12 fusion gene formation, and detection of LOH without copy number alteration in karyotypically normal AML. These alterations harbor candidate oncogenes and tumor suppressors for further studies.

Relevance:

100.00%

Publisher:

Abstract:

A study was performed to investigate the value of near infrared reflectance spectroscopy (NIRS) as an alternative to analytical techniques for identifying QTL associated with feed quality traits. Milled samples from an F6-derived Tallon/Scarlett recombinant inbred population were incubated in the rumen of fistulated cattle, then recovered, washed and dried to determine in-situ dry matter digestibility (DMD). Both pre- and post-digestion samples were analysed using NIRS to quantify key quality components relating to acid detergent fibre, starch and protein. These phenotypic data were used to identify trait-associated QTL and compare them to previously identified QTL. Though a number of genetic correlations were identified between the phenotypic data sets, the correlation of most interest was between DMD and starch digested (r = -0.382). The significance of this genetic correlation was that the NIRS data set identified a putative QTL on chromosome 7H (LOD = 3.3) associated with starch digested. A QTL for DMD occurred in the same region of chromosome 7H, with flanking markers fAG/CAT63 and bPb-0758. The significant correlation and the identification of this putative QTL highlight the potential of technologies like NIRS in QTL analysis.
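The genetic correlation step amounts to a Pearson correlation between line means of the two traits; a minimal Python sketch with invented phenotype values (the study reported r = -0.382 on its own data):

```python
import numpy as np

# Hypothetical line means for a few recombinant inbred lines
# (values invented for illustration; negatively related by design).
dmd = np.array([62.1, 58.4, 65.0, 60.2, 57.8, 63.5])       # % DMD
starch_digested = np.array([41.0, 44.2, 38.9, 42.5, 45.1, 40.0])

# Pearson correlation between the two trait vectors.
r = np.corrcoef(dmd, starch_digested)[0, 1]
print(round(r, 3))
```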

Relevance:

100.00%

Publisher:

Abstract:

Determination of testosterone and related compounds in body fluids is of utmost importance in doping control and the diagnosis of many diseases. Capillary electromigration techniques are a relatively new approach for steroid research. Owing to their electrical neutrality, however, separation of steroids by capillary electromigration techniques requires the use of charged electrolyte additives that interact with the steroids either specifically or non-specifically. The analysis of testosterone and related steroids by non-specific micellar electrokinetic chromatography (MEKC) was investigated in this study. The partial filling (PF) technique was employed, being suitable for detection by both ultraviolet spectrophotometry (UV) and electrospray ionization mass spectrometry (ESI-MS). Efficient, quantitative PF-MEKC UV methods for steroid standards were developed through the use of optimized pseudostationary phases comprising surfactants and cyclodextrins. PF-MEKC UV proved to be a more sensitive, efficient and repeatable method for the steroids than PF-MEKC ESI-MS. It was discovered that in PF-MEKC analyses of electrically neutral steroids, ESI-MS interfacing sets significant limitations not only on the chemistry affecting the ionization and detection processes, but also on the separation. The new PF-MEKC UV method was successfully employed in the determination of testosterone in male urine samples after microscale immunoaffinity solid-phase extraction (IA-SPE). The IA-SPE method, relying on specific interactions between testosterone and a recombinant anti-testosterone Fab fragment, is the first such method described for testosterone. Finally, new data for interactions between steroids and human and bovine serum albumins were obtained through the use of affinity capillary electrophoresis. A new algorithm for the calculation of association constants between proteins and neutral ligands is introduced.

Relevance:

100.00%

Publisher:

Abstract:

Digital elevation models (DEMs) have been an important topic in geography and the surveying sciences for decades, owing to their geomorphological importance as the reference surface for gravitation-driven material flow as well as their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors of the model accumulate in the analysis results. Investigation of this phenomenon is known as error propagation analysis, which has a direct influence on decision-making based on interpretations and applications of terrain analysis. Additionally, it may have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine toposcale DEMs, which are typically represented in a 5-50 m grid and used at application scales of 1:10 000-1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, performing analytical and simulation-based error propagation analysis, and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and to morphological gross errors, which are detectable with the presented visualisation methods. In addition, the use of a global characterisation of DEM error is a gross generalisation of reality, owing to the small extent of the areas within which the assumption of stationarity is not violated. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning and local semivariogram analysis.
The error propagation analysis revealed that, as expected, an increase in the DEM vertical error increases the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this opinion is now challenged, because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. Significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution in generating realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
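The process-convolution idea (generate a spatially autocorrelated error field by convolving white noise with a smoothing kernel) and its effect on one surface derivative can be sketched in Python. This is a toy illustration, not the thesis's geostatistical model: for the slope derivative, smoothing the error field typically reduces the propagated error relative to uncorrelated noise of the same marginal spread.

```python
import numpy as np

rng = np.random.default_rng(42)

def correlated_error_field(n, kernel_width):
    """Spatially autocorrelated error field: white noise convolved
    with a Gaussian kernel (the process-convolution construction)."""
    white = rng.normal(size=(n, n))
    ax = np.arange(-3 * kernel_width, 3 * kernel_width + 1)
    kernel = np.exp(-0.5 * (ax / kernel_width) ** 2)
    kernel /= kernel.sum()
    # separable 2-D convolution: filter down columns, then along rows
    out = np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), 0, white)
    out = np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), 1, out)
    return out

def slope_error_std(err, cell=25.0):
    """Spread of the propagated error in the x-slope derivative."""
    _, gx = np.gradient(err, cell)
    return float(gx.std())

uncorr = rng.normal(size=(64, 64))
corr = correlated_error_field(64, kernel_width=4)
corr *= uncorr.std() / corr.std()   # match the marginal spread

s_uncorr = slope_error_std(uncorr)
s_corr = slope_error_std(corr)
print(s_uncorr, s_corr)
```

In this toy run the uncorrelated field propagates far more error into the slope, which illustrates why spatial autocorrelation matters in the simulation and why its effect must be checked per derivative rather than assumed.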

Relevance:

100.00%

Publisher:

Abstract:

The transfer matrix method is known to be well suited for the complete analysis of a lumped- or distributed-element, one-dimensional, linear dynamical system with a marked chain topology. However, general subroutines of the type available for classical matrix methods are not available in the current literature on transfer matrix methods. In the present article, general expressions for various aspects of analysis (the natural frequency equation, modal vectors, forced response and filter performance) are evaluated in terms of a single parameter, referred to as the velocity ratio. Subprograms have been developed for use with the transfer matrix method for the evaluation of the velocity ratio and related parameters. It is shown that a given system, branched or straight-through, can be completely analysed in terms of these basic subprograms on a stored-program digital computer. It is observed that the transfer matrix method with the velocity ratio approach has certain advantages over the existing general matrix methods in the analysis of one-dimensional systems.
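The basic transfer matrix mechanics can be sketched for a generic fixed-free spring-mass chain (this is the standard textbook formulation, not the paper's velocity-ratio subprograms): propagate a state vector [displacement, force] through the 2x2 element matrices and locate natural frequencies where the free-end force vanishes.

```python
import numpy as np

def end_force(omega, masses, stiffs):
    """Propagate [displacement, force] from the fixed end through
    alternating spring and point-mass elements; the returned free-end
    force is zero at a natural frequency."""
    state = np.array([0.0, 1.0])            # fixed end: x = 0, unit force
    for k, m in zip(stiffs, masses):
        spring = np.array([[1.0, 1.0 / k],   # x jumps by F/k, F continuous
                           [0.0, 1.0]])
        mass = np.array([[1.0, 0.0],         # x continuous, F drops by m*w^2*x
                         [-m * omega**2, 1.0]])
        state = mass @ (spring @ state)
    return state[1]

def bisect_root(f, lo, hi, tol=1e-10):
    """Simple bisection on a sign change of f in [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Single mass m = 2 on spring k = 8: exact natural frequency sqrt(k/m) = 2.
w = bisect_root(lambda om: end_force(om, [2.0], [8.0]), 0.1, 10.0)
print(round(w, 4))   # 2.0
```

Branched systems fit the same pattern by multiplying in the transfer matrices of each branch, which is what makes general-purpose subprograms of the kind the article describes feasible.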

Relevance:

100.00%

Publisher:

Abstract:

When a uniform flow of any nature is interrupted, the readjustment of the flow results in concentrations and rarefactions, so that the peak value of the flow parameter will be higher than an elementary computation would suggest. When stress flow in a structure is interrupted, there are stress concentrations. These are generally localized and often large in relation to the values indicated by simple equilibrium calculations. With the advent of the industrial revolution, dynamic and repeated loading of materials became commonplace in engine parts and fast-moving vehicles of locomotion. This led to serious fatigue failures arising from stress concentrations. Also, many metal forming processes, fabrication techniques and weak-link type safety systems benefit substantially from the intelligent use or avoidance, as appropriate, of stress concentrations. As a result, over the last 80 years the study and evaluation of stress concentrations has been a primary objective in solid mechanics. Exact mathematical analysis of stress concentrations in finite bodies presents considerable difficulty for all but a few problems of infinite fields, concentric annuli and the like, treated under the presumption of small-deformation, linear elasticity. A whole series of techniques has been developed to deal with different classes of shapes and domains, causes and sources of concentration, material behaviour, phenomenological formulation, etc. These include real and complex functions, conformal mapping, transform techniques, integral equations, finite differences and relaxation, and, more recently, finite element methods. With the advent of large high-speed computers, the development of finite element concepts and a good understanding of functional analysis, it is now, in principle, possible to obtain with economy satisfactory solutions to a whole range of concentration problems by intelligently combining theory and computer application.
An example is the hybridization of continuum concepts with computer-based finite element formulations. This new situation also makes possible a more direct approach to the problem of design, which is the primary purpose of most engineering analyses. The trend would appear to be clear: the computer will shape the theory, analysis and design.
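One of the few classical exact solutions alluded to above is the Kirsch problem: a circular hole in an infinite plate under uniaxial far-field tension. A short Python sketch of the hoop stress on the hole boundary recovers the familiar stress concentration factor of 3:

```python
import numpy as np

def kirsch_hoop_stress(theta, sigma=1.0):
    """Hoop stress on the boundary of a circular hole in an infinite
    plate under uniaxial far-field tension sigma (Kirsch solution):
    sigma_tt = sigma * (1 - 2*cos(2*theta))."""
    return sigma * (1.0 - 2.0 * np.cos(2.0 * theta))

theta = np.linspace(0.0, np.pi, 181)        # sweep the hole boundary
k_t = kirsch_hoop_stress(theta).max()       # stress concentration factor
print(k_t)   # 3.0, attained at theta = 90 degrees (transverse to the load)
```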

Relevance:

100.00%

Publisher:

Abstract:

Background: Adjuvants enhance or modify the immune response that is made to an antigen. An antagonist of the chemokine receptor CCR4 can display adjuvant-like properties by diminishing the ability of CD4+CD25+ regulatory T cells (Tregs) to down-regulate immune responses. Methodology: Here, we have used protein modelling to create a plausible chemokine receptor model with the aim of using virtual screening to identify potential small-molecule chemokine antagonists. A combination of homology modelling and molecular docking was used to create a model of the CCR4 receptor in order to investigate potential lead compounds that display antagonistic properties. Three-dimensional structure-based virtual screening of the CCR4 receptor identified 116 small molecules that were calculated to have a high affinity for the receptor; these were tested experimentally for CCR4 antagonism. Fifteen of these small molecules were shown to specifically inhibit CCR4-mediated cell migration, including that of CCR4(+) Tregs. Significance: Our CCR4 antagonists act as adjuvants augmenting human T cell proliferation in an in vitro immune response model, and compound SP50 increases T cell and antibody responses in vivo when combined with vaccine antigens of Mycobacterium tuberculosis and Plasmodium yoelii in mice.