58 results for Large-scale Analysis
Abstract:
This article explores consumer Web-search satisfaction. It commences with a brief overview of the concepts of consumer information search and consumer satisfaction. Consumer Web adoption issues are then briefly discussed, and the importance of consumer search satisfaction is highlighted in relation to the adoption of the Web as an additional source of consumer information. Research hypotheses are developed and the methodology of a large-scale consumer experiment to record consumer Web search behaviour is described. The hypotheses are tested and the data explored in relation to post-Web-search satisfaction. The results suggest that consumer post-Web-search satisfaction judgments may be derived from subconscious judgments of Web-search efficiency, an empirical calculation of which is problematic in unlimited information environments such as the Web. The results are discussed and a future research agenda is briefly outlined.
Abstract:
In high-velocity free-surface flows, air is continuously being trapped and released through the free surface. Such high-velocity, highly aerated flows cannot be studied numerically because of the large number of relevant equations and parameters. Herein an advanced signal processing of traditional single- and dual-tip conductivity probes provides some new information on the air-water turbulent time and length scales. The technique is applied to turbulent open channel flows in a large-size facility. The auto- and cross-correlation analyses yield some characterisation of the large eddies advecting the bubbles. The transverse integral turbulent length and time scales are related to the step height: Lxy/h ~ 0.02 to 0.2 and T·sqrt(g/h) ~ 0.004 to 0.04. The results are independent of the Reynolds number. The present findings emphasise that turbulent dissipation by large-scale vortices is a significant process in the intermediate zone between the spray and bubbly flow regions (0.3 < C < 0.7). Some self-similar relationships were observed systematically at both macroscopic and microscopic levels. The results are significant because they provide a picture general enough to be used to characterise the air-water flow field in prototype spillways.
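To make the correlation-based processing concrete, the following Python sketch estimates an integral time scale from two synchronised probe signals by integrating the normalised cross-correlation function up to its first zero crossing. The function name, sampling rate and synthetic signals are illustrative assumptions; the paper's actual signal-processing chain for the conductivity-probe records is not reproduced here.

```python
import numpy as np

def integral_time_scale(signal_a, signal_b, dt):
    """Integrate the normalised cross-correlation of two synchronised
    probe signals up to its first zero crossing (a simplified sketch,
    not the paper's exact processing)."""
    a = signal_a - signal_a.mean()
    b = signal_b - signal_b.mean()
    n = len(a)
    # Normalised cross-correlation for non-negative lags
    r = np.array([np.sum(a[:n - k] * b[k:]) for k in range(n)])
    r = r / (n * a.std() * b.std())
    # Integrate up to the first zero crossing (or the full record if none)
    cutoff = np.argmax(r <= 0) if np.any(r <= 0) else n
    return np.sum(r[:cutoff]) * dt   # rectangle-rule integration

# Illustrative use with two synthetic, noisy 20 kHz void-fraction signals
rng = np.random.default_rng(0)
dt = 1.0 / 20000.0
t = np.arange(0.0, 1.0, dt)
sig_a = (np.sin(2 * np.pi * 50 * t) + rng.standard_normal(t.size) > 0.5).astype(float)
sig_b = (np.sin(2 * np.pi * 50 * (t - 1e-3)) + rng.standard_normal(t.size) > 0.5).astype(float)
print(integral_time_scale(sig_a, sig_b, dt))
```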
Abstract:
Methods employing a continuum approximation to describe the deformation of layered materials possess a clear advantage over explicit models. However, the conventional implicit models based on the theory of anisotropic continua suffer from certain difficulties associated with interface slip and internal instabilities. These difficulties can be remedied by considering the bending stiffness of the layers. This implies the introduction of moment (couple) stresses and internal rotations, which leads to a Cosserat-type theory. In the present model, the behaviour of the layered material is assumed to be linearly elastic; the interfaces are assumed to be elastic-perfectly plastic. Conditions of slip or no slip at the interfaces are detected by a Coulomb criterion with a tension cut-off at zero normal stress. The theory is valid for large deformation analysis. The model is incorporated into the finite element program AFENA and validated against analytical solutions of elementary buckling problems in layered media. A problem associated with buckling of the roof and the floor of a rectangular excavation in a jointed rock mass under high horizontal in situ stresses is considered as the main application of the theory. Copyright (C) 1999 John Wiley & Sons, Ltd.
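As a concrete illustration of the interface treatment described above, the short Python sketch below classifies an interface point as open, slipping or sticking under a Coulomb criterion with a tension cut-off at zero normal stress. The sign convention (compression positive), parameter names and numbers are assumptions for illustration; they are not taken from AFENA or from the paper's formulation.

```python
def interface_state(sigma_n, tau, cohesion, mu):
    """Classify an interface point as 'open', 'slip' or 'stick' under a
    Coulomb criterion with a tension cut-off at zero normal stress.
    Compression is taken as positive; all names are illustrative."""
    if sigma_n <= 0.0:
        return "open"   # tension cut-off: the interface separates
    if abs(tau) >= cohesion + mu * sigma_n:
        return "slip"   # shear strength exceeded: frictional sliding
    return "stick"      # elastic (no-slip) response

# Hypothetical usage with made-up stresses in Pa
print(interface_state(sigma_n=2.0e6, tau=1.8e6, cohesion=0.5e6, mu=0.5))   # 'slip'
print(interface_state(sigma_n=-0.1e6, tau=0.2e6, cohesion=0.5e6, mu=0.5))  # 'open'
```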
Abstract:
Background: A major goal in the post-genomic era is to identify and characterise disease susceptibility genes and to apply this knowledge to disease prevention and treatment. Rodents and humans have remarkably similar genomes and share closely related biochemical, physiological and pathological pathways. In this work we utilised the latest information on the mouse transcriptome as revealed by the RIKEN FANTOM2 project to identify novel human disease-related candidate genes. We define a new term, patholog, to mean a homolog of a human disease-related gene encoding a product (transcript, anti-sense or protein) potentially relevant to disease. Rather than just focus on Mendelian inheritance, we applied the analysis to all potential pathologs regardless of their inheritance pattern. Results: Bioinformatic analysis and human curation of 60,770 RIKEN full-length mouse cDNA clones produced 2,578 sequences that showed similarity (70-85% identity) to known human-disease genes. Using a newly developed biological information extraction and annotation tool (FACTS) in parallel with human expert analysis of 17,051 MEDLINE scientific abstracts we identified 182 novel potential pathologs. Of these, 36 were identified by computational tools only, 49 by human expert analysis only and 97 by both methods. These pathologs were related to neoplastic (53%), hereditary (24%), immunological (5%), cardiovascular (4%) or other (14%) disorders. Conclusions: Large-scale genome projects continue to produce a vast amount of data with potential application to the study of human disease. For this potential to be realised we need intelligent strategies for data categorisation and the ability to link sequence data with relevant literature. This paper demonstrates the power of combining human expert annotation with FACTS, a newly developed bioinformatics tool, to identify novel pathologs from within large-scale mouse transcript datasets.
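The sequence-similarity step can be pictured with a small Python sketch that keeps only alignment hits falling within the 70-85% identity window mentioned above. The record layout and field names are hypothetical; the actual FANTOM2/FACTS pipeline is considerably more involved.

```python
def candidate_pathologs(hits, min_identity=70.0, max_identity=85.0):
    """Keep alignment hits of mouse cDNA clones against known human
    disease genes whose percent identity lies in the given window
    (field names are illustrative, not the project's own schema)."""
    return [h for h in hits
            if min_identity <= h["percent_identity"] <= max_identity]

# Hypothetical hit records
hits = [
    {"clone": "cloneA", "disease_gene": "GENE1", "percent_identity": 78.4},
    {"clone": "cloneB", "disease_gene": "GENE2", "percent_identity": 92.1},
    {"clone": "cloneC", "disease_gene": "GENE3", "percent_identity": 70.5},
]
print(candidate_pathologs(hits))   # cloneA and cloneC pass the filter
```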
Abstract:
The literature examining purported relationships between ownership of companion animals and health is extremely heterogeneous. While much of the descriptive literature tends to support benefits of animal companionship, large-scale, controlled research yields inconsistent and even contradictory findings on several issues, including associations with cardiovascular disease, mood and wellbeing. In an analysis of a large longitudinal dataset from the Australian Longitudinal Study on Women's Health, a prospective study of a nationally representative sample of more than 12,000 older women, difficulties with disentangling the effects of powerful demographic variables and age-related factors from the specific effects of pet ownership became apparent. Both cross-sectional and longitudinal analyses demonstrated that associations between mental and physical health and pet ownership, as well as changes in pet ownership over time, were weak and inconsistent compared to the large effects of living arrangements and other demographic variables. As sociodemographic variables relate strongly to both health and opportunities for pet ownership, this high level of confounding means it is unlikely that the impact of the specific variable of pet ownership on health can be ascertained from such studies. Rather, well-designed experimental studies, wherein the majority of such confounding variables can be held constant or at least somewhat controlled, are needed.
Abstract:
The report was commissioned by the Department of Education, Science and Training to investigate the perceived efficacy of middle years programmes in all States and Territories in improving the quality of teaching, learning and student outcomes, especially in literacy and numeracy and for students in particular target groups. These target groups included students from lower socio-economic communities, Aboriginal and Torres Strait Islander communities, students with a language background other than English, rural and remote students, and students struggling with the transition from middle/upper primary to the junior secondary years. The project involved large-scale national and international literature reviews on Australian and international middle years approaches as well as an analysis of key literacy and numeracy teaching and learning strategies being used. In the report, there is emergent evidence of the relative efficacy of a combination of explicit state policy, dedicated funding and curriculum and professional development frameworks that are focused on the improvement of classroom pedagogy in the middle years. The programs that evidenced the greatest current and potential value for target group students tended to have developed in state policy environments that encouraged a structural rather than adjunct approach to middle years innovations. The authors conclude that in order to translate the gains made into sustainable improvement of educational results in literacy and numeracy for target groups, there is a need for a second generation of middle years theorising, research, development and practice.
Abstract:
There is overwhelming evidence for the existence of substantial genetic influences on individual differences in general and specific cognitive abilities, especially in adults. The actual localization and identification of genes underlying variation in cognitive abilities and intelligence has only just started, however. Successes are currently limited to neurological mutations with rather severe cognitive effects. The current approaches to trace genes responsible for variation in the normal ranges of cognitive ability consist of large-scale linkage and association studies. These are hampered by the usual problems of low statistical power to detect quantitative trait loci (QTLs) of small effect. One strategy to boost the power of genomic searches is to employ endophenotypes of cognition derived from the booming field of cognitive neuroscience. This special issue of Behavior Genetics reports on one of the first genome-wide association studies for general IQ. A second paper summarizes candidate genes for cognition, based on animal studies. A series of papers then introduces two additional levels of analysis in the "black box" between genes and cognitive ability: (1) behavioral measures of information-processing speed (inspection time, reaction time, rapid naming) and working memory capacity (performance on single or dual tasks of verbal and spatio-visual working memory), and (2) electrophysiologically derived measures of brain function (e.g., event-related potentials). The obvious way to assess the reliability and validity of these endophenotypes and their usefulness in the search for cognitive ability genes is through the examination of their genetic architecture in twin family studies. Papers in this special issue show that much of the association between intelligence and speed of information processing/brain function is due to a common gene or set of genes, and thereby demonstrate the usefulness of considering these measures in gene-hunting studies for IQ.
Abstract:
A new method has been established to define the limits on a spontaneous mutation rate for a gene in Plasmodium falciparum. The method combines mathematical modelling with large-scale in vitro culturing and calculates the difference in mutant frequencies at 2 separate time-points. We measured the mutation rate at 2 positions in the dihydrofolate reductase (DHFR) gene of 3D7, a pyrimethamine-sensitive line of P. falciparum. This line was re-cloned and an effectively large population was treated with a selective pyrimethamine concentration of 40 nM. We detected point mutations at codon-46 (TTA to TCA) and codon-108 (AGC to AAC), resulting in serine replacing leucine and asparagine replacing serine respectively in the corresponding gene product. The substitutions caused a decrease in pyrimethamine sensitivity. By mathematical modelling we determined that the mutation rate at a given position in DHFR was low, occurring at less than 2.5 x 10^-9 mutations/DHFR gene/replication. This result has important implications for Plasmodium genetic diversity and antimalarial drug therapy by demonstrating that, even with low mutation rates, antimalarial resistance will inevitably arise when mutant alleles are selected under drug pressure.
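The abstract does not spell out the model, but the core quantity can be sketched under a deliberately simple assumption: if selection and back mutation are negligible over the interval, the mutant frequency rises by roughly the per-replication mutation rate each generation, so the rate is bounded by the change in frequency divided by the number of generations. The Python sketch below and its numbers are illustrative only, not the paper's calculation.

```python
def mutation_rate_upper_bound(freq_t1, freq_t2, generations):
    """Crude per-replication mutation-rate estimate from the change in
    mutant frequency between two time points, assuming no selection and
    no back mutation over the interval (a simplification; the paper's
    mathematical model is more detailed than this sketch)."""
    if generations <= 0:
        raise ValueError("generations must be positive")
    return max(freq_t2 - freq_t1, 0.0) / generations

# Hypothetical numbers for illustration only
print(mutation_rate_upper_bound(freq_t1=0.0, freq_t2=5e-8, generations=20))
# -> 2.5e-09 mutations/gene/replication
```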
Abstract:
Breast cancer is the most common form of cancer among women and the identification of markers to discriminate tumorigenic from normal cells, as well as the different stages of this pathology, is of critical importance. Two-dimensional electrophoresis has been used before for studying breast cancer, but the progressive completion of human genomic sequencing and the introduction of mass spectrometry, combined with advanced bioinformatics for protein identification, have considerably increased the possibilities for characterizing new markers and therapeutic targets. Breast cancer proteomics has already identified markers of potential clinical interest (such as the molecular chaperone 14-3-3 sigma) and technological innovations such as large-scale and high-throughput analysis are now driving the field. Methods in functional proteomics have also been developed to study the intracellular signaling pathways that underlie the development of breast cancer. As illustrated with fibroblast growth factor-2, a mitogen and motogen factor for breast cancer cells, proteomics is a powerful approach to identify signaling proteins and to decipher the complex signaling circuitry involved in tumor growth. Together with genomics, proteomics is well on the way to molecularly characterizing the different types of breast tumor, and thus defining new therapeutic targets for future treatment.
Abstract:
The choice of genotyping families vs unrelated individuals is a critical factor in any large-scale linkage disequilibrium (LD) study. The use of unrelated individuals for such studies is promising, but in contrast to family designs, unrelated samples do not facilitate detection of genotyping errors, which have been shown to be of great importance for LD and linkage studies and may be even more important in genotyping collaborations across laboratories. Here we employ some of the most commonly used analysis methods to examine the relative accuracy of haplotype estimation using families vs unrelateds in the presence of genotyping error. The results suggest that even slight amounts of genotyping error can significantly decrease haplotype frequency and reconstruction accuracy, and that the ability to detect such errors in large families is essential when the number/complexity of haplotypes is high (low LD/common alleles). In contrast, in situations of low haplotype complexity (high LD and/or many rare alleles) unrelated individuals offer such a high degree of accuracy that there is little reason for less efficient family designs. Moreover, parent-child trios, which comprise the most popular family design and the most efficient in terms of the number of founder chromosomes per genotype, but which contain little information for error detection, offer little or no gain over unrelated samples in nearly all cases, and thus do not seem a useful sampling compromise between unrelated individuals and large families. The implications of these results are discussed in the context of large-scale LD mapping projects such as the proposed genome-wide haplotype map.
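To illustrate why family data help with error detection, the following Python sketch simulates random genotyping errors in the child of a parent-child trio at a single SNP and reports what fraction of those errors surface as Mendelian inconsistencies. It is a toy single-marker illustration under an assumed allele frequency and error rate, not the multi-marker haplotype analysis performed in the paper.

```python
import random

def transmit(genotype):
    """Allele (0 or 1) transmitted by a parent with alt-allele count 0, 1 or 2."""
    if genotype == 1:
        return random.randint(0, 1)
    return genotype // 2

def mendel_consistent(father, mother, child):
    """True if the child's alt-allele count is possible given the parents'."""
    f_alleles = {0, 1} if father == 1 else {father // 2}
    m_alleles = {0, 1} if mother == 1 else {mother // 2}
    return child in {f + m for f in f_alleles for m in m_alleles}

def detection_rate(p=0.3, error_rate=0.01, trios=100_000):
    """Fraction of simulated genotyping errors in the child that appear as
    Mendelian inconsistencies (single-SNP toy model; parameters assumed)."""
    detected = errors = 0
    for _ in range(trios):
        father = sum(random.random() < p for _ in range(2))
        mother = sum(random.random() < p for _ in range(2))
        child = transmit(father) + transmit(mother)
        if random.random() < error_rate:
            errors += 1
            child = random.choice([g for g in (0, 1, 2) if g != child])
            if not mendel_consistent(father, mother, child):
                detected += 1
    return detected / errors if errors else float("nan")

print(detection_rate())
```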
Abstract:
Many large-scale stochastic systems, such as telecommunications networks, can be modelled using a continuous-time Markov chain. However, it is frequently the case that a satisfactory analysis of their time-dependent, or even equilibrium, behaviour is impossible. In this paper, we propose a new method of analysing Markovian models, whereby the existing transition structure is replaced by a more amenable one. Using rates of transition given by the equilibrium expected rates of the corresponding transitions of the original chain, we are able to approximate its behaviour. We present two formulations of the idea of expected rates. The first provides a method for analysing time-dependent behaviour, while the second provides a highly accurate means of analysing equilibrium behaviour. We shall illustrate our approach with reference to a variety of models, giving particular attention to queueing and loss networks. (C) 2003 Elsevier Ltd. All rights reserved.
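To make the notion of equilibrium expected rates concrete, the Python sketch below computes the stationary distribution pi of a small continuous-time Markov chain from its generator Q and then forms pi_i * q_ij for each transition. The three-state generator is an arbitrary illustration (a small single-server queue), not one of the paper's telecommunications models, and the paper's construction of the replacement transition structure is not reproduced here.

```python
import numpy as np

def stationary_distribution(Q):
    """Stationary distribution pi of a CTMC with generator Q: solve
    pi Q = 0 subject to sum(pi) = 1 via least squares."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])   # append the normalisation constraint
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Generator of a 3-state single-server queue with capacity 2
# (arrival rate 1, service rate 2); purely illustrative.
Q = np.array([[-1.0,  1.0,  0.0],
              [ 2.0, -3.0,  1.0],
              [ 0.0,  2.0, -2.0]])
pi = stationary_distribution(Q)
# Equilibrium expected rate of each transition i -> j is pi_i * q_ij
expected_rates = pi[:, None] * Q
print(pi)             # approx. [4/7, 2/7, 1/7]
print(expected_rates) # off-diagonal entries are the expected transition rates
```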
Abstract:
Objectives: To describe what is known of quality of life for colorectal cancer patients, to review what has been done in the Australian setting and to identify emerging directions for future research to address current gaps in knowledge. Method: A literature search (using Medline, PsychInfo, CINAHL and Sociological Abstracts) was conducted and 41 articles identified for review. Results: Three key areas relating to quality of life in colorectal cancer patients emerged from the literature review: the definition and measurement of quality of life; predictors of quality of life; and the relationship of quality of life to survival. Results of existing studies are inconsistent in relation to quality of life over time and its relationship to survival. Small sample sizes and methodological limitations make interpretation difficult. Conclusions: There is a need for large-scale, longitudinal, population-based studies describing the quality of life experienced by colorectal cancer patients and its determinants. Measurement and simultaneous adjustment for potential confounding factors would productively advance knowledge in this area, as would an analysis of the economic cost of morbidity to the community and an assessment of the cost effectiveness of proposed interventions. Implications: As the Australian population ages, the prevalence of colorectal cancer within the community will increase. This burden of disease presents as a priority area for public health research. An improved understanding of quality of life and its predictors will inform the development and design of supportive interventions for those affected by the disease.
Abstract:
The impacts of climate change on the potential distribution and relative abundance of a C3 shrubby vine, Cryptostegia grandiflora, were investigated using the CLIMEX modelling package. Based upon its current naturalised distribution, C. grandiflora appears to occupy only a small fraction of its potential distribution in Australia under current climatic conditions, mostly in apparently sub-optimal habitat. The potential distribution of C. grandiflora is sensitive to changes in climate and atmospheric chemistry within the range expected this century, particularly those that result in increased temperature and water use efficiency. Climate change is likely to increase the potential distribution and abundance of the plant, further increasing the area at risk of invasion and markedly threatening the viability of current control strategies. By identifying areas at risk of invasion, and vulnerabilities of control strategies, this analysis demonstrates the utility of climate models for providing information suitable to help formulate large-scale, long-term strategic plans for controlling biotic invasions. The effects of climate change upon the potential distribution of C. grandiflora are sufficiently great that strategic control plans for biotic invasions should routinely include their consideration. Whilst the effect of climate change upon the efficacy of introduced biological control agents remains unknown, their possible effect on the potential distribution of C. grandiflora will likely depend not only upon their effects on the population dynamics of C. grandiflora, but also on the gradient of climatic suitability adjacent to each segment of the range boundary.