288 results for Statistical tools
Abstract:
Statistical reports of SMEs' Internet usage from various countries indicate steady growth. However, deeper investigation of SMEs' e-commerce adoption and usage reveals that a number of SMEs fail to realize the full potential of e-commerce. Factors such as a lack of Information Systems and Information Technology tools and models for SMEs, and a lack of technical expertise and specialized knowledge within and outside the SME, have the greatest effect. This study addresses these two factors in two steps: first, by introducing a conceptual tool for intuitive interaction; second, by explaining the implementation process of the conceptual tool with the help of a case study. The subject chosen for the case study is a real estate SME from India. The design and development process of the website for the real estate SME was captured in this case study, and the duration of the study was four months. Results indicated specific benefits for web designers and SME business owners. Results also indicated that the conceptual tool is easy to use without the need for technical expertise or specialized knowledge.
Abstract:
Biotechnology has the potential to improve sugar cane, one of the world's major crops for food and fuel. This research describes the detailed characterisation of introns and their potential for enhancing transgene expression in sugar cane via intron-mediated enhancement (IME). IME is a phenomenon whereby an intron enhances gene expression from a promoter. Current knowledge of the mechanism of IME, and of its potential for enhancing gene expression in sugar cane, is limited. A better understanding of the factors responsible for IME will help develop new molecular tools that facilitate high levels of constitutive and tissue-specific gene expression in this crop.
Abstract:
The export of sediments from coastal catchments can have detrimental impacts on estuaries and nearshore reef ecosystems such as the Great Barrier Reef. Catchment management approaches aimed at reducing sediment loads require monitoring to evaluate their effectiveness in reducing loads over time. However, load estimation is not a trivial task due to the complex behaviour of constituents in natural streams, the variability of water flows and an often limited amount of data. Regression is commonly used for load estimation and provides a fundamental tool for trend estimation by standardising for other time-specific covariates such as flow. This study investigates whether load estimates and the resultant power to detect trends can be enhanced by (i) modelling the error structure so that temporal correlation can be better quantified, (ii) making use of predictive variables, and (iii) identifying an efficient and feasible sampling strategy that may be used to reduce sampling error. To achieve this, we propose a new regression model that includes an innovative compounding-errors model structure and uses two additional predictive variables (average discounted flow and turbidity). By combining this modelling approach with a new, regularly optimised sampling strategy, which adds uniformity to the event sampling strategy, the predictive power was increased to 90%. Using the enhanced regression model proposed here, it was possible to detect a trend of 20% over 20 years. This result is in stark contrast to previous conclusions presented in the literature.
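As a rough illustration of the general idea (not the compounding-errors model of this study), the sketch below fits a rating-curve-style regression of log concentration on log flow plus a linear time trend while accounting for temporal correlation with an AR(1) error term. All variable names and data are synthetic assumptions.

```python
# Hypothetical sketch: rating-curve-style regression with an AR(1) error term,
# illustrating how temporal correlation can be modelled when estimating a trend.
# This is NOT the paper's compounding-errors model; data and names are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_days = 2000
t = np.arange(n_days) / 365.25                 # time in years
log_flow = rng.normal(2.0, 0.8, n_days)        # synthetic log discharge

# Synthetic log concentration: flow effect, -1%/yr trend, AR(1) noise
eps = np.zeros(n_days)
for i in range(1, n_days):
    eps[i] = 0.6 * eps[i - 1] + rng.normal(0, 0.3)
log_conc = 0.5 + 0.7 * log_flow - 0.01 * t + eps

X = sm.add_constant(np.column_stack([log_flow, t]))
# GLSAR fits OLS, estimates the AR(1) coefficient from residuals, and refits (iterated GLS)
model = sm.GLSAR(log_conc, X, rho=1)
res = model.iterative_fit(maxiter=10)
print("AR(1) coefficient:", model.rho)
print("trend (per year):", res.params[2], "+/-", res.bse[2])
```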
Abstract:
We consider the development of statistical models for prediction of constituent concentration of riverine pollutants, which is a key step in load estimation from frequent flow rate data and less frequently collected concentration data. We consider how to capture the impacts of past flow patterns via the average discounted flow (ADF) which discounts the past flux based on the time lapsed - more recent fluxes are given more weight. However, the effectiveness of ADF depends critically on the choice of the discount factor which reflects the unknown environmental cumulating process of the concentration compounds. We propose to choose the discount factor by maximizing the adjusted R-2 values or the Nash-Sutcliffe model efficiency coefficient. The R2 values are also adjusted to take account of the number of parameters in the model fit. The resulting optimal discount factor can be interpreted as a measure of constituent exhaustion rate during flood events. To evaluate the performance of the proposed regression estimators, we examine two different sampling scenarios by resampling fortnightly and opportunistically from two real daily datasets, which come from two United States Geological Survey (USGS) gaging stations located in Des Plaines River and Illinois River basin. The generalized rating-curve approach produces biased estimates of the total sediment loads by -30% to 83%, whereas the new approaches produce relatively much lower biases, ranging from -24% to 35%. This substantial improvement in the estimates of the total load is due to the fact that predictability of concentration is greatly improved by the additional predictors.
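A minimal sketch of the idea, under assumptions that differ from the paper's exact formulation: compute an exponentially discounted average of past flows and choose the discount factor on a grid by maximising the adjusted R² of a simple concentration regression. The recursive ADF form and all data below are illustrative.

```python
# Illustrative sketch (not the paper's exact formulation): compute an "average
# discounted flow" style covariate with an exponential discount, then pick the
# discount factor that maximises adjusted R^2 of a simple concentration regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1500
flow = np.exp(rng.normal(2.0, 0.7, n))                # synthetic daily flow
conc = 5 + 0.02 * flow + rng.normal(0, 1.0, n)        # synthetic concentration

def average_discounted_flow(q, delta):
    """Exponentially discounted average of past flows (one plausible ADF form)."""
    adf = np.empty_like(q, dtype=float)
    adf[0] = q[0]
    for t in range(1, len(q)):
        adf[t] = (1 - delta) * q[t] + delta * adf[t - 1]
    return adf

best = None
for delta in np.linspace(0.05, 0.95, 19):             # grid search over the discount factor
    X = sm.add_constant(np.column_stack([flow, average_discounted_flow(flow, delta)]))
    fit = sm.OLS(conc, X).fit()
    if best is None or fit.rsquared_adj > best[1]:
        best = (delta, fit.rsquared_adj)

print("discount factor maximising adjusted R^2:", best[0], "adj R^2:", round(best[1], 3))
```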
Abstract:
Power calculation and sample size determination are critical in designing environmental monitoring programs. The traditional approach based on comparing mean values may become statistically inappropriate, and even invalid, when substantial proportions of the response values are below the detection limits or censored, because strong distributional assumptions have to be made about the censored observations when implementing the traditional procedures. In this paper, we propose a quantile methodology that is robust to outliers and can also handle data with a substantial proportion of below-detection-limit observations without the need to impute the censored values. As a demonstration, we applied the methods to a nutrient monitoring project, which is part of the Perth Long-Term Ocean Outlet Monitoring Program. In this example, the sample size required by our quantile methodology is, in fact, smaller than that required by the traditional t-test, illustrating the merit of our method.
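One way to see why quantile-based procedures tolerate censoring is sketched below (this is a generic illustration, not the study's methodology): testing whether the 80th percentile exceeds a guideline value reduces to a binomial test on the proportion of observations above that value, so below-detection-limit values need no imputation as long as the detection limit sits below the guideline. Power is then estimated by simulation with assumed, hypothetical parameters.

```python
# Hedged illustration: simulation-based power of a quantile (binomial) test with
# censored data. Guideline, detection limit and distribution are assumptions.
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(1)
guideline = 2.0          # hypothetical guideline concentration
detection_limit = 0.5    # values below this are censored
n, n_sim, alpha = 60, 2000, 0.05

rejections = 0
for _ in range(n_sim):
    x = rng.lognormal(mean=0.3, sigma=0.8, size=n)          # synthetic "true" concentrations
    x = np.where(x < detection_limit, detection_limit, x)   # censoring does not change exceedances
    exceed = int(np.sum(x > guideline))
    # H0: P(X > guideline) <= 0.2, i.e. the 80th percentile is at or below the guideline
    if binomtest(exceed, n, 0.2, alternative="greater").pvalue < alpha:
        rejections += 1

print("estimated power with n =", n, ":", rejections / n_sim)
```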
Abstract:
Perceiving students, science students especially, as mere consumers of facts and information belies the importance of engaging them with the principles underlying those facts, and is counter-intuitive to the facilitation of knowledge and understanding. Traditional didactic lecture approaches need a re-think if student classroom engagement and active learning are to be valued over fact memorisation and fact recall. In our undergraduate biomedical science programs across Years 1, 2 and 3 in the Faculty of Health at QUT, we have developed an authentic learning model with an embedded suite of pedagogical strategies that foster classroom engagement and allow for active learning in the sub-discipline area of medical bacteriology. The suite of pedagogical tools we have developed has been designed to enable their translation, with appropriate fine-tuning, to most biomedical and allied health discipline teaching and learning contexts. Indeed, aspects of the pedagogy have been successfully translated to the nursing microbiology study stream at QUT. The aims underpinning the pedagogy are for our students to: (1) connect scientific theory with scientific practice in a more direct and authentic way, (2) construct factual knowledge and facilitate a deeper understanding, and (3) develop and refine their higher-order flexible thinking and problem-solving skills, both semi-independently and independently. The mindset and role of the teaching staff are critical to this approach since, for the strategy to be successful, tertiary teachers need to abandon traditional instructional modalities based on one-way information delivery. Face-to-face classroom interactions between students and lecturer enable realisation of pedagogical aims (1), (2) and (3). The strategy we have adopted encourages teachers to view themselves more as expert guides in what is very much a student-focused process of scientific exploration and learning. Specific pedagogical strategies embedded in the authentic learning model we have developed include: (i) interactive lecture-tutorial hybrids, or lectorials, featuring teacher role-plays as well as class-level question-and-answer sessions, (ii) inclusion of "dry" laboratory activities during lectorials to prepare students for the wet laboratory to follow, (iii) real-world problem-solving exercises conducted during both lectorials and wet laboratory sessions, and (iv) class activities and formative assessments designed to probe a student's higher-order flexible thinking skills. Flexible thinking in this context encompasses analytical, critical, deductive, scientific and professional thinking modes. The strategic approach outlined above is designed to provide multiple opportunities for students to apply principles flexibly according to a given situation or context, to adapt methods of inquiry strategically, to go beyond mechanical application of formulaic approaches, and, as much as possible, to self-appraise their own thinking and problem solving. The pedagogical tools have been developed within both workplace (real-world) and theoretical frameworks. The philosophical core of the pedagogy is a coherent pathway of teaching and learning which we, and many of our students, believe is more conducive to student engagement and active learning in the classroom.
Qualitative and quantitative data derived from online and hardcopy evaluations, solicited and unsolicited student and graduate feedback, anecdotal evidence, and peer review indicate that: (i) our students are engaging with the pedagogy, (ii) a constructivist, authentic-learning approach promotes active learning, and (iii) students are better prepared for workplace transition.
Abstract:
In this paper, we tackle the problem of unsupervised domain adaptation for classification. In the unsupervised scenario, where no labeled samples from the target domain are provided, a popular approach consists in transforming the data such that the source and target distributions become similar. To compare the two distributions, existing approaches make use of the Maximum Mean Discrepancy (MMD). However, this does not exploit the fact that probability distributions lie on a Riemannian manifold. Here, we propose to make better use of the structure of this manifold and rely on the distance on the manifold to compare the source and target distributions. In this framework, we introduce a sample selection method and a subspace-based method for unsupervised domain adaptation, and show that both these manifold-based techniques outperform the corresponding approaches based on the MMD. Furthermore, we show that our subspace-based approach yields state-of-the-art results on a standard object recognition benchmark.
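For readers unfamiliar with the baseline being improved upon, the sketch below computes a biased estimate of the squared MMD with an RBF kernel between two sample sets; the manifold-based alternative proposed in the paper is not reproduced here, and all data are synthetic.

```python
# Minimal sketch of the Maximum Mean Discrepancy (MMD) with an RBF kernel, the
# baseline distribution-comparison criterion mentioned above. Data are illustrative.
import numpy as np

def rbf_kernel(a, b, gamma):
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    return np.exp(-gamma * sq)

def mmd2(source, target, gamma=1.0):
    """Biased estimate of the squared MMD between two samples."""
    k_ss = rbf_kernel(source, source, gamma).mean()
    k_tt = rbf_kernel(target, target, gamma).mean()
    k_st = rbf_kernel(source, target, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, (200, 5))
target = rng.normal(0.5, 1.0, (200, 5))                    # shifted target domain
print("MMD^2 before adaptation:", mmd2(source, target))
print("MMD^2 after a simple mean shift:",
      mmd2(source, target - target.mean(0) + source.mean(0)))
```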
Abstract:
The number of genetic factors associated with common human traits and disease is increasing rapidly, and the general public is using affordable, direct-to-consumer genetic tests. The results of these tests are often in the public domain. A combination of factors has increased the potential for the indirect estimation of an individual's risk for a particular trait. Here we explain the basic principles underlying risk estimation, which allowed us to test the ability to make an indirect risk estimate from genetic data by imputing Dr. James Watson's redacted apolipoprotein E (APOE) gene information. The principles underlying risk prediction from genetic data have been well known and applied for many decades; however, the recent increase in genomic knowledge, together with advances in mathematical and statistical techniques and computational power, makes it relatively easy to make an accurate but indirect estimate of risk. Indirect risk estimation therefore presents a current hazard that is relevant not only to the subject but also to individuals related to the subject, and this risk will likely increase as more detailed genomic data and better computational tools become available.
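As a toy illustration of the kind of calculation behind genotype-based risk estimation (not taken from the study, and with no reference to APOE or any published estimate), a baseline risk can be updated with a genotype likelihood ratio via Bayes' rule:

```python
# Toy illustration of genotype-based risk updating via Bayes' rule.
# Both numbers below are hypothetical assumptions for illustration only.
baseline_risk = 0.10        # assumed lifetime risk in the general population
likelihood_ratio = 4.0      # assumed likelihood ratio conferred by the genotype of interest

prior_odds = baseline_risk / (1 - baseline_risk)
posterior_odds = prior_odds * likelihood_ratio
posterior_risk = posterior_odds / (1 + posterior_odds)
print(f"updated risk given genotype: {posterior_risk:.2f}")   # ~0.31 with these numbers
```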
Abstract:
The commonly used "end diagnosis" phenotype that is adopted in linkage and association studies of complex traits is likely to represent an oversimplified model of the genetic background of a disease. This is also likely to be the case for common types of migraine, for which no convincingly associated genetic variants have been reported. In headache disorders, most genetic studies have used end diagnoses of the International Headache Society (IHS) classification as phenotypes. Here, we introduce an alternative strategy: we use trait components, the individual clinical symptoms of migraine, to determine affection status in genome-wide linkage analyses of migraine-affected families. We identified linkage between several traits and markers on chromosome 4q24 (highest LOD score under locus heterogeneity [HLOD] 4.52), a locus we previously reported to be linked to the end diagnosis migraine with aura. The pulsation trait identified a novel locus on 17p13 (HLOD 4.65). Additionally, a trait combination phenotype (IHS full criteria) revealed a locus on 18q12 (HLOD 3.29), and the age-at-onset trait revealed a locus on 4q28 (HLOD 2.99). Furthermore, suggestive or nearly suggestive evidence of linkage to four additional loci was observed with the traits phonophobia (10q22) and aggravation by physical exercise (12q21, 15q14, and Xp21); interestingly, these loci have been linked to migraine in previous studies. Our findings suggest that the use of symptom components of migraine, instead of the end diagnosis, provides a useful tool for stratifying the sample in genetic studies.
Abstract:
The past decade has brought a proliferation of statistical genetic (linkage) analysis techniques, incorporating new methodology and/or improvement of existing methodology in gene mapping, specifically targeted towards the localization of genes underlying complex disorders. Most of these techniques have been implemented in user-friendly programs and made freely available to the genetics community. Although certain packages may be more 'popular' than others, a common question asked by genetic researchers is 'which program is best for me?'. To help researchers answer this question, the following software review aims to summarize the main advantages and disadvantages of the popular GENEHUNTER package.
Abstract:
Disease maps are effective tools for explaining and predicting patterns of disease outcomes across geographical space, identifying areas of potentially elevated risk, and formulating and validating aetiological hypotheses for a disease. Bayesian models have become a standard approach to disease mapping in recent decades. This article aims to provide a basic understanding of the key concepts involved in Bayesian disease mapping methods for areal data. It is anticipated that this will help in the interpretation of published maps and provide a useful starting point for anyone interested in running disease mapping methods for areal data. The article provides detailed motivation for, and descriptions of, disease mapping methods by explaining the concepts, defining the technical terms, and illustrating the utility of disease mapping for epidemiological research by demonstrating various ways of visualising model outputs using a case study. The target audience includes spatial scientists in health and other fields, policy and decision makers, health geographers, spatial analysts, public health professionals, and epidemiologists.
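To make the smoothing idea concrete, the sketch below applies Poisson-gamma shrinkage to standardised incidence ratios (SIRs), one of the simplest smoothers used in disease mapping. It is not the fully Bayesian, spatially structured modelling discussed in the article; the counts and hyperparameters are illustrative assumptions.

```python
# Hedged sketch: Poisson-gamma shrinkage of standardised incidence ratios (SIRs).
# Case counts, expected counts and the Gamma(a, b) prior are all assumed values.
import numpy as np

observed = np.array([2, 15, 0, 7, 30, 4])                  # hypothetical case counts per area
expected = np.array([3.1, 12.4, 1.2, 6.8, 25.0, 9.5])      # expected counts from standardisation

raw_sir = observed / expected                               # raw SIR: unstable for small areas

# Gamma(a, b) prior on the relative risk, centred on 1 with moderate spread (assumed)
a, b = 4.0, 4.0
# Poisson likelihood + gamma prior -> gamma posterior; the posterior mean shrinks the SIR
smoothed = (a + observed) / (b + expected)

for i, (r, s) in enumerate(zip(raw_sir, smoothed)):
    print(f"area {i}: raw SIR = {r:.2f}, smoothed relative risk = {s:.2f}")
```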
Abstract:
Despite being commonly prevalent in acute care hospitals worldwide, malnutrition often goes unidentified and untreated due to the lack of an implemented nutrition care pathway. The aim of this study was to validate nutrition screening and assessment tools in the Vietnamese language. After translation into Vietnamese, the Malnutrition Screening Tool (MST) and Subjective Global Assessment (SGA) were used to identify malnutrition in the adult setting, and the Paediatric Nutrition Screening Tool (PNST) and paediatric Subjective Global Nutritional Assessment (SGNA) were used in the paediatric setting, in two acute care hospitals in Vietnam. This cross-sectional observational study sampled 123 adults (median age 78 years [39–96 years], 63% male) and 105 children (median age 20 months [2–100 months], 66% male). In adults, nutrition risk and malnutrition were identified in 29% and 45% of the cohort, respectively. Nutrition risk and malnutrition were identified in 71% and 43% of the paediatric cohort, respectively. The sensitivity and specificity of the screening tools were 62% and 99% for the MST compared with the SGA, and 89% and 42% for the PNST compared with the SGNA. This study provides a stepping stone towards the use of evidence-based nutrition screening and assessment tools in the Vietnamese language within adult and paediatric acute care settings in Vietnam. Further work is required to integrate a complete nutrition care pathway within the acute care setting in Vietnamese hospitals.
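For reference, the reported accuracy metrics come from a 2x2 cross-classification of each screening tool against the corresponding assessment used as the reference standard; a minimal calculation is sketched below with made-up counts (not the study's data).

```python
# Sensitivity and specificity from a 2x2 table: screening tool vs reference standard.
# The counts below are invented purely to show the calculation.
true_pos, false_neg = 40, 10    # reference-positive patients: tool positive / tool negative
false_pos, true_neg = 5, 45     # reference-negative patients: tool positive / tool negative

sensitivity = true_pos / (true_pos + false_neg)
specificity = true_neg / (true_neg + false_pos)
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```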
Abstract:
Early detection of (pre-)signs of ulceration on a diabetic foot is valuable for clinical practice. Hyperspectral imaging is a promising technique for the detection and classification of such (pre-)signs. However, the number of spectral bands should be limited to avoid overfitting, which is critical for pixel classification with hyperspectral image data. The goal was to design a detector/classifier based on spectral imaging (SI) with a small number of optical bandpass filters. The performance and stability of the design were also investigated. The selection of the bandpass filters boils down to a feature selection problem. A dataset was built containing reflectance spectra of 227 skin spots from 64 patients, measured with a spectrometer. Each skin spot was annotated manually by clinicians as "healthy" or as a specific (pre-)sign of ulceration. Statistical analysis of the dataset showed that the number of required filters is between 3 and 7, depending on additional constraints on the filter set. The stability analysis revealed that shot noise was the most critical factor affecting the classification performance. It indicated that this impact could be avoided in future SI systems with a camera sensor whose saturation level is higher than 10^6, or by post-image processing.
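The underlying feature-selection problem can be illustrated with a simple greedy forward search over bands, scored by the cross-validated accuracy of a basic classifier; this is a generic sketch, not the selection procedure of the paper, and the spectra, band count and class labels are synthetic.

```python
# Hedged sketch: greedy forward selection of a handful of spectral bands, scored by
# cross-validated accuracy of a linear discriminant classifier. Data are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_spots, n_bands = 227, 16
X = rng.normal(size=(n_spots, n_bands))            # synthetic reflectance "spectra"
y = (X[:, 2] + 0.8 * X[:, 9] + rng.normal(0, 0.5, n_spots) > 0).astype(int)  # 2 informative bands

selected, remaining = [], list(range(n_bands))
for _ in range(5):                                  # select up to 5 "filters"
    scores = [(cross_val_score(LinearDiscriminantAnalysis(),
                               X[:, selected + [j]], y, cv=5).mean(), j)
              for j in remaining]
    best_score, best_band = max(scores)             # band giving the best CV accuracy
    selected.append(best_band)
    remaining.remove(best_band)
    print(f"added band {best_band}, cross-validated accuracy = {best_score:.2f}")
```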
Abstract:
Information and communication technology (ICT) has created opportunities for students' online interaction in higher education throughout the world. Limited research has been done in this area in Saudi Arabia. This study investigated university students' engagement and perceptions of online collaborative learning using Social Learning Tools (SLTs). In addition, it explored the quality of knowledge construction that occurred in this environment. A mixed methods case study approach was adopted, and the data was gathered from undergraduate students (n=43) who were enrolled in a 15-week course at a Saudi university. The results showed that while the students had positive perceptions towards SLTs and their engagement, data gathered from their work also showed little evidence of high levels of knowledge construction.
Abstract:
This paper presents a statistical aircraft trajectory clustering approach aimed at discriminating between typical manned and expected unmanned traffic patterns. First, a resampled version of each trajectory is modelled using a mixture of von Mises distributions (circular statistics). Second, the remodelled trajectories are globally aligned using tools from bioinformatics. Third, the alignment scores are used to cluster the trajectories using an iterative k-medoids approach and an appropriate distance function. The approach is then evaluated using synthetically generated unmanned aircraft flights combined with real air traffic position reports taken over a sector of Northern Queensland, Australia. Results suggest that the technique is useful in distinguishing between expected unmanned and manned aircraft traffic behaviour, as well as in identifying some common conventional air traffic patterns.
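A minimal sketch of one ingredient of such a pipeline is shown below: k-medoids clustering of trajectories from a pairwise distance matrix. The von Mises mixture modelling and bioinformatics-style alignment of the paper are not reproduced; here the distance is simply the mean circular difference between equally resampled heading sequences, and all tracks are synthetic.

```python
# Hedged sketch: naive k-medoids clustering of synthetic heading sequences using a
# mean circular-difference distance. Not the paper's alignment-based pipeline.
import numpy as np

rng = np.random.default_rng(0)

def synth_track(base_heading, n=50):
    """Synthetic heading sequence (radians) wandering around a base course."""
    return (base_heading + np.cumsum(rng.normal(0, 0.05, n))) % (2 * np.pi)

tracks = [synth_track(h) for h in [0.2, 0.25, 2.5, 2.6, 2.55, 0.15]]

def circ_dist(a, b):
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d).mean()          # mean circular difference

n = len(tracks)
D = np.array([[circ_dist(tracks[i], tracks[j]) for j in range(n)] for i in range(n)])

# Naive k-medoids (k = 2): assign to the nearest medoid, then update each medoid to the
# cluster member minimising the total within-cluster distance.
medoids = [0, 2]
for _ in range(10):
    labels = np.argmin(D[:, medoids], axis=1)
    new_medoids = []
    for k in range(len(medoids)):
        members = np.where(labels == k)[0]
        costs = D[np.ix_(members, members)].sum(axis=1)
        new_medoids.append(int(members[np.argmin(costs)]))
    if new_medoids == medoids:
        break
    medoids = new_medoids

print("cluster labels:", labels, "medoid tracks:", medoids)
```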