892 results for SAMPLE SIZE


Relevance: 60.00%

Publisher:

Abstract:

Maternal deaths have been a critical issue for women living in rural and remote areas. The need to travel long distances, the shortage of primary care providers such as physicians, specialists and nurses, and the closing of small hospitals are problems identified in many rural areas. Some research has been undertaken and a few techniques have been developed to remotely measure the physiological condition of pregnant women through sophisticated ultrasound equipment. There are numerous ways to reduce maternal deaths, and an important step is to select the right approaches to achieving this reduction. One such approach is the provision of decision support systems in rural and remote areas. Decision support systems (DSSs) have already shown great potential in many health fields. This thesis proposes an ingenious decision support system (iDSS) based on a survey-instrument methodology and the identification, through statistical analysis, of significant variables to be used in the iDSS. A survey was undertaken with pregnant women, and a factorial experimental design was used to determine the sample size. Variables showing good reliability under any one of the statistical techniques applied (Chi-square, Cronbach's α and classification tree analysis) were incorporated into the iDSS. The decision support system was developed with significant variables such as place of residence, seeing the same doctor, education, tetanus injection, baby weight, previous baby born, place of birth, assisted delivery, pregnancy parity, doctor visits and occupation. The iDSS was implemented with Visual Basic as the front end and Microsoft SQL Server as the back end. Outcomes of the iDSS include advice on symptoms, diet and exercise for pregnant women; the system's conditional advice was reviewed and validated by a gynaecologist. Another outcome of the iDSS was to provide better pregnancy health awareness and reduce long-distance travel, especially for women in rural areas. The proposed system has qualities such as usefulness, accuracy and accessibility.
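The reliability screening described above can be illustrated with a small, self-contained sketch of Cronbach's α. The survey items, responses and the 0.7 retention threshold below are illustrative assumptions, not values from the thesis.

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = len(rows[0])                            # number of items
    items = [[r[j] for r in rows] for j in range(k)]
    totals = [sum(r) for r in rows]             # per-respondent total score
    item_var = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical responses: 5 respondents, 3 Likert items (1-5)
rows = [[4, 5, 4], [3, 3, 2], [5, 5, 5], [2, 2, 3], [4, 4, 4]]
alpha = cronbach_alpha(rows)
print(round(alpha, 3))  # a common rule of thumb retains scales with alpha >= 0.7
```

A scale passing this threshold (or the Chi-square or classification-tree screen) would be kept as an iDSS input variable.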


Quality-oriented management systems and methods have become the dominant business and governance paradigm. From this perspective, satisfying customers' expectations by supplying reliable, good-quality products and services is the key factor for an organization and even a government. During recent decades, Statistical Quality Control (SQC) methods have been developed as the technical core of quality management and the continuous improvement philosophy, and are now being applied widely to improve the quality of products and services in the industrial and business sectors. Recently, SQC tools, in particular quality control charts, have been used in healthcare surveillance. In some cases, these tools have been modified and developed to better suit the characteristics and needs of the health sector. It seems that some of the work in the healthcare area has evolved independently of the development of industrial statistical process control methods. Therefore, analysing and comparing paradigms and the characteristics of quality control charts and techniques across the different sectors presents opportunities for transferring knowledge and for future development in each sector. Meanwhile, the capabilities of the Bayesian approach, particularly Bayesian hierarchical models and computational techniques in which all uncertainty is expressed as a structure of probability, facilitate decision making and cost-effectiveness analyses. This research therefore investigates the use of the quality improvement cycle in a health setting using clinical data from a hospital. The need for clinical data for monitoring purposes is investigated in two respects. A framework and appropriate tools from the industrial context are proposed and applied to evaluate and improve data quality in available datasets and data flow; a data capturing algorithm using Bayesian decision-making methods is then developed to determine an economical sample size for statistical analyses within the quality improvement cycle.
Having ensured clinical data quality, some characteristics of control charts in the health context, including the necessity of monitoring attribute data and correlated quality characteristics, are considered. To this end, multivariate control charts from an industrial context are adapted to monitor radiation delivered to patients undergoing diagnostic coronary angiograms, and various risk-adjusted control charts are constructed and investigated for monitoring binary outcomes of clinical interventions as well as post-intervention survival time. Meanwhile, a Bayesian approach is proposed as a new framework for estimating the change point following a control chart's signal. This estimate aims to facilitate root-cause analysis within the quality improvement cycle, since it narrows the search for the potential causes of detected changes to a tighter time-frame prior to the signal. This approach enables highly informative estimates of change point parameters, since probability-distribution-based results are obtained. Using Bayesian hierarchical models and Markov chain Monte Carlo computational methods, Bayesian estimators of the time and magnitude of various change scenarios, including step changes, linear trends and multiple changes in a Poisson process, are developed and investigated. The benefits of change point investigation are revisited and promoted in monitoring hospital outcomes, where the developed Bayesian estimator reports the true time of shifts, compared with a priori known causes, detected by control charts monitoring the rate of excess usage of blood products and major adverse events during and after cardiac surgery in a local hospital. The development of the Bayesian change point estimators is then extended to healthcare surveillance of processes in which pre-intervention characteristics of patients affect the outcomes.
In this setting, the Bayesian estimator is first extended to capture the patient mix (covariates) through the risk models underlying risk-adjusted control charts. Variations of the estimator are developed to estimate the true time of step changes and linear trends in the odds ratio of intensive care unit outcomes in a local hospital. Secondly, the Bayesian estimator is extended to identify the time of a shift in mean survival time after a clinical intervention monitored by risk-adjusted survival time control charts. In this context, the survival time after a clinical intervention is also affected by patient mix, and the survival function is constructed using a survival prediction model. The simulation studies undertaken in each research component, and the results obtained, strongly recommend the developed Bayesian estimators as an alternative for change point estimation within the quality improvement cycle in healthcare surveillance as well as in industrial and business contexts. The superiority of the proposed Bayesian framework and estimators is enhanced when the probability quantification, flexibility and generalizability of the developed models are also considered. The advantages of the Bayesian approach, seen here in the general context of quality control, may also extend to the industrial and business domains where quality monitoring was initially developed.
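The flavour of Bayesian change point estimation for a step change in a Poisson process can be sketched in a few lines. This toy version assumes known pre- and post-change rates and a discrete uniform prior on the change point, and the count series is illustrative; the thesis's MCMC-based estimators are far more general.

```python
import math

def step_change_posterior(counts, lam0, lam1):
    """Posterior over a step-change point tau in a Poisson count series,
    assuming a known pre-change rate lam0 and post-change rate lam1,
    with a discrete uniform prior on tau (the change takes effect at index tau)."""
    n = len(counts)
    logpost = []
    for tau in range(1, n):
        ll = sum(c * math.log(lam0) - lam0 for c in counts[:tau])
        ll += sum(c * math.log(lam1) - lam1 for c in counts[tau:])
        logpost.append(ll)
    top = max(logpost)                       # stabilise before exponentiating
    w = [math.exp(l - top) for l in logpost]
    total = sum(w)
    return [x / total for x in w]            # posterior P(tau = 1 .. n-1)

# Illustrative hospital counts: rate 5 per period stepping up to rate 9
counts = [5] * 30 + [9] * 10
post = step_change_posterior(counts, 5.0, 9.0)
tau_hat = post.index(max(post)) + 1
print(tau_hat)  # -> 30, the true change point
```

A control chart would typically signal some periods after index 30; the posterior mode points back to the actual onset, which is exactly the root-cause-search benefit described above.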


After more than 25 years of published investigation, including randomized controlled trials, the role of omega-3 polyunsaturated fatty acids in the treatment of kidney disease remains unclear. In vitro and in vivo experimental studies support the efficacy of omega-3 polyunsaturated fatty acids on inflammatory pathways involved with the progression of kidney disease. Clinical investigations have focused predominantly on immunoglobulin A (IgA) nephropathy. More recently, lupus nephritis, polycystic kidney disease, and other glomerular diseases have been investigated. Clinical trials have shown conflicting results for the efficacy of omega-3 polyunsaturated fatty acids in IgA nephropathy, which may relate to varying doses, proportions of eicosapentaenoic acid and docosahexaenoic acid, duration of therapy, and sample size of the study populations. Meta-analyses of clinical trials using omega-3 polyunsaturated fatty acids in IgA nephropathy have been limited by the quality of available studies. However, guidelines suggest that omega-3 polyunsaturated fatty acids should be considered in progressive IgA nephropathy. Omega-3 polyunsaturated fatty acids decrease blood pressure, a known accelerant of kidney disease progression. Well-designed, adequately powered, randomized, controlled clinical trials are required to further investigate the potential benefits of omega-3 polyunsaturated fatty acids on the progression of kidney disease and patient survival.
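The review's call for adequately powered trials can be made concrete with the standard sample-size calculation for comparing two proportions. The event rates below (30% vs 20% progression) are purely illustrative assumptions, not figures from the review.

```python
import math

def n_per_arm(p1, p2):
    """Approximate sample size per arm for a two-sided comparison of two
    proportions at alpha = 0.05 and 80% power (normal approximation)."""
    z_a = 1.959964  # z for alpha/2 = 0.025
    z_b = 0.841621  # z for power = 0.80
    num = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return math.ceil(num / (p1 - p2) ** 2)

# Hypothetical: 30% disease progression on placebo vs 20% on omega-3
print(n_per_arm(0.30, 0.20))  # -> 291 patients per arm
```

Small single-centre trials fall far short of such numbers, which is one plausible reason the IgA nephropathy literature remains conflicting.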


This paper addresses the development of an ingenious decision support system (iDSS) based on a survey-instrument methodology and the identification, through statistical analysis, of significant variables to be used in the iDSS. A survey was undertaken with pregnant women, and a factorial experimental design was used to determine the sample size. Variables showing good reliability under any one of the statistical techniques applied (Chi-square, Cronbach's α and classification tree analysis) were incorporated into the iDSS. The iDSS was implemented with Visual Basic as the front end and Microsoft SQL Server as the back end. Outcomes of the iDSS include advice on symptoms, diet and exercise for pregnant women.
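The Chi-square screening mentioned here tests whether a candidate variable is associated with an outcome. A minimal sketch for a 2x2 contingency table follows; the counts and the variable pairing are hypothetical, not data from the paper.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row = [a + b, c + d]
    col = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            exp = row[i] * col[j] / n   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical counts: assisted delivery (yes/no) by residence (rural/urban)
stat = chi_square_2x2([(30, 20), (15, 35)])
print(round(stat, 2))  # compare with the 1-df critical value 3.84 at p = 0.05
```

A statistic above the critical value would flag the variable as significantly associated, making it a candidate for inclusion in the iDSS.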


OBJECTIVES: To investigate the effect of Baby-Friendly Hospital Initiative (BFHI) accreditation and hospital care practices on breastfeeding rates at 1 and 4 months. METHODS: All women who birthed in Queensland, Australia, from February 1 to May 31, 2010, received a survey 4 months postpartum. Maternal, infant, and hospital characteristics; pregnancy and birth complications; and infant feeding outcomes were measured. RESULTS: The sample comprised 6752 women. Breastfeeding initiation rates were high (96%) and similar in BFHI-accredited and nonaccredited hospitals. After adjustment for significant maternal, infant, clinical, and hospital variables, women who birthed in BFHI-accredited hospitals had significantly lower odds of breastfeeding at 1 month (adjusted odds ratio 0.72, 95% confidence interval 0.58–0.90) than those who birthed in non-BFHI-accredited hospitals. BFHI accreditation did not affect the odds of breastfeeding at 4 months or exclusive breastfeeding at 1 or 4 months. Four in-hospital practices (early skin-to-skin contact, attempted breastfeeding within the first hour, rooming-in, and no in-hospital supplementation) were experienced by 70% to 80% of mothers, with 50.3% experiencing all 4. Women who experienced all 4 hospital practices had higher odds of breastfeeding at 1 month (adjusted odds ratio 2.20, 95% confidence interval 1.78–2.71) and 4 months (adjusted odds ratio 2.93, 95% confidence interval 2.40–3.60) than women who experienced fewer than 4. CONCLUSIONS: When breastfeeding-initiation rates are high and evidence-based practices that support breastfeeding are common within the hospital environment, BFHI accreditation per se has little effect on either exclusive or any breastfeeding rates.
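The odds ratios above come from adjusted logistic regression, but the unadjusted version is easy to compute from a 2x2 table, which clarifies what the confidence intervals mean. The counts below are invented for illustration and do not reproduce the study's adjusted estimates.

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Unadjusted odds ratio with a 95% CI (log-OR normal approximation).
    Exposed group: a events, b non-events; unexposed: c events, d non-events."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts: breastfeeding at 1 month by hospital accreditation
or_, lo, hi = odds_ratio_ci(480, 120, 510, 90)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

An interval excluding 1.0, as in the study's 0.58–0.90, is what makes the lower odds in accredited hospitals statistically significant.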


Purpose: Virally mediated head and neck cancers (VMHNC) often present with nodal involvement and are generally considered radioresponsive, resulting in the need for a re-planning CT during radiotherapy (RT) in a subset of patients. We sought to identify a high-risk group, based on nodal size, to be evaluated in a future prospective adaptive RT trial. Methodology: Between 2005 and 2010, 121 patients with virally mediated, node-positive nasopharyngeal (EBV-positive) or oropharyngeal (HPV-positive) cancers receiving curative-intent RT were reviewed. Patients were analysed based on the maximum size of the dominant node, with a view to grouping them into risk categories for the need for re-planning. The frequency and timing of the re-planning scans were also evaluated. Results: Sixteen nasopharyngeal and 105 oropharyngeal tumours were reviewed. Twenty-five (21%) patients underwent a re-planning CT at a median of 22 (range, 0–29) fractions, with 1 patient requiring re-planning prior to the commencement of treatment. Based on the analysis, patients were subsequently placed into 3 groups: ≤35 mm (Group 1), 36–45 mm (Group 2), and ≥46 mm (Group 3). Re-planning CTs were performed in 8/68 (11.8%) of Group 1, 4/28 (14.3%) of Group 2, and 13/25 (52%) of Group 3. The sample size did not allow statistical analysis to detect a significant difference, or to exclude a lack of difference, between the 3 groups. Conclusion: In this series, patients with VMHNC and nodal size ≥46 mm appear to be a high-risk group for the need for re-planning during a course of definitive radiotherapy. This finding will now be tested in a prospective adaptive RT study.
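The grouping-and-rate analysis described is a simple stratified tally. A sketch using the same size cut-offs follows; the patient records are fabricated examples, not the study's data.

```python
def size_group(nodal_mm):
    """Assign the nodal-size risk groups used in the study."""
    if nodal_mm <= 35:
        return 1
    if nodal_mm <= 45:
        return 2
    return 3

# Hypothetical records: (maximum nodal size in mm, needed a re-planning CT?)
patients = [(30, False), (42, True), (50, True), (48, True), (33, False), (50, False)]
counts = {1: [0, 0], 2: [0, 0], 3: [0, 0]}  # group -> [re-plans, total]
for size, replanned in patients:
    g = size_group(size)
    counts[g][1] += 1
    counts[g][0] += int(replanned)
rates = {g: (r / n if n else 0.0) for g, (r, n) in counts.items()}
print(rates)
```

Applied to the study's cohort, this tally yields the 11.8%, 14.3% and 52% re-planning rates reported above.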



The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operational downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative to traditional reliability analysis is to model condition indicators, operating environment indicators, and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed, all based on the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have to some extent been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully utilise all three types of asset health information (failure event data, i.e. observed and/or suspended; condition data; and operating environment data) in a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response (dependent) variables, whereas operating environment indicators act as explanatory (independent) variables. However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing these challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and both condition measurements and operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; therefore, they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few.
Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the hazard relative to the baseline. These indicators arise from the environment in which an asset operates and are not explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of the operating environment indicators may be nil in EHM, the condition indicators are always present, because they are observed and measured as long as an asset is operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between the condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. According to the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, due to sparse failure event data, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the semi-parametric EHM's restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM into two forms is another merit of the model.
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models was appraised by comparing their estimates with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, regarding the parameter estimation method in the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
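A generic covariate-based hazard calculation of the kind this family of models builds on can be sketched briefly. This is a PHM-style sketch with a Weibull baseline, not the EHM itself, and the shape, scale, covariate values and coefficients are all illustrative assumptions.

```python
import math

def hazard(t, covariates, beta, shape=2.0, scale=1000.0):
    """Covariate-based hazard: Weibull baseline hazard scaled by an
    exponential covariate link (proportional-hazards style).
    t: operating time; covariates/beta: matching value/coefficient lists."""
    baseline = (shape / scale) * (t / scale) ** (shape - 1)
    link = math.exp(sum(b * z for b, z in zip(beta, covariates)))
    return baseline * link

def reliability(t, covariates, beta, shape=2.0, scale=1000.0, steps=1000):
    """R(t) = exp(-integral of the hazard from 0 to t), trapezoid rule."""
    dt = t / steps
    area = 0.0
    for i in range(steps):
        a = hazard(i * dt, covariates, beta, shape, scale)
        b = hazard((i + 1) * dt, covariates, beta, shape, scale)
        area += 0.5 * (a + b) * dt
    return math.exp(-area)

# Hypothetical indicators: vibration level (condition), load factor (environment)
z = [1.2, 0.5]
beta = [0.8, 0.4]
print(round(reliability(500.0, z, beta), 3))  # ~0.45 for these assumed values
```

EHM departs from this template by letting the baseline itself depend on condition indicators rather than treating all covariates through the single multiplicative link.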


Approximate Bayesian computation has become an essential tool for the analysis of complex stochastic models when the likelihood function is numerically unavailable. However, the well-established statistical method of empirical likelihood provides another route to such settings that bypasses simulations from the model and the choices of the approximate Bayesian computation parameters (summary statistics, distance, tolerance), while being convergent in the number of observations. Furthermore, bypassing model simulations may lead to significant time savings in complex models, for instance those found in population genetics. The Bayesian computation with empirical likelihood algorithm we develop in this paper also provides an evaluation of its own performance through an associated effective sample size. The method is illustrated using several examples, including estimation of standard distributions, time series, and population genetics models.
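The effective sample size the authors attach to their algorithm is the standard diagnostic for weighted samples; a minimal sketch, with purely illustrative weights, is:

```python
def effective_sample_size(weights):
    """ESS = (sum w)^2 / sum w^2 for a set of importance-style weights.
    Equal weights give ESS = n; one dominant weight drives ESS toward 1."""
    s = sum(weights)
    s2 = sum(w * w for w in weights)
    return s * s / s2

# Uniform weights: every draw contributes equally
print(effective_sample_size([1.0] * 100))            # -> 100.0
# Degenerate weights: one draw dominates, ESS collapses toward 1
print(effective_sample_size([100.0] + [0.001] * 99))
```

A low ESS relative to the number of draws signals that the weighted posterior sample is effectively much smaller than it appears, which is exactly the self-evaluation the paper describes.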


The purpose of this article is to examine the role of the alignment between technological innovation effectiveness and operational effectiveness after the implementation of enterprise information systems, and the impact of this alignment on the improvement in operational performance. Confirmatory factor analysis was used to examine structural relationships between the set of observed variables and the set of continuous latent variables. The findings from this research suggest that the dimensions stemming from technological innovation effectiveness (system quality, information quality, service quality and user satisfaction) and the performance objectives stemming from operational effectiveness (cost, quality, reliability, flexibility and speed) are important and well-correlated factors. These factors promote the alignment between technological innovation effectiveness and operational effectiveness and should be the focus for managers in achieving effective implementation of technological innovations. In addition, there is a significant and direct influence of this alignment on the improvement of operational performance. The principal limitation of this study is that the findings are based on the investigation of a small sample size.


Parents are encouraged to read with their children from an early age because shared book reading helps children to develop their language and early literacy skills. A pragmatic Randomised Controlled Trial (RCT) research design was adopted to investigate the influence of two forms of a shared reading intervention (Dialogic Reading, and Dialogic Reading with the addition of Print Referencing) on children's language and literacy skills. Dialogic reading is a validated shared reading intervention that has been shown to improve children's oral language skills prior to formal schooling (Whitehurst & Lonigan, 1998). Print referencing is another form of shared reading intervention that has the potential to affect children's print knowledge as they begin school (Justice & Ezell, 2002). However, training parents to use print referencing strategies at home has not been researched extensively, although research findings indicate its effectiveness when used by teachers in the early years of school. Eighty parents of Preparatory year children from three Catholic schools in low-income areas in the outer suburbs of a metropolitan city were trained to deliver specific shared reading strategies in an eight-week home intervention. Parents read eight books to their children across the period of the intervention and were asked to read each book at least three times a week. There were 42 boys and 38 girls ranging in age from 4.92 years to 6.25 years (M=5.53, SD=0.33) in the sample. The families were randomly assigned to three groups: Dialogic Reading (DR); Dialogic Reading with the addition of Print Referencing (DR + PR); and a Control group. Six measures were used to assess children's language skills at pre-test, post-test and follow-up (three months after the intervention). These measures assessed oral language (receptive and expressive vocabulary), phonological awareness skills (rhyme, word completion), alphabet knowledge, and concepts about print.
Results of the intervention showed that there were significant differences from pre-test to post-test between the two intervention groups and the control group on three measures: expressive vocabulary, rhyme, and concepts about print. The shared reading strategies delivered by parents in the dialogic reading, and dialogic reading with the addition of print referencing, groups showed promising results for developing children's oral language skills in terms of expressive vocabulary and rhyme, as well as their understanding of concepts about print. At follow-up, when the children entered Year 1, the two intervention groups (DR and DR + PR) had significantly maintained their knowledge of concepts about print compared with the control group. Overall, the findings from this intervention study did not show that dialogic reading with the addition of print referencing had stronger effects on children's early literacy skills than dialogic reading alone. The research also explored whether pre-existing family factors affected the outcomes of the intervention from pre-test to post-test. The relationships between maternal education and home reading practices prior to the intervention and child outcomes at post-test were considered; however, there were no significant effects of maternal education or home literacy activities on child outcomes at post-test. Additionally, there were no significant effects of parents' level of compliance with the intervention program, in terms of regular weekly reading to children during the intervention period, on child outcomes at post-test. These non-significant findings are attributed to the lack of variability in the recruited sample: parents participating in the intervention had high levels of education, although they were recruited from schools in low socio-economic areas; parents were already highly engaged in home literacy activities at recruitment; and parents were highly compliant in reading regularly to their child during the intervention.
Findings of the current study did show that training in shared reading strategies enhanced children's early language and literacy skills. Both dialogic reading and dialogic reading with the addition of print referencing improved children's expressive vocabulary, rhyme, and concepts about print at post-intervention. Further research is needed to identify how, and whether, print referencing strategies used by parents at home can be effective over and above the use of dialogic reading strategies. In this research, limitations of sample size and of the nature of an at-home print referencing intervention may have restricted the opportunities to find further effects on children's emergent literacy skills, or evidence for the effectiveness of combining dialogic reading with print referencing strategies. However, these results do indicate that there is value in teaching parents to implement shared reading strategies at home in order to improve early literacy skills as children begin formal schooling.


The health of an individual is determined by the interaction of genetic and individual factors with wider social and environmental elements. Public health approaches to improving the health of disadvantaged populations will be most effective if they optimise influences at each of these levels, particularly in the early part of the life course. In order to better ascertain the relative contribution of these multi-level determinants, there is a need for robust studies, longitudinal and prospective in nature, that examine individual, familial, social and environmental exposures. This paper describes the study background and methods as implemented in an Australian birth cohort study, Environments for Healthy Living (EFHL): The Griffith Study of Population Health. EFHL is a prospective, multi-level, multi-year longitudinal birth cohort study designed to collect information from before birth through to adulthood across a spectrum of eco-epidemiological factors, from genetic material in cord-blood samples at birth, through individual and familial factors, to spatial data on the living environment. EFHL commenced the pilot phase of recruitment in 2006 and open recruitment in 2007, with a target sample size of 4000 mother/infant dyads. Detailed information on each participant is obtained at birth, 12 months, 3 years, 5 years and subsequent three-to-five-yearly intervals. The findings of this research will provide detailed evidence on the relative contribution of multi-level determinants of health, which can be used to inform social policy and intervention strategies that facilitate healthy behaviours and choices across sub-populations.


Human memory is a complex neurocognitive process. By combining psychological and molecular genetics expertise, we examined the APOE ε4 allele, a known risk factor for Alzheimer's disease, and the COMT Val158 polymorphism, previously implicated in schizophrenia, for association with lowered memory functioning in healthy adults. To assess memory type, we used a range of tests of both retrospective and prospective memory. Genotypes were determined using RFLP analysis and compared with mean memory scores using univariate ANOVAs. Despite a modest sample size (n=197), our study found a significant effect of the APOE ε4 polymorphism on prospective memory. Supporting our hypothesis, a significant difference was demonstrated between genotype groups in the mean Comprehensive Assessment of Prospective Memory total score (p=0.036; ε4 alleles=1.99; all other alleles=1.86). In addition, we demonstrate a significant interactive effect between the APOE ε4 and COMT polymorphisms in semantic memory. This is the first study to investigate both APOE and COMT genotypes in relation to memory in non-pathological adults, and it provides important information regarding the effect of genetic determinants on human memory.
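The univariate ANOVA used to compare genotype groups reduces to a single F statistic. A minimal one-way version follows; the two groups and their memory scores are fabricated toy data, not the study's measurements.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across groups of scores."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical mean-style memory scores for two genotype groups
f = one_way_anova_f([[1.9, 2.0, 2.1], [1.8, 1.9, 1.7]])
print(round(f, 2))
```

The F value is then compared with the F distribution at (k-1, n-k) degrees of freedom to obtain a p value such as the study's p=0.036.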


Background: Illumina's Infinium SNP BeadChips are extensively used in both small- and large-scale genetic studies. A fundamental step in any analysis is the processing of raw allele A and allele B intensities from each SNP into genotype calls (AA, AB, BB). Various algorithms, which make use of different statistical models, are available for this task. We compare four methods (GenCall, Illuminus, GenoSNP and CRLMM) on data where the true genotypes are known in advance and on data from a recently published genome-wide association study. Results: In general, differences in accuracy are relatively small between the methods evaluated, although CRLMM and GenoSNP were found to consistently outperform GenCall. The performance of Illuminus is heavily dependent on sample size, with lower no-call rates and improved accuracy as the number of samples available increases. For X chromosome SNPs, methods with sex-dependent models (Illuminus, CRLMM) perform better than methods which ignore gender information (GenCall, GenoSNP). We observe that CRLMM and GenoSNP are more accurate at calling SNPs with low minor allele frequency than GenCall or Illuminus. The sample quality metrics from each of the four methods were found to have a high level of agreement in flagging samples with unusual signal characteristics. Conclusions: CRLMM, GenoSNP and GenCall can be applied with confidence in studies of any size, as their performance was shown to be invariant to the number of samples available. Illuminus, on the other hand, requires a larger number of samples to achieve comparable levels of accuracy, and its use in smaller studies (50 or fewer individuals) is not recommended.
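The two headline metrics in such comparisons (no-call rate and accuracy against known genotypes) are straightforward to compute. A sketch with fabricated calls and truth labels follows; real evaluations run over hundreds of thousands of SNPs.

```python
def call_metrics(calls, truth):
    """No-call rate and accuracy of genotype calls against known genotypes.
    Calls are 'AA'/'AB'/'BB' strings, or None when the algorithm made no call."""
    made = [(c, t) for c, t in zip(calls, truth) if c is not None]
    no_call_rate = 1 - len(made) / len(calls)
    accuracy = sum(c == t for c, t in made) / len(made)
    return no_call_rate, accuracy

# Hypothetical calls versus known truth genotypes for six SNPs
calls = ['AA', 'AB', None, 'BB', 'AB', 'AA']
truth = ['AA', 'AB', 'AB', 'BB', 'AA', 'AA']
nc, acc = call_metrics(calls, truth)
print(nc, acc)
```

Note the trade-off the comparison highlights: an algorithm can lower its apparent error rate simply by refusing calls, which is why the two metrics are always reported together.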


In South and Southeast Asia, postharvest loss causes material waste of up to 66% in fruits and vegetables, 30% in oilseeds and pulses, and 49% in roots and tubers. The efficiency of postharvest equipment directly affects industrial-scale food production. To enhance current processing methods and devices, it is essential to analyze the responses of food materials under loading operations. Food materials undergo different types of mechanical loading during the postharvest and processing stages. Therefore, it is important to determine the properties of these materials under different types of loads, such as tensile, compression, and indentation. This study presents a comprehensive analysis of the available literature on the tensile properties of different food samples. The aim of this review was to categorize the available methods of tensile testing for agricultural crops and food materials in order to identify an appropriate sample size and tensile test method. The results were then applied to perform tensile tests on pumpkin flesh and peel samples, in particular on arc-sided samples at a constant loading rate of 20 mm min⁻¹. The results showed the maximum tensile stress of the pumpkin flesh and peel samples to be 0.535 and 1.45 MPa, respectively. The elastic modulus of the flesh and peel samples was 6.82 and 25.2 MPa, respectively, while the failure modulus values were 14.51 and 30.88 MPa, respectively. The results of the tensile tests were also used to develop a finite element model of the mechanical peeling of tough-skinned vegetables. However, further investigation is needed into the effects of deformation rate, moisture content, and tissue texture on the tensile responses of food materials.
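The reported quantities follow directly from force-displacement data: engineering stress is force over cross-sectional area, and the elastic modulus is the slope of the initial linear region of the stress-strain curve. The sketch below illustrates this; the cross-sectional area and force readings are invented for illustration and are not the study's measurements.

```python
def tensile_stress(force_n, area_mm2):
    """Engineering stress in MPa (1 MPa = 1 N/mm^2)."""
    return force_n / area_mm2

def elastic_modulus(points):
    """Slope (MPa) of the initial linear stress-strain region, fitted
    through the origin: E = sum(eps * sigma) / sum(eps^2)."""
    return (sum(e * s for e, s in points) /
            sum(e * e for e, _ in points))

# Hypothetical readings from the linear region of a flesh sample
area = 40.0  # assumed cross-section in mm^2
points = [(0.01, tensile_stress(2.8, area)),   # (strain, stress in MPa)
          (0.02, tensile_stress(5.4, area)),
          (0.03, tensile_stress(8.2, area))]
print(round(elastic_modulus(points), 2), "MPa")
```

The maximum tensile stress is simply the largest stress value reached before failure, computed the same way from the peak recorded force.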