25 results for Targets: (HIP 78530, [PGZ2001] J161031.9-191305, GSC 06214-00210, 1RXS J160929.1-210524)
at Queensland University of Technology - ePrints Archive
Abstract:
Problem addressed Wrist-worn accelerometers are associated with greater wear compliance. However, validated algorithms for predicting activity type from wrist-worn accelerometer data are lacking. This study compared the activity recognition rates of activity classifiers trained on acceleration signals collected at the wrist and the hip. Methodology 52 children and adolescents (mean age 13.7 +/- 3.1 years) completed 12 activity trials that were categorized into 7 activity classes: lying down, sitting, standing, walking, running, basketball, and dancing. During each trial, participants wore an ActiGraph GT3X+ tri-axial accelerometer on the right hip and the non-dominant wrist. Features were extracted from 10-s windows and entered into an L1-regularized logistic regression model in R (glmnet). Results Classification accuracy for the hip and wrist models was 91.0% +/- 3.1% and 88.4% +/- 3.0%, respectively. The hip model exhibited excellent classification accuracy for sitting (91.3%), standing (95.8%), walking (95.8%), and running (96.8%); acceptable classification accuracy for lying down (88.3%) and basketball (81.9%); and modest accuracy for dancing (64.1%). The wrist model exhibited excellent classification accuracy for sitting (93.0%), standing (91.7%), and walking (95.8%); acceptable classification accuracy for basketball (86.0%); and modest accuracy for running (78.8%), lying down (74.6%), and dancing (69.4%). Potential Impact Both the hip and wrist algorithms achieved acceptable classification accuracy, allowing researchers to use either placement for activity recognition.
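The modelling pipeline described above — window-level features fed into an L1-regularized ("lasso") logistic regression — can be sketched in Python. This is a minimal illustration, not the study's code: the study used glmnet in R, and the synthetic signals, the two-features-per-axis set and the regularization strength below are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def extract_features(window):
    # Per-axis mean and standard deviation over one 10-s window,
    # a deliberately minimal feature set for tri-axial acceleration.
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

# Synthetic stand-in for labelled accelerometer windows: each of the
# 7 activity classes gets a different mean signal level.
n_classes, windows_per_class, samples_per_window = 7, 40, 300
X, y = [], []
for label in range(n_classes):
    for _ in range(windows_per_class):
        window = rng.normal(loc=label, scale=1.0,
                            size=(samples_per_window, 3))
        X.append(extract_features(window))
        y.append(label)
X, y = np.array(X), np.array(y)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# L1-penalized multinomial logistic regression, analogous to the
# glmnet + L1 model named in the abstract.
clf = LogisticRegression(penalty="l1", solver="saga", C=1.0,
                         max_iter=5000)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

On real data the windows would come from the GT3X+ recordings and the labels from the activity trials; accuracy is then compared between models trained on hip-worn and wrist-worn signals.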
Abstract:
Focusing on the use of language is a crucial strategy in good mathematics teaching, and a teacher’s guidance can help students master the language of mathematics. This article discusses these claims with reference to recent Year 7 and 9 NAPLAN numeracy tests. It draws readers’ attention to the complexities of language in the field of mathematics. Although this article refers to NAPLAN numeracy tests, it also offers advice about good teaching practice.
Abstract:
Loss of the short arm of chromosome 1 is frequently observed in many tumor types, including melanoma. We recently localized a third melanoma susceptibility locus to chromosome band 1p22. Critical recombinants in linked families localized the gene to a 15-Mb region between D1S430 and D1S2664. To map the locus more finely we performed studies to assess allelic loss across the region in a panel of melanomas from 1p22-linked families, sporadic melanomas, and melanoma cell lines. Eighty percent of familial melanomas exhibited loss of heterozygosity (LOH) within the region, with a smallest region of overlapping deletions (SRO) of 9 Mb between D1S207 and D1S435. This high frequency of LOH makes it very likely that the susceptibility locus is a tumor suppressor. In sporadic tumors, four SROs were defined. SRO1 and SRO2 map within the critical recombinant and familial tumor region, indicating that one or the other is likely to harbor the susceptibility gene. However, SRO3 may also be significant, because it overlaps the marker with the highest two-point LOD score (D1S2776), part of the linkage recombinant region, and the critical region defined in mesothelioma. The candidate genes PRKCL2 and GTF2B, within SRO2, and TGFBR3, CDC7, and EVI5, in a broad region encompassing SRO3, were screened in 1p22-linked melanoma kindreds, but no coding mutations were detected. Allelic loss was significantly less frequent in melanoma cell lines than in fresh tumors, indicating that this gene may not be involved in late events of progression, such as overriding the cellular senescence that must be bypassed for melanoma cells to propagate in culture.
Abstract:
Purpose: This two-part research project was undertaken as part of the planning process by Queensland Health (QH), Cancer Screening Services Unit (CSSU), Queensland Bowel Cancer Screening Program (QBCSP), in partnership with the National Bowel Cancer Screening Program (NBCSP), to prepare for the implementation of the NBCSP in public sector colonoscopy services in QLD in late 2006. There was no prior information available on the quality of colonoscopy services in Queensland (QLD) and no prior studies that assessed the quality of colonoscopy training in Australia. Furthermore, the NBCSP was introduced without extra funding for colonoscopy service improvement or provision for increases in colonoscopic capacity resulting from the introduction of the NBCSP. The main purpose of the research was to record baseline data on colonoscopy referral and practice in QLD and current training in colonoscopy Australia-wide. It was undertaken from a quality improvement perspective. Implementation of the NBCSP requires that all aspects of the screening pathway, in particular colonoscopy services for the assessment of positive Faecal Occult Blood Tests (FOBTs), will be effective, efficient, equitable and evidence-based. This study examined two important aspects of the continuous quality improvement framework for the NBCSP as they relate to colonoscopy services: (1) evidence-based practice, and (2) quality of colonoscopy training. The Principal Investigator was employed as Senior Project Officer (Training) in the QBCSP during the conduct of this research project. Recommendations from this research have been used to inform the development and implementation of quality improvement initiatives for provision of colonoscopy in the NBCSP, its QLD counterpart the QBCSP and colonoscopy services in QLD, in general. Methods – Part 1 Chart audit of evidence-based practice: The research was undertaken in two parts from 2005-2007. 
The first part of this research comprised a retrospective chart audit of 1484 colonoscopy records (some 13% of all colonoscopies conducted in public sector facilities in the year 2005) in three QLD colonoscopy services. Whilst some 70% of colonoscopies are currently conducted in the private sector, only public sector colonoscopy facilities provided colonoscopies under the NBCSP. The aim of this study was to compare colonoscopy referral and practice with explicit criteria derived from the National Health & Medical Research Council (NHMRC) (1999) Clinical Practice Guidelines for the Prevention, Early Detection and Management of Colorectal Cancer, and to describe the nature of variance from the guidelines. Symptomatic presentations were the most common indication for colonoscopy (60.9%). These comprised per rectal bleeding (31.0%), change of bowel habit (22.1%), abdominal pain (19.6%), iron deficiency anaemia (16.2%), inflammatory bowel disease (8.9%) and other symptoms (11.4%). Surveillance and follow-up colonoscopies accounted for approximately one-third of the remaining colonoscopy workload across sites. Gastroenterologists (GEs) performed relatively more colonoscopies per annum (59.9%) than general surgeons (GSs) (24.1%), colorectal surgeons (CRSs) (9.4%) and general physicians (GPs) (6.5%). Guideline compliance varied with the designation of the colonoscopist: compliance was lower for CRSs (62.9%) than for GPs (76.0%), GEs (75.0%) and GSs (70.9%) (p<0.05). Compliance with guideline recommendations for colonoscopic surveillance for family history of colorectal cancer (23.9%), polyps (37.0%) and a past history of bowel cancer (42.7%) was significantly lower than for symptomatic presentations (94.4%) (p<0.001). Variation from guideline recommendations occurred more frequently as polyp surveillance performed earlier than guidelines recommend (47.9%) and follow-up for a past history of bowel cancer performed later than recommended (61.7%, p<0.001).
Bowel cancer cases detected at colonoscopy comprised 3.6% of all audited colonoscopies. Incomplete colonoscopies occurred in 4.3% of audited colonoscopies and were more common among women (76.6%). For all colonoscopies audited, the rate of incomplete colonoscopies was 1.6% for GEs (CI 0.9-2.6), 2.0% for GPs (CI 0.6-7.2), 7.0% for GSs (CI 4.8-10.1) and 16.4% for CRSs (CI 11.2-23.5). Overall, 18.6% (n=55) of patients with a documented family history of bowel cancer had colonoscopy performed against guideline recommendations (for general (category 1) population risk, for reasons of patient request or family history of polyps, rather than for high-risk status for colorectal cancer). In general, family history was inadequately documented and subsequently poorly applied to colonoscopy referral and practice. Methods - Part 2 Surveys of quality of colonoscopy training: The second part of the research consisted of Australia-wide anonymous, self-completed surveys of colonoscopy trainers and their trainees to ascertain their opinions on the current apprenticeship model of colonoscopy training in Australia and to identify any training needs. Overall, 127 surveys were received from colonoscopy trainers (estimated response rate 30.2%). Approximately 50% of trainers agreed, and 27% disagreed, that current numbers of training places were adequate to maintain a skilled colonoscopy workforce in preparation for the NBCSP. Approximately 70% of trainers also supported UK-style colonoscopy training within dedicated accredited training centres using a variety of training approaches, including simulation. A collaborative approach with the private sector was seen as beneficial by 65% of trainers. Non-gastroenterologists (non-GEs) were more likely than GEs to be of the opinion that simulators are beneficial for colonoscopy training (χ2 = 5.55, P = 0.026).
Approximately 60% of trainers considered that the current requirements for recognition of training in colonoscopy could be insufficient for trainees to gain competence, and 80% of those indicated that ≥ 200 colonoscopies were needed. GEs (73.4%) were more likely than non-GEs (36.2%) to be of the opinion that the Conjoint Committee standard is insufficient to gain competence in colonoscopy (χ2 = 16.97, P = 0.0001). The majority of trainers did not support training either nurses (73%) or GPs (71%) in colonoscopy. Only 81 trainee surveys were received (estimated response rate 17.9%), from GS trainees (72.1%), GE trainees (26.3%) and GP trainees (1.2%). The majority were male (75.9%), with a median age of 32 years, and had trained in New South Wales (41.0%) or Victoria (30%). Overall, 60.8% of trainees indicated that they deemed the Conjoint Committee standard sufficient to gain competency in colonoscopy. Between specialties, 75.4% of GS trainees indicated that the Conjoint Committee standard for recognition of colonoscopy was sufficient to gain competence, compared to only 38.5% of GE trainees. Measures of competency assessed and recorded by trainees in logbooks centred mainly on caecal intubation (94.7-100%), complications (78.9-100%) and withdrawal time (51-76.2%). Trainees described limited access to colonoscopy training lists due to the time inefficiency of the apprenticeship model and perceived monopolisation of these lists by GEs and their trainees. Improvements to the current training model suggested by trainees included: more use of simulation and training tools, a United Kingdom (UK)-style training course, concentration on quality indicators, increased access to training lists, accreditation of trainers and interdisciplinary colonoscopy training.
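The between-specialty comparisons reported here are chi-square tests on 2×2 contingency tables. A minimal SciPy sketch, using hypothetical cell counts chosen only to match the reported percentages (the raw counts are not given in the abstract):

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = GE vs non-GE trainers, columns =
# considers the Conjoint Committee standard insufficient (yes / no).
# Counts are invented to match 73.4% vs 36.2% approximately.
table = [[47, 17],   # GEs: 47/64 = 73.4% say "insufficient"
         [21, 37]]   # non-GEs: 21/58 = 36.2%
chi2, p, dof, expected = chi2_contingency(table)
```

Note that `chi2_contingency` applies Yates' continuity correction to 2×2 tables by default, so the statistic will differ slightly from an uncorrected χ2.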
Implications for the NBCSP/QBCSP: The introduction of the NBCSP/QBCSP necessitates higher-quality colonoscopy services if it is to achieve its ultimate goal of decreasing the morbidity and mortality associated with bowel cancer in Australia. This will be achieved under a new paradigm for colonoscopy training and implementation of evidence-based practice across the screening pathway, specifically targeting areas highlighted in this thesis. Recommendations for improvement of NBCSP/QBCSP effectiveness and efficiency include the following:
1. Implementation of NBCSP and QBCSP health promotion activities that target men, in particular, to increase FOBT screening uptake.
2. Improved colonoscopy training for trainees, and refresher courses or retraining for existing proceduralists, to improve completion rates (especially for female NBCSP/QBCSP participants) and polyp and adenoma detection and removal, including newer techniques to detect flat and depressed lesions.
3. Introduction of colonoscopy training initiatives for trainees that are aligned with NBCSP/QBCSP colonoscopy quality indicators, including measurement of training outcomes using objective quality indicators such as caecal intubation, withdrawal time and adenoma detection rate.
4. Introduction of standardised, interdisciplinary colonoscopy training to reduce apparent differences between specialties with regard to compliance with guideline recommendations, completion rates and quality of polypectomy.
5. Improved quality of colonoscopy training by adoption of a UK-style training program with centres of excellence, incorporating newer, more objective assessment methods, a variety of training tools such as simulation, and rotations of trainees between metropolitan, rural, and public and private sector training facilities.
6. Incorporation of NHMRC guidelines into colonoscopy information systems to improve documentation and provide guideline recommendations at the point of care, use of gastroenterology nurse coordinators to facilitate compliance with guidelines, and provision of guideline-based colonoscopy referral letters for GPs.
7. Provision of information and education about the NBCSP/QBCSP and bowel cancer risk factors, including family history and polyp surveillance guidelines, for participants, GPs and proceduralists.
8. Improved referral of NBCSP/QBCSP participants found to have a high-risk family history of bowel cancer to appropriate genetics services.
Abstract:
Introduction: Emergency prehospital medical care providers are frontline health workers during emergencies. However, little is known about their attitudes, perceptions, and likely behaviors under emergency conditions. Understanding these attitudes and behaviors is crucial to mitigating the psychological and operational effects of biohazard events such as pandemic influenza, and will support the business continuity of essential prehospital services. Problem: This study was designed to investigate the association between knowledge and attitudes regarding avian influenza and the likely behavioral responses of Australian emergency prehospital medical care providers under pandemic conditions. Methods: Using a reply-paid postal questionnaire, the knowledge and attitudes of a national, stratified, random sample of the Australian emergency prehospital medical care workforce in relation to pandemic influenza were investigated. In addition to knowledge and attitudes, there were five measures of anticipated behavior during pandemic conditions: (1) preparedness to wear personal protective equipment (PPE); (2) preparedness to change role; (3) willingness to work; and likely refusal to work with colleagues who were exposed to (4) known and (5) suspected influenza. Multiple logistic regression models were constructed to determine the independent predictors of each of the anticipated behaviors, while controlling for other relevant variables. Results: Almost half (43%) of the 725 emergency prehospital medical care personnel who responded to the survey indicated that they would be unwilling to work during pandemic conditions; one-quarter indicated that they would not be prepared to work in PPE; and one-third would refuse to work with a colleague exposed to a known case of pandemic human influenza.
Willingness to work during a pandemic (OR = 1.41; 95% CI = 1.0–1.9) and willingness to change roles (OR = 1.44; 95% CI = 1.04–2.0) increased significantly with adequate knowledge about infectious agents generally. Refusal to work with exposed (OR = 0.48; 95% CI = 0.3–0.7) or potentially exposed (OR = 0.43; 95% CI = 0.3–0.6) colleagues decreased significantly with such knowledge. Confidence in the employer’s capacity to respond appropriately to a pandemic significantly increased willingness to work (OR = 2.83; 95% CI = 1.9–4.1), willingness to change roles during a pandemic (OR = 1.52; 95% CI = 1.1–2.1) and preparedness to wear PPE (OR = 1.68; 95% CI = 1.1–2.5), and significantly decreased the likelihood of refusing to work with colleagues exposed to suspected influenza (OR = 0.59; 95% CI = 0.4–0.9). Conclusions: These findings indicate that education and training alone will not adequately prepare the emergency prehospital medical workforce for a pandemic. It is crucial to address the concerns of ambulance personnel, and their perceived concerns about working with partners, in order to maintain an effective prehospital emergency medical care service during pandemic conditions.
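Adjusted odds ratios of this kind are obtained by exponentiating the coefficients of a multiple logistic regression. A hedged sketch on simulated survey data — the predictors, effect sizes and noise below are invented, and a very weak penalty stands in for unpenalized maximum likelihood:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 725  # sample size matching the survey

# Hypothetical binary predictors: adequate knowledge of infectious
# agents, and confidence in the employer's pandemic response.
knowledge = rng.integers(0, 2, n)
confidence = rng.integers(0, 2, n)

# Simulate willingness to work, with log-odds raised by both
# predictors (effect sizes chosen arbitrarily for the sketch).
log_odds = -0.5 + 0.34 * knowledge + 1.04 * confidence
willing = rng.random(n) < 1.0 / (1.0 + np.exp(-log_odds))

X = np.column_stack([knowledge, confidence])
# A very large C makes the L2 penalty negligible, approximating
# unpenalized maximum likelihood, so exp(coefficient) can be read
# as an adjusted odds ratio for each predictor.
model = LogisticRegression(C=1e6).fit(X, willing)
odds_ratios = np.exp(model.coef_[0])
```

In the study itself each of the five anticipated behaviors was modelled this way while controlling for other relevant variables.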
Abstract:
Unlicensed driving remains a serious problem in many jurisdictions; while it does not play a direct causative role in road crashes, it undermines driver licensing systems and is linked to other high-risk driving behaviours. Roadside licence check surveys represent the most direct means of estimating the prevalence of unlicensed driving. The current study involved the Queensland Police Service (QPS) checking the licences of 3,112 drivers intercepted at random breath testing operations across Queensland between February and April 2010. Data were matched with official licensing records from Transport and Main Roads (TMR) via the drivers’ licence number. In total, 2,914 (93.6%) records were matched, with the majority of the 198 unmatched cases representing international or interstate licence holders (n = 156), leaving 42 unknown cases. Among the drivers intercepted at the roadside, 20 (0.6%) were identified as being unlicensed at the time, while a further 11 (0.4%) were driving unaccompanied on a Learner licence. However, examination of TMR licensing records revealed that an additional 9 individuals (0.3%) had a current licence sanction but were not identified as unlicensed by QPS. Thus, in total, 29 drivers were unlicensed at the time, representing 0.9% of all drivers intercepted and 1% of those whose licence records could be checked. This is considerably lower than the involvement of unlicensed drivers in fatal and serious injury crashes in Queensland, which is consistent with other research confirming the increased crash risk of this group. However, the number of unmatched records suggests that the on-road survey may have underestimated the prevalence of unlicensed driving, so further development of the survey method is recommended.
Abstract:
Background: We have previously shown a high prevalence of oral anti-human papillomavirus type 16 (HPV-16) antibodies in women with HPV-associated cervical neoplasia. It was postulated that the HPV antibodies were initiated after HPV antigenic stimulation at the cervix via the common mucosal immune system. The present study aimed to further evaluate the effectiveness of oral fluid testing for detecting the mucosal humoral response to HPV infection and to advance our limited understanding of the immune response to HPV. Methods: The prevalence of oral HPV infection and of oral antibodies to HPV types 16, 18 and 11 was determined in a normal, healthy population of children, adolescents and adults, both male and female, attending a dental clinic. HPV types in buccal cells were determined by DNA sequencing. Oral fluid was collected from the gingival crevice of the mouth by the OraSure method. HPV-16, HPV-18 and HPV-11 antibodies in oral fluid were detected by virus-like particle-based enzyme-linked immunosorbent assay. As a reference group, 44 women with cervical neoplasia were included in the study. Results: Oral HPV infection was highest in children (9/114, 7.9%), followed by adolescents (4/78, 5.1%), and lowest in normal adults (4/116, 3.5%). The predominant HPV type found was HPV-13 (7/22, 31.8%), followed by HPV-32 (5/22, 22.7%). The prevalence of oral antibodies to HPV-16, HPV-18 and HPV-11 was low in children and increased substantially in adolescents and normal adults. Oral HPV-16 IgA was significantly more prevalent in women with cervical neoplasia (30/44, 68.2%) than in the women from the dental clinic (18/69, 26.1%; P = 0.0001). Significantly more adult men than women displayed oral HPV-16 IgA (30/47 compared with 18/69; OR 5.0, 95% CI 2.09-12.1, P < 0.001) and HPV-18 IgA (17/47 compared with 13/69; OR 2.4, 95% CI 0.97-6.2, P = 0.04).
Conclusion: The increased prevalence of oral HPV antibodies in adolescents compared with children was attributed to the onset of sexual activity. The increased prevalence of oral anti-HPV IgA in men compared with women was noteworthy, considering that reportedly fewer men than women make serum antibodies, and warrants further investigation. © 2006 Marais et al; licensee BioMed Central Ltd.
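The reported sex difference in oral HPV-16 IgA (30/47 men vs 18/69 women, OR 5.0) can be reproduced directly from the 2×2 counts. The sketch below uses a Woolf (log-normal) confidence interval; the abstract does not state which CI method was used, so the interval differs slightly from the published 2.09–12.1:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
    exposed group: a positive, b negative; unexposed: c positive, d negative."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Men: 30 of 47 positive; women: 18 of 69 positive.
or_, lo, hi = odds_ratio_ci(30, 47 - 30, 18, 69 - 18)
# or_ is exactly 5.0 for these counts
```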
Abstract:
Nutritional status in people with Parkinson’s disease (PD) has previously been assessed in a number of ways, including BMI, % weight loss and the Mini Nutritional Assessment (MNA). The symptoms of the disease, and the side effects of the medication used to manage them, result in a number of nutrition impact symptoms that can negatively influence intake. These include chewing and swallowing difficulties, lack of appetite, nausea, and taste and smell changes, among others. Community-dwelling people with PD, aged >18 years, were recruited (n=97, 61 M, 36 F). The Patient-Generated Subjective Global Assessment (PG-SGA) and the MNA were used to assess nutritional status. Weight, height, mid-arm circumference (MAC) and calf circumference were measured. Based on SGA, 16 (16.5%) were moderately malnourished (SGA B), while none were severely malnourished (SGA C). The MNA identified 2 (2.0%) as malnourished and 22 (22.7%) as at risk of malnutrition. Mean MNA scores differed between the three groups, F(2,37) = 7.30, p < .05, but not between SGA B (21.0 (2.9)) and MNA at-risk (21.8 (1.4)) participants. MAC and calf circumference also differed between the three groups, F(2,37) = 5.51, p < .05 and F(2,37) = 15.33, p < .05, but not between the SGA B (26.2 (4.2), 33.3 (2.8)) and MNA at-risk (28.4 (5.6), 36.4 (4.7)) participants. The MNA results are similar to those of other PD studies using the MNA, where the prevalence of malnutrition was between 0% and 2%, with 20-33% at risk of malnutrition. In this population, the PG-SGA may be more sensitive for assessing malnutrition where nutrition impact symptoms influence intake. With society’s increasing body size, it might also be more appropriate, as it does not rely on MAC and calf circumference measures.
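The F(2,37) comparisons above are one-way ANOVAs across the three nutritional-status groups. A SciPy sketch on invented scores — group sizes and means are loosely modelled on the abstract, and 20 + 10 + 10 participants yields the same F(2,37) degrees of freedom:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(4)

# Hypothetical MNA scores for three nutritional-status groups
# (well nourished, MNA "at risk", SGA B); values invented around
# the group means reported in the abstract.
well_nourished = rng.normal(25.0, 1.5, 20)
mna_at_risk = rng.normal(21.8, 1.4, 10)
sga_b = rng.normal(21.0, 2.9, 10)

# One-way ANOVA comparing mean MNA score across the three groups;
# with N = 40 and 3 groups this is an F(2, 37) test.
f_stat, p_value = f_oneway(well_nourished, mna_at_risk, sga_b)
```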
Abstract:
Purpose: To determine whether neuroretinal function differs in healthy persons with and without common risk gene variants for age-related macular degeneration (AMD) and no ophthalmoscopic signs of AMD, and to compare those findings with persons with manifest early AMD. Methods and Participants: Neuroretinal function was assessed with the multifocal electroretinogram (mfERG) (VERIS, Redwood City, CA) in 32 participants (22 healthy persons with no clinical signs of AMD and 10 early AMD patients). The 22 healthy participants with no AMD carried risk genotypes for CFH (rs380390) and/or ARMS2 (rs10490920). We used a slow-flash mfERG paradigm (3 inserted frames) and a 103-hexagon stimulus array. Recordings were made with DTL electrodes; fixation and eye movements were monitored online. Trough N1 to peak P1 (N1P1) response densities and P1 implicit times (ITs) were analysed in 5 concentric rings. Results: N1P1 response densities (mean ± SD) for concentric rings 1-3 were on average significantly higher in at-risk genotypes (ring 1: 17.97 nV/deg2 ± 1.9, ring 2: 11.7 nV/deg2 ± 1.3, ring 3: 8.7 nV/deg2 ± 0.7) than in those without risk (ring 1: 13.7 nV/deg2 ± 1.9, ring 2: 9.2 nV/deg2 ± 0.8, ring 3: 7.3 nV/deg2 ± 1.1) and in persons with early AMD (ring 1: 15.3 nV/deg2 ± 4.8, ring 2: 9.1 nV/deg2 ± 2.3, ring 3: 7.3 nV/deg2 ± 1.3) (p < 0.05). The group P1-ITs for ring 1 were on average delayed in the early AMD patients (36.4 ms ± 1.0) compared to healthy participants with (35.1 ms ± 1.1) or without risk genotypes (34.8 ms ± 1.3), although these differences were not significant. Conclusion: Neuroretinal function in persons with normal fundi can be differentiated into subgroups based on their genetics. Increased neuroretinal activity in persons who carry AMD risk genotypes may be due to genetically determined subclinical inflammatory and/or histological changes in the retina.
Assessment of neuroretinal function in healthy persons genetically susceptible to AMD may be a useful early biomarker before there is clinical manifestation of AMD.
Abstract:
Background Predicting protein subnuclear localization is a challenging problem. Previous approaches based on non-sequence information, including Gene Ontology annotations and kernel fusion, have their respective limitations. The aim of this work is twofold: one is to propose a novel individual feature extraction method; the other is to develop an ensemble method to improve prediction performance using comprehensive information represented as a high-dimensional feature vector obtained by 11 feature extraction methods. Methodology/Principal Findings A novel two-stage multiclass support vector machine is proposed to predict protein subnuclear localizations. It only considers those feature extraction methods based on amino acid classifications and physicochemical properties. In order to speed up our system, an automatic search method for the kernel parameter is used. The prediction performance of our method is evaluated on four datasets: the Lei dataset, a multi-localization dataset, the SNL9 dataset and a new independent dataset. In leave-one-out cross validation, the overall prediction accuracy is 75.2% for 6 localizations on the Lei dataset and 72.1% for 9 localizations on the SNL9 dataset, with 71.7% for the multi-localization dataset and 69.8% for the new independent dataset, respectively. Comparisons with existing methods show that our method performs better for both single-localization and multi-localization proteins and achieves more balanced sensitivities and specificities on large-size and small-size subcellular localizations. The overall accuracy improvements are 4.0% and 4.7% for single-localization proteins and 6.5% for multi-localization proteins. The reliability and stability of our classification model are further confirmed by permutation analysis. Conclusions It can be concluded that our method is effective and valuable for predicting protein subnuclear localizations. A web server has been designed to implement the proposed method.
It is freely available at http://bioinformatics.awowshop.com/snlpred_page.php.
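A minimal sketch of the evaluation scheme — a multiclass support vector machine scored by leave-one-out cross validation. The synthetic feature vectors, the single-stage SVM and the fixed RBF gamma are simplifying assumptions (the paper combines 11 feature extraction methods, a two-stage classifier and an automatic kernel-parameter search):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(2)

# Synthetic stand-in for protein feature vectors: 9 localization
# classes (as in SNL9), 10 proteins each, 20 features per protein.
n_classes, per_class, n_features = 9, 10, 20
X = np.vstack([rng.normal(loc=i, scale=2.0, size=(per_class, n_features))
               for i in range(n_classes)])
y = np.repeat(np.arange(n_classes), per_class)

# RBF-kernel multiclass SVM; gamma would be tuned by the automatic
# parameter search the abstract mentions, but is fixed here.
clf = SVC(kernel="rbf", gamma=0.01, C=10.0)

# Leave-one-out cross validation: one fit per sample, each scored
# on the single held-out protein.
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
overall_accuracy = scores.mean()
```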
Abstract:
Diet Induced Thermogenesis (DIT) is the energy expended consequent to meal consumption, and reflects the energy required for the processing and digestion of food consumed throughout each day. Although DIT is the total energy expended across a day in digestive processes for a number of meals, most studies measure thermogenesis in response to a single meal (Meal Induced Thermogenesis: MIT) as a representation of an individual’s thermogenic response to acute food ingestion. As a component of energy expenditure, DIT may have a contributing role in weight gain and weight loss. While the evidence is inconsistent, research has tended to reveal a suppressed MIT response in obese compared to lean individuals, which would identify individuals with an efficient storage of food energy and hence a greater tendency for weight gain. Appetite is another factor regulating body weight through its influence on energy intake. Preliminary research has shown a potential link between MIT and postprandial appetite, as both are responses to food ingestion and both depend on the macronutrient content of the food. There is a growing interest in understanding how both MIT and appetite are modified with changes in diet, activity levels and body size. However, the findings from MIT research have been highly inconsistent, potentially due to the vastly divergent protocols used for its measurement. Therefore, the main theme of this thesis was, firstly, to address some of the methodological issues associated with measuring MIT. Additionally, this thesis aimed to measure postprandial appetite simultaneously with MIT to test for any relationships between these meal-induced variables, and to assess changes that occur in MIT and postprandial appetite during periods of energy restriction (ER) and following weight loss. Two separate studies were conducted to achieve these aims.
Based on the increasing prevalence of obesity, it is important to develop accurate methodologies for measuring the components potentially contributing to its development and to understand the variability within these variables. Therefore, the aim of Study One was to establish a protocol for measuring the thermogenic response to a single test meal (MIT), as a representation of DIT across a day. This was done by determining the reproducibility of MIT with a continuous measurement protocol and determining the effect of measurement duration. The benefit of a fixed resting metabolic rate (RMR) baseline (a single measure of RMR used to calculate every subsequent measure of MIT) was also compared with separate baseline RMRs (measured immediately prior to each MIT test meal) to determine which method gives greater reproducibility. Subsidiary aims were to measure postprandial appetite simultaneously with MIT, to determine its reproducibility between days and to assess potential relationships between these two variables. Ten healthy individuals (5 males, 5 females, age = 30.2 ± 7.6 years, BMI = 22.3 ± 1.9 kg/m2, % fat mass = 27.6 ± 5.9%) undertook three testing sessions within a 1-4 week period. During the first visit, participants had their body composition measured using DXA for descriptive purposes, then had an initial 30-minute measure of RMR to familiarise them with the testing and to serve as a fixed baseline for calculating MIT. During the second and third testing sessions, MIT was measured. Measures of RMR and MIT were undertaken using a metabolic cart with a ventilated hood to measure energy expenditure via indirect calorimetry, with participants in a semi-reclined position.
The procedure on each MIT test day was: 1) a baseline RMR measured for 30 minutes; 2) a 15-minute break in the measure to consume a standard 576 kcal breakfast (54.3% CHO, 14.3% PRO, 31.4% FAT) comprising muesli, milk, toast, butter, jam and juice; and 3) six hours of measuring MIT, with two ten-minute breaks at 3 and 4.5 hours for participants to visit the bathroom. On the MIT test days, pre and post breakfast and then at 45-minute intervals, participants rated their subjective appetite, alertness and comfort on visual analogue scales (VAS). Prior to each test, participants were required to have fasted for 12 hours and to have undertaken no high-intensity physical activity for the previous 48 hours. Despite no significant group changes in the MIT response between days, individual variability was high, with an average between-day CV of 33%, which was not significantly improved (31%) by the use of a fixed RMR. The 95% limits of agreement, which ranged from 9.9% of energy intake (%EI) to -10.7%EI with the baseline RMRs and from 9.6%EI to -12.4%EI with the fixed RMR, indicated very large changes relative to the size of the average MIT response (MIT 1: 8.4%EI, 13.3%EI; MIT 2: 8.8%EI, 14.7%EI; baseline and fixed RMRs respectively). After just three hours, the between-day CV with the baseline RMR was 26%, which may indicate enhanced MIT reproducibility with shorter measurement durations. On average, 76, 89 and 96% of the six-hour MIT response was completed within three, four and five hours, respectively. Strong correlations were found between MIT at each of these time points and the total six-hour MIT (r = 0.990 to 0.998; P < 0.01). The proportion of the six-hour MIT completed at 3, 4 and 5 hours was reproducible between days (CVs ≤ 8.5%). This indicated that shorter measurement durations can be used on repeated occasions, with a similar percentage of the total response captured.
There was a lack of strong evidence of any relationship between the magnitude of the MIT response and subjective postprandial appetite. Given that a six-hour protocol places a considerable burden on participants, these results suggest that a post-meal measurement period of only three hours is sufficient to produce valid information on the metabolic response to a meal. However, while there was no mean change in MIT between test days, individual variability was large. Further research is required to better understand which factors best explain the between-day variability in this physiological measure. With such a high prevalence of obesity, dieting has become a common means of reducing body weight. However, during periods of energy restriction (ER), metabolic and appetite adaptations can occur which may impede weight loss. Understanding how metabolic and appetite factors change during ER and weight loss is important for designing optimal weight loss protocols. The purpose of Study Two was to measure the changes in the MIT response and subjective postprandial appetite during either continuous (CONT) or intermittent (INT) ER and following a post-diet energy balance (post-diet EB) phase. Thirty-six obese male participants were randomly assigned to either the CONT (age = 38.6 ± 7.0 years, weight = 109.8 ± 9.2 kg, fat mass = 38.2 ± 5.2%) or the INT diet group (age = 39.1 ± 9.1 years, weight = 107.1 ± 12.5 kg, fat mass = 39.6 ± 6.8%). The study was divided into three phases: a four-week baseline (BL) phase, during which participants were provided with a diet to maintain body weight; an ER phase lasting either 16 (CONT) or 30 (INT) weeks, during which participants were provided with a diet supplying 67% of their energy balance requirements to induce weight loss; and an eight-week post-diet EB phase, providing a diet to maintain body weight after weight loss. The INT ER phase was delivered as eight two-week blocks of ER interspersed with two-week blocks designed to achieve weight maintenance. 
Energy requirements for each phase were predicted from measured RMR and adjusted throughout the study to account for changes in RMR. All participants completed MIT and appetite tests during the BL and ER phases. Nine CONT and 15 INT participants completed the post-diet EB MIT tests, and 15 CONT and 14 INT participants completed the post-diet EB appetite tests. The MIT test day protocol was as follows: 1) a baseline RMR measured for 30 minutes, 2) a 15-minute break in the measure to consume a standard breakfast meal (874 kcal; 53.3% CHO, 14.5% PRO, 32.2% FAT), and 3) three hours of measuring MIT. MIT was calculated as the energy expenditure above the pre-meal RMR. Appetite tests were undertaken on a separate day using the same 576 kcal breakfast used in Study One. VAS were used to assess appetite pre and post breakfast, at one hour post breakfast and then a further three times at 45-minute intervals. Appetite ratings for hunger and fullness were calculated as both the intra-meal change in appetite and the area under the curve (AUC). The three-hour MIT responses at BL, ER and post-diet EB were 5.4 ± 1.4%EI, 5.1 ± 1.3%EI and 5.0 ± 0.8%EI, respectively, for the CONT group and 4.4 ± 1.0%EI, 4.7 ± 1.0%EI and 4.8 ± 0.8%EI for the INT group. Compared to BL, neither group showed significant changes in the MIT response during ER or post-diet EB. There were no significant time-by-group interactions (p = 0.17), indicating a similar response to ER and post-diet EB in both groups. Contrary to what was hypothesised, there was a significant increase in postprandial AUC fullness in response to ER in both groups (p < 0.05). However, there were no significant changes in any of the other postprandial hunger or fullness variables. Despite no changes in MIT in either the CONT or the INT group in response to ER or post-diet EB, and only a minor increase in postprandial AUC fullness, the individual changes in MIT and postprandial appetite in response to ER were large. 
However, those with the greatest MIT changes did not have the greatest changes in postprandial appetite. This study shows that postprandial appetite and MIT are unlikely to be altered during ER and are therefore unlikely to hinder weight loss. Additionally, there were no changes in MIT in response to weight loss, indicating that body weight did not influence the magnitude of the MIT response. There were large individual changes in both variables; however, further research is required to determine whether these changes were real compensatory responses to ER or simply between-day variation. Overall, the results of this thesis add to the current literature by showing the large variability of continuous MIT measurements, which makes it difficult to compare MIT between groups and in response to diet interventions. This thesis provides evidence that shorter measures may yield equally valid information about the total MIT response and can therefore be utilised in future research to reduce the burden of long measurement durations. This thesis indicates that MIT and postprandial subjective appetite are most likely independent of each other. It also shows that, on average, energy restriction was not associated with compensatory changes in MIT and postprandial appetite that would have impeded weight loss. However, the large inter-individual variability supports the need to examine individual responses in more detail.
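The postprandial appetite AUC summarised in Study Two is typically obtained with the trapezoidal rule over the VAS time points; a minimal sketch with hypothetical ratings (the exact timing scheme below is an assumption, not the study's schedule):

```python
def auc_trapezoid(times_min, ratings):
    """Area under the VAS rating curve via the trapezoidal rule (rating x minutes)."""
    auc = 0.0
    for i in range(1, len(times_min)):
        auc += (ratings[i - 1] + ratings[i]) / 2 * (times_min[i] - times_min[i - 1])
    return auc

# Hypothetical fullness ratings (0-100 mm VAS) at pre-meal, post-meal,
# then at roughly 45-minute intervals
times = [0, 15, 60, 105, 150, 195]
fullness = [20, 75, 60, 50, 42, 35]
total_auc = auc_trapezoid(times, fullness)
```

The intra-meal change described in the text is simply the post-breakfast rating minus the pre-breakfast rating (75 - 20 = 55 mm in this illustration).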
Abstract:
Several tests have been devised in attempts to detect behaviour modification due to training, supplements or diet in horses. These tests rely on subjective observations in combination with physiological measures, such as heart rate (HR) and plasma cortisol concentrations, but these measures do not definitively identify behavioural changes. The aim of the present studies was to develop an objective and relevant measure of horse reactivity. In Study 1, the HR responses of six geldings confined to individual stalls to auditory stimuli, designed to safely startle the horses and delivered over 6 days, were studied to determine whether peak HR, unconfounded by physical exertion, was a reliable measure of reactivity. Both mean (±SEM) resting HR (39.5 ± 1.9 bpm) and peak HR in response to being startled (82 ± 5.5 bpm) were found to be consistent across all horses over the 6 days. In Study 2, the HR, plasma cortisol concentrations and speed of departure from an enclosure (reaction speed, RS) of six mares were measured in response to a single stimulus presented daily over 6 days. Peak HR response (133 ± 4 bpm) was consistent over days for all horses, but RS increased (from 3.02 ± 0.72 m/s on Day 1 to 4.45 ± 0.53 m/s on Day 6; P = 0.005). There was no effect on plasma cortisol, so this variable was not studied further. In Study 3, using the six geldings from Study 1, the RS test was refined and a different startle stimulus was used each day. Again, there was no change in peak HR (97.2 ± 5.8 bpm) or RS (2.9 ± 0.2 m/s on Day 1 versus 3.0 ± 0.7 m/s on Day 6) over time. In the final study, mild sedation with acepromazine maleate (0.04 mg/kg BW i.v.) decreased peak HR in response to a startle stimulus when the horses (n = 8) were confined to a stall (P = 0.006), but not in an outdoor environment when the RS test was performed. However, RS was reduced by the mild sedation (P = 0.02). 
In conclusion, RS may be used as a practical and objective test to measure both reactivity and changes in reactivity in horses.
Abstract:
Background: Despite increasing diversity in pathways to adulthood, the choices available to young people are influenced by environmental, familial and individual factors, namely access to socioeconomic resources, family support, and mental and physical health status. Young people from families with higher socioeconomic position (SEP) are more likely to pursue tertiary education and delay entry to adulthood, whereas those from low socioeconomic backgrounds are less likely to attain higher education or training, and more likely to partner and become parents early. The first group are commonly termed ‘emerging adults’ and the latter ‘early starters’. Mental health disorders during this transition can seriously disrupt psychological, social and academic development as well as employment prospects. Depression, anxiety and most substance use disorders have early onset during adolescence and early adulthood, with approximately three quarters of lifetime psychiatric disorders having emerged by 24 years of age. Aims: This thesis aimed to explore the relationships between mental health, sociodemographic factors and family functioning during the transition to adulthood. Four areas were investigated: 1) the key differences between emerging adults and ‘early starters’ were examined, focusing on a series of social, economic and demographic factors as well as DSM-IV diagnoses; 2) methodological issues associated with the measurement of depression and anxiety in young adults were explored by comparing a quantitative measure of symptoms of anxiety and depression (Achenbach’s YSR and YASR internalising scales) with DSM-IV diagnosed depression and anxiety; 3) the association between family SEP and DSM-IV depression and anxiety was examined in relation to the different pathways to adulthood. 
4) Finally, the association between pregnancy loss (abortion or miscarriage) and DSM-IV diagnoses of common psychiatric disorders was assessed in young women who reported early parenting, who had experienced a pregnancy loss, or who had never been pregnant. Methods: Data were taken from the Mater University Study of Pregnancy (MUSP), a large birth cohort study started in 1981 in Brisbane, Australia. 7223 mothers and their children were assessed five times, at 6 months, 5, 14 and 21 years after birth. Over 3700 young adults, aged 18 to 23 years, were interviewed at the 21-year phase. Respondents completed an extensive series of self-report questionnaires and a computerised structured psychiatric interview. Three outcomes were assessed at the 21-year phase. Mental health disorders were diagnosed with a computerised structured psychiatric interview (CIDI-Auto); the prevalence of DSM-IV depression, anxiety and substance use disorders within the previous 12 months, during the transition (between the ages of 18 and 23 years) or over the lifetime was examined. The primary outcome, “current stage in the transition to adulthood”, was assessed using a measure conceptually constructed from the literature. The measure was based on important demographic markers, which defined four independent groups: emerging adults (single with no children and living with parents) and three categories of ‘early starter’: singles (with no children or partner, living independently), those with a partner (married or cohabiting but without children) and parents. Early pregnancy loss was assessed using a measure that also defined four independent groups based on pregnancy outcomes. This categorised the young women into those who were never pregnant, women who gave birth to a live child, and women who reported some form of pregnancy loss, either an abortion or a spontaneous miscarriage. A series of analyses were undertaken to test the study aims. 
Potential confounding and mediating factors were prospectively measured between the child’s birth and the 21-year phase. Binomial and multinomial logistic regression were used to estimate the risk of relevant outcomes, and associations were reported as odds ratios (OR) with 95% confidence intervals (95%CI). Key findings: The thesis makes a number of important contributions to our understanding of the transition to adulthood, particularly in relation to the mental health consequences associated with different pathways. Firstly, findings from the thesis clearly showed that young people who parented or partnered early fared worse than emerging adults across most of the economic and social factors as well as the common mental disorders. Young people who became early parents were more likely to experience recent anxiety (OR=2.0, 95%CI 1.5-2.8) and depression (OR=1.7, 95%CI 1.1-2.7) than emerging adults, after taking into account a range of confounding factors. Singles and those partnering early also had higher rates of lifetime anxiety and depression than emerging adults. Young people who partnered early but were without children had decreased odds of recent depression; this may be due to a protective effect of early marriage against depression. It was also found that young people who formed families early had an increased risk of cigarette smoking (parents OR=3.7, 95%CI 2.9-4.8) compared to emerging adults, but not of heavy alcohol use (parents OR=0.4, 95%CI 0.3-0.6) or recent illicit drug use. The high rates of cigarette smoking and tobacco use disorders in ‘early starters’ were explained by common risk factors related to early adversity and lower SEP. 
Having a child and early marriage may well function as a ‘turning point’ for some young people; it is not clear whether this is due to a conscious decision to disengage from a previous ‘substance using’ lifestyle or simply to their no longer having the time to devote to such activities because of child care. In relation to the methodological issues associated with assessing common mental disorders in young adults, it was found that although the Achenbach empirical internalising scales successfully predicted both later DSM-IV depression (YSR OR=2.3, 95%CI 1.7-3.1) and concurrently diagnosed depression (YASR OR=6.9, 95%CI 5.0-9.5) and anxiety (YASR OR=5.1, 95%CI 3.8-6.7), the scales discriminated poorly between young people with and without DSM-IV diagnosed mood disorder. Sensitivity values (the proportion of true positives) for the internalising scales were surprisingly low. Only about a third of young people with current DSM-IV depression (between 34% and 42%, depending on the scale) were correctly identified as cases by the YASR internalising scales, and only about a quarter with a current anxiety disorder (23% to 31%) were correctly identified. Use of the DSM-oriented scales increased sensitivity only marginally (by 2-8% for depression and 2-6% for anxiety) over the standard Achenbach scales, despite the fact that the DSM-oriented scales were originally developed to overcome the poor prediction of DSM-IV diagnoses by the Achenbach scales. The internalising scales, both standard and DSM-oriented, were much more effective at identifying young people with comorbid depression and anxiety, with ORs of 10.1 to 21.7 depending on the internalising scale used. SEP is an important predictor of both an early transition to adulthood and the experience of anxiety during that time. Family income during adolescence was a strong predictor of early parenting and partnering before age 24, but not of early independent living. 
Compared to those from families in the upper income quintile, young people from low-income families were nearly twice as likely to live with a partner and four times more likely to become parents (ORs ranged from 2.6 to 4.0). This association remained after adjusting for current employment and education level. Children raised in low-income families were 30% more likely to have an anxiety disorder (OR=1.3, 95%CI 0.9-1.9), but not depression, as young adults when compared to children from wealthier families. Emerging adults and ‘early starters’ from low-income families did not differ in their likelihood of having a later anxiety disorder. Young women reporting a pregnancy loss had nearly three times the odds of experiencing a lifetime illicit drug disorder (excluding cannabis) [abortion OR=3.6, 95%CI 2.0-6.7; miscarriage OR=2.6, 95%CI 1.2-5.4]. Abortion was associated with alcohol use disorder (OR=2.1, 95%CI 1.3-3.5) and 12-month depression (OR=1.9, 95%CI 1.1-3.1). These findings suggest that the association identified by Fergusson et al. between abortion and later psychiatric disorders in young women may be due to pregnancy loss and not to abortion per se. Conclusion: Findings from this thesis support the view that young people who parent or partner early carry a greater burden of depression and anxiety than emerging adults. As well, young women experiencing pregnancy loss, from either abortion or miscarriage, are more likely to experience depression and anxiety than those who give birth to a live infant or who have never been pregnant. Depression, anxiety and substance use disorders often go unrecognised and untreated in young people; this is especially true for young people of lower SEP. Early identification of these common mental health disorders is important, as depression and anxiety experienced during the transition to adulthood have been found to seriously disrupt an individual’s social, educational and economic prospects in later life.
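The odds ratios with 95% confidence intervals reported throughout these findings are derived from logistic regression coefficients; a minimal sketch of that conversion (the coefficient and standard error below are hypothetical, chosen to give an OR near 2.0):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic-regression coefficient and its Wald interval
    to obtain an odds ratio with a 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical coefficient and standard error
or_est, ci_lo, ci_hi = odds_ratio_ci(0.69, 0.16)
```

A confidence interval that excludes 1.0 (e.g. 1.5-2.8) indicates a statistically significant association, whereas one that spans 1.0 (e.g. 0.9-1.9, as for anxiety and low income above) does not.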
Abstract:
It has been reported that poor nutritional status, in the form of weight loss and resulting changes in body mass index (BMI), is an issue for people with Parkinson's disease (PWP). The symptoms of Parkinson's disease (PD) and the side effects of PD medication have been implicated in the aetiology of nutritional decline. However, the evidence on which these claims are based is, on one hand, contradictory and, on the other, restricted primarily to otherwise healthy PWP. Despite the claims that PWP suffer from poor nutritional status, evidence to inform nutrition-related care for the management of malnutrition in PWP is lacking. The aims of this thesis were to better quantify the extent of poor nutritional status in PWP, to determine the important factors differentiating the well-nourished from the malnourished, and to evaluate the effectiveness of an individualised nutrition intervention on nutritional status. Phase DBS: Nutritional status in people with Parkinson's disease scheduled for deep-brain stimulation surgery. The pre-operative rate of malnutrition was determined in a convenience sample of PWP scheduled for deep-brain stimulation (DBS) surgery; poorly controlled PD symptoms may result in a higher risk of malnutrition in this sub-group of PWP. Fifteen patients (11 male; median age 68.0 (42.0 – 78.0) years; median PD duration 6.75 (0.5 – 24.0) years) participated, and data were collected during hospital admission for the DBS surgery. The scored PG-SGA was used to assess nutritional status, anthropometric measures (weight, height, mid-arm circumference, waist circumference, BMI) were taken, and body composition was measured using bioelectrical impedance spectroscopy (BIS). Six (40%) of the participants were malnourished (SGA-B), while 53% reported significant weight loss following diagnosis. BMI was significantly different between SGA-A and SGA-B (25.6 vs 23.0 kg/m2, p<.05). 
There were no differences in any other variables, including PG-SGA score and the presence of non-motor symptoms. The conclusion was that malnutrition in this group is higher than reported in other studies of PWP, and that it is under-recognised. As poorer pre-operative nutritional status is associated with poorer outcomes in other surgeries, it might be beneficial to identify patients at nutritional risk prior to surgery so that appropriate nutrition interventions can be implemented. Phase I: Nutritional status in community-dwelling adults with Parkinson's disease. The rate of malnutrition was determined in community-dwelling adults (>18 years) with Parkinson's disease. One hundred and twenty-five PWP (74 male; median age 70.0 (35.0 – 92.0) years; median PD duration 6.0 (0.0 – 31.0) years) participated. The scored PG-SGA was used to assess nutritional status, and anthropometric measures (weight, height, mid-arm circumference (MAC), calf circumference, waist circumference, BMI) were taken. Nineteen (15%) of the participants were malnourished (SGA-B). All anthropometric indices were significantly different between SGA-A and SGA-B (BMI 25.9 vs 20.0 kg/m2; MAC 29.1 vs 25.5 cm; waist circumference 95.5 vs 82.5 cm; calf circumference 36.5 vs 32.5 cm; all p<.05). The PG-SGA score was also significantly higher in the malnourished (2 vs 8 for SGA-A vs SGA-B, p<.05). The nutrition impact symptoms which differentiated between well-nourished and malnourished were no appetite, constipation, diarrhoea, problems swallowing and feeling full quickly. This study concluded that malnutrition in community-dwelling PWP is higher than that documented in the community-dwelling elderly (2 – 11%), yet is likely to be under-recognised. Nutrition impact symptoms play a role in reduced intake. Appropriate screening and referral processes should be established for early detection of those at risk. 
Phase I: Nutrition assessment tools in people with Parkinson's disease. A number of validated and reliable nutrition screening and assessment tools are available, but none has been evaluated in PWP. In the sample described above, the World Health Organisation (WHO) BMI cut-off (≤18.5 kg/m2), age-specific BMI cut-offs (≤18.5 kg/m2 for under 65 years, ≤23.5 kg/m2 for 65 years and older) and the revised Mini-Nutritional Assessment short form (MNA-SF) were evaluated as nutrition screening tools. The PG-SGA (including the SGA classification) and the full MNA were evaluated as nutrition assessment tools, using the SGA classification as the gold standard. For screening, the MNA-SF performed best, with a sensitivity (Sn) of 94.7% and a specificity (Sp) of 78.3%. For assessment, the PG-SGA with a cut-off score of 4 (Sn 100%, Sp 69.8%) performed better than the MNA (Sn 84.2%, Sp 87.7%). As the MNA has been recommended more for use as a nutrition screening tool, the MNA-SF might be more appropriate, and it takes less time to complete. The PG-SGA might be useful for informing and monitoring nutrition interventions. Phase I: Predictors of poor nutritional status in people with Parkinson's disease. A number of assessments were conducted as part of the Phase I research, including those for the severity of PD motor symptoms, cognitive function, depression, anxiety, non-motor symptoms, constipation, freezing of gait and the ability to carry out activities of daily living; a higher score in each of these assessments indicates greater impairment. In addition, information about medical conditions, medications, age, age at PD diagnosis and living situation was collected. These were compared between those classified as SGA-A and as SGA-B, and regression analysis was used to identify which factors were predictive of malnutrition (SGA-B). 
Differences between the groups included disease severity (more severe disease in 4% of SGA-A vs 21% of SGA-B, p<.05), activities of daily living score (13 SGA-A vs 18 SGA-B, p<.05), depressive symptom score (8 SGA-A vs 14 SGA-B, p<.05) and gastrointestinal symptoms (4 SGA-A vs 6 SGA-B, p<.05). Significant predictors of malnutrition according to SGA were age at diagnosis (OR 1.09, 95% CI 1.01 – 1.18), amount of dopaminergic medication per kg body weight (mg/kg) (OR 1.17, 95% CI 1.04 – 1.31), more severe motor symptoms (OR 1.10, 95% CI 1.02 – 1.19), less anxiety (OR 0.90, 95% CI 0.82 – 0.98) and more depressive symptoms (OR 1.23, 95% CI 1.07 – 1.41). Significant predictors of a higher PG-SGA score included living alone (β=0.14, 95% CI 0.01 – 0.26), more depressive symptoms (β=0.02, 95% CI 0.01 – 0.02) and more severe motor symptoms (β=0.01, 95% CI 0.01 – 0.02). More severe disease is associated with malnutrition, and this may be compounded by a lack of social support. Phase II: Nutrition intervention. Nineteen of the people identified in Phase I as requiring nutrition support were included in Phase II, in which a nutrition intervention was conducted. Nine participants were in the standard care (SC) group, which received an information sheet only; the other 10 were in the intervention (INT) group, which received individualised nutrition information and weekly follow-up. The INT group gained 2.2% of starting body weight over the 12-week intervention period, resulting in significant increases in weight, BMI, mid-arm circumference and waist circumference. The SC group gained 1% of starting weight over the 12 weeks, which did not result in any significant changes in anthropometric indices. Energy and protein intake (18.3 kJ/kg vs 3.8 kJ/kg and 0.3 g/kg vs 0.15 g/kg) increased in both groups, although the increase in protein intake was significant only in the SC group, and the changes in intake did not differ between the groups. 
There were no significant changes in any motor or non-motor symptoms, or in "off" times or dyskinesias, in either group. Aspects of quality of life, especially emotional well-being, also improved over the 12 weeks. This thesis makes a significant contribution to the evidence base regarding the presence of malnutrition in Parkinson's disease and the identification of those who would potentially benefit from nutrition screening and assessment. The nutrition intervention demonstrated that a traditional high-protein, high-energy approach to the management of malnutrition resulted in improved nutritional status and anthropometric indices, with no effect on the presence of Parkinson's disease symptoms and a positive effect on quality of life.
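The sensitivities and specificities reported for the screening tools above (e.g. MNA-SF Sn 94.7%, Sp 78.3%) are computed against the SGA gold standard; a minimal sketch with hypothetical screening and gold-standard labels:

```python
def sensitivity_specificity(screen_pos, gold_pos):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), both as percentages,
    comparing screening-tool results against gold-standard labels."""
    tp = sum(s and g for s, g in zip(screen_pos, gold_pos))
    fn = sum((not s) and g for s, g in zip(screen_pos, gold_pos))
    tn = sum((not s) and (not g) for s, g in zip(screen_pos, gold_pos))
    fp = sum(s and (not g) for s, g in zip(screen_pos, gold_pos))
    return 100 * tp / (tp + fn), 100 * tn / (tn + fp)

# Hypothetical data: screen = at-risk by a screening tool, gold = malnourished (SGA-B)
screen = [True, True, False, True, False, False, True, False]
gold   = [True, True, True,  False, False, False, False, False]
sn, sp = sensitivity_specificity(screen, gold)
```

For screening, high sensitivity is prioritised (missing a malnourished patient is costlier than a false positive), which is why the MNA-SF's Sn of 94.7% made it the preferred screening tool despite its lower specificity.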
Abstract:
Background: Several lines of evidence suggest that transcription factors are involved in the pathogenesis of Multiple Sclerosis (MS), but a complete mapping of the whole network has been elusive. One reason is that there are several clinical subtypes of MS, and transcription factors involved in one subtype may not be involved in others. We investigated the possibility that this network could be mapped using microarray technologies and modern bioinformatics methods on a whole-blood dataset from 99 untreated MS patients (36 relapsing-remitting MS, 43 primary progressive MS, and 20 secondary progressive MS) and 45 age-matched healthy controls. Methodology/Principal Findings: We used two different analytical methodologies, a differential expression analysis and a differential co-expression analysis, which converged on a significant number of regulatory motifs that are statistically overrepresented in genes that are either differentially expressed or differentially co-expressed between cases and controls (e.g. V$KROX_Q6, p-value < 3.31E-6; V$CREBP1_Q2, p-value < 9.93E-6; V$YY1_02, p-value < 1.65E-5). Conclusions/Significance: Our analysis uncovered a network of transcription factors that potentially dysregulate several genes in MS or in one or more of its disease subtypes. Analysing the published literature, we found that these transcription factors are involved in early T-lymphocyte specification and commitment as well as in oligodendrocyte dedifferentiation and development. The most significant transcription factor motifs were those for the early growth response (EGR/KROX) family, ATF2, YY1 (Yin Yang 1), the E2F-1/DP-1 and E2F-4/DP-2 heterodimers, SOX5, and the CREB and ATF families.
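Motif overrepresentation p-values of the kind quoted above are commonly computed with a one-sided hypergeometric test; the sketch below illustrates that general approach (it is an assumption, not the paper's exact method, and the counts are hypothetical):

```python
from math import comb

def hypergeom_pval(k, n, K, N):
    """One-sided overrepresentation p-value P(X >= k): the probability of seeing
    at least k motif-bearing genes in a sample of n genes drawn from a background
    of N genes, K of which carry the motif."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)

# Hypothetical counts: 40 of 200 differentially expressed genes carry a motif
# that is present in 500 of 10000 background genes
p = hypergeom_pval(40, 200, 500, 10000)
```

A small p here indicates that the motif appears among the differentially (co-)expressed genes far more often than chance sampling from the background would predict, implicating its binding transcription factor.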