59 results for Models and Methods
Abstract:
The turn within urban policy to address increasingly complex social, economic and environmental problems has exposed some of the fragility of traditional measurement models and their reliance on the rational paradigm. This article looks at the experiences of the European Union (EU) Programme for Peace and Reconciliation in Northern Ireland and its particular attempt to construct new District Partnerships to deliver area-based regeneration programmes. It highlights the need to combine instrumental and interpretative evaluation methods in an attempt to explain the wider contribution of governance to conflict resolution and participatory practice in local development. It concludes by highlighting the value of conceptual approaches that deal with the politics of evaluation and the distributional effects of policy interventions designed to create new relationships within and between multiple stakeholders.
Abstract:
Timely and individualized feedback on coursework is desirable from a student perspective as it facilitates formative development and encourages reflective learning practice. Faculty, however, face a significant and potentially time-consuming challenge when teaching larger cohorts if they are to provide feedback which is timely, individualized and detailed. Additionally, for subjects which assess non-traditional submissions, such as Computer-Aided Design (CAD), the methods for assessment and feedback tend not to be so well developed or optimized. Issues can also arise over the consistency of the feedback provided. Evaluations of computer-assisted feedback in other disciplines (Denton et al., 2008; Croft et al., 2001) have shown that students prefer this method of feedback to traditional “red pen” marking and also that such methods can be more time efficient for faculty.
Herein, approaches are described which make use of technology and additional software tools to speed up, simplify and automate assessment and the provision of feedback for large cohorts of first- and second-year engineering students studying modules where CAD files are submitted electronically. A range of automated methods are described and compared with more “manual” approaches. Specifically, one method uses an application programming interface (API) to interrogate SolidWorks models and extract information into an Excel spreadsheet, which is then used to send feedback emails automatically. Another method uses audio recordings made during model interrogation, which reduces the time required while increasing the level of detail provided as feedback.
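To illustrate the kind of automation described above, the following is a minimal sketch (not the authors' implementation) of the final feedback-dispatch step, assuming the API interrogation has already exported per-student results to a spreadsheet; the file name, column names and mail relay are placeholder assumptions.

```python
# Illustrative sketch only: assumes model metrics extracted via the SolidWorks API have
# already been exported to a spreadsheet with hypothetical columns "student_email",
# "mass_error_pct" and "feature_count", and that an SMTP relay is available.
import smtplib
from email.mime.text import MIMEText

import pandas as pd

results = pd.read_excel("cad_marking.xlsx")  # hypothetical export produced by the API step

with smtplib.SMTP("smtp.example.ac.uk") as server:  # placeholder mail relay
    for _, row in results.iterrows():
        feedback = (
            f"Mass property error: {row['mass_error_pct']:.1f}%\n"
            f"Features detected: {row['feature_count']}\n"
        )
        msg = MIMEText(feedback)
        msg["Subject"] = "CAD assignment feedback"
        msg["To"] = row["student_email"]
        msg["From"] = "cad-feedback@example.ac.uk"
        server.send_message(msg)
```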
Limitations of these methods and problems encountered are discussed, along with a quantified assessment of the time-saving efficiencies achieved.
Abstract:
We consider the problem of train planning or scheduling for large, busy, complex train stations, which are common in Europe and elsewhere, though not in North America. We develop the constraints and objectives for this problem, but these are too computationally complex to solve by standard combinatorial search or integer programming methods. Also, the problem is somewhat political in nature, that is, it does not have a clear objective function because it involves multiple train operators with conflicting interests. We therefore develop scheduling heuristics analogous to those successfully adopted by train planners using "manual" methods. We tested the model and algorithms by applying them to a typical large station that exhibits most of the complexities found in practice. The results compare well with those found by traditional methods, and take account of cost and preference trade-offs not handled by those methods. With successive refinements, the algorithm eventually took only a few seconds to run, the time depending on the version of the algorithm and the scheduling problem. The scheduling models and algorithms developed and tested here can be used on their own, or as key components of a more general system for train scheduling for a rail line or network.
Train scheduling for a busy station includes ensuring that there are no conflicts between several hundred trains per day going in and out of the station on intersecting paths from multiple in-lines and out-lines to multiple platforms, while ensuring that each train is allowed at least its minimum required headways, dwell time, turnaround time and trip time. This has to be done while minimizing (costs of) deviations from desired times, platforms or lines, allowing for conflicts due to through-platforms, dead-end platforms, multiple sub-platforms, and possible constraints due to infrastructure, safety or business policy.
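As a rough illustration of the kind of heuristic reasoning involved (not the algorithm developed in the paper), the sketch below greedily assigns trains to platforms while enforcing a minimum headway; the trains, platforms and headway value are invented.

```python
# Minimal sketch, not the authors' algorithm: a greedy heuristic that assigns each train
# to its preferred platform if the minimum headway is respected, otherwise to any
# feasible platform. Times are minutes after midnight; the data are invented.
from dataclasses import dataclass

MIN_HEADWAY = 3  # assumed minimum separation (minutes) between occupations of a platform

@dataclass
class Train:
    id: str
    arrival: int
    departure: int
    preferred_platform: int

def conflicts(train, occupation):
    """True if the train overlaps (plus headway) any interval already on the platform."""
    return any(train.arrival < dep + MIN_HEADWAY and arr < train.departure + MIN_HEADWAY
               for arr, dep in occupation)

def schedule(trains, platforms):
    occupations = {p: [] for p in platforms}
    plan = {}
    for t in sorted(trains, key=lambda t: t.arrival):
        candidates = [t.preferred_platform] + [p for p in platforms if p != t.preferred_platform]
        for p in candidates:
            if not conflicts(t, occupations[p]):
                occupations[p].append((t.arrival, t.departure))
                plan[t.id] = p
                break
        else:
            plan[t.id] = None  # unplaced: a real system would retime or relax constraints
    return plan

print(schedule([Train("IC101", 600, 610, 1), Train("RE202", 605, 615, 1)], platforms=[1, 2]))
```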
Abstract:
Aims: We use observations and models of molecular D/H ratios to probe the physical conditions and chemical history of the gas and to differentiate between gas-phase and grain-surface chemical processing in star-forming regions. Methods: As a follow-up to previous observations of HDCO/H2CO and DCN/HCN ratios in a selection of low-mass protostellar cores, we have measured D2CO/H2CO and N2D+/N2H+ ratios in these same sources. For comparison, we have also measured N2D+/N2H+ ratios towards several starless cores and have searched for N2D+ and deuterated formaldehyde towards hot molecular cores (HMCs) associated with high-mass star formation. We compare our results with predictions from detailed chemical models, and with other observations made in these sources. Results: Towards the starless cores and low-mass protostellar sources we have found very high N2D+ fractionation, which suggests that the bulk of the gas in these regions is cold and heavily depleted. The non-detections of N2D+ in the HMCs indicate higher temperatures. We did detect HDCO towards two of the HMCs, with abundances 1-3% of H2CO. These are the first detections of deuterated formaldehyde in high-mass sources since Turner (1990) measured HDCO/H2CO and D2CO/H2CO towards the Orion Compact Ridge. Figures 1-5 are only available in electronic form at http://www.aanda.org
Abstract:
Exam timetabling is one of the most important administrative activities that takes place in academic institutions. In this paper we present a critical discussion of the research on exam timetabling in the last decade or so. The last ten years have seen an increased level of attention on this important topic, with a range of significant contributions to the scientific literature in terms of both theoretical and practical aspects. The main aim of this survey is to highlight the new trends and key research achievements of the last decade. We also aim to outline a range of relevant important research issues and challenges that have been generated by this body of work.
We first define the problem and review previous survey papers. Algorithmic approaches are then classified and discussed. These include early techniques (e.g. graph heuristics) and state-of-the-art approaches including meta-heuristics, constraint-based methods, multi-criteria techniques, hybridisations, and recent new trends concerning neighbourhood structures, which are motivated by raising the generality of the approaches. Summarising tables are presented to provide an overall view of these techniques. We discuss some issues concerning decomposition techniques, system tools and languages, models and complexity. We also present and discuss some important issues which have come to light concerning the public benchmark exam timetabling data. Different versions of problem datasets with the same name have been circulating in the scientific community in the last ten years, which has generated a significant amount of confusion. We clarify the situation and present a re-naming of the widely studied datasets to avoid future confusion. We also highlight which research papers have dealt with which dataset. Finally, we draw upon our discussion of the literature to present a (non-exhaustive) range of potential future research directions and open issues in exam timetabling research.
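As a concrete example of the early graph-heuristic techniques surveyed above (a generic illustration only, not any specific published method), the sketch below applies a largest-degree-first colouring to a toy conflict graph in which exams sharing students must receive different timeslots.

```python
# Minimal sketch of a largest-degree-first graph-colouring heuristic for exam timetabling.
# Exams that share students are joined by an edge; colours correspond to timeslots.
def largest_degree_first(conflicts):
    """conflicts: dict exam -> set of exams sharing at least one student."""
    order = sorted(conflicts, key=lambda e: len(conflicts[e]), reverse=True)
    slots = {}
    for exam in order:
        used = {slots[n] for n in conflicts[exam] if n in slots}
        slot = 0
        while slot in used:
            slot += 1
        slots[exam] = slot
    return slots

# Toy instance (invented): maths conflicts with physics and biology, and so on.
example = {
    "maths": {"physics", "biology"},
    "physics": {"maths", "chemistry"},
    "biology": {"maths"},
    "chemistry": {"physics"},
}
print(largest_degree_first(example))
```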
Abstract:
Objective: The aim was to investigate the association between periodontal health and the serum levels of various antioxidants including carotenoids, retinol and vitamin E in a homogenous group of Western European men.
Materials and Methods: A representative sample of 1258 men aged 60-70 years, drawn from the population of Northern Ireland, was examined between 2001 and 2003. Each participant had six or more teeth, completed a questionnaire and underwent a clinical periodontal examination. Serum lipid-soluble antioxidant levels were measured by high-performance liquid chromatography with diode array detection. Multivariable analysis was carried out using logistic regression with adjustment for possible confounders. Models were constructed using two measures of periodontal status (low- and high-threshold periodontitis) as dependent variables and the fifths of each antioxidant as a predictor variable.
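The following sketch, run on synthetic data with invented variable names, illustrates the general form of such a model: a logistic regression of periodontitis status on antioxidant fifths with the highest fifth as the reference category and adjustment for covariates. It is not the study's analysis code.

```python
# Illustrative sketch only, on synthetic data broadly mirroring the modelling strategy
# described above; all variable names, effect sizes and covariates are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1258
df = pd.DataFrame({
    "cryptoxanthin": rng.lognormal(mean=0.0, sigma=0.5, size=n),
    "age": rng.integers(60, 71, size=n),
    "smoker": rng.integers(0, 2, size=n),
})
df["quintile"] = pd.qcut(df["cryptoxanthin"], 5, labels=False) + 1  # fifths of the antioxidant

# Synthetic outcome: risk decreases with higher antioxidant fifth, increases with smoking.
logit_p = -1.0 - 0.3 * (df["quintile"] - 3) + 0.5 * df["smoker"]
df["periodontitis"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("periodontitis ~ C(quintile, Treatment(reference=5)) + age + smoker",
                  data=df).fit()
print(np.exp(model.params))  # exponentiated coefficients: ORs for each fifth vs the highest, plus covariates
```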
Results: The levels of α- and β-carotene, β-cryptoxanthin and zeaxanthin were highly significantly lower in the men with low-threshold periodontitis (p<0.001). These carotenoids were also significantly lower in high-threshold periodontitis. There were no significant differences in the levels of lutein, lycopene, α- and γ-tocopherol or retinol in relation to periodontitis. In fully adjusted models, there was an inverse relationship between a number of carotenoids (α- and β-carotene and β-cryptoxanthin) and low-threshold periodontitis. β-Carotene and β-cryptoxanthin were the only antioxidants that were associated with an increased risk of high-threshold severe periodontitis. The adjusted odds ratio for high-threshold periodontitis in the lowest fifth relative to the highest fifth of β-cryptoxanthin was 4.02 (p=0.003).
Conclusion: It is concluded that low serum levels of a number of carotenoids, in particular β-cryptoxanthin and β-carotene, were associated with an increased prevalence of periodontitis in this homogenous group of 60-70-year-old Western European men.
Abstract:
Glucagon-like peptide-1(7-36)amide (tGLP-1) is an important insulin-releasing hormone of the enteroinsular axis which is secreted by endocrine L-cells of the small intestine following nutrient ingestion. The present study has evaluated tGLP-1 in the intestines of normal and diabetic animal models and estimated the proportion present in glycated form. Total immunoreactive tGLP-1 levels in the intestines of hyperglycaemic hydrocortisone-treated rats, streptozotocin-treated mice and ob/ob mice were similar to age-matched controls. Affinity chromatographic separation of glycated and non-glycated proteins in intestinal extracts, followed by radioimmunoassay using a fully cross-reacting antiserum, demonstrated the presence of glycated tGLP-1 within the intestinal extracts of all control animals (approximately 19% of total tGLP-1 content). Chemically induced and spontaneous animal models of diabetes were found to possess significantly greater levels of glycated tGLP-1 than controls, corresponding to between 24-71% of the total content. These observations suggest that glycated tGLP-1 may be of physiological significance, given that such N-terminal modification confers resistance to DPP IV inactivation and degradation, extending the very short half-life (
Abstract:
Objective: to assess the separate contributions of marital status, living arrangements and the presence of children to subsequent admission to a care home.
Design and methods: a longitudinal study derived from the health card registration system and linked to the 2001 Census, comprising 28% of the Northern Ireland population, was analysed using Cox regression to assess the likelihood of admission for 51,619 older people in the 6 years following the census. Cohort members’ age, sex, marital and health status and relationship to other household members were analysed.
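For illustration only, the sketch below fits a Cox proportional-hazards model of this general form to synthetic data using the lifelines package; the covariates, effect sizes and censoring scheme are invented and do not reflect the study data.

```python
# Minimal sketch on synthetic data (not the census-linked study data): a Cox
# proportional-hazards model of time to care-home admission with living
# arrangement and age as covariates.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "age": rng.integers(65, 95, size=n),
    "lives_alone": rng.integers(0, 2, size=n),
})
# Synthetic follow-up: higher hazard with age and living alone; censor at 6 years.
baseline = rng.exponential(scale=40, size=n)
time = baseline / np.exp(0.05 * (df["age"] - 65) + 0.5 * df["lives_alone"])
df["duration"] = np.minimum(time, 6.0)
df["admitted"] = (time <= 6.0).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="admitted")
cph.print_summary()  # hazard ratios are exp(coef)
```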
Results: there were 2,138 care home admissions, a rate of 7.4 admissions per thousand person-years. Those living alone had the highest likelihood of admission [hazard ratio (HR) compared with living with a partner 1.66 (95% CI 1.48, 1.87)], but there was little difference between the never-married and the previously married. Living with children offered similar protection to living with a partner (HR 0.97; 95% CI 0.81, 1.16). The presence of children reduced admissions especially for married couples (HR 0.67; 95% CI 0.54, 0.83; models adjusting for age, gender and health). Women were more likely to be admitted, though there were no gender differences for people living alone or those co-habiting with siblings.
Implications: the presence of potential caregivers within the home, rather than those living elsewhere, is a major factor determining admission to a care home. Further research should concentrate on the health and needs of these co-residents.
Abstract:
BACKGROUND: To date, there are no clinically reliable predictive markers of response to the current treatment regimens for advanced colorectal cancer. The aim of the current study was to compare and assess the power of transcriptional profiling using a generic microarray and a disease-specific transcriptome-based microarray. We also examined the biological and clinical relevance of the disease-specific transcriptome.
METHODS: DNA microarray profiling was carried out on isogenic sensitive and 5-FU-resistant HCT116 colorectal cancer cell lines using the Affymetrix HG-U133 Plus2.0 array and the Almac Diagnostics Colorectal cancer disease specific Research tool. In addition, DNA microarray profiling was also carried out on pre-treatment metastatic colorectal cancer biopsies using the colorectal cancer disease specific Research tool. The two microarray platforms were compared based on detection of probesets and biological information.
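As a simplified illustration of a platform comparison based on probeset detection (not the analysis pipeline used in the study), the sketch below counts probesets detected on each platform and their overlap, using invented detection-call matrices.

```python
# Illustrative sketch only: comparing two platforms by the number of probesets flagged
# as detected and by their overlap. The call matrices are invented stand-ins for the
# normalised platform outputs.
import pandas as pd

# Hypothetical detection calls: rows = probesets, columns = replicate arrays (True = detected).
generic = pd.DataFrame({"rep1": [True, False, True, False],
                        "rep2": [True, False, True, True]},
                       index=["TS1", "TS2", "TS3", "TS4"])
disease_specific = pd.DataFrame({"rep1": [True, True, True, False],
                                 "rep2": [True, True, True, True]},
                                index=["TS1", "TS2", "TS3", "TS4"])

def detected(calls, min_reps=2):
    """Probesets detected on at least `min_reps` replicate arrays."""
    return set(calls[calls.sum(axis=1) >= min_reps].index)

gen, dsa = detected(generic), detected(disease_specific)
print(f"generic: {len(gen)} detected, disease-specific: {len(dsa)} detected, "
      f"shared: {len(gen & dsa)}")
```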
RESULTS: The results demonstrated that the disease-specific transcriptome-based microarray was able to out-perform the generic genomic-based microarray on a number of levels including detection of transcripts and pathway analysis. In addition, the disease-specific microarray contains a high percentage of antisense transcripts and further analysis demonstrated that a number of these exist in sense:antisense pairs. Comparison between cell line models and metastatic CRC patient biopsies further demonstrated that a number of the identified sense:antisense pairs were also detected in CRC patient biopsies, suggesting potential clinical relevance.
CONCLUSIONS: Analysis from our in vitro and clinical experiments has demonstrated that many transcripts exist in sense:antisense pairs, including IGF2BP2, which may have a direct regulatory function in the context of colorectal cancer. While the existence of antisense transcripts has been established by many studies, their functional role is currently unclear; however, the numbers detected by the disease-specific microarray suggest that they may be important regulatory transcripts. This study has demonstrated the power of a disease-specific transcriptome-based approach and highlighted the potential novel biologically and clinically relevant information that is gained when using such a methodology.
Abstract:
Purpose
This study was designed to investigate methods to help patients suffering from unilateral tinnitus synthesize an auditory replica of their tinnitus.
Materials and methods
Two semi-automatic methods (A and B) derived from the auditory threshold of the patient, and a method (C) combining a pure tone and a narrow band-pass noise centred on an adjustable frequency, were devised and rated for their likeness to the patient's tinnitus over two test sessions. A third test evaluated the stability over time of the synthesized tinnitus replica built with method C, and its proneness to merge with the patient's tinnitus. Patients were then asked to try to control the lateralisation of this single percept through adjustment of the tinnitus replica level.
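A minimal sketch of a method-C-style stimulus is given below, purely for illustration: a pure tone mixed with narrow band-pass filtered noise, both centred on an adjustable frequency. The sample rate, bandwidth and mixing levels are arbitrary assumptions rather than the values used in the study.

```python
# Sketch of a pure tone plus narrow band-pass noise centred on an adjustable frequency.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100            # sample rate (Hz)
centre = 6000.0       # adjustable centre frequency (Hz)
bandwidth = 400.0     # assumed width of the noise band (Hz)
duration = 2.0        # seconds

t = np.arange(int(fs * duration)) / fs
tone = np.sin(2 * np.pi * centre * t)

sos = butter(4, [centre - bandwidth / 2, centre + bandwidth / 2],
             btype="bandpass", fs=fs, output="sos")
noise = sosfilt(sos, np.random.default_rng(0).standard_normal(t.size))
noise /= np.max(np.abs(noise))

replica = 0.5 * tone + 0.5 * noise   # relative levels would be adjusted by the patient
replica /= np.max(np.abs(replica))
```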
Results
The first two tests showed that seven out of ten patients chose the tinnitus replica built with method C as their preferred one. The third test, performed on twelve patients, revealed that pitch tuning was rather stable over a one-week interval. It showed that eight patients were able to consistently match the central frequency of the synthesized tinnitus (presented to the contralateral ear) to their own tinnitus, which led to a unique tinnitus percept. The lateralisation displacement was consistent across patients and revealed an average range of 29 dB to obtain a full lateral shift from the ipsilateral to the contralateral side.
Conclusions
Although spectrally simpler than the semi-automatic methods, method C could replicate patients' tinnitus, to some extent. When a unique percept between synthesized tinnitus and patients' tinnitus arose, lateralisation of this percept was achieved.
Abstract:
The majority of learning methods reported to date for Takagi-Sugeno-Kang fuzzy neural models mainly focus on improving their accuracy. However, one of the key design requirements in building an interpretable fuzzy model is that each obtained rule consequent must match well with the local behaviour of the system when all the rules are aggregated to produce the overall system output. This is one of the characteristics that distinguishes such models from black-box models such as neural networks. Therefore, how to find a desirable set of fuzzy partitions and, hence, to identify the corresponding consequent models which can be directly explained in terms of system behaviour presents a critical step in fuzzy neural modelling. In this paper, a new learning approach considering both the nonlinear parameters in the rule premises and the linear parameters in the rule consequents is proposed. Unlike the conventional two-stage optimization procedure widely practised in the field, where the two sets of parameters are optimized separately, the consequent parameters are transformed into a set dependent on the premise parameters, thereby enabling the introduction of a new integrated gradient descent learning approach. A new Jacobian matrix is thus proposed and efficiently computed to achieve a more accurate approximation of the cost function by using the second-order Levenberg-Marquardt optimization method. Several other interpretability issues concerning the fuzzy neural model are also discussed and integrated into this new learning approach. Numerical examples are presented to illustrate the resultant structure of the fuzzy neural models and the effectiveness of the proposed new algorithm, and the results are compared with those from some well-known methods.
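The sketch below illustrates the underlying idea in simplified form (it is not the algorithm proposed in the paper): for fixed Gaussian premise parameters, the linear consequent parameters of a first-order TSK model are obtained by least squares, so the modelling error becomes a function of the premise parameters alone and can be handed to an outer optimiser such as Levenberg-Marquardt.

```python
# Simplified sketch of consequent parameters expressed as a function of premise parameters.
import numpy as np

def firing_strengths(x, centres, widths):
    """Normalised rule activations for 1-D input x (shape (n,)) and R Gaussian rules."""
    w = np.exp(-0.5 * ((x[:, None] - centres[None, :]) / widths[None, :]) ** 2)
    return w / w.sum(axis=1, keepdims=True)

def fit_consequents(x, y, centres, widths):
    """Solve the rule consequents y_r = a_r*x + b_r by least squares for fixed premises."""
    g = firing_strengths(x, centres, widths)              # (n, R)
    phi = np.hstack([g * x[:, None], g])                  # regressors for [a_1..a_R, b_1..b_R]
    theta, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return theta, phi @ theta                             # parameters and model output

# Toy data: a noisy sine approximated by 5 rules with fixed premise parameters.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)
centres = np.linspace(0, 2 * np.pi, 5)
widths = np.full(5, 0.8)
theta, y_hat = fit_consequents(x, y, centres, widths)
print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```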
Abstract:
Objective: To examine the evidence of an association between hypermobility and musculoskeletal pain in children. Methods: A systematic review of the literature was performed using the databases PubMed, EMBASE, NHS Evidence, and Medline. Inclusion criteria were observational studies investigating hypermobility and musculoskeletal pain in children. Exclusion criteria were studies conducted on specialist groups (i.e. dancers) or hospital referrals. Pooled odds ratios (ORs) were calculated using random effects models and heterogeneity was tested using χ²-tests. Study quality was assessed using the Newcastle-Ottawa Scale for case-control studies. Results: Of the 80 studies identified, 15 met the inclusion criteria and were included in the review. Of these, 13 were included in the statistical analyses. Analysis of the data showed that the heterogeneity was too high to allow for interpretation of the meta-analysis (I² = 72%). Heterogeneity was much lower when the studies were divided into European (I² = 8%) and Afro-Asian subgroups (I² = 65%). Sensitivity analysis based on data from studies reporting from European and Afro-Asian regions showed no association in the European studies [OR 1.00, 95% confidence interval (CI) 0.79-1.26] but a marked relationship between hypermobility and joint pain in the Afro-Asian group (OR 2.01, 95% CI 1.45-2.77). Meta-regression showed a highly significant difference between subgroups in both meta-analyses (p < 0.001). Conclusion: There seems to be no association between hypermobility and joint pain in Europeans. There does seem to be an association in Afro-Asians; however, heterogeneity was high. It is unclear whether this is due to differences in ethnicity, nourishment, climate or study design.
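For readers unfamiliar with the pooling step, the sketch below implements a standard DerSimonian-Laird random-effects combination of log odds ratios with the I² heterogeneity statistic; the study values are invented and are not those of the review.

```python
# DerSimonian-Laird random-effects pooling of log odds ratios with I².
import numpy as np

def random_effects_pool(log_or, var):
    w = 1.0 / var
    fixed = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - fixed) ** 2)          # Cochran's Q
    df = len(log_or) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (var + tau2)                    # random-effects weights
    pooled = np.sum(w_star * log_or) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return np.exp(pooled), np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se), i2

log_or = np.log(np.array([1.8, 2.4, 1.5, 2.2]))   # invented study odds ratios
var = np.array([0.05, 0.08, 0.06, 0.10])          # invented variances of the log ORs
or_, lo, hi, i2 = random_effects_pool(log_or, var)
print(f"pooled OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f}), I² = {i2:.0f}%")
```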
Abstract:
RATIONALE: Stable isotope values (δ13C and δ15N) of darted skin and blubber biopsies can shed light on habitat use and diet of cetaceans, which are otherwise difficult to study. Non-dietary factors affect isotopic variability, chiefly the depletion of 13C due to the presence of 12C-rich lipids. The efficacy of post hoc lipid-correction models (normalization) must be tested. METHODS: For tissues with high natural lipid content (e.g., whale skin and blubber), chemical lipid extraction or normalization is necessary. C:N ratios, δ13C values and δ15N values were determined for duplicate control and lipid-extracted skin and blubber of fin (Balaenoptera physalus), humpback (Megaptera novaeangliae) and minke whales (B. acutorostrata) by continuous-flow elemental analysis isotope ratio mass spectrometry (CF-EA-IRMS). Six different normalization models were tested to correct δ13C values for the presence of lipids. RESULTS: Following lipid extraction, significant increases in δ13C values were observed for both tissues in the three species. Significant increases were also found for δ15N values in minke whale skin and fin whale blubber. In fin whale skin, the δ15N values decreased, with no change observed in humpback whale skin. Non-linear models generally outperformed linear models, and the suitability of models varied by species and tissue, indicating the need for high model specificity, even among these closely related taxa. CONCLUSIONS: Given the poor predictive power of the models to estimate lipid-free δ13C values, and the unpredictable changes in δ15N values due to lipid extraction, we recommend against arithmetical normalization in accounting for lipid effects on δ13C values for balaenopterid skin or blubber samples. Rather, we recommend that duplicate analysis of lipid-extracted (δ13C values) and non-treated tissues (δ15N values) be used.
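As an example of how a linear normalization model of the kind tested here is applied, the sketch below uses the coefficients of the widely cited Post et al. (2007) equation for aquatic animal tissues; it is shown only to illustrate the model family and is not the correction recommended (or rejected) for whale tissue in this study.

```python
# Applying a linear lipid-normalization model: delta13C_corr = delta13C_bulk + a + b * C:N.
# Default coefficients are those of Post et al. (2007) for aquatic animal tissues,
# used here purely as an illustration of the model family.
def lipid_normalize_d13c(d13c_bulk, cn_ratio, a=-3.32, b=0.99):
    """Return a lipid-normalized delta-13C value from bulk delta-13C and the C:N ratio."""
    return d13c_bulk + a + b * cn_ratio

# Invented example values for a skin sample.
print(lipid_normalize_d13c(d13c_bulk=-19.4, cn_ratio=4.1))  # about -18.7 per mil
```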
Abstract:
An evolution in theoretical models and methodological paradigms for investigating cognitive biases in the addictions is discussed. Anomalies in traditional cognitive perspectives, and problems with the self-report methods which underpin them, are highlighted. An emergent body of cognitive research, contextualized within the principles and paradigms of cognitive neuropsychology rather than social learning theory, is presented which, it is argued, addresses these anomalies and problems. Evidence is presented that biases in the processing of addiction-related stimuli, and in the network of propositions which motivate addictive behaviours, occur at automatic, implicit and pre-conscious levels of awareness. It is suggested that methods which assess such implicit cognitive biases (e.g. Stroop, memory, priming and reaction-time paradigms) yield findings which have better predictive utility for ongoing behaviour than those biases determined by self-report methods of introspection. The potential utility of these findings for understanding "loss of control" phenomena, and the desynchrony between reported beliefs and intentions and ongoing addictive behaviours, is discussed. Applications to the practice of cognitive therapy are considered.
Abstract:
OBJECTIVE-To examine associations of neonatal adiposity with maternal glucose levels and cord serum C-peptide in a multicenter multinational study, the Hyperglycemia and Adverse Pregnancy Outcome (HAPO) Study, thereby assessing the Pedersen hypothesis linking maternal glycemia and fetal hyperinsulinemia to neonatal adiposity. RESEARCH DESIGN AND METHODS-Eligible pregnant women underwent a standard 75-g oral glucose tolerance test between 24 and 32 weeks' gestation (as close to 28 weeks as possible). Neonatal anthropometrics and cord serum C-peptide were measured. Associations of maternal glucose and cord serum C-peptide with neonatal adiposity (sum of skin folds >90th percentile or percent body fat >90th percentile) were assessed using multiple logistic regression analyses, with adjustment for potential confounders, including maternal age, parity, BMI, mean arterial pressure, height, gestational age at delivery, and the baby's sex. RESULTS-Among 23,316 HAPO Study participants with glucose levels blinded to caregivers, cord serum C-peptide results were available for 19,885 babies and skin fold measurements for 19,389. For measures of neonatal adiposity, there were strong, statistically significant gradients across increasing levels of maternal glucose and cord serum C-peptide, which persisted after adjustment for potential confounders. In fully adjusted continuous-variable models, odds ratios ranged from 1.35 to 1.44 for the two measures of adiposity for fasting, 1-h, and 2-h plasma glucose higher by 1 SD. CONCLUSIONS-These findings confirm the link between maternal glucose and neonatal adiposity and suggest that the relationship is mediated by fetal insulin production and that the Pedersen hypothesis describes a basic biological relationship influencing fetal growth.
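As a minimal illustration of how an odds ratio per 1-SD higher glucose is derived from a fitted logistic-regression coefficient, consider the sketch below; the coefficient and standard deviation are invented and are not the HAPO estimates.

```python
# Converting a logistic-regression coefficient into an odds ratio per 1-SD higher predictor.
import numpy as np

beta_per_mmol = 0.45      # hypothetical log-odds change per 1 mmol/L fasting glucose
sd_glucose = 0.7          # hypothetical population SD of fasting glucose (mmol/L)

or_per_sd = np.exp(beta_per_mmol * sd_glucose)
print(f"OR per 1-SD higher fasting glucose: {or_per_sd:.2f}")
```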