Abstract:
Background: Implementing effective antenatal care models is a key global policy goal. However, the mechanisms of action of these multi-faceted models that would allow widespread implementation are seldom examined and poorly understood. In existing care model analyses there is little distinction between what is done, how it is done, and who does it. A new evidence-informed quality maternal and newborn care (QMNC) framework identifies key characteristics of quality care. This offers the opportunity to identify systematically the characteristics of care delivery that may be generalizable across contexts, thereby enhancing implementation. Our objective was to map the characteristics of antenatal care models tested in Randomised Controlled Trials (RCTs) to a new evidence-based framework for quality maternal and newborn care, thus facilitating the identification of characteristics of effective care.
Methods: A systematic review of RCTs of midwifery-led antenatal care models. Mapping and evaluation of these models’ characteristics to the QMNC framework using data extraction and scoring forms derived from the five framework components. Paired team members independently extracted data and conducted quality assessment using the QMNC framework and standard RCT criteria.
Results: From 13,050 citations initially retrieved we identified 17 RCTs of midwifery-led antenatal care models from Australia (7), the UK (4), China (2), and Sweden, Ireland, Mexico and Canada (1 each). QMNC framework scores ranged from 9 to 25 (possible range 0–32), with most models reporting fewer than half the characteristics associated with quality maternity care. Description of care model characteristics was lacking in many studies, but was better reported for the intervention arms. Organisation of care was the best-described component. Underlying values and philosophy of care were poorly reported.
Conclusions: The QMNC framework facilitates assessment of the characteristics of antenatal care models. It is vital to understand all the characteristics of multi-faceted interventions such as care models: not only what is done but why it is done, by whom, and how it differs from the standard care package. By applying the QMNC framework we have established a foundation for future reports of intervention studies so that the characteristics of individual models can be evaluated, and the impact of any differences appraised.
Abstract:
This paper presents a method of spatial sampling based on stratification by Local Moran's I_i calculated using auxiliary information. The sampling technique is compared to other design-based approaches including simple random sampling, systematic sampling on a regular grid, conditional Latin Hypercube sampling and stratified sampling based on auxiliary information, and is illustrated using two different spatial data sets. Each of the samples for the two data sets is interpolated using regression kriging to form a geostatistical map for their respective areas. The proposed technique is shown to be competitive in reproducing specific areas of interest with high accuracy.
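As a rough illustration of the stratification step, the sketch below computes Local Moran's I_i for an auxiliary variable on a regular grid and draws an equal-size sample from each quantile stratum. The grid layout, rook-contiguity weights and stratum counts are assumptions for illustration, not the paper's settings; the sampled locations would then feed a regression-kriging step.

```python
# Sketch: stratified spatial sampling using Local Moran's I_i of an auxiliary variable.
import numpy as np

rng = np.random.default_rng(0)
nrow, ncol = 40, 40
aux = rng.normal(size=(nrow, ncol))                        # auxiliary covariate on a regular grid
aux = aux + np.cumsum(rng.normal(0, 0.1, nrow))[:, None]   # add a weak spatial trend

z = (aux - aux.mean()) / aux.std()                         # standardised deviations

# Local Moran's I_i = z_i * mean_j(w_ij * z_j) with rook-contiguity neighbours
lag = np.zeros_like(z)
counts = np.zeros_like(z)
for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
    shifted = np.roll(z, (dr, dc), axis=(0, 1))
    valid = np.ones_like(z, dtype=bool)
    if dr == -1: valid[-1, :] = False                      # mask wrap-around rows/columns
    if dr == 1:  valid[0, :] = False
    if dc == -1: valid[:, -1] = False
    if dc == 1:  valid[:, 0] = False
    lag += np.where(valid, shifted, 0.0)
    counts += valid
local_moran = z * (lag / counts)

# Stratify cells into quantile classes of I_i and draw an equal-size sample per stratum
n_strata, n_per_stratum = 4, 10
flat_I = local_moran.ravel()
strata = np.digitize(flat_I, np.quantile(flat_I, [0.25, 0.5, 0.75]))
sample_idx = np.concatenate([
    rng.choice(np.flatnonzero(strata == s), n_per_stratum, replace=False)
    for s in range(n_strata)
])
rows, cols = np.unravel_index(sample_idx, (nrow, ncol))
print(list(zip(rows[:5], cols[:5])))                       # sampled grid locations
```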
Abstract:
Assurance of learning is a predominant feature in both quality enhancement and quality assurance in higher education. Assurance of learning is a process that articulates explicit program outcomes and standards, and systematically gathers evidence to determine the extent to which performance matches expectations. Benefits accrue to the institution through the systematic assessment of whole-of-program goals. Data may be used for continuous improvement, program development, and to inform external accreditation and evaluation bodies. Recent developments, including the introduction of the Tertiary Education Quality and Standards Agency (TEQSA), will require universities to review the methods they use to assure learning outcomes. This project investigates two critical elements of assurance of learning: 1. the mapping of graduate attributes throughout a program; and 2. the collection of assurance of learning data. An audit of 25 of the 39 Business Schools in Australian universities was conducted to identify current methods for mapping graduate attributes and collecting assurance of learning data across degree programs, and to review the key challenges faced in these areas. Our findings indicate that external drivers such as professional body accreditation (for example, the Association to Advance Collegiate Schools of Business (AACSB)) and TEQSA are important motivators for assuring learning, and that those undertaking AACSB accreditation had more robust assurance of learning systems in place. It was reassuring to see that the majority of institutions (96%) had adopted an embedding approach to assuring learning rather than opting for independent standardised testing. The main challenges evident were the development of sustainable processes that are not seen as a burden by academic staff, and obtaining academic buy-in to the benefits of assuring learning per se, rather than assurance of learning being seen as a tick-box exercise. This cultural change is the real challenge in assurance of learning practice.
Abstract:
Twitter is now well established as the world's second most important social media platform, after Facebook. Its 140-character updates are designed for brief messaging, and its network structures are kept relatively flat and simple: messages from users are either public and visible to all (even to unregistered visitors using the Twitter website), or private and visible only to approved 'followers' of the sender; there are no more complex definitions of degrees of connection (family, friends, friends of friends) of the kind available in other social networks. Over time, Twitter users have developed simple but effective mechanisms for working around these limitations: '#hashtags', which enable the manual or automatic collation of all tweets containing the same #hashtag, as well as allowing users to subscribe to content feeds that contain only those tweets which feature specific #hashtags; and '@replies', which allow senders to direct public messages even to users whom they do not already follow. This paper documents a methodology for extracting public Twitter activity data around specific #hashtags, and for processing these data in order to analyse and visualise the @reply networks existing between participating users – both overall, as a static network, and over time, to highlight the dynamic structure of @reply conversations. Such visualisations enable us to highlight the shifting roles played by individual participants, as well as the response of the overall #hashtag community to new stimuli – such as the entry of new participants or the availability of new information. Over longer timeframes, it is also possible to identify different phases in the overall discussion, or the formation of distinct clusters of preferentially interacting participants.
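To make the @reply-extraction step concrete, here is a minimal sketch that builds a directed "who replies to whom" network from already-collected #hashtag tweets and also buckets the edges into hourly windows for a dynamic view. The tweet field names, the sample data and the networkx representation are assumptions for illustration, not the paper's actual data schema or toolchain.

```python
# Sketch: static and time-windowed @reply networks from #hashtag tweets.
import re
from collections import defaultdict
from datetime import datetime

import networkx as nx

MENTION = re.compile(r'@(\w{1,15})')

tweets = [  # hypothetical, already-collected tweets for one #hashtag
    {"user": "alice", "text": "@bob agreed, see #auspol thread", "created_at": "2011-02-01T10:05:00"},
    {"user": "bob",   "text": "new figures out #auspol",         "created_at": "2011-02-01T10:20:00"},
    {"user": "carol", "text": "@alice @bob any source? #auspol", "created_at": "2011-02-01T11:02:00"},
]

# Static @reply network: one weighted edge per sender -> mentioned user pair
g = nx.DiGraph()
for t in tweets:
    for target in MENTION.findall(t["text"]):
        w = g.get_edge_data(t["user"], target, {}).get("weight", 0)
        g.add_edge(t["user"], target, weight=w + 1)

# Dynamic view: bucket edges into hourly windows to follow the conversation over time
windows = defaultdict(nx.DiGraph)
for t in tweets:
    hour = datetime.fromisoformat(t["created_at"]).replace(minute=0, second=0)
    for target in MENTION.findall(t["text"]):
        windows[hour].add_edge(t["user"], target)

print(g.edges(data=True))
print({h.isoformat(): list(sg.edges()) for h, sg in windows.items()})
```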
Abstract:
Background: There are inequalities in geographical access and delivery of health care services in Australia, particularly for cardiovascular disease (CVD), Australia's major cause of death. Analyses and models that can inform and positively influence strategies to augment services and preventative measures are needed. The Cardiac-ARIA project is using geographic information system (GIS) technology to develop a national index for each of Australia's 13,000 population centres. The index will describe the spatial distribution of CVD health care services available to support populations at risk, in a timely manner, after a major cardiac event. Methods: In the initial phase of the project, an expert panel of cardiologists and an emergency physician identified key elements of national and international guidelines for management of acute coronary syndromes, cardiac arrest, life-threatening arrhythmias and acute heart failure, from the time of onset (potentially dialling 000) to return from the hospital to the community (cardiac rehabilitation). Results: A systematic search has been undertaken to identify the geographical location and type of cardiac services currently available. This has enabled derivation of a master dataset of necessary services, e.g. telephone networks, ambulance, RFDS, helicopter retrieval services, road networks, hospitals, general practitioners, medical community centres, pathology services, CCUs, catheterisation laboratories, cardio-thoracic surgery units and cardiac rehabilitation services. Conclusion: This unique and innovative project has the potential to deliver a powerful tool to both highlight and combat the burden of disease of CVD in urban and regional Australia.
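Purely as an illustration of a distance-to-service calculation, the sketch below finds the nearest catheterisation laboratory to each population centre by straight-line (haversine) distance. The real Cardiac-ARIA index is derived from road networks and timed care pathways in a GIS; the place names and coordinates here are invented placeholders.

```python
# Illustrative-only sketch: straight-line distance to the nearest cath lab.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

cath_labs = {                      # hypothetical service locations (lat, lon)
    "Adelaide": (-34.93, 138.60),
    "Alice Springs": (-23.70, 133.88),
}
population_centres = {
    "Coober Pedy": (-29.01, 134.75),
    "Port Augusta": (-32.49, 137.76),
}

for town, (lat, lon) in population_centres.items():
    nearest = min(cath_labs.items(), key=lambda kv: haversine_km(lat, lon, *kv[1]))
    print(f"{town}: nearest cath lab {nearest[0]}, "
          f"{haversine_km(lat, lon, *nearest[1]):.0f} km")
```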
Abstract:
Sustainability has emerged as a primary context for engineering education in the 21st century, particularly the sub-discipline of chemical engineering. However, there is confusion over how to go about integrating sustainability knowledge and skills systemically within bachelor degrees. This paper addresses this challenge, using a case study of an Australian chemical engineering degree to highlight important practical considerations for embedding sustainability at the core of the curriculum. The paper begins with context for considering a systematic process for rapid curriculum renewal. The authors then summarise a 2-year federally funded project, which comprised piloting a model for rapid curriculum renewal led by the chemical engineering staff. Model elements contributing to the renewal of this engineering degree and described in this paper include: industry outreach; staff professional development; attribute identification and alignment; program mapping; and curriculum and teaching resource development. Personal reflections on the progress and process of rapid curriculum renewal in sustainability by the authors and participating engineering staff will be presented as a means to discuss and identify methodological improvements, as well as to highlight barriers to project implementation. It is hoped that this paper will provide an example of a formalised methodology upon which program reform and curriculum renewal for sustainability can be built in other higher education institutions.
Abstract:
Enormous progress has been made towards understanding the role of specific factors in the process of epithelial-mesenchymal transition (EMT); however, the complex underlying pathways and the transient nature of the transition continue to present significant challenges. Targeting the tumour cell plasticity underpinning EMT is an attractive strategy to combat metastasis. Global gene expression profiling and high-content analyses are among the strategies employed to identify novel EMT regulators. In this review, we highlight several approaches to systematically interrogate key pathways involved in EMT, with particular emphasis on the features of multiparametric, high-content imaging screening strategies that lend themselves to the systematic discovery of highly significant modulators of tumour cell plasticity.
Abstract:
Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and application to field robotics seem to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data.
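A minimal sketch of the coordinate chain described above: a range return is mapped from the sensor frame into the navigation frame through the vehicle pose and the extrinsic sensor calibration, and a small calibration bias is shown propagating into every mapped point. All numeric values are invented, and the paper's timing and attitude error terms are omitted.

```python
# Sketch: sensor frame -> body frame -> navigation frame, and the effect of an extrinsic error.
import numpy as np

def transform(roll, pitch, yaw, t):
    """Homogeneous transform from ZYX Euler angles (rad) and a translation (m)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    R = np.array([[cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
                  [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
                  [-sp,     cp * sr,                cp * cr]])
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Extrinsic calibration: where the LIDAR sits on the vehicle body (to be estimated)
T_body_sensor = transform(0.0, np.deg2rad(5.0), 0.0, np.array([1.2, 0.0, 1.8]))
# Vehicle pose in the navigation frame at the measurement timestamp
T_nav_body = transform(0.0, 0.0, np.deg2rad(30.0), np.array([100.0, 50.0, 0.0]))

p_sensor = np.array([20.0, -3.0, 0.5, 1.0])          # range return in the sensor frame
p_nav = T_nav_body @ T_body_sensor @ p_sensor        # same point in the navigation frame
print(p_nav[:3])

# A 1-degree pitch bias in the extrinsic calibration shifts every mapped point:
T_biased = transform(0.0, np.deg2rad(6.0), 0.0, np.array([1.2, 0.0, 1.8]))
print(np.linalg.norm((T_nav_body @ T_biased @ p_sensor - p_nav)[:3]))  # error in metres
```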
Abstract:
Assurance of learning (AOL) is a quality enhancement and quality assurance process used in higher education. It involves a process of determining programme learning outcomes and standards, and systematically gathering evidence to measure students' performance on these. The systematic assessment of whole-of-programme outcomes provides a basis for curriculum development and management, continuous improvement, and accreditation. To better understand how AOL processes operate, a national study of university practices across one discipline area, business and management, was undertaken. To solicit data on AOL practice, interviews were undertaken with a sample of business school representatives (n = 25). Two key processes emerged: (1) mapping of graduate attributes and (2) collection of assurance data. External drivers such as professional accreditation and government legislation were the primary reasons for undertaking AOL, but intrinsic motivators relating to continuous improvement were also evident. The facilitation of academic commitment was achieved through an embedded approach to AOL by the majority of universities in the study. A sustainable and inclusive process of AOL was seen to support wider stakeholder engagement in the development of higher education learning outcomes.
Abstract:
We developed an anatomical mapping technique to detect hippocampal and ventricular changes in Alzheimer disease (AD). The resulting maps are sensitive to longitudinal changes in brain structure as the disease progresses. An anatomical surface modeling approach was combined with surface-based statistics to visualize the region and rate of atrophy in serial MRI scans and isolate where these changes link with cognitive decline. Fifty-two high-resolution MRI scans were acquired from 12 AD patients (age: 68.4 ± 1.9 years) and 14 matched controls (age: 71.4 ± 0.9 years), each scanned twice (2.1 ± 0.4 years apart). 3D parametric mesh models of the hippocampus and temporal horns were created in sequential scans and averaged across subjects to identify systematic patterns of atrophy. As an index of radial atrophy, 3D distance fields were generated relating each anatomical surface point to a medial curve threading down the medial axis of each structure. Hippocampal atrophic rates and ventricular expansion were assessed statistically using surface-based permutation testing and were faster in AD than in controls. Using color-coded maps and video sequences, these changes were visualized as they progressed anatomically over time. Additional maps localized regions where atrophic changes linked with cognitive decline. Temporal horn expansion maps were more sensitive to AD progression than maps of hippocampal atrophy, but both maps correlated with clinical deterioration. These quantitative, dynamic visualizations of hippocampal atrophy and ventricular expansion rates in aging and AD may provide a promising measure to track AD progression in drug trials.
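The radial-atrophy index lends itself to a short sketch: for each surface vertex, take the distance to the nearest point on a medial curve threading the structure, and compare two time points. The toy tube-shaped "surface" below stands in for real hippocampal meshes; the numbers are synthetic and the study's surface-based permutation statistics are not reproduced.

```python
# Sketch: radial distances from surface vertices to a medial curve, baseline vs follow-up.
import numpy as np

rng = np.random.default_rng(1)

def radial_distances(vertices, medial_curve):
    """Distance from each surface vertex to the nearest medial-curve point."""
    d = np.linalg.norm(vertices[:, None, :] - medial_curve[None, :, :], axis=2)
    return d.min(axis=1)

# Toy surface: points scattered on a tube of radius ~5 along the z-axis
n_vertices = 500
medial_curve = np.column_stack([np.zeros(50), np.zeros(50), np.linspace(0, 40, 50)])
theta = rng.uniform(0, 2 * np.pi, n_vertices)
z = rng.uniform(0, 40, n_vertices)
baseline = np.column_stack([5.0 * np.cos(theta), 5.0 * np.sin(theta), z])
followup = np.column_stack([4.7 * np.cos(theta), 4.7 * np.sin(theta), z])  # simulated shrinkage

change = radial_distances(followup, medial_curve) - radial_distances(baseline, medial_curve)
print(f"mean radial change: {change.mean():.2f} (negative = atrophy toward the medial axis)")
```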
Abstract:
Gene mapping is a systematic search for genes that affect observable characteristics of an organism. In this thesis we offer computational tools to improve the efficiency of (disease) gene-mapping efforts. In the first part of the thesis we propose an efficient simulation procedure for generating realistic genetic data from isolated populations. Simulated data is useful for evaluating hypothesised gene-mapping study designs and computational analysis tools. As an example of such evaluation, we demonstrate how a population-based study design can be a powerful alternative to traditional family-based designs in association-based gene-mapping projects. In the second part of the thesis we consider a prioritisation of a (typically large) set of putative disease-associated genes acquired from an initial gene-mapping analysis. Prioritisation is necessary to be able to focus on the most promising candidates. We show how to harness current biomedical knowledge for the prioritisation task by integrating various publicly available biological databases into a weighted biological graph. We then demonstrate how to find and evaluate connections between entities, such as genes and diseases, from this unified schema by graph mining techniques. Finally, in the last part of the thesis, we define the concept of reliable subgraph and the corresponding subgraph extraction problem. Reliable subgraphs concisely describe strong and independent connections between two given vertices in a random graph, and hence they are especially useful for visualising such connections. We propose novel algorithms for extracting reliable subgraphs from large random graphs. The efficiency and scalability of the proposed graph mining methods are backed by extensive experiments on real data. While our application focus is in genetics, the concepts and algorithms can be applied to other domains as well. We demonstrate this generality by considering coauthor graphs in addition to biological graphs in the experiments.
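As a toy version of the prioritisation idea, the sketch below ranks candidate genes by the strength of their most reliable path to a disease node in a weighted graph, treating edge weights as independent reliability probabilities. Node names and weights are invented, and the thesis's reliable-subgraph algorithms go well beyond this single-best-path score.

```python
# Sketch: rank candidate genes by best-path reliability to a disease node.
import math
import networkx as nx

g = nx.Graph()
edges = [  # (node_a, node_b, reliability in (0, 1])
    ("GENE_A", "pathway_P", 0.9), ("pathway_P", "DISEASE_X", 0.8),
    ("GENE_B", "article_123", 0.6), ("article_123", "DISEASE_X", 0.5),
    ("GENE_C", "protein_Q", 0.7), ("protein_Q", "pathway_P", 0.4),
]
for a, b, p in edges:
    g.add_edge(a, b, weight=-math.log(p))  # shortest -log path = most reliable path

def best_path_reliability(graph, source, target):
    """Product of edge reliabilities along the single most reliable path."""
    try:
        length = nx.dijkstra_path_length(graph, source, target, weight="weight")
        return math.exp(-length)
    except nx.NetworkXNoPath:
        return 0.0

candidates = ["GENE_A", "GENE_B", "GENE_C"]
ranking = sorted(candidates,
                 key=lambda gene: best_path_reliability(g, gene, "DISEASE_X"),
                 reverse=True)
for gene in ranking:
    print(gene, round(best_path_reliability(g, gene, "DISEASE_X"), 3))
```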
Abstract:
BACKGROUND: Individuals with osteoporosis are predisposed to hip fracture during trips, stumbles or falls, but half of all hip fractures occur in those without generalised osteoporosis. By analysing ordinary clinical CT scans using a novel cortical thickness mapping technique, we discovered patches of markedly thinner bone at fracture-prone regions in the femurs of women with acute hip fracture compared with controls. METHODS: We analysed CT scans from 75 female volunteers with acute fracture and 75 age- and sex-matched controls. We classified the fracture location as femoral neck or trochanteric before creating bone thickness maps of the outer 'cortical' shell of the intact contra-lateral hip. After registration of each bone to an average femur shape and statistical parametric mapping, we were able to visualise and quantify statistically significant foci of thinner cortical bone associated with each fracture type, assuming good symmetry of bone structure between the intact and fractured hips. The technique allowed us to pinpoint systematic differences and display the results on a 3D average femur shape model. FINDINGS: The cortex was generally thinner in femoral neck fracture cases than in controls. More striking were several discrete patches of statistically significant thinning of up to 30%, which coincided with common sites of fracture initiation (femoral neck or trochanteric). INTERPRETATION: Femoral neck fracture patients had a thumbnail-sized patch of focal osteoporosis at the upper head-neck junction. This region coincided with a weak part of the femur, prone both to spontaneous 'tensile' fractures of the femoral neck and to crack initiation when falling sideways. Current hip fracture prevention strategies are based on case finding: they involve clinical risk factor estimation to determine the need for single-plane bone density measurement within a standard region of interest (ROI) of the femoral neck. The precise sites of focal osteoporosis that we have identified are overlooked by current 2D bone densitometry methods.
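A schematic of the per-vertex comparison behind such thickness maps, assuming every femur has already been registered to the average shape: each vertex then carries one cortical thickness per subject, and cases can be compared with controls vertex by vertex. The data below are synthetic, and the crude t-test threshold stands in for the study's statistical parametric mapping with proper multiple-comparison control.

```python
# Sketch: vertex-wise case/control comparison of registered cortical thickness maps.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_vertices, n_cases, n_controls = 2000, 75, 75

controls = rng.normal(1.6, 0.25, size=(n_controls, n_vertices))   # mm cortical thickness
cases = rng.normal(1.6, 0.25, size=(n_cases, n_vertices))
focal_patch = slice(100, 160)                                      # simulated focal thinning
cases[:, focal_patch] -= 0.3                                       # ~20% thinner patch

t, p = stats.ttest_ind(cases, controls, axis=0)
thinner = (t < 0) & (p < 0.001)                                    # crude threshold for the sketch
print(f"{thinner.sum()} vertices flagged as focally thinner in cases")
print(f"mean thinning in the flagged patch: "
      f"{100 * (1 - cases[:, thinner].mean() / controls[:, thinner].mean()):.0f}%")
```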
Abstract:
Spatial normalisation is a key element of statistical parametric mapping and related techniques for analysing cohort statistics on voxel arrays and surfaces. The normalisation process involves aligning each individual specimen to a template using some sort of registration algorithm. Any misregistration will result in data being mapped onto the template at the wrong location. At best, this will introduce spatial imprecision into the subsequent statistical analysis. At worst, when the misregistration varies systematically with a covariate of interest, it may lead to false statistical inference. Since misregistration generally depends on the specimen's shape, we investigate here the effect of allowing for shape as a confound in the statistical analysis, with shape represented by the dominant modes of variation observed in the cohort. In a series of experiments on synthetic surface data, we demonstrate how allowing for shape can reveal true effects that were previously masked by systematic misregistration, and also guard against misinterpreting systematic misregistration as a true effect. We introduce some heuristics for disentangling misregistration effects from true effects, and demonstrate the approach's practical utility in a case study of the cortical bone distribution in 268 human femurs.
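The "shape as a confound" idea can be sketched directly: compute the cohort's dominant shape modes by PCA of the registered coordinates and add their scores as covariates in the per-vertex linear model. In the synthetic example below there is no true effect, but a misregistration artefact tracks shape mode 1 and correlates with the covariate of interest, so the naive model reports a spurious effect that the shape-adjusted model largely removes. The data and model are illustrative assumptions, not the paper's surface pipeline.

```python
# Sketch: per-vertex regression with and without shape modes as confounds.
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_vertices, n_modes = 60, 500, 3

# Registered vertex coordinates, flattened per subject; PCA gives the dominant shape modes
coords = rng.normal(size=(n_subjects, n_vertices * 3))
centred = coords - coords.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
shape_scores = centred @ vt[:n_modes].T
mode1 = shape_scores[:, 0] / shape_scores[:, 0].std()

# Covariate of interest correlated with shape mode 1, and vertex data with NO true
# covariate effect but a misregistration artefact that tracks mode 1
covariate = 0.6 * mode1 + 0.8 * rng.normal(size=n_subjects)
measurements = rng.normal(size=(n_subjects, n_vertices))
measurements += np.outer(mode1, rng.normal(size=n_vertices))

def covariate_effect(design, y):
    """Per-vertex OLS estimate for the covariate (second design column)."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1]

ones = np.ones((n_subjects, 1))
naive = covariate_effect(np.column_stack([ones, covariate]), measurements)
adjusted = covariate_effect(np.column_stack([ones, covariate, shape_scores]), measurements)
print(f"mean |apparent effect|, shape ignored:     {np.abs(naive).mean():.3f}")
print(f"mean |apparent effect|, shape as confound: {np.abs(adjusted).mean():.3f}")
```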
Abstract:
With the proliferation of mobile wireless communication and embedded systems, energy efficiency becomes a major design constraint. The dissipated energy is often referred to as the product of power dissipation and the input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry standard design flows integrate systematic methods of optimising either area or timing, while for power consumption optimisation one often employs heuristics that are characteristic of a specific design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: How to build a design flow which incorporates academic and industry standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into this flow. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: Is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and which then allows optimisation algorithms to be applied. In particular we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or the delay of the implementation. Finally, the third question which this thesis attempts to answer is: Is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power and power-driven delay optimisation is proposed in order to obtain balanced delay and power values. This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under zero-delay and non-zero-delay models. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or the longest path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SMA) is a probabilistic metaheuristic for the global optimisation problem of locating a good approximation to the global optimum of a given function in a large search space. We used SMA to decide probabilistically between moving from one optimised solution to another, such that the dynamic power is optimised under given delay constraints and the delay is optimised under given power constraints. A good approximation to the globally optimal solution under the energy constraint is obtained. Uniform Cost Search (UCS) is an algorithm for traversing or searching a weighted tree or graph. We have used the Uniform Cost Search algorithm to find, within the AIG network, a specific AIG node order for applying the reordering rules. After applying the reordering rules, the AIG network is mapped to a netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to an AIG netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. Reductions of 23% in power and 15% in delay are achieved with minimal overhead, compared to the best known ABC results. Our approach has also been applied to a number of processors with combinational and sequential components, and significant savings are achieved.
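A stripped-down sketch of the delay-constrained annealing loop: candidate reorderings are generated by swapping two nodes, moves that break the delay budget are rejected, and worse-power moves are accepted with probability e^(-d/T). The cost models here are toy stand-ins for the AIG-level power and delay estimates computed inside ABC, and all constants are assumptions.

```python
# Sketch: simulated annealing for power minimisation under a delay budget (toy cost models).
import math
import random

random.seed(42)
N_NODES = 30
activity = [random.random() for _ in range(N_NODES)]     # toy switching activities

def power(order):
    # Toy model: earlier positions weight high-activity nodes more heavily
    return sum(activity[n] * (N_NODES - i) for i, n in enumerate(order))

def delay(order):
    # Toy model: longest-path proxy grows when high-activity nodes are pushed late
    return max(activity[n] * (i + 1) for i, n in enumerate(order))

def anneal(order, delay_budget, t0=10.0, cooling=0.995, steps=20000):
    best = cur = order[:]
    t = t0
    for _ in range(steps):
        cand = cur[:]
        i, j = random.sample(range(N_NODES), 2)           # reorder move: swap two nodes
        cand[i], cand[j] = cand[j], cand[i]
        if delay(cand) > delay_budget:                    # each move is delay-constrained
            continue
        d = power(cand) - power(cur)
        if d < 0 or random.random() < math.exp(-d / t):   # accept uphill moves with prob e^(-d/T)
            cur = cand
            if power(cur) < power(best):
                best = cur[:]
        t *= cooling
    return best

initial = list(range(N_NODES))
budget = 1.1 * delay(initial)
optimised = anneal(initial, budget)
print(f"power: {power(initial):.1f} -> {power(optimised):.1f} "
      f"(delay {delay(optimised):.2f} <= budget {budget:.2f})")
```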