256 results for Computational tools
Abstract:
Social media are becoming increasingly integrated into political practices around the world. Politicians, citizens and journalists employ new media tools to support and supplement their political goals. This report examines how social media are portrayed as political tools in the Australian mainstream media, in order to establish the relations between social media and mainstream media in political news reporting. Through close content analysis of 93 articles sampled from the years 2008, 2010 and 2012, we provide a longitudinal insight into how Australian journalists' and news media organisations' perception of social media as political tools has changed over time. As the mainstream media remain crucial in framing the public understanding of new technologies and practices, this enhances our understanding of the positioning of social media tools for political communication.
Abstract:
The candidate gene approach has been a pioneer in the field of genetic epidemiology, identifying risk alleles and their association with clinical traits. With the advent of rapidly changing technology, there has been an explosion of in silico tools available to researchers, giving them fast, efficient and reliable strategies for finding causal gene variants for candidate or genome-wide association studies (GWAS). In this review, following a description of candidate gene prioritisation, we summarise the approaches to single nucleotide polymorphism (SNP) prioritisation and discuss the tools available to assess the functional relevance of the risk variant with consideration to its genomic location. The strategy and the tools discussed are applicable to any study investigating genetic risk factors associated with a particular disease. Some of the tools are also applicable to the functional validation of variants relevant to the era of GWAS and next generation sequencing (NGS).
Abstract:
Background The management of unruptured aneurysms is controversial, with the decision to treat influenced by aneurysm characteristics including size and morphology. Aneurysmal bleb formation is thought to be associated with an increased risk of rupture. Objective To correlate computational fluid dynamics (CFD) indices with bleb formation. Methods Anatomical models were constructed from three-dimensional rotational angiogram (3DRA) data in 27 patients with cerebral aneurysms harbouring single blebs. Additional models representing the aneurysm before bleb formation were constructed by digitally removing the bleb. We characterised haemodynamic features of models both with and without the bleb using CFD. Flow structure, wall shear stress (WSS), pressure and oscillatory shear index (OSI) were analysed. Results There was a statistically significant association between bleb location and the point of maximal WSS, with blebs at or adjacent to this point in 74.1% of cases (p=0.019), irrespective of rupture status. Aneurysmal blebs were related to the inflow or outflow jet in 88.9% of cases (p<0.001), whilst 11.1% were unrelated. Maximal wall pressure and OSI were not significantly related to bleb location. The bleb region attained a lower WSS following its formation in 96.3% of cases (p<0.001), and this was also lower than the average aneurysm WSS in 86% of cases (p<0.001). Conclusion Cerebral aneurysm blebs generally form at or adjacent to the point of maximal WSS and are aligned with major flow structures. Wall pressure and OSI do not contribute to determining bleb location. The measurement of WSS using CFD models may potentially predict bleb formation and thus improve the assessment of rupture risk in unruptured aneurysms.
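The OSI analysed above is a standard derived quantity; a minimal illustrative sketch (synthetic WSS time series, not the study's 3DRA-based meshes) computes OSI = 0.5·(1 − |time-mean WSS vector| / time-mean |WSS|):

```python
import numpy as np

def oscillatory_shear_index(tau: np.ndarray) -> float:
    """OSI = 0.5 * (1 - |time-mean WSS vector| / time-mean |WSS|).

    tau: (T, 3) array of wall shear stress vectors over one cycle.
    OSI is 0 for unidirectional flow and approaches 0.5 for fully
    sign-reversing (oscillatory) flow.
    """
    mag_of_mean = np.linalg.norm(tau.mean(axis=0))
    mean_of_mag = np.linalg.norm(tau, axis=1).mean()
    return 0.5 * (1.0 - mag_of_mean / mean_of_mag)

steady = np.tile([1.0, 0.0, 0.0], (100, 1))                     # unidirectional
reversing = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]] * 50)  # fully reversing
print(oscillatory_shear_index(steady))     # 0.0
print(oscillatory_shear_index(reversing))  # 0.5
```

The two synthetic series bracket the physical range, which is why OSI is useful for flagging regions of disturbed, direction-changing flow.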
Abstract:
In the recent decision Association for Molecular Pathology v. Myriad Genetics [1], the US Supreme Court held that naturally occurring sequences from human genomic DNA are not patentable subject matter. Only certain complementary DNAs (cDNAs), modified sequences and methods to use sequences are potentially patentable. It is likely that this distinction will hold for all DNA sequences, whether animal, plant or microbial [2]. However, it is not clear whether this means that other naturally occurring informational molecules, such as polypeptides (proteins) or polysaccharides, will also be excluded from patents. The decision underscores a pressing need for precise analysis of patents that disclose and reference genetic sequences, especially in the claims. Similarly, data sets, standards compliance and analytical tools must be improved—in particular, data sets and analytical tools must be made openly accessible—in order to provide a basis for effective decision making and policy setting to support biological innovation. Here, we present a web-based platform that allows such data aggregation, analysis and visualization in an open, shareable facility. To demonstrate the potential for the extension of this platform to global patent jurisdictions, we discuss the results of a global survey of patent offices that shows that much progress is still needed in making these data freely available for aggregation in the first place.
Abstract:
Flexible fixation, or so-called 'biological fixation', has been shown to encourage the formation of fracture callus, leading to better healing outcomes. However, the nature of the relationship between the degree of mechanical stability provided by a flexible fixation and optimal healing outcomes is not fully understood. In this study, we have developed a validated quantitative model to predict how cells in fracture callus might respond to changes in their mechanical microenvironment due to different configurations of the locking compression plate (LCP) in clinical practice, particularly in the early stage of healing. The model predicts that increasing the flexibility of the LCP, by changing the bone–plate distance (BPD) or the plate working length (WL), could enhance interfragmentary strain in the presence of a relatively large gap size (>3 mm). Furthermore, conventional LCP fixation normally results in asymmetric tissue development during the early stage of callus formation, and increasing BPD or WL is insufficient to alleviate this problem.
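The interfragmentary strain the model predicts is, in its simplest form, axial movement across the fracture gap divided by gap size; a minimal sketch with hypothetical numbers (not values from the study):

```python
def interfragmentary_strain(gap_movement_mm: float, gap_size_mm: float) -> float:
    """IFS = axial movement between bone fragments / fracture gap size."""
    return gap_movement_mm / gap_size_mm

# Hypothetical numbers: 0.4 mm of axial movement across a 4 mm gap gives
# 10 % strain; for the same movement, a larger gap lowers the strain.
print(interfragmentary_strain(0.4, 4.0))  # 0.1
```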
Abstract:
Flow-induced shear stress plays an important role in regulating cell growth and distribution in scaffolds. This study sought to correlate wall shear stress and chondrocyte activity for the engineering design of micro-porous osteochondral grafts, based on the hypothesis that it is possible to capture and discriminate between the transmitted force and cell response at the inner irregularities. Unlike common tissue engineering therapies with perfusion bioreactors, in which flow-mediated stress is the controlling parameter, this work assigned the associated stress as a function of porosity to influence in vitro proliferation of chondrocytes. The D-optimality criterion was used to accommodate three pore characteristics for appraisal in a mixed-level fractional design of experiment (DOE): pore size (4 levels), distribution pattern (2 levels) and density (3 levels). Micro-porous scaffolds (n=12) were fabricated according to the DOE using rapid prototyping of an acrylic-based bio-photopolymer. Computational fluid dynamics (CFD) models were created correspondingly and used with an idealised boundary condition and a Newtonian fluid domain to simulate the dynamic microenvironment inside the pores. In vitro conditions were reproduced for the 3D-printed constructs, seeded with high pellet densities of human chondrocytes and cultured for 72 hours. The results showed that cell proliferation was significantly different between the constructs (p<0.05). An inlet fluid velocity of 3×10⁻² mm s⁻¹ and an average shear stress of 5.65×10⁻² Pa corresponded with increased cell proliferation for scaffolds with smaller pores in a hexagonal pattern and lower densities. Although the analytical solution of a Poiseuille flow inside the pores was found insufficient to describe the flow profile, probably due to turbulence induced by the outside flow, it showed that the shear stress would increase with cell growth and decrease with pore size.
This correlation provides a basis for relating the induced stress to chondrocyte activity in order to optimise the microfabrication of engineered cartilaginous constructs.
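The analytical Poiseuille estimate mentioned above can be written down directly: for fully developed laminar flow in a circular pore, the wall shear stress is τ_w = 8μv̄/d, which indeed decreases as pore diameter grows. A sketch with assumed values (the viscosity and pore diameter are illustrative, not from the study):

```python
def poiseuille_wall_shear(mu: float, v_mean: float, diameter: float) -> float:
    """Wall shear stress (Pa) for fully developed laminar flow in a
    circular channel: tau_w = 8 * mu * v_mean / d."""
    return 8.0 * mu * v_mean / diameter

# Assumed values: culture-medium viscosity ~8.2e-4 Pa.s (illustrative),
# the reported inlet velocity 3e-5 m/s, and a hypothetical 0.5 mm pore.
tau = poiseuille_wall_shear(mu=8.2e-4, v_mean=3e-5, diameter=0.5e-3)
print(f"{tau:.2e} Pa")  # 3.94e-04 Pa
```

Doubling the assumed pore diameter halves τ_w, which is the trend the abstract reports from the analytical solution.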
Abstract:
This paper presents some theoretical and interdisciplinary perspectives that might inform the design and development of information and communications technology (ICT) tools to support reflective inquiry during e-learning. The role of why-questioning provides the focus of discussion and is guided by literature that spans critical thinking, inquiry-based and problem-based learning, storytelling, sense-making, and reflective practice, as well as knowledge management, information science, computational linguistics and automated question generation. It is argued that there exists broad scope for the development of ICT scaffolding targeted at supporting reflective inquiry during e-learning. Evidence suggests that wiki-based learning tasks, digital storytelling, and e-portfolio tools demonstrate the value of accommodating reflective practice and explanatory content in supporting learning; however, it is also argued that the scope for ICT tools that directly support why-questioning as a key aspect of reflective inquiry is a frontier ready for development.
Abstract:
In this age of rapidly evolving technology, teachers are encouraged by government, syllabus, school management, and parents to adopt ICTs. Indeed, it is an expectation that teachers will incorporate technologies into their classroom teaching practices to enhance the learning experiences and outcomes of their students. In the science classroom in particular, a subject that traditionally incorporates hands-on experiments and practicals, the integration of modern technologies should be a major feature. Although myriad studies report on technologies that enhance students' learning outcomes in science, there is a dearth of literature on how teachers go about selecting technologies for use in the science classroom. Teachers can feel ill-prepared to assess the range of available choices and might feel pressured and somewhat overwhelmed by the avalanche of new developments thrust before them in marketing literature and teaching journals. The consequences of making bad decisions are costly in terms of money, time and teacher confidence. Additionally, no research to date has identified what technologies science teachers use on a regular basis, or whether some purchased technologies have proven too problematic, preventing their sustained use and possible wider adoption. The primary aim of this study was to provide research-based guidance to teachers to aid their decision-making in choosing technologies for the science classroom. The study unfolded in several phases. The first phase of the project involved survey and interview data from teachers in relation to the technologies they currently use in their science classrooms and the frequency of their use. These data were coded and analysed using the Grounded Theory approach of Corbin and Strauss, resulting in the development of the PETTaL model, which captured the salient factors in the data.
This model incorporated usability theory from the Human-Computer Interaction literature, and education theory and models such as Mishra and Koehler's (2006) TPACK model, where the grounded data indicated these issues. The PETTaL model identifies Power (school management, syllabus, etc.), Environment (classroom/learning setting), Teacher (personal characteristics, experience, epistemology), Technology (usability, versatility, etc.) and Learners (academic ability, diversity, behaviour, etc.) as fields that can impact the use of technology in science classrooms. The PETTaL model was used to create a Predictive Evaluation Tool (PET): a tool designed to assist teachers in choosing technologies, particularly for science teaching and learning. The evolution of the PET was cyclical (employing agile development methodology), involving repeated testing with in-service and pre-service teachers at each iteration, and incorporating their comments in subsequent versions. Once no new suggestions were forthcoming, the PET was tested with eight in-service teachers, and the results showed that the PET outcomes obtained by (experienced) teachers concurred with their instinctive evaluations. They felt the PET would be a valuable tool when considering new technology, and that it would be particularly useful as a means of communicating perceived value between colleagues, and between budget holders and requestors during the acquisition process. It is hoped that the PET could make the tacit knowledge acquired by experienced teachers about technology use in classrooms explicit to novice teachers. Additionally, the PET could be used as a research tool to discover a teacher's professional development needs. Therefore, the outcomes of this study can aid teachers in the process of selecting educationally productive and sustainable new technology for their science classrooms.
This study has produced an instrument for assisting teachers in the decision-making process associated with the use of new technologies for the science classroom. The instrument is generic in that it can be applied to all subject areas. Further, this study has produced a powerful model that extends the TPACK model, which is currently extensively employed to assess teachers' use of technology in the classroom. The PETTaL model, grounded in data from this study, responds to calls in the literature for TPACK's further development. As a theoretical model, PETTaL has the potential to serve as a framework for the development of a teacher's reflective practice (either self-evaluation or critical evaluation of observed teaching practices). Additionally, PETTaL has the potential to aid the formulation of a teacher's personal professional development plan. It will be the basis for further studies in this field.
Abstract:
The use of Mahalanobis squared distance–based novelty detection in statistical damage identification has become increasingly popular in recent years. The merit of the Mahalanobis squared distance–based method is that it is simple and requires low computational effort, enabling the use of a higher-dimensional damage-sensitive feature, which is generally more sensitive to structural changes. Mahalanobis squared distance–based damage identification is also believed to be one of the most suitable methods for modern sensing systems such as wireless sensors. Despite these advantages, the method is rather strict with its input requirement, as it assumes the training data to be multivariate normal, which is not always available, particularly at an early monitoring stage. As a consequence, it may result in an ill-conditioned training model with erroneous novelty detection and damage identification outcomes. To date, there appears to be no study on how to systematically cope with such practical issues, especially in the context of a statistical damage identification problem. To address this need, this article proposes a controlled data generation scheme based upon the Monte Carlo simulation methodology, with the addition of several controlling and evaluation tools to assess the condition of the output data. By evaluating the convergence of the data condition indices, the proposed scheme is able to determine the optimal setup for the data generation process and subsequently avoid unnecessarily excessive data. The efficacy of this scheme is demonstrated via application to benchmark structure data from the field.
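The core of Mahalanobis squared distance (MSD)-based novelty detection described above can be sketched as follows; the two-dimensional feature, synthetic training data and chi-square threshold are illustrative assumptions, not the article's benchmark setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Train on "undamaged" feature vectors only: the method assumes the
# training data represent the baseline (ideally multivariate normal) state.
train = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], size=500)
mean = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

def mahalanobis_sq(x: np.ndarray) -> float:
    """Squared Mahalanobis distance of x from the training distribution."""
    d = x - mean
    return float(d @ cov_inv @ d)

# Under multivariate normality, MSD follows a chi-square distribution with
# dof = feature dimension; the 99th percentile for 2 dof (~9.21) is a
# common novelty threshold.
threshold = 9.21
print(mahalanobis_sq(np.array([0.1, -0.2])) < threshold)  # baseline-like: True
print(mahalanobis_sq(np.array([4.0, 4.0])) > threshold)   # novel/"damaged": True
```

The article's concern is exactly the step hidden in `np.cov` here: with too few or non-normal training samples the covariance estimate becomes ill-conditioned, which is what its controlled Monte Carlo data generation scheme addresses.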
Abstract:
Carbon nanotubes with specific nitrogen doping are proposed for controllable, highly selective, and reversible CO2 capture. Using density functional theory incorporating long-range dispersion corrections, we investigated the adsorption behavior of CO2 on (7,7) single-walled carbon nanotubes (CNTs) with several nitrogen doping configurations and varying charge states. Pyridinic nitrogen incorporation in CNTs is found to induce an increasing CO2 adsorption strength with electron injection, leading to highly selective CO2 adsorption in comparison with N2. This functionality could enable intrinsically reversible CO2 adsorption, as capture/release can be controlled by switching the charge-carrying state of the system on/off. This phenomenon is verified for a number of different models and theoretical methods, with clear ramifications for the possibility of implementation with a broader class of graphene-based materials. A scheme for the implementation of this remarkable reversible electrocatalytic CO2-capture phenomenon is considered.
Abstract:
Background: Appropriate disposition of emergency department (ED) patients with chest pain is dependent on clinical evaluation of risk. A number of chest pain risk stratification tools have been proposed. The aim of this study was to compare the predictive performance for major adverse cardiac events (MACE) of risk assessment tools from the National Heart Foundation of Australia (HFA), the Goldman risk score and the Thrombolysis in Myocardial Infarction risk score (TIMI RS). Methods: This prospective observational study evaluated ED patients aged ≥30 years with non-traumatic chest pain for which no definitive non-ischemic cause was found. Data collected included demographic and clinical information, investigation findings and occurrence of MACE by 30 days. The outcome of interest was the comparative predictive performance of the risk tools for MACE at 30 days, as analyzed by receiver operating characteristic (ROC) curves. Results: Two hundred eighty-one patients were studied; the rate of MACE was 14.1%. The area under the curve (AUC) of the HFA, TIMI RS and Goldman tools for the endpoint of MACE was 0.54, 0.71 and 0.67, respectively, with the difference between the tools in predictive ability for MACE being highly significant [χ²(3) = 67.21, N = 276, p < 0.0001]. Conclusion: The TIMI RS and Goldman tools performed better than the HFA in this undifferentiated ED chest pain population, but selection of cutoffs balancing sensitivity and specificity was problematic. There is an urgent need for validated risk stratification tools specific to the ED chest pain population.
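The AUC values compared above are equivalent to a Mann-Whitney statistic: the probability that a randomly chosen MACE case receives a higher risk score than a randomly chosen event-free patient. A minimal sketch with made-up scores (not the study's data):

```python
def auc_from_scores(pos, neg):
    """AUC as the Mann-Whitney probability that a random positive case
    scores above a random negative case; ties count as 0.5."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

mace = [5, 4, 4, 3]        # hypothetical risk scores for MACE cases
no_mace = [2, 3, 1, 0, 2]  # hypothetical scores for event-free patients
print(auc_from_scores(mace, no_mace))  # 0.975
```

An AUC of 0.5 (close to the HFA tool's 0.54) means the score ranks a random case above a random non-case no better than chance, which is why the study judges that tool's discrimination poor.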
Abstract:
Capturing and sequestering carbon dioxide (CO2) can provide a route to partial mitigation of climate change associated with anthropogenic CO2 emissions. Here we report a comprehensive theoretical study of CO2 adsorption on two phases of boron, α-B12 and γ-B28. The theoretical results demonstrate that the electron deficient boron materials, such as α-B12 and γ-B28, can bond strongly with CO2 due to Lewis acid-base interactions because the electron density is higher on their surfaces. In order to evaluate the capacity of these boron materials for CO2 capture, we also performed calculations with various degrees of CO2 coverage. The computational results indicate CO2 capture on the boron phases is a kinetically and thermodynamically feasible process, and therefore from this perspective these boron materials are predicted to be good candidates for CO2 capture.
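The coverage-dependent capacity evaluation described above typically rests on an adsorption energy of the form E_ads = (E(surface + n CO2) − E(surface) − n·E(CO2)) / n; a sketch with made-up total energies (illustrative numbers, not the study's DFT values):

```python
def adsorption_energy_per_co2(e_total: float, e_surface: float,
                              e_co2: float, n: int) -> float:
    """E_ads = (E(surface + n CO2) - E(surface) - n * E(CO2)) / n, in eV.
    Negative values mean adsorption is energetically favourable."""
    return (e_total - e_surface - n * e_co2) / n

# Made-up DFT total energies (eV), purely for illustration.
e_ads = adsorption_energy_per_co2(e_total=-1052.40, e_surface=-1029.10,
                                  e_co2=-22.95, n=1)
print(f"{e_ads:.2f} eV")  # -0.35 eV: bound
```

Repeating the calculation at increasing n is how coverage dependence is probed: if E_ads stays negative as n grows, the surface can accommodate multiple CO2 molecules.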
Abstract:
This chapter was developed as part of the ‘People, communities and economies of the Lake Eyre Basin’ project. It has been written for communities, government agencies and interface organisations involved in natural resource management (NRM) in the Lake Eyre Basin (LEB). Its purpose is to identify the key factors for successful community engagement processes relevant to the LEB and present tools and principles for successful engagement processes. The term ‘interface organisation’ is used here to refer to the diverse range of local and regional organisations (such as Catchment Committees or NRM Regional Bodies) that serve as linkages, or translators, between local communities and broader Australian and State Governments. The importance of fostering and harnessing effective processes of community engagement has been identified as crucial to building a prosperous future for rural and remote regions in Australia. The chapter presents an overview of the literature on successful community engagement processes for NRM, as well as an overview of the current NRM arrangements in the LEB. The main part of the chapter presents findings of the series of interviews conducted with the government liaison officers representing both state and federal organisations who are responsible for coordinating and facilitating regional NRM in the LEB, and with the members of communities of the LEB.
Abstract:
The reaction of the aromatic distonic peroxyl radical cations N-methyl pyridinium-4-peroxyl (PyrOO•+) and 4-(N,N,N-trimethyl ammonium)phenyl peroxyl (AnOO•+) with symmetrical dialkyl alkynes 10 a–c was studied in the gas phase by mass spectrometry. PyrOO•+ and AnOO•+ were produced through reaction of the respective distonic aryl radical cations Pyr•+ and An•+ with oxygen, O2. For the reaction of Pyr•+ with O2, an absolute rate coefficient of k1 = 7.1×10⁻¹² cm³ molecule⁻¹ s⁻¹ and a collision efficiency of 1.2 % were determined at 298 K. The strongly electrophilic PyrOO•+ reacts with 3-hexyne and 4-octyne with absolute rate coefficients of k(hexyne) = 1.5×10⁻¹⁰ cm³ molecule⁻¹ s⁻¹ and k(octyne) = 2.8×10⁻¹⁰ cm³ molecule⁻¹ s⁻¹, respectively, at 298 K. The reaction of both PyrOO•+ and AnOO•+ proceeds by radical addition to the alkyne, whereas propargylic hydrogen abstraction was observed as a very minor pathway only in the reactions involving PyrOO•+. A major reaction pathway of the vinyl radicals 11 formed upon PyrOO•+ addition to the alkynes involves γ-fragmentation of the peroxy O–O bond and formation of PyrO•+. The PyrO•+ is rapidly trapped by intermolecular hydrogen abstraction, presumably from a propargylic methylene group in the alkyne. The reaction of the less electrophilic AnOO•+ with alkynes is considerably slower and resulted in formation of AnO•+ as the only charged product. These findings suggest that electrophilic aromatic peroxyl radicals act as oxygen atom donors, which can be used to generate α-oxo carbenes 13 (or isomeric species) from alkynes in a single step. Besides γ-fragmentation, a number of competing unimolecular dissociative reactions also occur in vinyl radicals 11.
The potential energy diagrams of these reactions were explored with density functional theory and ab initio methods, which enabled identification of the chemical structures of the most important products.
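The collision efficiency quoted above is simply the observed rate coefficient divided by a theoretical capture rate. A sketch: assuming a Langevin-type capture rate of about 5.9×10⁻¹⁰ cm³ molecule⁻¹ s⁻¹ for Pyr•+ + O2 (an assumed value, not stated in the abstract), the quoted k1 reproduces the ~1.2 % efficiency:

```python
def collision_efficiency_percent(k_obs: float, k_capture: float) -> float:
    """Percentage of ion-molecule collisions that lead to reaction."""
    return 100.0 * k_obs / k_capture

# k1 from the abstract; the capture rate below is an assumed Langevin-type
# value (typical order for ion + O2), not a number given in the abstract.
eff = collision_efficiency_percent(k_obs=7.1e-12, k_capture=5.9e-10)
print(f"{eff:.1f} %")  # 1.2 %
```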
Abstract:
Proton-bound dimers consisting of two glycerophospholipids with different headgroups were prepared using negative ion electrospray ionization and dissociated in a triple quadrupole mass spectrometer. Analysis of the tandem mass spectra of the dimers using the kinetic method provides, for the first time, an order of acidity for the phospholipid classes in the gas phase of PE < PA << PG < PS < PI. Hybrid density functional calculations on model phospholipids were used to predict the absolute deprotonation enthalpies of the phospholipid classes from isodesmic proton transfer reactions with phosphoric acid. The computational data largely support the experimental acidity trend, with the exception of the relative acidity ranking of the two most acidic phospholipid species. Possible causes of the discrepancy between experiment and theory are discussed and the experimental trend is recommended. The sequence of gas phase acidities for the phospholipid headgroups is found to (1) have little correlation with the relative ionization efficiencies of the phospholipid classes observed in the negative ion electrospray process, and (2) correlate well with fragmentation trends observed upon collisional activation of phospholipid [M − H]⁻ anions. © 2005 American Society for Mass Spectrometry.
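The kinetic method used above infers relative acidities from the branching ratio of the dimer's two fragment channels, via ln(I1/I2) ≈ Δ(ΔH_acid) / (R·T_eff). A sketch with a hypothetical branching ratio and effective temperature (neither taken from the paper):

```python
import math

R = 8.314  # gas constant, J mol-1 K-1

def delta_acidity_kj(ln_ratio: float, t_eff: float) -> float:
    """Simple kinetic method: ln(I1/I2) ~ delta(DeltaH_acid) / (R * T_eff).
    Returns the apparent difference in deprotonation enthalpy, kJ/mol."""
    return ln_ratio * R * t_eff / 1000.0

# Hypothetical 3:1 fragment-ion branching ratio and an assumed effective
# temperature of 450 K for the activated dimer.
print(delta_acidity_kj(math.log(3.0), t_eff=450.0))  # ~4.1 kJ/mol
```

Applying this across overlapping pairs of headgroups is what yields an acidity ladder such as the PE < PA << PG < PS < PI ordering reported above.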