Abstract:
A precise representation of the spatial distribution of hydrophobicity, hydrophilicity and charges on the molecular surface of proteins is critical for understanding their interactions with small molecules and larger systems. Hydrophobicity is rarely represented at atom level, as this property is generally assigned to residues. A new methodology for deriving atomic hydrophobicity from any amino acid-based hydrophobicity scale was used to derive 8 sets of atomic hydrophobicities, one of which was used to generate the molecular surfaces for 35 proteins with convex structures, 5 of which, i.e., lysozyme, ribonuclease, hemoglobin, albumin and IgG, were analyzed in more detail. Sets of molecular surfaces of the model proteins were constructed using spherical probes with increasingly large radii, from 1.4 to 20 Å, followed by the quantification of (i) the surface hydrophobicity; (ii) the respective molecular surface areas, i.e., total, hydrophilic and hydrophobic area; and (iii) their relative densities, i.e., divided by the total molecular area, or specific densities, i.e., divided by the property-specific area. Compared with the amino acid-based formalism, the atom-level description reveals molecular surfaces which (i) present approximately two times more hydrophilic area, with (ii) less extended but 2 to 5 times more intense hydrophilic patches, and (iii) 3 to 20 times more extended hydrophobic areas. The hydrophobic areas are also approximately 2 times more hydrophobicity-intense. This more pronounced, "leopard skin"-like design of the protein molecular surface was confirmed by comparing the results for a restricted set of homologous proteins, i.e., hemoglobins diverging by only one residue (Trp37).
These results suggest that the representation of hydrophobicity on the protein molecular surfaces at atom-level resolution, coupled with the probing of the molecular surface at different geometric resolutions, can capture processes that are otherwise obscured to the amino acid-based formalism.
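The area and density measures described above reduce to simple arithmetic over per-atom surface patches. The sketch below is illustrative only (not the authors' code); the patch areas and hydrophobicity values are hypothetical, with positive values taken as hydrophobic.

```python
# Illustrative sketch: quantify hydrophobic/hydrophilic surface areas and
# the relative vs. specific hydrophobicity densities from per-atom patches.
# Patch data (area in Å², atomic hydrophobicity) are hypothetical.

def surface_stats(patches):
    """patches: iterable of (area, hydrophobicity); h > 0 means hydrophobic."""
    total_area = sum(a for a, h in patches)
    phob_area = sum(a for a, h in patches if h > 0)
    phil_area = total_area - phob_area
    phob_sum = sum(a * h for a, h in patches if h > 0)
    return {
        "total_area": total_area,
        "hydrophobic_area": phob_area,
        "hydrophilic_area": phil_area,
        # relative density: property total divided by the whole surface area
        "rel_phob_density": phob_sum / total_area,
        # specific density: property total divided by its own (hydrophobic) area
        "spec_phob_density": phob_sum / phob_area if phob_area else 0.0,
    }

stats = surface_stats([(12.0, 0.5), (8.0, -0.3), (5.0, 0.2)])
```

In this formalism, repeating the computation for probe radii from 1.4 to 20 Å would yield the multi-resolution profiles the abstract refers to.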
Abstract:
description and analysis of geographically indexed health data with respect to demographic, environmental, behavioural, socioeconomic, genetic, and infectious risk factors (Elliott and Wartenberg 2004). Disease maps can be useful for estimating relative risk; ecological analyses, incorporating area and/or individual-level covariates; or cluster analyses (Lawson 2009). As aggregated data are often more readily available, one common method of mapping disease is to aggregate the counts of disease at some geographical areal level, and present them as choropleth maps (Devesa et al. 1999; Population Health Division 2006). Therefore, this chapter will focus exclusively on methods appropriate for areal data...
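The aggregation step behind a choropleth disease map can be sketched in a few lines: the quantity typically shaded per area is a standardised incidence ratio, observed counts divided by the counts expected under the overall rate. This is a minimal sketch of that calculation, not code from the chapter; the area names and counts are hypothetical.

```python
# Hedged sketch of areal aggregation for a choropleth disease map:
# standardised incidence ratio (SIR) = observed / expected counts per area.
# Counts and populations below are hypothetical.

observed = {"A": 12, "B": 5, "C": 20}      # disease counts per area
population = {"A": 1000, "B": 800, "C": 1500}

# Expected counts assume the overall study-region rate applies everywhere.
overall_rate = sum(observed.values()) / sum(population.values())
expected = {k: population[k] * overall_rate for k in population}

sir = {k: observed[k] / expected[k] for k in observed}  # >1 means excess risk
```

A mapping library would then shade each areal unit by its SIR value.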
Abstract:
Species distribution modelling (SDM) typically analyses species’ presence together with some form of absence information. Ideally absences comprise observations or are inferred from comprehensive sampling. When such information is not available, pseudo-absences are often generated from the background locations within the study region of interest containing the presences, or else absence is implied through the comparison of presences to the whole study region, e.g. as is the case in Maximum Entropy (MaxEnt) or Poisson point process modelling. However, the choice of which absence information to include can be both challenging and highly influential on SDM predictions (e.g. Oksanen and Minchin, 2002). In practice, the use of pseudo- or implied absences often leads to an imbalance where absences far outnumber presences. This leaves the analysis highly susceptible to ‘naughty noughts’: absences that occur beyond the envelope of the species, which can exert strong influence on the model and its predictions (Austin and Meyers, 1996). Also known as ‘excess zeros’, naughty noughts can be estimated via an overall proportion in simple hurdle or mixture models (Martin et al., 2005). However, absences, especially those that occur beyond the species envelope, can often be more diverse than presences. Here we consider an extension to excess zero models. The two-stage approach first exploits the compartmentalisation provided by classification trees (CTs) (as in O’Leary, 2008) to identify multiple sources of naughty noughts and simultaneously delineate several species envelopes. Then SDMs can be fit separately within each envelope, and for this stage, we examine both CTs (as in Falk et al., 2014) and the popular MaxEnt (Elith et al., 2006). We introduce a wider range of model performance measures to improve treatment of naughty noughts in SDM.
We retain an overall measure of model performance, the area under the curve (AUC) of the receiver operating characteristic (ROC) curve, but focus on its constituent measures of false negative rate (FNR) and false positive rate (FPR), and how these relate to the threshold in the predicted probability of presence that delimits predicted presence from absence. We also propose error rates more relevant to users of predictions: the false omission rate (FOR), the chance that a predicted absence corresponds to (and hence wastes) an observed presence, and the false discovery rate (FDR), reflecting those predicted (or potential) presences that correspond to absence. A high FDR may be desirable since it could help target future search efforts, whereas zero or low FOR is desirable since it indicates none of the (often valuable) presences have been ignored in the SDM. For illustration, we chose Bradypus variegatus, a species that has previously been published as an exemplar species for MaxEnt, proposed by Phillips et al. (2006). We used CTs to increasingly refine the species envelope, starting with the whole study region (E0) and eliminating more and more potential naughty noughts (E1–E3). When combined with an SDM fit within the species envelope, the best CT SDM had similar AUC and FPR to the best MaxEnt SDM, but otherwise performed better. The FNR and FOR were greatly reduced, suggesting that CTs handle absences better. Interestingly, MaxEnt predictions showed low discriminatory performance, with the most common predicted probability of presence being in the same range (0.00-0.20) for both true absences and presences. In summary, this example shows that SDMs can be improved by introducing an initial hurdle to identify naughty noughts and partition the envelope before applying SDMs. This improvement was barely detectable via AUC and FPR yet visible in FOR, FNR, and the comparison of the distributions of predicted probability of presence for presences and absences.
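The four error rates above are all ratios over the cells of a binary confusion matrix. As a minimal sketch (the counts are hypothetical, not the paper's results):

```python
# Error rates discussed above, from a binary confusion matrix.
# tp/fp/fn/tn are hypothetical counts of predicted vs. observed presence.

def error_rates(tp, fp, fn, tn):
    return {
        "FNR": fn / (tp + fn),  # missed presences, among observed presences
        "FPR": fp / (fp + tn),  # false alarms, among observed absences
        "FOR": fn / (fn + tn),  # observed presences among predicted absences
        "FDR": fp / (tp + fp),  # observed absences among predicted presences
    }

rates = error_rates(tp=40, fp=10, fn=5, tn=45)
```

Note the asymmetry the abstract exploits: FNR and FPR condition on the observed class, while FOR and FDR condition on the predicted class, which is why the latter pair speaks more directly to users of the predictions.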
Abstract:
For robots to operate in human environments they must be able to make their own maps because it is unrealistic to expect a user to enter a map into the robot’s memory; existing floorplans are often incorrect; and human environments tend to change. Traditionally robots have used sonar, infra-red or laser range finders to perform the mapping task. Digital cameras have become very cheap in recent years and they have opened up new possibilities as a sensor for robot perception. Any robot that must interact with humans can reasonably be expected to have a camera for tasks such as face recognition, so it makes sense to also use the camera for navigation. Cameras have advantages over other sensors such as colour information (not available with any other sensor), better immunity to noise (compared to sonar), and not being restricted to operating in a plane (like laser range finders). However, there are disadvantages too, with the principal one being the effect of perspective. This research investigated ways to use a single colour camera as a range sensor to guide an autonomous robot and allow it to build a map of its environment, a process referred to as Simultaneous Localization and Mapping (SLAM). An experimental system was built using a robot controlled via a wireless network connection. Using the on-board camera as the only sensor, the robot successfully explored and mapped indoor office environments. The quality of the resulting maps is comparable to those that have been reported in the literature for sonar or infra-red sensors. Although the maps are not as accurate as ones created with a laser range finder, the solution using a camera is significantly cheaper and is more appropriate for toys and early domestic robots.
Abstract:
This PhD project has expanded the knowledge in the area of profluorescent nitroxides with regard to the synthesis and characterisation of novel profluorescent nitroxide probes as well as physical characterisation of the probe molecules in various polymer/physical environments. The synthesis of the first example of an azaphenalene-based fused aromatic nitroxide, TMAO (1,1,3,3-tetramethyl-2,3-dihydro-2-azaphenalen-2-yloxyl), was described. This novel nitroxide possesses some of the structural rigidity of the isoindoline class of nitroxides, as well as some properties akin to TEMPO nitroxides. Additionally, the integral aromatic ring imparts fluorescence that is switched on by radical scavenging reactions of the nitroxide, which makes it a sensitive probe for polymer degradation. In addition to the parent TMAO, 5 other azaphenalene derivatives were successfully synthesised. This new class of nitroxide was expected to have interesting redox properties when the structure was investigated by high-level ab initio molecular orbital theory. This was expected to have implications of biological relevance, as the calculated redox potentials for the azaphenalene ring class would make them potent antioxidant compounds. The redox potentials of 25 cyclic nitroxides from four different structural classes (pyrroline, piperidine, isoindoline and azaphenalene) were determined by cyclic voltammetry in acetonitrile. It was shown that potentials related to the one-electron processes of the nitroxide were influenced by the type of ring system, ring substituents or groups surrounding the moiety. Favourable comparisons were found between theoretical and experimental potentials for the pyrroline, piperidine and isoindoline ring classes. Substituents on these ring classes were correctly calculated to have a small yet predictable effect on the potentials. The redox potentials of the azaphenalene ring class were underestimated by the calculations in all cases by at least a factor of two.
This is believed to be due to another process influencing the redox potentials of the azaphenalene ring class which is not taken into account by the theoretical model. It was also possible to demonstrate the use of both azaphenalene and isoindoline nitroxides as additives for monitoring radical-mediated damage that occurs in polypropylene as well as in more commercially relevant polyester resins. Polymer samples doped with nitroxide were exposed to both thermo- and photo-oxidative conditions, with all nitroxides showing a protective effect. It was found that isoindoline nitroxides were able to indicate radical formation in polypropylene aged at elevated temperatures via fluorescence build-up. The azaphenalene nitroxide TMAO showed no such build-up of fluorescence. This was believed to be due to the more labile bond between the nitroxide and the macromolecule, and the protection may occur through a classical Denisov cycle, as is expected for commercially available HAS units. Finally, a new profluorescent dinitroxide, BTMIOA (9,10-bis(1,1,3,3-tetramethylisoindolin-2-yloxyl-5-yl)anthracene), was synthesised and shown to be a powerful probe for detecting changes during the initial stages of thermo-oxidative degradation of polypropylene. This probe, which contains a 9,10-diphenylanthracene core linked to two nitroxides, possesses strongly suppressed fluorescence due to quenching by the two nitroxide groups. This molecule also showed the greatest protective effect on thermo-oxidatively aged polypropylene. Most importantly, BTMIOA was found to be a valuable tool for imaging and mapping free-radical generation in polypropylene using fluorescence microscopy.
Abstract:
This chapter reports on Australian and Swedish experiences in the iterative design, development, and ongoing use of interactive educational systems we call ‘Media Maps.’ Like maps in general, Media Maps are usefully understood as complex cultural technologies; that is, they are not only physical objects, tools and artefacts, but also information creation and distribution technologies, the use and development of which are embedded in systems of knowledge and social meaning. Drawing upon Australian and Swedish experiences with one Media Map technology, this chapter illustrates this three-layered approach to the development of media mapping. It shows how media mapping is being used to create authentic learning experiences for students preparing for work in the rapidly evolving media and communication industries. We also contextualise media mapping as a response to various challenges for curriculum and learning design in Media and Communication Studies that arise from shifts in tertiary education policy in a global knowledge economy.
Abstract:
With the rising levels of CO2 in the atmosphere, low-emission technologies with carbon dioxide capture and storage (CCS) provide one option for transforming the global energy infrastructure into a more environmentally and climate-sustainable system. However, like many technology innovations, the acceptance of CCS carries a social risk. This article presents the findings of an engagement process using facilitated workshops conducted in two communities in rural Queensland, Australia, where a demonstration project for IGCC with CCS has been announced. The findings demonstrate that workshop participants were concerned about climate change and wanted leadership from government and industry to address the issue. After the workshops, participants reported increased knowledge and more positive attitudes towards CCS, expressing support for the demonstration project to continue in their local area. The process developed is one that could be utilized around the world to successfully engage communities on low carbon emission technology options.
Abstract:
As regulators, governments are often criticised for over‐regulating industries. This research project seeks to examine the regulation affecting the construction industry in a federal system of government. It uses a case study of the Australian system of government to focus on the question of the implications of regulation in the construction industry. Having established the extent of the regulatory environment, the research project considers the costs associated with this environment. Consequently, ways in which the regulatory burden on industry can be reduced are evaluated. The Construction Industry Business Environment project is working with industry and government agencies to improve regulatory harmonisation in Australia, and thereby reduce the regulatory burden on industry. It is found that while taxation and compliance costs are not likely to be reduced in the short term, costs arising from having to adapt to variation between regulatory regimes in a federal system of government seem the most promising target for reducing regulatory costs. Identifying and reducing adaptive costs across jurisdictions is argued to present a novel approach to regulatory reform.
Abstract:
There are currently a number of issues of great importance affecting universities and the way in which their programs are now offered. Many issues are largely being driven top-down and impact both at a university-wide and at an individual discipline level. This paper provides a brief history of cartography and digital mapping education at the Queensland University of Technology (QUT). It also provides an overview of curriculum mapping and presents some interesting findings from the program review process. Further, this review process has triggered discussion and action for the review, mapping and embedding of graduate attributes within the spatial science major program. Some form of practice-based learning is expected in vocationally oriented degrees that lead to professional accreditation and is generally regarded as good learning exposure. With the restructure of academic programs across the Faculty of Built Environment and Engineering in 2006, spatial science and surveying students now undertake a formal work integrated learning unit. There is little doubt that students acquire the skills of their discipline (mapping science, spatial) by being immersed in the industry culture: learning how to process information and solve real-world problems within context. The broad theme of where geo-spatial mapping skills are embedded in this broad-based tertiary education course is examined, with some focused discussion on the learning objectives, outcomes and examples of some student learning experiences.
Abstract:
Network crawling and visualisation tools and other data-mining systems are now advanced enough to provide significant new impulses to the study of cultural activity on the Web. A growing range of studies focus on communicative processes in the blogosphere – including, for example, Adamic & Glance’s 2005 map of political allegiances during the 2004 U.S. presidential election and Kelly & Etling’s 2008 study of blogging practices in Iran. There remain a number of significant shortcomings in the application of such tools and methodologies to the study of blogging; these relate both to how the content of blogs is analysed, and to how the network maps resulting from such studies are understood. Our project highlights and addresses such shortcomings.