952 results for Plant data
Abstract:
Queensland University of Technology (QUT) completed an Australian National Data Service (ANDS) funded "Seeding the Commons Project" to contribute metadata to Research Data Australia. The project employed two Research Data Librarians from October 2009 through to July 2010. Technical support for the project was provided by QUT's High Performance Computing and Research Support Specialists.

The project identified and described QUT's category 1 (ARC/NHMRC) research datasets. Metadata for the research datasets was stored in QUT's Research Data Repository (Arcitecta Mediaflux). Metadata suitable for inclusion in Research Data Australia was made available to the Australian Research Data Commons (ARDC) in RIF-CS format.

Several workflows and processes were developed during the project. 195 data interviews took place in connection with 424 separate research activities, resulting in the identification of 492 datasets.

The project had a high level of technical support from the QUT High Performance Computing and Research Support Specialists, who developed the Research Data Librarian interface to the data repository. This interface enabled manual entry of interview data and dataset metadata, and the creation of relationships between repository objects. The Research Data Librarians mapped the QUT metadata repository fields to RIF-CS, and an application was created by the HPC and Research Support Specialists to generate RIF-CS files for harvest by the ARDC.

This poster will focus on the workflows and processes established for the project, including:

• Interview processes and instruments
• Data ingest from existing systems (including mapping to RIF-CS)
• Data entry and the Data Librarian interface to Mediaflux
• Verification processes
• Mapping and creation of RIF-CS for the ARDC
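The field-to-RIF-CS mapping described above can be sketched as a small transform producing the XML that a harvester would collect. This is a minimal, hypothetical illustration: the record fields, group name, and key format below are assumptions, not QUT's actual repository schema.

```python
import xml.etree.ElementTree as ET

# RIF-CS registryObjects namespace (ANDS/ARDC schema).
RIFCS_NS = "http://ands.org.au/standards/rif-cs/registryObjects"

def dataset_to_rifcs(record):
    """Map a (hypothetical) repository record to a minimal RIF-CS collection."""
    ET.register_namespace("", RIFCS_NS)
    root = ET.Element(f"{{{RIFCS_NS}}}registryObjects")
    obj = ET.SubElement(root, f"{{{RIFCS_NS}}}registryObject", group="QUT")
    key = ET.SubElement(obj, f"{{{RIFCS_NS}}}key")
    key.text = record["key"]
    src = ET.SubElement(obj, f"{{{RIFCS_NS}}}originatingSource")
    src.text = record["source"]
    coll = ET.SubElement(obj, f"{{{RIFCS_NS}}}collection", type="dataset")
    name = ET.SubElement(coll, f"{{{RIFCS_NS}}}name", type="primary")
    part = ET.SubElement(name, f"{{{RIFCS_NS}}}namePart")
    part.text = record["title"]
    return ET.tostring(root, encoding="unicode")

# Hypothetical record; real keys and sources would come from Mediaflux.
xml_out = dataset_to_rifcs({"key": "qut.edu.au/dataset/001",
                            "source": "https://eprints.qut.edu.au",
                            "title": "Example dataset"})
```

A generator like this would be run over all verified records and the resulting files exposed at an OAI-PMH endpoint for harvest.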
Contextualizing the tensions and weaknesses of information privacy and data breach notification laws
Abstract:
Data breach notification laws have documented numerous failures relating to the protection of personal information that have blighted both corporate and governmental institutions. There are obvious parallels between data breach notification and information privacy law, as both involve the protection of personal information. However, a closer examination of both laws reveals conceptual differences that give rise to vertical tensions between each law and shared horizontal weaknesses within both laws. Tensions emanate from conflicting approaches to the implementation of information privacy law that result in different regimes and different types of protections. Shared weaknesses arise from an overt focus on specified types of personal information, which results in 'one size fits all' legal remedies. The author contends that a greater contextual approach, one which promotes the importance of social context, is required, and highlights the effect that contextualization could have on both laws.
Abstract:
Mandatory data breach notification has become a matter of increasing concern for law reformers. In Australia, this issue was recently addressed as part of a comprehensive review of privacy law conducted by the Australian Law Reform Commission (ALRC) which recommended a uniform national regime for protecting personal information applicable to both the public and private sectors. As in all federal systems, the distribution of powers between central and state governments poses problems for national consistency. In the authors’ view, a uniform approach to mandatory data breach notification has greater merit than a ‘jurisdiction specific’ approach epitomized by US state-based laws. The US response has given rise to unnecessary overlaps and inefficiencies as demonstrated by a review of different notification triggers and encryption safe harbors. Reviewing the US response, the authors conclude that a uniform approach to data breach notification is inherently more efficient.
Abstract:
Most information retrieval (IR) models treat the presence of a term within a document as an indication that the document is somehow "about" that term; they do not take into account when a term is explicitly negated. Medical data, by its nature, contains a high frequency of negated terms - e.g. "review of systems showed no chest pain or shortness of breath". This paper presents a study of the effects of negation on information retrieval. We present a number of experiments to determine whether negation has a significant negative effect on IR performance and whether language models that take negation into account might improve performance. We use a collection of real medical records as our test corpus. Our findings are that negation has some effect on system performance, but this will likely be confined to domains such as medical data where negation is prevalent.
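The kind of negation handling investigated here can be illustrated with a minimal cue-and-scope filter in the style of NegEx. The cue list and window size below are assumptions for the sketch; the paper's actual negation-aware language models are not specified.

```python
import re

# Hypothetical cue list and scope window; real systems (e.g. NegEx) are richer.
NEGATION_CUES = {"no", "not", "without", "denies", "denied"}
SCOPE = 6  # a cue negates up to the next six tokens

def negated_terms(text):
    """Return the set of tokens that fall inside a negation scope."""
    tokens = re.findall(r"[a-z]+", text.lower())
    negated, window = set(), 0
    for tok in tokens:
        if tok in NEGATION_CUES:
            window = SCOPE      # open a new negation scope
        elif window > 0:
            negated.add(tok)    # token is within the current scope
            window -= 1
    return negated

neg = negated_terms("review of systems showed no chest pain or shortness of breath")
```

An IR model could then index negated occurrences separately (e.g. as `NOT_chest`) so that a query for "chest pain" does not match records that explicitly rule it out.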
Abstract:
In a seminal data mining article, Leo Breiman [1] argued that to develop effective predictive classification and regression models, we need to move away from sole dependency on statistical algorithms and embrace a wider toolkit of modeling algorithms that includes data mining procedures. Nevertheless, many researchers still rely solely on statistical procedures when undertaking data modeling tasks; this sole reliance has led to the development of irrelevant theory and questionable research conclusions ([1], p.199). We will outline initiatives that the HPC & Research Support group is undertaking to engage researchers with data mining tools and techniques, including a new range of seminars, workshops, and one-on-one consultations covering data mining algorithms, the relationship between data mining and the research cycle, and limitations and problems with these new algorithms. Organisational limitations and restrictions on these initiatives are also discussed.
Abstract:
This paper investigates how to interface the wireless application protocol (WAP) architecture to a SCADA system running the distributed network protocol (DNP) in a power process plant. DNP is a well-developed protocol for supervisory control and data acquisition (SCADA) systems, but the system control centre and remote terminal units (RTUs) are presently connected through a local area network. The conditions in a process plant are harsh and the site is remote. Resources for data communication are difficult to obtain under these conditions; thus, wireless communication through a mobile phone network is practical and efficient in a process plant environment. The mobile communication industry and the public have a strong interest in WAP technology in mobile phone networks, and the WAP application programming interface (API) in power industry applications is one area that requires extensive investigation.
Abstract:
The acoustic emission (AE) technique is one of the popular diagnostic techniques used for structural health monitoring of mechanical, aerospace and civil structures, but several challenges still exist in its successful application. This paper explores various tools for analysis of recorded AE data to address two primary challenges: discriminating spurious signals from genuine signals, and devising ways to quantify damage levels.
Abstract:
Cell invasion involves a population of cells which are motile and proliferative. Traditional discrete models of proliferation involve agents depositing daughter agents on nearest-neighbor lattice sites. Motivated by time-lapse images of cell invasion, we propose and analyze two new discrete proliferation models in the context of an exclusion process with an undirected motility mechanism. These discrete models are related to a family of reaction-diffusion equations and can be used to make predictions over a range of scales appropriate for interpreting experimental data. The new proliferation mechanisms are biologically relevant and mathematically convenient as the continuum-discrete relationship is more robust for the new proliferation mechanisms relative to traditional approaches.
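A proliferating exclusion process of the kind described can be sketched on a one-dimensional lattice. This minimal illustration uses the traditional nearest-neighbor deposition mechanism mentioned in the abstract; lattice size, rates, and step count are arbitrary choices, not the paper's parameters.

```python
import random

def simulate(L=100, N=20, steps=200, pm=1.0, pp=0.05, seed=1):
    """1-D exclusion process: agents hop to empty nearest-neighbour sites;
    with probability pp per step an agent deposits a daughter on an empty
    nearest-neighbour site (the 'traditional' proliferation mechanism).
    Returns the final number of agents."""
    rng = random.Random(seed)
    occ = [False] * L
    agents = rng.sample(range(L), N)
    for i in agents:
        occ[i] = True
    for _ in range(steps):
        for idx in range(len(agents)):  # daughters born this step wait a step
            i = agents[idx]
            # motility: attempt a hop left or right, blocked if occupied
            if rng.random() < pm:
                j = (i + rng.choice((-1, 1))) % L  # periodic boundary
                if not occ[j]:
                    occ[i], occ[j] = False, True
                    agents[idx] = j
                    i = j
            # proliferation: deposit a daughter on an empty neighbour
            if rng.random() < pp:
                j = (i + rng.choice((-1, 1))) % L
                if not occ[j]:
                    occ[j] = True
                    agents.append(j)
    return len(agents)

final_count = simulate()
```

Because deposition is blocked when both neighbors are occupied, growth saturates as the lattice fills, which is the crowding effect the corresponding reaction-diffusion (logistic-type) continuum limit captures.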
Abstract:
Gen Y beginning teachers have an edge: they’ve grown up in an era of educational accountability, so when their students have to sit a high-stakes test, they can relate.
Abstract:
The Arabidopsis thaliana NPR1 has been shown to be a key regulator of gene expression during the onset of a plant disease-resistance response known as systemic acquired resistance. The npr1 mutant plants fail to respond to systemic acquired resistance-inducing signals such as salicylic acid (SA), or express SA-induced pathogenesis-related (PR) genes. Using NPR1 as bait in a yeast two-hybrid screen, we identified a subclass of transcription factors in the basic leucine zipper protein family (AHBP-1b and TGA6) and showed that they interact specifically in yeast and in vitro with NPR1. Point mutations that abolish the NPR1 function in A. thaliana also impair the interactions between NPR1 and the transcription factors in the yeast two-hybrid assay. Furthermore, a gel mobility shift assay showed that the purified transcription factor protein, AHBP-1b, binds specifically to an SA-responsive promoter element of the A. thaliana PR-1 gene. These data suggest that NPR1 may regulate PR-1 gene expression by interacting with a subclass of basic leucine zipper protein transcription factors.
Abstract:
In plant cells, myosin is believed to be the molecular motor responsible for actin-based motility processes such as cytoplasmic streaming and directed vesicle transport. In an effort to characterize plant myosin, a cDNA encoding a myosin heavy chain was isolated from Arabidopsis thaliana. The predicted product of the MYA1 gene is 173 kDa and is structurally similar to the class V myosins. It is composed of the highly conserved NH2-terminal "head" domain, a putative calmodulin-binding "neck" domain, an alpha-helical coiled-coil domain, and a COOH-terminal domain. Northern blot analysis shows that the Arabidopsis MYA1 gene is expressed in all the major plant tissues (flower, leaf, root, and stem). We suggest that the MYA1 myosin may be involved in a general intracellular transport process in plant cells.
Abstract:
A simple mathematical model is presented to describe the cell separation process that plants undertake in order to deliberately shed organs. The focus here is on modelling the production of the enzyme polygalacturonase, which breaks down pectin that provides natural cell-to-cell adhesion in the localised abscission zone. A coupled system of three ordinary differential equations is given for a single cell, and then extended to hold for a layer of cells in the abscission zone. Simple observations are made based on the results of this preliminary model and, furthermore, a number of opportunities for applied mathematicians to make contributions in this subject area are discussed.
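A coupled three-equation single-cell system of the kind described can be sketched with forward Euler integration. The variables and rate constants below are a hypothetical stand-in (a decaying hormonal signal S driving polygalacturonase production E, which degrades pectin adhesion P), not the paper's actual model.

```python
def abscission_cell(k1=0.5, k2=1.0, k3=0.1, k4=0.8, dt=0.01, T=30.0):
    """Hypothetical three-ODE stand-in for the single-cell abscission model:
        dS/dt = -k1*S        (signal decays)
        dE/dt = k2*S - k3*E  (signal drives enzyme production; enzyme decays)
        dP/dt = -k4*E*P      (enzyme degrades pectin adhesion)
    Integrated with forward Euler; returns final (S, E, P)."""
    S, E, P = 1.0, 0.0, 1.0  # initial signal, enzyme, pectin levels
    t = 0.0
    while t < T:
        dS = -k1 * S
        dE = k2 * S - k3 * E
        dP = -k4 * E * P
        S += dS * dt
        E += dE * dt
        P += dP * dt
        t += dt
    return S, E, P

S, E, P = abscission_cell()
```

With these illustrative rates the pectin level P is driven close to zero by the end of the run, mimicking the loss of cell-to-cell adhesion in the abscission zone; extending to a layer of cells would couple many such systems spatially.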
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available.
Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing. Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing, in which the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium.
This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered.
The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the time at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process. To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths.
As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
Abstract:
Increases in atmospheric concentrations of the greenhouse gases (GHGs) carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) due to human activities have been linked to climate change. GHG emissions from land use change and agriculture have been identified as significant contributors to both Australia's and the global GHG budget. This is expected to increase over the coming decades as rates of agricultural intensification and land use change accelerate to support population growth and food production. Limited data exist on CO2, CH4 and N2O trace gas fluxes from subtropical or tropical soils and land uses. To develop effective mitigation strategies, a full global warming potential (GWP) accounting methodology is required that includes emissions of the three primary greenhouse gases. Mitigation strategies that focus on one gas only can inadvertently increase emissions of another. For this reason, detailed inventories of GHGs from soils and vegetation under individual land uses are urgently required for subtropical Australia. This study aimed to quantify GHG emissions over two consecutive years from three major land uses: a well-established, unfertilized subtropical grass-legume pasture; a 30-year-old (lychee) orchard; and a remnant subtropical Gallery rainforest, all located near Mooloolah, Queensland. GHG fluxes were measured using a combination of high resolution automated sampling, coarser spatial manual sampling and laboratory incubations. Comparison between the land uses revealed that land use change can have a substantial impact on the GWP of a landscape long after the deforestation event. The conversion of rainforest to agricultural land resulted in as much as a 17-fold increase in GWP, from 251 kg CO2 eq. ha-1 yr-1 in the rainforest, to 889 kg CO2 eq. ha-1 yr-1 in the pasture, to 2538 kg CO2 eq. ha-1 yr-1 in the lychee plantation.
This increase resulted from altered N cycling and a reduction in the aerobic capacity of the soil in the pasture and lychee systems, enhancing denitrification and nitrification events and reducing atmospheric CH4 uptake in the soil. High infiltration, drainage and subsequent soil aeration under the rainforest limited N2O loss, as well as promoting CH4 uptake of 11.2 g CH4-C ha-1 day-1. This was among the highest reported for rainforest systems, indicating that aerated subtropical rainforests can act as a substantial sink of CH4. Interannual climatic variation resulted in significantly higher N2O emissions from the pasture during 2008 (5.7 g N2O-N ha-1 day-1) compared to 2007 (3.9 g N2O-N ha-1 day-1), despite receiving nearly 500 mm less rainfall. Nitrous oxide emissions from the pasture were highest during the summer months and were highly episodic, related more to the magnitude and distribution of rain events than to soil moisture alone. Mean N2O emissions from the lychee plantation increased from an average of 4.0 g N2O-N ha-1 day-1 to 19.8 g N2O-N ha-1 day-1 following a split application of N fertilizer (560 kg N ha-1, equivalent to 1 kg N tree-1). The timing of the split application was found to be critical to N2O emissions, with over twice as much lost following an application in spring (emission factor (EF): 1.79%) compared to autumn (EF: 0.91%). This was attributed to the hot and moist climatic conditions and a reduction in plant N uptake during the spring, creating conditions conducive to N2O loss. These findings demonstrate that land use change in subtropical Australia can be a significant source of GHGs. Moreover, the study shows that modifying the timing of fertilizer application can be an efficient way of reducing GHG emissions from subtropical horticulture.
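The GWP accounting used to compare land uses can be illustrated by converting the reported daily element-mass fluxes into annual CO2-equivalents. IPCC AR4 100-year GWP values are assumed here; the study's exact conversion factors and accounting boundaries are not stated in the abstract.

```python
# Assumed IPCC AR4 100-year GWP values (the study may use different factors).
GWP_CH4, GWP_N2O = 25.0, 298.0

def co2_eq_kg_per_ha_yr(flux_g_per_ha_day, element_ratio, gwp):
    """Convert a daily element-mass flux (g X-C or X-N per ha per day)
    to annual CO2-equivalents (kg CO2-eq per ha per year)."""
    molecule_flux = flux_g_per_ha_day * element_ratio  # g of full molecule/ha/day
    return molecule_flux * 365.0 / 1000.0 * gwp        # annualise, g -> kg, apply GWP

# Pasture N2O in 2008: 5.7 g N2O-N ha-1 day-1 (44/28 converts N mass to N2O mass)
n2o = co2_eq_kg_per_ha_yr(5.7, 44.0 / 28.0, GWP_N2O)

# Rainforest CH4 uptake: 11.2 g CH4-C ha-1 day-1 (16/12 converts C mass to CH4
# mass); the sign is negated because uptake is a sink, offsetting other emissions
ch4 = -co2_eq_kg_per_ha_yr(11.2, 16.0 / 12.0, GWP_CH4)
```

Summing such per-gas terms (together with soil CO2 exchange) per land use is what yields the kg CO2 eq. ha-1 yr-1 totals quoted above, which is why a single-gas mitigation strategy can be misleading.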
Abstract:
Acoustic emission (AE) is the phenomenon whereby high frequency stress waves are generated by the rapid release of energy within a material by sources such as crack initiation or growth. The AE technique involves recording these stress waves by means of sensors placed on the surface, and subsequently analysing the recorded signals to gather information such as the nature and location of the source. It is one of several diagnostic techniques currently used for structural health monitoring (SHM) of civil infrastructure such as bridges. Its advantages include the ability to provide continuous in-situ monitoring and high sensitivity to crack activity. But several challenges still exist. Due to the high sampling rate required for data capture, a large amount of data is generated during AE testing. This is further complicated by the presence of a number of spurious sources that can produce AE signals which can mask desired signals. Hence, an effective data analysis strategy is needed to achieve source discrimination. This also becomes important for long term monitoring applications in order to avoid massive data overload. Analysis of the frequency content of recorded AE signals, together with the use of pattern recognition algorithms, is among the advanced and promising data analysis approaches for source discrimination. This paper explores the use of various signal processing tools for analysis of experimental data, with an overall aim of finding an improved method for source identification and discrimination, with particular focus on monitoring of steel bridges.
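Frequency-content analysis for source discrimination can be sketched as a peak-frequency classifier. This is a naive DFT on synthetic signals; the sampling rate, test frequencies, and discrimination threshold below are hypothetical, not the paper's values (real AE systems sample in the MHz range and use richer features).

```python
import math, cmath

def dominant_frequency(signal, fs):
    """Naive DFT: return the frequency (Hz) of the largest-magnitude bin,
    excluding DC. Peak frequency is one simple feature for separating
    genuine crack-related bursts from low-frequency spurious sources."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return best_k * fs / n

fs = 1000.0  # hypothetical sampling rate, Hz
burst = [math.sin(2 * math.pi * 100 * t / fs) for t in range(200)]  # 100 Hz "crack"
noise = [math.sin(2 * math.pi * 20 * t / fs) for t in range(200)]   # 20 Hz "rubbing"
is_genuine = dominant_frequency(burst, fs) > 50.0  # hypothetical threshold
```

In practice this thresholding step would be replaced by a pattern recognition algorithm operating on a vector of such features (peak frequency, rise time, counts, energy) extracted from each recorded hit.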