9 results for Add-06_4
in CORA - Cork Open Research Archive - University College Cork - Ireland
Abstract:
Marine sponges have been an abundant source of new metabolites in recent years. The symbiotic association between bacteria and sponge has enabled scientists to access the bacterial diversity present within the sponge ecosystem. This study has focussed on accessing the bacterial diversity in two Irish coastal marine sponges, namely Amphilectus fucorum and Eurypon major. A novel species of the genus Aquimarina has been isolated from the sponge Amphilectus fucorum. The study has also resulted in the identification of an α-Proteobacterium, Pseudovibrio sp., as a potential producer of antibiotics. Thus a targeted approach to specifically cultivate Pseudovibrio sp. may prove useful for the development of new metabolites from this particular genus. Bacterial isolates from the marine sponge Haliclona simulans were screened for anti-fungal activity, and one isolate, Streptomyces sp. SM8, displayed activity against all five fungal strains tested. The strain was also tested for anti-bacterial activity and showed activity against both B. subtilis and P. aeruginosa. Hence a combined biochemical and genomic approach was employed in an attempt to identify the bioactive compounds produced by this strain that were responsible for these activities. Culture broths from Streptomyces sp. SM8 were extracted and purified by various techniques such as reverse-phase HPLC, MPLC and flash chromatography. Anti-bacterial activity was observed in a fraction which contained a hydroxylated saturated fatty acid together with another compound with an m/z of 227, but further structural elucidation of these compounds proved unsuccessful. The anti-fungal fractions from SM8 were shown to contain antimycin-like compounds, some of which had retention times different from that of an antimycin standard. A high-throughput assay was developed to screen for novel calcineurin inhibitors using yeast as a model system, and three putative bacterial extracts were found to be positive in this screen. One of these extracts, from SM8, was subsequently analysed using NMR, and the calcineurin inhibition activity was confirmed to be due to a butenolide-type compound. A H. simulans metagenomic library was also screened using the novel calcineurin inhibitor high-throughput assay system, and eight clones displaying putative calcineurin inhibitory activity were detected. The clone which displayed the best inhibitory activity was subsequently sequenced, and following the use of other genetic approaches it became clear that the inhibition was being caused by a hypothetical protein with similarity to a hypothetical Na+/Ca2+ exchanger protein. The Streptomyces sp. SM8 genome was sequenced from a fragment library using Roche 454 pyrosequencing technology to identify potential secondary metabolism clusters. The draft genome was annotated by IMG/ER using the Prodigal pipeline. The Whole Genome Shotgun project has been deposited at DDBJ/EMBL/GenBank under the accession AMPN00000000. The genome contains genes which appear to encode several polyketide synthases (PKS), non-ribosomal peptide synthetases (NRPS), terpene and siderophore biosynthesis enzymes, and ribosomal peptides. Transcriptional analyses led to the identification of three hybrid clusters, of which one is predicted to be involved in the synthesis of antimycin, while the functions of the others are as yet unknown.
Two NRPS clusters were also identified, of which one may be involved in gramicidin biosynthesis, while the function of the other is unknown. A Streptomyces sp. SM8 NRPS antC gene knockout was constructed, and extracts from the strain were shown to possess mild anti-fungal activity when compared to the SM8 wild-type. Subsequent LC-MS analysis of antC mutant extracts confirmed the absence of antimycin in the extract, indicating that the observed anti-fungal activity may involve metabolite(s) other than antimycin. Anti-bacterial activity of the antC gene knockout strain against P. aeruginosa was reduced when compared to the SM8 wild-type, indicating that antimycin may be contributing to the observed anti-bacterial activity in addition to the metabolite(s) already identified during the chemical analyses. This is the first report of antimycins exhibiting anti-bacterial activity against P. aeruginosa. One of the hybrid clusters potentially involved in secondary metabolism in SM8, which displayed high and consistent levels of gene expression in RNA studies, was analysed in an attempt to identify the metabolite produced by the pathway. A number of unusual features were observed following bioinformatic analysis of the gene sequence of the cluster, including a formylation domain within the NRPS cluster which may add a formyl group to the growing chain, and the lack of AT domains on two of the PKS modules. Further unusual features observed in this cluster are the lack of a KR domain in module 3 and the presence of an aminotransferase domain in module 4, for which no clear role has been hypothesised.
Abstract:
With the rapid growth of the Internet and digital communications, the volume of sensitive electronic transactions transferred over and stored on insecure media has increased dramatically in recent years. The growing demand for cryptographic systems to secure this data, across a multitude of platforms ranging from large servers to small mobile devices and smart cards, has necessitated research into low-cost, flexible and secure solutions. As constraints on architectures such as area, speed and power become key factors in choosing a cryptosystem, methods for speeding up the development and evaluation process are necessary. This thesis investigates flexible hardware architectures for the main components of a cryptographic system. Dedicated hardware accelerators can provide significant performance improvements when compared to implementations on general-purpose processors. Each of the proposed designs is analysed in terms of speed, area, power, energy and efficiency. Field Programmable Gate Arrays (FPGAs) are chosen as the development platform due to their fast development time and reconfigurable nature. Firstly, a reconfigurable architecture for performing elliptic curve point scalar multiplication on an FPGA is presented. Elliptic curve cryptography is one such method of securing data, offering security levels similar to traditional systems, such as RSA, but with smaller key sizes, translating into lower memory and bandwidth requirements. The architecture is implemented using different underlying algorithms and coordinate systems, covering dedicated Double-and-Add algorithms, twisted Edwards algorithms and SPA-secure algorithms (a minimal software sketch of the basic Double-and-Add routine is given below), and its power consumption and energy on an FPGA are measured. Hardware implementation results for these new algorithms are compared against their software counterparts, and the best choices for minimum area-time and area-energy circuits are then identified and examined for larger key and field sizes. Secondly, implementation methods for another component of a cryptographic system, namely the hash functions developed in the recently concluded SHA-3 hash competition, are presented. Various designs from the three rounds of the NIST-run competition are implemented on FPGA, along with an interface to allow fair comparison of the different hash functions when operating in a standardised and constrained environment. Different implementation methods for the designs, and their subsequent performance, are examined in terms of throughput, area and energy costs using various constraint metrics. Comparing many different implementation methods and algorithms is nontrivial. Another aim of this thesis is therefore the development of generic interfaces, used both to reduce implementation and test time and to enable fair baseline comparisons of different algorithms when operating in a standardised and constrained environment. Finally, a hardware-software co-design cryptographic architecture is presented. This architecture is capable of supporting multiple types of cryptographic algorithms and is described through an application performing public key cryptography, namely the Elliptic Curve Digital Signature Algorithm (ECDSA). This architecture makes use of the elliptic curve architecture and the hash functions described previously. These components, along with a random number generator, provide hardware acceleration for a MicroBlaze-based cryptographic system.
The trade-off between performance and flexibility is discussed using dedicated software implementations and hardware-software co-design implementations of the elliptic curve point scalar multiplication block. Results are then presented in terms of the overall cryptographic system.
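As background to the scalar multiplication architectures discussed above, the following is a minimal software sketch of the left-to-right Double-and-Add algorithm, assuming affine coordinates on a short Weierstrass curve over a prime field; the function names and the toy curve are illustrative only and do not reproduce the thesis's FPGA design.

```python
# Minimal Double-and-Add sketch (illustrative, not the FPGA design).
# Curve: y^2 = x^3 + a*x + b over GF(p), affine coordinates;
# None represents the point at infinity.

def ec_add(P, Q, a, p):
    """Add two curve points; handles doubling and the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P, a, p):
    """Left-to-right Double-and-Add: double every bit, add when the bit is 1."""
    R = None
    for bit in bin(k)[2:]:
        R = ec_add(R, R, a, p)           # double
        if bit == '1':
            R = ec_add(R, P, a, p)       # add
    return R

# Toy example: y^2 = x^3 + 2x + 2 over GF(17), base point (5, 1)
print(scalar_mult(2, (5, 1), a=2, p=17))  # (6, 3)
```

Note that the data-dependent add step is exactly what simple power analysis exploits, which is what motivates SPA-secure variants that perform a uniform operation sequence for every key bit.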
Abstract:
A novel deposition process named CoBlast™, based on grit-blasting technology, has been used to deposit hydroxyapatite (HA) onto titanium (Ti) metal using a dopant/abrasive regime. The various powders (HA powder, apatitic abrasives) and the treated substrates were characterised for chemical composition, coating coverage, crystallinity and topography, including surface roughness. The surface roughness of the HA surfaces could be altered using apatitic abrasives of different particle sizes. Compared to the standard plasma spraying process, the CoBlast surfaces showed excellent coating adhesion, lower dissolution, and higher levels of mechanical and chemical stability in simulated body fluid (SBF). Enhanced viability of osteoblastic cells was also observed on the CoBlast HA surfaces compared to the microblast and untreated Ti surfaces as well as the plasma-sprayed HA coating. CoBlast offers an alternative to the traditional methods of coating HA implants, with added versatility. Apatites substituted with antimicrobial metals can also be deposited to add functionality to HA coatings without cytotoxicity. The potential use of these coatings as an infection-prevention strategy for application on hard-tissue implants was assessed in vitro and in vivo. Surface physicochemical properties and morphology were determined, in addition to surface cytocompatibility assessments using an MG-63 osteoblast cell line. The antibacterial potential of the metal ion immobilised on the surface and, to a lesser extent, the eluted ion contributed to the anti-colonising behaviour of the surfaces against a standard bacterial strain (S. aureus) as well as a number of clinically relevant strains (MRSA, MSSA and S. epidermidis). The results revealed that the surfaces coated with silver-substituted apatites (AgA) outperformed the other apatites examined (apatites loaded with Zn, Sr, and both Ag and Sr ions). Assessment of bacterial adherence on coated K-wires following subcutaneous implantation in a nude mouse infection model (S. aureus) for two days demonstrated that the 12 wt% AgA surface outperformed the 5 wt% AgA coating. Lower inflammatory responses were observed following insertion of the Ag-loaded K-wires, with a localised infection at the implantation site noted over the two-day study period. These results indicate that AgA coatings on the surface of orthopaedic implants demonstrate good biocompatibility whilst inhibiting bacterial adhesion and colonisation of the implant surface.
Abstract:
A wireless sensor network can become partitioned due to node failure, requiring the deployment of additional relay nodes in order to restore network connectivity. This introduces an optimisation problem involving a trade-off between the number of additional nodes required and the cost of moving through the sensor field for the purpose of node placement. This trade-off is application-dependent, influenced for example by the relative urgency of network restoration. In addition, minimising the number of relay nodes might lead to long routing paths to the sink, which may cause problems of data latency. Data latency is extremely important in wireless sensor network applications such as battlefield surveillance, intrusion detection, disaster rescue and highway traffic coordination, where real-time constraints must not be violated. Therefore, we also consider the problem of deploying multiple sinks in order to improve network performance. Previous research has considered only parts of this problem in isolation, and has not properly considered the problems of moving through a constrained environment, of discovering changes to that environment during the repair, or of network quality after the restoration. In this thesis, we first consider a base problem in which we assume the exploration tasks have already been completed, so that our aim is to optimise our use of resources in the static, fully observed problem. In the real world, we would not know the radio and physical environments after damage, and this creates a dynamic problem where damage must be discovered. We therefore extend to the dynamic problem, in which the network repair problem involves both exploration and restoration. We then add a hop-count constraint for network quality, requiring that the desired locations can communicate with a sink within a hop-count limit after the network is restored (see the sketch below). For each version of the network repair problem, we propose different solutions (heuristics and/or complete algorithms) which prioritise different objectives. We evaluate our solutions in simulation, assessing the quality of solutions (node cost, movement cost, computation time and total restoration time) while varying the problem types and the capability of the agent that makes the repair. We show that the relative importance of the objectives influences the choice of algorithm, and that different speeds of movement for the repairing agent have a significant impact on performance and must be taken into account when selecting the algorithm. In particular, the node-based approaches are best on node cost, and the path-based approaches are best on movement cost. For total restoration time, the node-based approaches are best with a fast-moving agent, while the path-based approaches are best with a slow-moving agent; for a medium-speed agent, the total restoration times of the node-based and path-based approaches are almost balanced.
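To make the hop-count constraint concrete, here is a minimal sketch, assuming an undirected connectivity graph derived from radio range; the function and variable names are hypothetical and this is not one of the repair algorithms evaluated in the thesis.

```python
from collections import deque

def hops_to_nearest_sink(adjacency, sinks):
    """Multi-source BFS: minimum hop count from every reachable node to a sink."""
    dist = {s: 0 for s in sinks}
    queue = deque(sinks)
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def hop_constraint_satisfied(adjacency, sinks, required, max_hops):
    """True if every required location reaches some sink within max_hops hops."""
    dist = hops_to_nearest_sink(adjacency, sinks)
    return all(n in dist and dist[n] <= max_hops for n in required)

# Example: line network 0-1-2-3 with a single sink at node 0.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(hop_constraint_satisfied(adj, [0], [2, 3], max_hops=2))  # False: node 3 is 3 hops away
```

In this toy example, adding a second sink at node 3 would satisfy the constraint, illustrating why multiple-sink deployment is considered alongside relay placement.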
Abstract:
For at least two millennia, and probably much longer, the traditional vehicle for communicating geographical information to end-users has been the map. With the advent of computers, the means of both producing and consuming maps have been radically transformed, while the inherent nature of the information product has also expanded and diversified rapidly. This has given rise in recent years to the new concept of geovisualisation (GVIS), which draws on the skills of the traditional cartographer but extends them into three spatial dimensions and may also add temporality, photorealistic representations and/or interactivity. Demand for GVIS technologies and their applications has increased significantly in recent years, driven by the need to study complex geographical events, and in particular their associated consequences, and to communicate the results of these studies to a diversity of audiences and stakeholder groups. GVIS involves data integration, multi-dimensional spatial display, advanced modelling techniques, dynamic design and development environments, and field-specific application needs. To meet these needs, GVIS tools should be both powerful and inherently usable, in order to facilitate their role in helping to interpret and communicate geographic problems. However, no framework currently exists for ensuring this usability. The research presented here seeks to fill this gap by addressing the challenges of incorporating user requirements in GVIS tool design. It starts from the premise that usability in GVIS should be incorporated and implemented throughout the whole design and development process. To facilitate this, Subject Technology Matching (STM) is proposed as a new approach to assessing and interpreting user requirements. Based on STM, a new design framework called Usability Enhanced Coordination Design (UECD) is then presented, with the purpose of improving the overall usability of the design outputs. UECD places GVIS experts in a new key role in the design process, to form a more coordinated and integrated workflow and more focused and interactive usability testing. To prove the concept, these theoretical elements of the framework have been implemented in two test projects: one is the creation of a coastal inundation simulation for Whitegate, Cork, Ireland; the other is a flood mapping tool for Zhushan Town, Jiangsu, China. The two case studies successfully demonstrated the potential merits of the UECD approach when GVIS techniques are applied to geographic problem solving and decision making. The thesis delivers a comprehensive understanding of the development and challenges of GVIS technology, its usability concerns, and the associated User-Centred Design (UCD); it explores the possibility of embedding a UCD framework in GVIS design; it constructs a new theoretical design framework, UECD, which aims to make the whole design process usability-driven; and it develops the key concept of STM into a template set to improve the performance of a GVIS design. Future research can build on these key conceptual and procedural foundations, aimed at further refining and developing UECD as a useful design methodology for GVIS scholars and practitioners.
Abstract:
The use of unmalted oats or sorghum in brewing has great potential for creating new beer types/flavours and saving costs. However, the substitution of barley malt with oat or sorghum adjuncts is not only innovative but also challenging, due to the specific characteristics of these grains. The overall objectives of this Ph.D. project were: 1) to investigate the impact of various types and levels of oats or sorghum on the quality/processability of mashes, worts and beers; and 2) to provide solutions regarding the application of industrial enzymes to overcome potential brewing problems. For these purposes, a highly precise rheological method using a controlled-stress rheometer was developed and successfully applied as a tool for optimising enzyme additions and process parameters. Further, eight different oat cultivars were compared in terms of their suitability as brewing adjuncts, and two very promising cultivars were identified. In another study, the limitations of barley malt enzymes and the benefits of applying industrial enzymes in high-gravity brewing with oats were determined. It is recommended to add enzymes to high-gravity mashes when substituting 30% or more of the barley malt with oats, in order to prevent filtration and fermentation problems. Pilot-scale brewing trials using 10–40% unmalted oats revealed that the sensory quality of oat beers improved with increasing adjunct level. In addition, commercially available oat and sorghum flours were incorporated into brewing. The use of up to 70% oat flour and 50% sorghum flour, respectively, is not only technically feasible but also economically beneficial. A further study on sorghum demonstrated that optimising industrial mashing enzymes has great potential for reducing beer production costs. A comparison of the brewing performance of red Italian and white Nigerian sorghum clearly showed that European-grown sorghum is suitable for brewing purposes; 40% red sorghum beers were even found to be very low in gluten.
Abstract:
The original solution to the high failure rate of software development projects was the imposition of an engineering approach to software development, with processes aimed at providing a repeatable structure to maintain consistency in the ‘production process’. Despite these attempts at addressing the crisis in software development, others have argued that the rigid processes of an engineering approach did not provide the solution. The Agile approach to software development strives to change how software is developed. It does this primarily by relying on empowered teams of developers, who are trusted to manage the necessary tasks and who accept that change is a necessary part of a development project. The use of, and interest in, Agile methods in software development projects has expanded greatly, yet this has been predominantly practitioner-driven. There is a paucity of scientific research on Agile methods and how they are adopted and managed. This study aims to address this paucity by examining the adoption of Agile through a theoretical lens. The lens used in this research is that of double loop learning theory. The behaviours required in an Agile team are the same behaviours required in double loop learning; therefore, a transition to double loop learning is required for a successful Agile adoption. The theory of triple loop learning highlights that power factors (or power mechanisms, in this research) can inhibit the attainment of double loop learning. This study identifies the negative behaviours - potential power mechanisms - that can inhibit the double loop learning inherent in an Agile adoption, in order to determine how the Agile processes and behaviours can create these power mechanisms, and how these power mechanisms impact on double loop learning and the Agile adoption. This is a critical realist study, which acknowledges that the real world is a complex one, hierarchically structured into layers. An a priori framework is created to represent these layers, which are categorised as: the Agile context, the power mechanisms, and double loop learning. The aim of the framework is to explain how the Agile processes and behaviours, through the teams of developers and project managers, can ultimately impact on the double loop learning behaviours required in an Agile adoption. Four case studies provide further refinement of the framework, with changes required due to observations that were often different from what the existing literature would have predicted. The study concludes by explaining how the teams of developers, the individual developers, and the project managers, working with the Agile processes and required behaviours, can inhibit the double loop learning required in an Agile adoption. A solution is then proposed to mitigate these negative impacts. Additionally, two new research processes are introduced to add to the Information Systems research toolkit.
Abstract:
Open environments involve distributed entities interacting with each other in an open manner. Many distributed entities are unknown to each other but need to collaborate and share resources in a secure fashion. Usually resource owners alone decide who is trusted to access their resources. Since resource owners in open environments do not have a complete picture of all trusted entities, trust management frameworks are used to ensure that only authorized entities will access requested resources. Every trust management system has limitations, and these limitations can be exploited by malicious entities. One vulnerability is due to the lack of a globally unique interpretation for permission specifications. This limitation means that a malicious entity which receives a permission in one domain may misuse the permission in another domain via some deceptive but apparently authorized route; this malicious behaviour is called subterfuge. This thesis develops a secure approach, Subterfuge Safe Trust Management (SSTM), that prevents subterfuge by malicious entities. SSTM employs the Subterfuge Safe Authorization Language (SSAL), which uses the idea of a local permission with a globally unique interpretation (localPermission) to resolve the misinterpretation of permissions (a minimal sketch of this idea follows below). We model and implement SSAL with an ontology-based approach, SSALO, which provides a generic representation of knowledge related to the SSAL-based security policy. SSALO enables the integration of heterogeneous security policies, which is useful for secure cooperation among principals in open environments where each principal may have a different security policy with a different implementation. Another advantage of an ontology-based approach is the Open World Assumption, whereby reasoning over an existing security policy is easily extended to include further security policies that might be discovered in an open distributed environment. We add two extra SSAL rules to support dynamic coalition formation and secure cooperation among coalitions. Secure federation of cloud computing platforms and secure federation of XMPP servers are presented as case studies of SSTM. The results show that SSTM provides robust accountability for the use of permissions in federation. It is also shown that SSAL is a suitable policy language in which to express subterfuge-safe policy statements, due to its well-defined semantics, ease of use, and integrability.
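The localPermission idea can be sketched as follows: a permission is meaningful only relative to the principal that issued it, so qualifying every permission with a globally unique issuer identifier prevents it from being replayed across domains. This is an illustrative sketch only; the names are hypothetical and it does not reproduce the SSAL syntax or semantics.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LocalPermission:
    issuer: str       # globally unique principal identifier, e.g. a public key
    permission: str   # permission string, interpreted locally by its issuer

def authorize(request: LocalPermission, granted: set) -> bool:
    """Both issuer and permission must match, so a permission granted in one
    domain confers nothing in another, blocking subterfuge."""
    return request in granted

granted = {LocalPermission(issuer="keyA", permission="read")}
print(authorize(LocalPermission("keyA", "read"), granted))  # True
print(authorize(LocalPermission("keyB", "read"), granted))  # False: same string, different issuer
```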
Abstract:
The insider threat is a security problem that is well known and has a long history, yet it still remains an invisible enemy. Insiders know the security processes and have access that allows them to cover their tracks easily. In recent years the idea of monitoring separately for these threats has come into its own. However, the tools currently in use have disadvantages, and one of the most effective techniques, human review, is costly. This paper explores the development of an intelligent agent that uses already-in-place computing material for inference as an inexpensive monitoring tool for insider threats. Design Science Research (DSR) is the methodology used to explore and develop an IT artifact, such as this intelligent agent. This methodology provides a structure that can guide a deep search method for problems that may not be possible to solve, or that could add to a phenomenological instantiation.