952 results for Produce
Abstract:
Epigenetic modifiers are the proteins involved in establishing and maintaining the epigenome of an organism. They are particularly important for development, and changes in epigenetic modifiers have been shown to be lethal or to cause disease. Our laboratory has developed an ENU mutagenesis screen to produce mouse mutants displaying altered epigenetic gene silencing. The screen relies on a GFP transgene that is expressed in red blood cells in a variegated manner; in the original transgenic FVB line, expression occurs in approximately 55% of red blood cells. During the course of my Masters, I characterised four different Mommes (Modifiers of murine metastable epialleles): MommeD32, MommeD33, MommeD35 and MommeD36. For each Momme, I identified the underlying mutation and observed the corresponding phenotype. In MommeD32 the causative mutation is in Dnmt1 (DNA methyltransferase 1). This gene was previously identified in the screen as MommeD2; the new allele, MommeD32, carries a change in the BAH domain of the protein. MommeD33 is the result of a change at the transgene itself. MommeD35 carries a mutation in Suv39h1 (suppressor of variegation 3-9 homolog 1). This gene has not previously been identified in the screen, but it is a known epigenetic modifier. MommeD36 had the same ENU-treated sire as MommeD32, and I found that it carries the same mutation as MommeD32. These mutant strains provide valuable tools that can be used to further our knowledge of epigenetic reprogramming. One example is the cancer study performed with MommeD9, which carries a mutation in Trim28. By crossing MommeD9+/- mutant mice with Trp53+/- mice, it can be determined whether Trim28 affects the rate of tumourigenesis. However, no clear effect of Trim28 haploinsufficiency was observed in Trp53+/- mice.
Abstract:
This thesis develops a detailed conceptual design method and a system software architecture, defined with a parametric and generative evolutionary design system, to support an integrated interdisciplinary building design approach. The research recognises the need to shift design effort toward the earliest phases of the design process to support crucial design decisions that have a substantial cost implication for the overall project budget. The overall motivation of the research is to improve the quality of designs produced at the author's employer, the General Directorate of Major Works (GDMW) of the Saudi Arabian Armed Forces. GDMW produces many buildings that have standard requirements across a wide range of environmental and social circumstances, so a rapid means of customising designs for local circumstances would have significant benefits. The research considers the use of evolutionary genetic algorithms in the design process and their ability to generate and assess a wider range of potential design solutions than a human could manage. This wider-ranging assessment, during the early stages of the design process, means that the generated solutions will be more appropriate for the defined design problem. The research proposes a design method and system that promotes a collaborative relationship between human creativity and computer capability. The tectonic design approach is adopted as a process-oriented design that values the process of design as much as the product. The aim is to connect the evolutionary systems to performance assessment applications, which are used as prioritised fitness functions, producing design solutions that respond to their environmental and functional requirements. This integrated, interdisciplinary approach to design will produce solutions through a design process that considers and balances the requirements of all aspects of the design. Since this thesis covers a wide area of research material, a 'methodological pluralism' approach was used, incorporating both prescriptive and descriptive research methods. Multiple models of research were combined, and the overall research was undertaken in three main stages: conceptualisation, development and evaluation. The first two stages lay the foundations for the specification of the proposed system, where key aspects of the system that have not previously been proven in the literature were implemented to test the feasibility of the system. As a result of combining the existing knowledge in the area with the newly verified key aspects of the proposed system, this research can form the basis for a future software development project. The evaluation stage, which includes building the prototype system to test and evaluate the system performance against the criteria defined in the earlier stage, is not within the scope of this thesis. The research results in a conceptual design method and a proposed system software architecture. The proposed system is called the 'Hierarchical Evolutionary Algorithmic Design (HEAD) System'. The HEAD system has been shown to be feasible through an initial illustrative paper-based simulation. The HEAD system consists of two main components: the 'Design Schema' and the 'Synthesis Algorithms'. The HEAD system reflects the major research contribution in the way it is conceptualised, while secondary contributions are achieved within the system components.
The design schema provides constraints on the generation of designs, enabling the designer to create a wide range of potential designs that can then be analysed for desirable characteristics. The design schema supports the digital representation of the designer's creativity within a dynamic design framework that can be encoded and then executed through the use of evolutionary genetic algorithms. The design schema incorporates 2D and 3D geometry and graph theory for space layout planning and building formation, using the Lowest Common Design Denominator (LCDD) of a parameterised 2D module and a 3D structural module. This provides a bridge between the standard adjacency requirements and the evolutionary system. The use of graphs as an input to the evolutionary algorithm supports the introduction of constraints in a way that is not supported by standard evolutionary techniques. The process of design synthesis is guided by a higher-level description of the building that supports geometrical constraints. The Synthesis Algorithms component analyses designs at four levels: 'Room', 'Layout', 'Building' and 'Optimisation'. At each level, multiple fitness functions are embedded into the genetic algorithm to target the specific requirements of the relevant decomposed part of the design problem. Decomposing the design problem so that the design requirements of each level are dealt with separately, and then reassembling them in a bottom-up approach, reduces the generation of non-viable solutions by constraining the options available at the next higher level. The iterative approach of exploring the range of design solutions through modification of the design schema, as the understanding of the design problem improves, assists in identifying conflicts in the design requirements. Additionally, the hierarchical set-up allows the embedding of multiple fitness functions into the genetic algorithm, each relevant to a specific level. This supports an integrated multi-level, multi-disciplinary approach. The HEAD system promotes a collaborative relationship between human creativity and computer capability. The design schema component, as the input to the procedural algorithms, enables the encoding of certain aspects of the designer's subjective creativity. By focusing on finding solutions for the relevant sub-problems at the appropriate levels of detail, the hierarchical nature of the system assists in the design decision-making process.
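The abstract above describes embedding multiple prioritised fitness functions into a genetic algorithm at each level of the design hierarchy. The following is a minimal, hypothetical sketch of that general idea in Python; the room parameters, target values and weights are illustrative assumptions and do not reflect the HEAD system's actual design schema or encoding.

```python
import random

# Hypothetical sketch: a genetic algorithm evaluating candidate room
# dimensions against several prioritised fitness functions, loosely
# illustrating the "multiple fitness functions per level" idea.
# All targets, weights and operators below are illustrative assumptions.

TARGET_AREA = 20.0    # m^2, assumed area requirement for one room
TARGET_RATIO = 1.4    # assumed preferred width:depth ratio

def fitness_area(genome):
    width, depth = genome
    return -abs(width * depth - TARGET_AREA)

def fitness_proportion(genome):
    width, depth = genome
    return -abs(width / depth - TARGET_RATIO)

# Prioritised (weighted) combination of this level's fitness functions.
PRIORITIES = [(fitness_area, 0.7), (fitness_proportion, 0.3)]

def combined_fitness(genome):
    return sum(weight * f(genome) for f, weight in PRIORITIES)

def evolve(pop_size=50, generations=100):
    population = [(random.uniform(2, 8), random.uniform(2, 8))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=combined_fitness, reverse=True)
        parents = population[:pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            # Blend crossover with small Gaussian mutation, clamped to
            # keep dimensions physically sensible.
            children.append((max(0.5, (a[0] + b[0]) / 2 + random.gauss(0, 0.1)),
                             max(0.5, (a[1] + b[1]) / 2 + random.gauss(0, 0.1))))
        population = parents + children
    return max(population, key=combined_fitness)

if __name__ == "__main__":
    best = evolve()
    print(f"best room: width={best[0]:.2f} m, depth={best[1]:.2f} m")
```

A weighted sum is only one way to prioritise objectives at a level; the same skeleton could instead rank candidates lexicographically or with a Pareto-based scheme.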
Abstract:
Information and communication technology (ICT) systems are almost ubiquitous in the modern world. It is hard to identify any industry, or for that matter any part of society, that is not in some way dependent on these systems and their continued secure operation. The security of information infrastructures, at both an organisational and a societal level, is therefore of critical importance. Information security risk assessment is an essential part of ensuring that these systems are appropriately protected and positioned to deal with a rapidly changing threat environment. The complexity of these systems and their inter-dependencies, however, introduces a similar complexity to the information security risk assessment task. This complexity suggests that information security risk assessment cannot optimally be undertaken manually. Information security risk assessment for individual components of the information infrastructure can be aided by the use of a software tool, a type of simulation, which concentrates on modelling failure rather than normal operation. Avoiding the modelling of the operational system again reduces the complexity of the assessment task. The use of such a tool provides the opportunity to reuse information in many different ways by developing a repository of relevant information to aid in risk assessment and management as well as governance and compliance activities. Widespread use of such a tool allows the risk models developed for individual information infrastructure components to be connected in order to build a model of information security exposures across the entire information infrastructure. In this thesis, conceptual and practical aspects of risk and its underlying epistemology are analysed to produce a model suitable for application to information security risk assessment. Based on this work, prototype software has been developed to explore these concepts for information security risk assessment. Initial work has been carried out to investigate the use of this software for information security compliance and governance activities. Finally, an initial concept for extending the use of this approach across an information infrastructure is presented.
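As a rough illustration of the kind of reusable repository the abstract refers to, the sketch below models each entry as a potential failure of an infrastructure component rather than its normal operation. The fields, ordinal scales and scoring are assumptions for illustration only and are not the thesis's actual risk model.

```python
from dataclasses import dataclass

# Hypothetical sketch of a reusable risk repository: each record describes
# a possible failure mode of an information infrastructure component.
# Likelihood/impact scales and the exposure formula are assumptions.

@dataclass
class RiskRecord:
    component: str        # part of the information infrastructure
    failure_mode: str     # what goes wrong (confidentiality, availability, ...)
    likelihood: int       # assumed 1-5 ordinal scale
    impact: int           # assumed 1-5 ordinal scale

    @property
    def exposure(self) -> int:
        # Simple ordinal exposure score for ranking purposes.
        return self.likelihood * self.impact

repository = [
    RiskRecord("mail server", "loss of availability", likelihood=3, impact=4),
    RiskRecord("customer database", "loss of confidentiality", likelihood=2, impact=5),
]

# The same repository could then be reused for different activities,
# e.g. ranking exposures for risk management or reporting for compliance.
for record in sorted(repository, key=lambda r: r.exposure, reverse=True):
    print(f"{record.component}: {record.failure_mode} -> exposure {record.exposure}")
```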
Abstract:
This paper presents an analysis of the stream cipher Mixer, a bit-based cipher with structural components similar to the well-known Grain cipher and the LILI family of keystream generators. Mixer uses a 128-bit key and a 64-bit IV to initialise a 217-bit internal state. The analysis focuses on the initialisation function of Mixer and shows that there exist multiple key-IV pairs which, after initialisation, produce the same initial state and consequently will generate the same keystream. Furthermore, if the number of iterations of the state update function performed during initialisation is increased, the number of distinct initial states that can be obtained decreases. It is also shown that there exist some distinct initial states which produce the same keystream, resulting in a further reduction of the effective key space.
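The effect described, where increasing the number of initialisation iterations reduces the number of distinct reachable initial states, can be illustrated with a toy, non-cryptographic example: for any fixed non-injective state-update map, the image of the iterated map can only shrink. The sketch below is purely illustrative and is not the Mixer cipher or its actual update function.

```python
# Toy illustration (not the actual Mixer cipher): if the initialisation
# step applies a non-injective state-update map, iterating it more times
# can only shrink (never grow) the set of reachable initial states, so
# distinct key/IV inputs increasingly collide on the same initial state.

STATE_BITS = 12          # tiny state so the whole space can be enumerated
MASK = (1 << STATE_BITS) - 1

def toy_update(state):
    # An arbitrary non-bijective update: mix, then deliberately drop a bit.
    return ((state * 5 + (state >> 3)) ^ (state << 2)) & MASK & ~1

def init_state(key_iv, iterations):
    state = key_iv & MASK
    for _ in range(iterations):
        state = toy_update(state)
    return state

for iterations in (1, 2, 4, 8, 16):
    reachable = {init_state(kiv, iterations) for kiv in range(1 << STATE_BITS)}
    print(f"{iterations:2d} iterations -> {len(reachable)} distinct initial states")
```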
Abstract:
The Pomegranate Cycle is a practice-led enquiry consisting of a creative work and an exegesis. This project investigates the potential of self-directed, technologically mediated composition as a means of reconfiguring gender stereotypes within the operatic tradition. This practice confronts two primary stereotypes: the positioning of female performing bodies within narratives of violence, and the absence of women from the authorial roles that construct and regulate the operatic tradition. The Pomegranate Cycle redresses these stereotypes by presenting a new narrative trajectory of healing for its central character, and by placing the singer inside the role of composer and producer. During the twentieth and early twenty-first century, operatic and classical music institutions have resisted incorporating works of living composers into their repertory. Consequently, the canon's historic representations of gender remain unchallenged. Historically and contemporarily, men have almost exclusively occupied the roles of composer, conductor, director and critic, and therefore men have regulated the pedagogy, performance practices, repertoire and organisations that sustain classical music. In this landscape, women are singers, and few have the means to challenge the constructions of gender they are asked to reproduce. The Pomegranate Cycle uses recording technologies as the means of driving change because these technologies have already challenged the regulation of the classical tradition by changing people's modes of accessing, creating and interacting with music. Building on the work of artists including Phillips and van Veen, Robert Ashley and Diamanda Galas, The Pomegranate Cycle seeks to broaden the definition of what opera can be. This work examines the ways in which the operatic tradition can be hybridised with contemporary musical forms such as ambient electronica, glitch, spoken word and concrete sounds, as a way of bringing the form into dialogue with contemporary music cultures. The utilisation of other sound cultures within the context of opera enables women's voices and stories to be presented in new ways, while also providing a point of friction with opera's traditional storytelling devices. The Pomegranate Cycle simulates aesthetics associated with Western art music genres by drawing on contemporary recording techniques, virtual instruments and sound-processing plug-ins. Through such simulations, the work disrupts the way virtuosic human craft has been used to generate authenticity and regulate access to the institutions that protect and produce Western art music. The DIY approach to production, recording, composition and performance of The Pomegranate Cycle demonstrates that an opera can be realised by a single person; access to the broader institutions which regulate the tradition is not necessary. In short, The Pomegranate Cycle establishes that a singer can be more than a voice and a performing body. She can be her own multimedia storyteller. Her audience can be anywhere.
Abstract:
Human activity-induced vibrations in slender structural systems become apparent in many different excitation modes and consequent action effects that cause discomfort to occupants, crowd panic and damage to public infrastructure. The resulting loss of public confidence in the safety of structures, economic losses, and the costs of retrofit and repair can be significant. Advanced computational and visualisation techniques enable engineers and architects to evolve bold and innovative structural forms, very often without precedent. New composite and hybrid materials that are making their presence felt in structural systems lack historical evidence of satisfactory performance over the anticipated design life. These structural systems are susceptible to multi-modal and coupled excitations that are very complex and have inadequate design guidance in the present codes and good practice guides. Many incidents of amplified resonant response have been reported in buildings, footbridges, stadia and other crowded structures, with adverse consequences. As a result, attenuation of human-induced vibration in innovative and slender structural systems very often requires special studies during the design process. Dynamic activities possess variable characteristics and thereby induce complex responses in structures that are sensitive to parametric variations. Rigorous analytical techniques are available for investigating such complex actions and responses to produce acceptable performance in structural systems. This paper presents an overview and critique of existing code provisions for human-induced vibration, followed by studies on the performance of three contrasting structural systems that exhibit complex vibration. The dynamic responses of these systems to human-induced vibration have been investigated using experimentally validated computer simulation techniques. The outcomes of these studies will have engineering applications for safe and sustainable structures and provide a basis for developing design guidance.
Abstract:
1. Both dietary magnesium depletion and potassium depletion (confirmed by tissue analysis) were induced in rats which were then compared with rats treated with chlorothiazide (250 mg/kg diet) and rats on a control synthetic diet. 2. Brain and muscle intracellular pH was measured by using a surface coil and [31P]-NMR to measure the chemical shift of inorganic phosphate. pH was also measured in isolated perfused hearts from control and magnesium-deficient rats. Intracellular magnesium status was assessed by measuring the chemical shift of β-ATP in brain. 3. There was no evidence for magnesium deficiency in the chlorothiazide-treated rats on tissue analysis or on chemical shift of β-ATP in brain. Both magnesium and potassium deficiency, but not chlorothiazide treatment, were associated with an extracellular alkalosis. 4. Magnesium deficiency led to an intracellular alkalosis in brain, muscle and heart. Chlorothiazide treatment led to an alkalosis in brain. Potassium deficiency was associated with a normal intracellular pH in brain and muscle. 5. Magnesium depletion and chlorothiazide treatment produce intracellular alkalosis by unknown mechanism(s).
Abstract:
This paper examines the instances of and motivations for noble cause corruption perpetrated by NSW police officers. Noble cause corruption occurs when a person tries to produce a just outcome through unjust methods, for example, police manipulating evidence to ensure the conviction of a known offender. Conventional integrity regime initiatives are unlikely to halt noble cause corruption because its basis lies in an attempt to do good by compensating for the apparent flaws of an unjust system. This paper suggests that the solution lies in a change of culture through improved leadership, and uses the political theories of Roger Myerson to propose a possible solution. Evidence from police officers in transcripts of the Wood Inquiry (1997) is examined to discern their participation in noble cause corruption and their rationalisation of this behaviour. The overall finding is that officers were motivated to indulge in this type of corruption by a desire to produce convictions where they felt the system unfairly worked against their ability to do their job correctly. We add to the literature by demonstrating that the rewards can be positive: police are seeking job satisfaction through the ability to convict the guilty, and better equipment and investigative powers would enable them to do this.
Abstract:
Internet Child Abuse: Current Research and Policy provides a timely overview of international policy, legislation, and offender management and treatment practice in the area of Internet child abuse. Internet use has grown considerably over the last five years, and information technology now forms a core part of the formal education system in many countries. There is, however, increasing evidence that the Internet is used by some adults to access children and young people in order to 'groom' them for the purposes of sexual abuse, as well as to produce and distribute indecent, illegal images of children. This book presents and assesses the most recent and current research on Internet child abuse, addressing its nature, the behaviour and treatment of its perpetrators, international policy, legislation and protection, and policing. It will be required reading for an international audience of academics, researchers, policy-makers and criminal justice practitioners with interests in this area.
Abstract:
The world is facing problems due to the effects of increased atmospheric pollution, climate change and global warming. Innovative technologies to identify, quantify and assess the exchange fluxes of pollutant gases between the Earth's surface and the atmosphere are required. This paper proposes the development of a gas sensor system for a small UAV to monitor pollutant gases, collect data and geo-locate where each sample was taken. The prototype has two principal systems: a light, portable gas sensor and an optional electric-solar powered UAV. The prototype will be able to operate in the lower troposphere (100-500 m), collect samples, and time-stamp and geo-locate each sample. One of the limitations of a small UAV is the limited power available; therefore a small, low-power-consumption payload is designed and built for this research. The specific gases targeted in this research are NO2, mostly produced by traffic, and NH3 from farming, with concentrations above 0.05 ppm and 35 ppm respectively being harmful to human health. The developed prototype will be a useful tool for scientists to analyse the behaviour and tendencies of pollutant gases, producing more realistic models of them.
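A minimal sketch of the kind of time-stamped, geo-located sample record the abstract describes is given below; the record layout and the example reading are hypothetical, with only the 0.05 ppm NO2 and 35 ppm NH3 thresholds taken from the abstract.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sample record for a UAV gas-sensing payload: each sample is
# time-stamped, geo-located and checked against the health thresholds
# quoted in the abstract. Field names and the example values are assumptions.

NO2_LIMIT_PPM = 0.05   # threshold quoted for NO2
NH3_LIMIT_PPM = 35.0   # threshold quoted for NH3

@dataclass
class GasSample:
    timestamp: datetime
    latitude: float
    longitude: float
    altitude_m: float
    no2_ppm: float
    nh3_ppm: float

    def exceeds_limits(self) -> bool:
        return self.no2_ppm > NO2_LIMIT_PPM or self.nh3_ppm > NH3_LIMIT_PPM

sample = GasSample(datetime.now(timezone.utc), -27.47, 153.02, 250.0,
                   no2_ppm=0.07, nh3_ppm=12.0)
print(sample.exceeds_limits())  # True: NO2 reading is above the 0.05 ppm threshold
```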
Abstract:
This paper considers VECMs for variables exhibiting cointegration and common features in the transitory components. While the presence of cointegration between the permanent components of series reduces the rank of the long-run multiplier matrix, a common feature among the transitory components leads to a rank reduction in the matrix summarising short-run dynamics. The common feature also implies that there exist linear combinations of the first-differenced variables in a cointegrated VAR that are white noise, and traditional tests focus on testing for this characteristic. An alternative, however, is to test the rank of the short-run dynamics matrix directly. Consequently, we use the literature on testing the rank of a matrix to produce some alternative test statistics. We also show that these are identical to one of the traditional tests. The performance of the different methods is illustrated in a Monte Carlo analysis, which is then used to re-examine an existing empirical study. Finally, this approach is applied to provide a check for the presence of common dynamics in DSGE models.
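The core idea, that a common transitory feature shows up as reduced rank in the short-run dynamics matrix, can be sketched numerically. The following Python example simulates first-differenced data whose short-run coefficient matrix has rank one, estimates that matrix by OLS and inspects its singular values; it is only an informal illustration under assumed dimensions and noise levels, not the paper's formal test statistics.

```python
import numpy as np

# Hedged sketch: simulate dy_t = Gamma dy_{t-1} + e_t with a rank-1 Gamma
# (a "common feature" restriction), estimate Gamma by OLS and look at its
# singular values. Dimensions, coefficients and noise scale are assumptions.

rng = np.random.default_rng(0)
T, k = 500, 3

# True short-run matrix of rank 1, scaled so the system is stable.
a = rng.normal(size=k)
a /= np.linalg.norm(a)
b = rng.normal(size=k)
b /= np.linalg.norm(b)
Gamma = 0.5 * np.outer(a, b)

dy = np.zeros((T, k))
for t in range(1, T):
    dy[t] = Gamma @ dy[t - 1] + rng.normal(scale=0.5, size=k)

# OLS estimate of Gamma: dy[1:] ~ dy[:-1] @ B, with Gamma_hat = B.T
B, *_ = np.linalg.lstsq(dy[:-1], dy[1:], rcond=None)
Gamma_hat = B.T
print("singular values of estimated short-run matrix:",
      np.round(np.linalg.svd(Gamma_hat, compute_uv=False), 3))
# One singular value well above the rest points to reduced rank, i.e. a
# common transitory feature; the paper's tests make this inference rigorous.
```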
Abstract:
Background: Canonical serine protease inhibitors commonly bind to their targets through a rigid loop stabilised by an internal hydrogen bond network and disulfide bond(s). The smallest of these is sunflower trypsin inhibitor (SFTI-1), a potent and broad-range protease inhibitor. Recently, we re-engineered the contact β-sheet of SFTI-1 to produce a selective inhibitor of kallikrein-related peptidase 4 (KLK4), a protease associated with prostate cancer progression. However, modifications in the binding loop to achieve specificity may compromise structural rigidity and prevent re-engineered inhibitors from reaching optimal binding affinity. Methodology/Principal Findings: In this study, the effect of amino acid substitutions on the internal hydrogen bonding network of SFTI was investigated using an in silico screen of inhibitor variants in complex with KLK4 or trypsin. Substitutions favouring internal hydrogen bond formation directly correlated with increased potency of inhibition in vitro. This produced a second-generation inhibitor (SFTI-FCQR Asn14) which displayed both a 125-fold increased capacity to inhibit KLK4 (Ki = 0.0386 ± 0.0060 nM) and enhanced selectivity over off-target serine proteases. Further, SFTI-FCQR Asn14 was stable in cell culture and bioavailable in mice when administered by intraperitoneal perfusion. Conclusion/Significance: These findings highlight the importance of conserving structural rigidity of the binding loop, in addition to optimising protease/inhibitor contacts, when re-engineering canonical serine protease inhibitors.
Abstract:
Background: Older people have higher rates of hospital admission than the general population and higher rates of readmission due to complications and falls. During hospitalisation, older people experience significant functional decline which impairs their future independence and quality of life. Acute hospital services comprise the largest section of health expenditure in Australia, and prevention or delay of disease is known to produce more effective use of services. Current models of discharge planning and follow-up care, however, do not address the need to prevent deconditioning or functional decline. This paper describes the protocol of a randomised controlled trial which aims to evaluate innovative transitional care strategies to reduce unplanned readmissions and improve the functional status, independence and psycho-social well-being of community-based older people at risk of readmission. Methods/Design: The study is a randomised controlled trial. Within 72 hours of hospital admission, a sample of older adults fitting the inclusion/exclusion criteria (aged 65 years and over, admitted with a medical diagnosis, able to walk independently for 3 metres, and with at least one risk factor for readmission) are randomised into one of four groups: 1) the usual care control group, 2) the exercise and in-home/telephone follow-up intervention group, 3) the exercise-only intervention group, or 4) the in-home/telephone follow-up only intervention group. The usual care control group receives the usual discharge planning provided by the health service. In addition to usual care, the exercise and in-home/telephone follow-up intervention group receives an intervention consisting of a tailored exercise program, an in-home visit and 24-week telephone follow-up by a gerontic nurse. The exercise-only and in-home/telephone follow-up only intervention groups, in addition to usual care, receive only the exercise or the gerontic nurse component of the intervention, respectively. Data collection is undertaken at baseline within 72 hours of hospital admission, and at 4 weeks, 12 weeks and 24 weeks following hospital discharge. Outcome assessors are blinded to group allocation. Primary outcomes are emergency hospital readmissions and health service use, functional status, psychosocial well-being and cost effectiveness. Discussion: The acute hospital sector comprises the largest component of health care system expenditure in developed countries, and older adults are its most frequent consumers. There are few trials demonstrating effective models of transitional care that prevent emergency readmissions and loss of functional ability and independence in this population following an acute hospital admission. This study aims to address that gap and provide information for future health service planning which meets client needs and lowers the use of acute care services.
Abstract:
The discovery of protein variation is an important strategy in disease diagnosis within the biological sciences. The current benchmark for elucidating information from multiple biological variables is the so-called “omics” disciplines of the biological sciences. Such variability is uncovered by the implementation of multivariable data mining techniques, which fall into two primary categories: machine learning strategies and statistically based approaches. Typically, proteomic studies can produce hundreds or thousands of variables, p, per observation, n, depending on the analytical platform or method employed to generate the data. Many classification methods are limited by an n≪p constraint and, as such, require pre-treatment to reduce the dimensionality prior to classification. Recently, machine learning techniques have gained popularity in the field for their ability to successfully classify unknown samples. One limitation of such methods is the lack of a functional model allowing meaningful interpretation of results in terms of the features used for classification. This is a problem that might be solved using a statistical model-based approach, where not only is the importance of each individual protein explicit, but the proteins are also combined into a readily interpretable classification rule without relying on a black-box approach. Here we apply the statistical dimension reduction techniques Partial Least Squares (PLS) and Principal Components Analysis (PCA), followed by both statistical and machine learning classification methods, and compare them to a popular machine learning technique, Support Vector Machines (SVM). Both PLS and SVM demonstrate strong utility for proteomic classification problems.
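A minimal sketch of the dimension-reduction-then-classification workflow described above, using scikit-learn on simulated "wide" data (many more variables than observations), is shown below; the simulated data, component counts and default SVM settings are assumptions and do not reproduce the study's actual pipeline or results.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVC

# Hedged sketch: dimension reduction (PCA or PLS) followed by an SVM
# classifier on simulated data with far more variables than observations
# (the n << p setting). All sizes and settings are illustrative assumptions.

X, y = make_classification(n_samples=60, n_features=500, n_informative=20,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# PCA reduction (unsupervised), then an SVM on the component scores.
pca = PCA(n_components=10).fit(X_tr)
svm_pca = SVC().fit(pca.transform(X_tr), y_tr)
print("PCA + SVM accuracy:", svm_pca.score(pca.transform(X_te), y_te))

# PLS reduction (supervised, PLS-DA style), then an SVM on the scores.
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
svm_pls = SVC().fit(pls.transform(X_tr), y_tr)
print("PLS + SVM accuracy:", svm_pls.score(pls.transform(X_te), y_te))
```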
Abstract:
Peeling is an essential phase of the post-harvest processing industry; however, undesirable processing losses are unavoidable and have always been a main concern of the food processing sector. There are three methods of peeling fruits and vegetables: mechanical, chemical and thermal, depending on the class and type of produce. By comparison, mechanical methods are the most preferred; mechanical peeling does not create any harmful effects on the tissue and keeps the edible portions of produce fresh. The main disadvantage of mechanical peeling is the rate of material loss and deformation. Reducing material losses and increasing the quality of the process has a direct effect on the overall efficiency of the food processing industry; this requires further study of the technological aspects of these operations. In order to enhance the effectiveness of industrial food practices, it is essential to have a clear understanding of material properties and the behaviour of tissues under industrial processes. This paper presents a scheme of research that seeks to examine tissue damage to tough-skinned vegetables during mechanical peeling by developing a novel finite element (FE) model of the process using an explicit dynamic finite element analysis approach. A computer model of the mechanical peeling process will be developed in this study to simulate the energy consumption and stress-strain interactions of the cutter and tissue. Available finite element software and methods will be applied to establish the model. Improving knowledge of the interactions and variables involved in food operations, particularly in the peeling process, is the main objective of the proposed study. Understanding these interrelationships will help researchers and designers of food processing equipment to develop new and more efficient technologies. The presented work reviews the available literature and previous work done in this area of research and identifies current gaps in the modelling and simulation of food processes.