526 results for Fertilizer application
Abstract:
Recent studies of gene silencing in plants have revealed two RNA-mediated epigenetic processes, RNA-directed RNA degradation and RNA-directed DNA methylation. These natural processes have provided new avenues for developing high-efficiency, high-throughput technology for gene suppression in plants.
Abstract:
Roofing tile manufacturing is a mass production process with high operational and inventory wastes and costs. Due to huge operational costs, excessive inventory and waste, and quality problems, roofing tile manufacturers are trying to implement lean manufacturing practices in their operations in order to remain competitive in an increasingly competitive global market. The aim of this research is to evaluate the possibility of reducing the operational and inventory costs of the tile manufacturing process through waste minimization. This paper analyses the current waste situation in a tile manufacturing process and develops current- and future-state value stream maps for such a process with a view to implementing lean principles in manufacturing. The focus of the approach is on cost reduction by eliminating non-value-added activities.
Abstract:
Sheryl Jackson looks at the decision of Justice McMeekin in Northbound Property Group Pty Ltd v Carosi (No.2) [2013] QSC 189.
Abstract:
Diagnostics of rolling element bearings involves a combination of different techniques of signal enhancement and analysis. The most common procedure comprises a first step of order tracking and synchronous averaging, able to remove the undesired components, synchronous with the shaft harmonics, from the signal, and a final step of envelope analysis to obtain the squared envelope spectrum. This indicator has been studied thoroughly, and statistically based criteria have been obtained in order to identify damaged bearings. The statistical thresholds are valid only if all the deterministic components in the signal have been removed. Unfortunately, in various industrial applications characterized by heterogeneous vibration sources, the first step of synchronous averaging is not sufficient to completely eliminate the deterministic components, and an additional pre-whitening step is needed before the envelope analysis. Different techniques have been proposed in the past with this aim: the most widespread are linear prediction filters and spectral kurtosis. Recently, a new pre-whitening technique has been proposed, based on cepstral analysis: the so-called cepstrum pre-whitening. Owing to its low computational requirements and its simplicity, it seems a good candidate for the intermediate pre-whitening step in an automatic damage recognition algorithm. In this paper, the effectiveness of the new technique will be tested on data measured on a full-scale industrial bearing test rig able to reproduce harsh operating conditions. A benchmark comparison with the traditional pre-whitening techniques will be made, as a final step to verify the potential of cepstrum pre-whitening.
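To make the pre-whitening step concrete, the sketch below shows one common formulation of cepstrum pre-whitening (dividing the spectrum by its own magnitude, which amounts to zeroing the real cepstrum at all non-zero quefrencies) followed by a squared envelope spectrum. It is a minimal illustration in Python, assuming a hypothetical vibration signal `x` sampled at `fs` Hz; it is not the benchmark code used on the paper's test rig.

```python
# Minimal sketch: cepstrum pre-whitening + squared envelope spectrum.
# `x` and `fs` are hypothetical inputs, not the paper's test-rig data.
import numpy as np
from scipy.signal import hilbert

def cepstrum_prewhiten(x):
    """Whiten the magnitude spectrum while keeping the phase."""
    X = np.fft.fft(x)
    eps = np.finfo(float).eps                 # avoid division by zero
    return np.real(np.fft.ifft(X / (np.abs(X) + eps)))

def squared_envelope_spectrum(x, fs):
    """Squared envelope spectrum via the analytic signal (Hilbert transform)."""
    env2 = np.abs(hilbert(x)) ** 2
    env2 -= env2.mean()                       # remove the DC component
    ses = np.abs(np.fft.rfft(env2)) / len(env2)
    freqs = np.fft.rfftfreq(len(env2), d=1.0 / fs)
    return freqs, ses

# Usage: peaks in `ses` near a bearing fault frequency (e.g. BPFO) and its
# harmonics would indicate a damaged bearing.
# freqs, ses = squared_envelope_spectrum(cepstrum_prewhiten(x), fs)
```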
Abstract:
As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the accessible length scales of soft matter in the modeling of mechanical behaviors. However, the classical thermostat algorithm in a highly coarse-grained molecular dynamics method would underestimate the thermodynamic behavior of soft matter (e.g. microfilaments in cells), which can weaken the ability of materials to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed to retain the representation of thermodynamic properties of microfilaments at an extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated by all-atom MD simulation. This new stochastic thermostat algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matter.
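As an illustration of what a stochastic thermostat does in a coarse-grained setting, the sketch below implements a generic Langevin (BAOAB-style) integration step that adds friction and random kicks to a bead's velocity. This is a standard textbook scheme shown only for orientation; it is not the authors' new algorithm, and the symbols (mass, gamma, kT, dt, force) are illustrative assumptions.

```python
# Generic Langevin-type thermostat step for a coarse-grained bead
# (BAOAB-style splitting, simplified). NOT the authors' algorithm.
import numpy as np

def langevin_step(x, v, force, mass, gamma, kT, dt, rng=np.random.default_rng()):
    """One step: half kick, half drift, friction + noise, half drift, half kick."""
    v = v + 0.5 * dt * force(x) / mass                 # half kick (B)
    x = x + 0.5 * dt * v                               # half drift (A)
    c1 = np.exp(-gamma * dt)                           # friction decay
    c2 = np.sqrt((1.0 - c1 ** 2) * kT / mass)          # fluctuation amplitude
    v = c1 * v + c2 * rng.standard_normal(np.shape(v)) # stochastic step (O)
    x = x + 0.5 * dt * v                               # half drift (A)
    v = v + 0.5 * dt * force(x) / mass                 # half kick (B)
    return x, v
```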
Abstract:
BACKGROUND Silver dressings have been widely and successfully used to prevent cutaneous wounds, including burns, chronic ulcers, dermatitis and other cutaneous conditions, from infection. However, in a few cases, skin discolouration or argyria-like appearances have been reported. This study investigated the level of silver in scar tissue post-burn injury following application of Acticoat, a silver dressing. METHODS A porcine deep dermal partial thickness burn model was used. Burn wounds were treated with this silver dressing until completion of re-epithelialization, and silver levels were measured in a total of 160 scars and normal tissues. RESULTS The mean level of silver in scar tissue covered with silver dressings was 136 microg/g, while the silver level in normal skin was less than 0.747 microg/g. A number of wounds had a slate-grey appearance, and dissection of the scars revealed brown-black pigment mostly in the middle and deep dermis within the scar. The level of silver and the severity of the slate-grey discolouration were correlated with the length of time of the silver dressing application. CONCLUSIONS These results show that silver deposition in cutaneous scar tissue is a common phenomenon, and higher levels of silver deposits and severe skin discolouration are correlated with an increase in the duration of this silver dressing application.
Abstract:
Silver dressings have been widely used to successfully prevent burn wound infection and sepsis. However, a few case studies have reported the functional abnormality and failure of vital organs, possibly caused by silver deposits. The aim of this study was to investigate the serum silver level in the pediatric burn population and also in several internal organs in a porcine burn model after the application of Acticoat. A total of 125 blood samples were collected from 46 pediatric burn patients. Thirty-six patients with a mean of 13.4% TBSA burns had a mean peak serum silver level of 114 microg/L, whereas 10 patients with a mean of 1.85% TBSA burns had an undetectable level of silver (<5.4 microg/L). Overall, serum silver levels were closely related to burn sizes. However, the highest serum silver was 735 microg/L in a 15-month-old toddler with 10% TBSA burns and the second highest was 367 microg/L in a 3-year-old with 28% TBSA burns. In a porcine model with 2% TBSA burns, the mean peak silver level was 38 microg/L at 2 to 3 weeks after application of Acticoat and was then significantly reduced to an almost undetectable level at 6 weeks. Of a total of four pigs, silver was detected in all four livers (1.413 microg/g) and all four hearts (0.342 microg/g), three of four kidneys (1.113 microg/g), and two of four brains (0.402 microg/g). This result demonstrated that although variable, the level of serum silver was positively associated with the size of burns, and significant amounts of silver were deposited in internal organs in pigs with only 2% TBSA burns, after application of Acticoat.
Abstract:
Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions from real world systems that could otherwise be expensive or impractical. Its recent gain in popularity can be attributed to some degree to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and to generate information at a higher level, where emerging patterns can be observed. This technique is data-intensive, as explicit data at a fine level of detail is used, and it is computer-intensive, as many interactions between agents, which can learn and have a goal, are required. With the growing availability of data and the increase in computer power, these concerns are however fading. Nonetheless, being able to update or extend the model as more information becomes available can become problematic, because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers’ behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from usual ABMs is that this ABM has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but also the model itself. Using such an approach enables the model to be extended as more information becomes available, or modified as the electricity system evolves, leading to an adaptable model. Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information regarding the model entities was separated into (a) assets, which describe the entities’ physical characteristics, and (b) agents, which describe their behaviour according to their goal and previous learning experiences. This approach diverges from the traditional approach, where both aspects are often conflated. It has many advantages in terms of reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics stay the same – this is the case for two identical battery systems whose usage will vary depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on what simulation is to be run. For example, data can be used to describe the environment to which the agents respond – e.g. weather for solar panels, or to describe the assets and their relation to one another – e.g. the network assets.
Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of the assets and agents using factories, and schedules their execution, which can be done sequentially or in parallel for faster execution. Building agent-based models in this way has proven fast when adding new complex behaviours, as well as new types of assets. Simulations have been run to understand the potential impact of changes on the network in terms of assets (e.g. installation of decentralised generators) or behaviours (e.g. response to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains, such as transport, which is part of future work with the addition of electric vehicles. A sketch of the asset/agent separation described above follows.
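The sketch below illustrates the asset/agent separation, assuming hypothetical names (BatteryAsset, PeakShavingAgent). MODAM itself is built on Java/OSGi and Eclipse plugins, so this Python fragment only conveys the design idea of keeping physical characteristics apart from behaviour; it is not the framework's API.

```python
# Illustrative only: one asset description, reusable by different agents.
from dataclasses import dataclass

@dataclass
class BatteryAsset:
    """Physical characteristics only: identical for identical hardware."""
    capacity_kwh: float
    depth_of_discharge: float
    lifetime_cycles: int
    charge_kwh: float = 0.0

class PeakShavingAgent:
    """Behaviour only: how one owner chooses to use the asset."""
    def __init__(self, asset: BatteryAsset, peak_threshold_kw: float):
        self.asset = asset
        self.peak_threshold_kw = peak_threshold_kw

    def act(self, demand_kw: float, step_hours: float = 1.0) -> float:
        """Discharge when demand exceeds the threshold, otherwise recharge."""
        usable = self.asset.capacity_kwh * self.asset.depth_of_discharge
        if demand_kw > self.peak_threshold_kw and self.asset.charge_kwh > 0:
            energy = min(self.asset.charge_kwh,
                         (demand_kw - self.peak_threshold_kw) * step_hours)
            self.asset.charge_kwh -= energy
            return -energy / step_hours        # power injected into the grid
        headroom = usable - self.asset.charge_kwh
        energy = min(headroom, 0.5 * usable)   # simple charging rule
        self.asset.charge_kwh += energy
        return energy / step_hours             # power drawn from the grid

# The same BatteryAsset could be handed to a different agent (e.g. one that
# maximises self-consumption of solar generation) without changing the asset
# description, which is the reusability argument made above.
```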
Abstract:
Rigid lenses, which were originally made from glass (between 1888 and 1940) and later from polymethyl methacrylate or silicone acrylate materials, are uncomfortable to wear and are now seldom fitted to new patients. Contact lenses became a popular mode of ophthalmic refractive error correction following the discovery of the first hydrogel material – hydroxyethyl methacrylate – by Czech chemist Otto Wichterle in 1960. To satisfy the requirements for ocular biocompatibility, contact lenses must be transparent and optically stable (for clear vision), have a low elastic modulus (for good comfort), have a hydrophilic surface (for good wettability), and be permeable to certain metabolites, especially oxygen, to allow for normal corneal metabolism and respiration during lens wear. A major breakthrough in respect of the last of these requirements was the development of silicone hydrogel soft lenses in 1999 and techniques for making the surface hydrophilic. The vast majority of contact lenses distributed worldwide are mass-produced using cast molding, although spin casting is also used. These advanced mass-production techniques have facilitated the frequent disposal of contact lenses, leading to improvements in ocular health and fewer complications. More than one-third of all soft contact lenses sold today are designed to be discarded daily (i.e., ‘daily disposable’ lenses).
Abstract:
With new developments in battery technologies, increasing application of Battery Energy Storage Systems (BESS) in power systems is anticipated in the near future. BESS have already been used for primary frequency regulation in the past. This paper examines the feasibility of using BESS together with load shedding in response to large disturbances in the power system. Load shedding is one of the conventional measures taken during large disturbances, and the performance of frequency control improves when it is combined with BESS. According to recent reports, high-power BESS will be employed in practice within the next five years. A simple low-order SMR model is used as the test system, while an incremental model of the BESS is applied in this paper. As continuous disturbances are not the main concern of this paper, df/dt is not considered in this article.
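To illustrate how BESS support and under-frequency load shedding might be combined in a low-order frequency study, the sketch below integrates a generic swing-equation approximation. It is not the SMR or incremental BESS model used in the paper; the inertia constant, damping, BESS gain and shedding stages are illustrative assumptions.

```python
# Generic swing-equation sketch of BESS support plus staged load shedding.
# All parameter values are illustrative assumptions.
import numpy as np

H, D = 5.0, 1.0                  # inertia constant (s) and load damping (pu)
K_BESS = 20.0                    # proportional BESS frequency-support gain (pu/pu)
P_BESS_MAX = 0.05                # BESS power limit (pu)
UFLS_STAGES = [(-0.4, 0.05), (-0.7, 0.05)]   # (deviation in Hz, load block shed in pu)

def simulate(dP_load=0.10, f0=50.0, dt=0.01, t_end=10.0):
    """Frequency deviation after a sudden load increase, with BESS + shedding."""
    n = int(t_end / dt)
    df = np.zeros(n)                                   # frequency deviation (Hz)
    shed, triggered = 0.0, [False] * len(UFLS_STAGES)
    for k in range(1, n):
        df_pu = df[k - 1] / f0
        p_bess = np.clip(-K_BESS * df_pu, -P_BESS_MAX, P_BESS_MAX)
        for i, (threshold, block) in enumerate(UFLS_STAGES):
            if not triggered[i] and df[k - 1] <= threshold:
                triggered[i], shed = True, shed + block  # each stage trips once
        dP = -dP_load + p_bess + shed - D * df_pu
        df[k] = df[k - 1] + dt * f0 * dP / (2.0 * H)     # swing equation, no governor
    return df

# Usage: comparing simulate() against a run with K_BESS = 0 shows how the
# frequency nadir improves when BESS support is added to load shedding alone.
```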
Abstract:
Low-speed rotating machines, which are among the most critical components in the drive train of wind turbines, are often threatened by a range of technical and environmental defects. These factors increase the economic case for Health Monitoring and Condition Monitoring of such systems. A defect in such a system releases energy from the related process at only a very low rate, so condition monitoring techniques that rely on detecting energy loss are very difficult, if not impossible, to use. The Acoustic Emission (AE) technique partly overcomes this issue, as it is well suited to detecting very small energy release rates. AE as a technique is more than 50 years old and detects the sounds associated with the failure of materials. AE signals are non-stationary elastic stress waves released by a failing component; they are capable of supporting online monitoring and are very sensitive for fault diagnosis. This paper discusses the history and background of the discovery and development of AE, covering the different ages of its development: the Age of Enlightenment (1950-1967), the Golden Age of AE (1967-1980) and the Period of Transition (1980-present). It then discusses the application of AE condition monitoring to machinery and the various systems that have applied the AE technique in their health monitoring. Finally, an experimental result from a QUT test rig, in which an outer-race bearing fault was simulated, is presented to demonstrate the sensitivity of AE for detecting incipient faults in low-speed machines.
Abstract:
Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant techniques for developing emulators have been priors in the form of Gaussian stochastic processes (GASP) that were conditioned with a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not consider our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there are a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept of developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, due to the consideration of our knowledge about dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators at least in cases in which the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by the application to a simple hydrological model.
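The Kalman smoothing step used to condition a linear state-space prior on observed outputs can be illustrated with a standard Rauch-Tung-Striebel smoother, sketched below for a linear-Gaussian model. The Gaussian-process priors on the innovation terms, which are central to the proposed emulator, are omitted here; the matrices A, Q, Hm, R and the initial moments are illustrative assumptions.

```python
# Minimal Kalman filter + RTS smoother for a linear-Gaussian state-space model.
# Only a building block of the emulator described above, not the full method.
import numpy as np

def kalman_rts_smoother(y, A, Q, Hm, R, m0, P0):
    """y: (T, d_obs) observations; returns smoothed state means (T, d_state)."""
    T, n = y.shape[0], m0.shape[0]
    m_f, P_f = np.zeros((T, n)), np.zeros((T, n, n))   # filtered moments
    m_p, P_p = np.zeros((T, n)), np.zeros((T, n, n))   # predicted moments
    m, P = m0, P0
    for t in range(T):
        m_p[t], P_p[t] = A @ m, A @ P @ A.T + Q        # predict
        S = Hm @ P_p[t] @ Hm.T + R
        K = P_p[t] @ Hm.T @ np.linalg.inv(S)           # Kalman gain
        m = m_p[t] + K @ (y[t] - Hm @ m_p[t])          # update mean
        P = (np.eye(n) - K @ Hm) @ P_p[t]              # update covariance
        m_f[t], P_f[t] = m, P
    m_s = m_f.copy()
    for t in range(T - 2, -1, -1):                     # backward RTS pass
        G = P_f[t] @ A.T @ np.linalg.inv(P_p[t + 1])
        m_s[t] = m_f[t] + G @ (m_s[t + 1] - m_p[t + 1])
    return m_s
```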
Abstract:
This project develops and evaluates a model of curriculum design that aims to assist student learning of foundational disciplinary ‘Threshold Concepts’. The project uses phenomenographic action research, cross-institutional peer collaboration and the Variation Theory of Learning to develop and trial the model. Two contrasting disciplines (Physics and Law) and four institutions (two research-intensive and two universities of technology) were involved in the project, to ensure broad applicability of the model across different disciplines and contexts. The Threshold Concepts that were selected for curriculum design attention were measurement uncertainty in Physics and legal reasoning in Law. Threshold Concepts are key disciplinary concepts that are inherently troublesome, transformative and integrative in nature. Once understood, such concepts transform students’ views of the discipline because they enable students to coherently integrate what were previously seen as unrelated aspects of the subject, providing new ways of thinking about it (Meyer & Land 2003, 2005, 2006; Land et al. 2008). However, the integrative and transformative nature of such threshold concepts makes them inherently difficult for students to learn, with resulting misunderstandings of concepts being prevalent...
Abstract:
Zero valent iron (ZVI) was prepared by reducing natural goethite (NG-ZVI) and synthetic goethite (SG-ZVI) in hydrogen at 550 °C. XRD, TEM, FESEM/EDS and a specific surface area (SSA) and pore analyser were used to characterize the goethites and the reduced goethites. Both NG-ZVI and SG-ZVI, with sizes ranging from the nanoscale to several hundred nanometers, were obtained by reducing the goethites at 550 °C. The reductive capacity of the ZVIs was assessed by removal of Cr(VI) at ambient temperature, in comparison with that of commercial iron powder (CIP). The effects of contact time, initial concentration and reaction temperature on Cr(VI) removal were investigated. Furthermore, the uptake mechanism was discussed on the basis of isotherms, thermodynamic analysis and XPS results. The results showed that SG-ZVI had the best reductive capacity for Cr(VI), reducing Cr(VI) to Cr(III). The results suggest that hydrogen reduction is a good approach to prepare ZVI and that this type of ZVI is potentially useful as a permeable reactive barrier material for remediating heavy metals.
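As a hedged illustration of the kind of isotherm analysis referred to above, the sketch below fits a Langmuir isotherm to equilibrium Cr(VI) uptake data. The data arrays, initial guesses and fitted parameters (q_max, K_L) are placeholders, not values from the study.

```python
# Illustrative Langmuir isotherm fit for Cr(VI) uptake; placeholder data only.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, q_max, K_L):
    """q_e = q_max * K_L * Ce / (1 + K_L * Ce), uptake (mg/g) vs. conc. (mg/L)."""
    return q_max * K_L * Ce / (1.0 + K_L * Ce)

Ce = np.array([1.0, 5.0, 10.0, 20.0, 40.0])   # equilibrium concentration (mg/L), placeholder
qe = np.array([2.1, 7.5, 11.0, 14.2, 16.0])   # equilibrium uptake (mg/g), placeholder
(q_max, K_L), _ = curve_fit(langmuir, Ce, qe, p0=[20.0, 0.1])
```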