939 results for Continuously Stirred Bioreactor
Abstract:
AIM This paper presents a discussion on the application of a capability framework for advanced practice nursing standards/competencies. BACKGROUND There is acceptance that competencies are useful and necessary for the definition and education of practice-based professions. Competencies have been described as appropriate for practice in stable environments with familiar problems. Increasingly, competencies are being designed for use in the health sector for advanced practice such as the nurse practitioner role. Nurse practitioners work in environments and roles that are dynamic and unpredictable, necessitating attributes and skills to practice at advanced and extended levels in both familiar and unfamiliar clinical situations. Capability has been described as the combination of skills, knowledge, values and self-esteem which enables individuals to manage change, be flexible and move beyond competency. DESIGN A discussion paper exploring 'capability' as a framework for advanced nursing practice standards. DATA SOURCES Data were sourced from electronic databases as described in the background section. IMPLICATIONS FOR NURSING As advanced practice nursing becomes more established and formalized, novel ways of teaching and assessing the practice of experienced clinicians beyond competency are imperative for the changing context of health services. CONCLUSION Leading researchers into capability in health care state that traditional education and training in health disciplines concentrates mainly on developing competence. To ensure that healthcare delivery keeps pace with increasing demand and a continuously changing context, there is a need to embrace capability as a framework for advanced practice and education.
Abstract:
In an estuary, mixing and dispersion result from the combination of large-scale advection and small-scale turbulence, both of which are complex to estimate. A field study was conducted in a small sub-tropical estuary in which high-frequency (50 Hz) turbulent data were recorded continuously for about 48 hours. A triple decomposition technique was introduced to isolate the contributions of tides, resonance and turbulence in the flow field. A striking feature of the data set was the slow fluctuations, which exhibited large amplitudes of up to 50% of the tidal amplitude under neap tide conditions. The triple decomposition technique allowed a characterisation of the broader temporal scales of the high-frequency fluctuation data sampled during a number of full tidal cycles.
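The abstract does not spell out the triple decomposition itself; below is a minimal sketch of one common way to implement it, assuming the record is split with two centred moving-average filters (a long window for the tidal component, a shorter window for the slow resonance-scale fluctuations, with turbulence as the residual). The window lengths, sampling rate and variable names are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def triple_decompose(u, fs=50.0, tidal_window_s=3600.0, slow_window_s=300.0):
    """Split a velocity record u (sampled at fs Hz) into tidal, slow-fluctuation
    (resonance) and turbulent components via two moving-average filters."""
    tidal = uniform_filter1d(u, size=int(tidal_window_s * fs))        # large-scale tide
    slow = uniform_filter1d(u - tidal, size=int(slow_window_s * fs))  # resonance band
    turbulence = u - tidal - slow                                     # small-scale residual
    return tidal, slow, turbulence

# Synthetic demo record: 2 h at 50 Hz (the field deployment was ~48 h).
fs = 50.0
t = np.arange(0.0, 2 * 3600.0, 1.0 / fs)
u = 0.5 * np.sin(2 * np.pi * t / (12.4 * 3600.0)) + 0.05 * np.random.randn(t.size)
tide, slow, turb = triple_decompose(u, fs=fs)
```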
Abstract:
A hippocampal-CA3 memory model was constructed with PGENESIS, a recently developed version of GENESIS that allows for distributed processing of a neural network simulation. A number of neural models of the human memory system have identified the CA3 region of the hippocampus as storing the declarative memory trace. However, computational models designed to assess the viability of the putative mechanisms of storage and retrieval have generally been too abstract to allow comparison with empirical data. Recent experimental evidence has shown that selective knock-out of NMDA receptors in the CA1 of mice leads to reduced stability of firing specificity in place cells. Here a similar reduction of stability of input specificity is demonstrated in a biologically plausible neural network model of the CA3 region, under conditions of Hebbian synaptic plasticity versus an absence of plasticity. The CA3 region is also commonly associated with seizure activity. Further simulations of the same model tested the response to continuously repeating versus randomized non-repeating input patterns. Each paradigm delivered input of equal intensity and duration. Non-repeating input patterns elicited a greater pyramidal cell spike count. This suggests that repetitive versus non-repeating neocortical input has a quantitatively different effect on the hippocampus. This may be relevant to the production of independent epileptogenic zones and the process of encoding new memories.
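As a rough illustration of the plasticity comparison described above, the sketch below drives a simple rate-based layer with a normalized Hebbian (Oja-style) weight update versus a fixed-weight control, under repeating and non-repeating input patterns. The network size, learning rate and input statistics are illustrative assumptions; the actual model is a detailed compartmental CA3 network simulated in PGENESIS.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_cells, learning_rate = 50, 20, 0.01

def run(patterns, plastic=True):
    """Present input patterns to a linear rate layer; optionally apply a
    normalized Hebbian (Oja) update so that weights stay bounded."""
    w = rng.uniform(0.0, 0.1, size=(n_cells, n_inputs))
    total_activity = 0.0
    for x in patterns:
        y = w @ x                                  # post-synaptic firing rates
        total_activity += y.sum()
        if plastic:
            w += learning_rate * (np.outer(y, x) - (y ** 2)[:, None] * w)
    return w, total_activity

pattern = rng.random(n_inputs)
repeating = [pattern] * 100                                 # same pattern every trial
nonrepeating = [rng.random(n_inputs) for _ in range(100)]   # new pattern every trial

w_rep, act_rep = run(repeating, plastic=True)
w_non, act_non = run(nonrepeating, plastic=True)
w_ctl, act_ctl = run(repeating, plastic=False)
```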
Abstract:
As fossil fuel prices increase and environmental concerns gain prominence, the development of alternative fuels from biomass has become more important. Biodiesel produced from microalgae is becoming an attractive alternative to partially replace petroleum. At present, however, the production of microalgal biodiesel is not economically viable because it costs more than conventional fuels. Therefore, a new concept is introduced in this article as an option to reduce the total production cost of microalgal biodiesel. Integrating the biodiesel production system with methane production via anaerobic digestion is shown to improve the economics and sustainability of the overall biodiesel process. Anaerobic digestion of microalgae produces methane, which can in turn be converted to electricity. The generated electricity can offset the energy consumed by the microalgal cultivation, dewatering, extraction and transesterification processes. From theoretical calculations, the electricity generated from methane is able to power all of the biodiesel production stages and will substantially reduce the cost of biodiesel production (a 33% reduction). The carbon emissions of the biodiesel production system are also reduced by approximately 75% when utilizing biogas electricity compared to when the electricity is purchased from the Victorian grid. The overall findings from this study indicate that digesting microalgal waste to produce biogas will make the production of biodiesel from algae more viable by reducing the overall cost of production per unit of biodiesel, and hence enable biodiesel to be more competitive with existing fuels.
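As a rough illustration of the kind of energy and cost bookkeeping behind the reported 33% cost and 75% emission reductions, the sketch below balances the electricity recovered from digester methane against the electricity demand of the production stages. All numerical inputs are placeholder assumptions, not values from the study.

```python
# Hypothetical energy/cost balance for biogas-assisted microalgal biodiesel.
# All figures below are illustrative placeholders, not data from the study.

stage_demand_kwh_per_l = {        # electricity demand per litre of biodiesel
    "cultivation": 1.2,
    "dewatering": 0.8,
    "extraction": 0.6,
    "transesterification": 0.4,
}
methane_electricity_kwh_per_l = 3.5   # electricity recovered from digesting residues
grid_price_per_kwh = 0.25             # purchased-electricity price (currency units)
grid_emission_kg_co2_per_kwh = 1.0    # emission factor of the replaced grid power

total_demand = sum(stage_demand_kwh_per_l.values())
grid_purchase = max(total_demand - methane_electricity_kwh_per_l, 0.0)

cost_without_biogas = total_demand * grid_price_per_kwh
cost_with_biogas = grid_purchase * grid_price_per_kwh
avoided_emissions = (total_demand - grid_purchase) * grid_emission_kg_co2_per_kwh

print(f"Electricity demand: {total_demand:.1f} kWh/L, biogas supplies "
      f"{min(methane_electricity_kwh_per_l, total_demand):.1f} kWh/L")
print(f"Electricity cost per litre: {cost_without_biogas:.2f} -> {cost_with_biogas:.2f}")
print(f"Avoided emissions: {avoided_emissions:.1f} kg CO2 per litre")
```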
Abstract:
Due to their unobtrusive nature, vision-based approaches to tracking sports players have been preferred over wearable sensors as they do not require the players to be instrumented for each match. Unfortunately, however, due to the heavy occlusion between players, variation in resolution and pose, and fluctuating illumination conditions, tracking players continuously is still an unsolved vision problem. For tasks like clustering and retrieval, having noisy data (i.e. missing and false player detections) is problematic as it generates discontinuities in the input data stream. One method of circumventing this issue is to use an occupancy map, where the field is discretised into a series of zones and a count of player detections in each zone is obtained. A series of frames can then be concatenated to represent a set-play or example of team behaviour. A problem with this approach, though, is that the compressibility is low (i.e. the variability in the feature space is extremely high). In this paper, we propose the use of a bilinear spatiotemporal basis model with a role representation, operating in a low-dimensional space, to clean up the noisy detections. To evaluate our approach, we used a fully instrumented field-hockey pitch with 8 fixed high-definition (HD) cameras, ran our method on approximately 200,000 frames of data from a state-of-the-art real-time player detector, and compared it to manually labeled data.
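A minimal sketch of the occupancy-map representation described above is given below, assuming player detections arrive as (x, y) positions per frame and the pitch is discretised into a regular grid of zones; the grid resolution and standard pitch dimensions are illustrative assumptions.

```python
import numpy as np

def occupancy_map(detections, field_size=(91.4, 55.0), grid=(10, 6)):
    """Count player detections per zone for one frame.
    detections: iterable of (x, y) positions in metres on the pitch.
    Returns a grid[0] x grid[1] array of counts."""
    counts = np.zeros(grid, dtype=int)
    for x, y in detections:
        i = min(int(x / field_size[0] * grid[0]), grid[0] - 1)
        j = min(int(y / field_size[1] * grid[1]), grid[1] - 1)
        counts[i, j] += 1
    return counts

# A set-play descriptor: concatenate per-frame maps over a window of frames.
frames = [[(10.0, 20.0), (45.0, 30.0)], [(11.0, 21.0), (46.0, 29.0)]]
descriptor = np.concatenate([occupancy_map(f).ravel() for f in frames])
```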
Abstract:
Water to air methane emissions from freshwater reservoirs can be dominated by sediment bubbling (ebullitive) events. Previous work to quantify methane bubbling from a number of Australian sub-tropical reservoirs has shown that this can contribute as much as 95% of total emissions. These bubbling events are controlled by a variety of different factors including water depth, surface and internal waves, wind seiching, atmospheric pressure changes and water level changes. Key to quantifying the magnitude of this emission pathway is estimating both the bubbling rate and the areal extent of bubbling. Both bubbling rate and areal extent are seldom constant and require persistent monitoring over extended time periods before true estimates can be generated. In this paper we present a novel system for persistent monitoring of both bubbling rate and areal extent using multiple robotic surface chambers and adaptive sampling (grazing) algorithms to automate the quantification process. Individual chambers are self-propelled and guided and communicate with each other without the need for supervised control. They can maintain station at a sampling site for a desired incubation period and continuously monitor, record and report fluxes during the incubation. To exploit the methane sensor detection capabilities, the chamber can be automatically lowered to decrease the head-space and increase concentration. The grazing algorithms assign a hierarchical order to chambers within a preselected zone. Chambers then converge on the individual recording the highest 15-minute bubbling rate. Individuals maintain a specified distance apart from each other during each sampling period before all individuals are required to move to different locations based on a sampling algorithm (systematic or adaptive) exploiting prior measurements. This system has been field-tested on a large-scale subtropical reservoir, Little Nerang Dam, over monthly timescales. Using this technique, localised bubbling zones on the water storage were found to produce over 50,000 mg m⁻² d⁻¹ and the areal extent ranged from 1.8 to 7% of the total reservoir area. The drivers behind these changes, as well as lessons learnt from the system implementation, are presented. This system exploits relatively cheap materials, sensing and computing, and can be applied to a wide variety of aquatic and terrestrial systems.
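The convergence step of the grazing algorithm could be sketched as follows, assuming each chamber reports its latest 15-minute bubbling rate and a position, and that chambers move toward the current highest-rate individual while keeping a minimum separation. The separation distance and data structures are illustrative assumptions, not the deployed controller.

```python
import math

MIN_SEPARATION_M = 20.0   # assumed minimum spacing between chambers

def converge_step(chambers):
    """One grazing step: all chambers move toward the chamber with the
    highest 15-minute bubbling rate while keeping a minimum separation.
    chambers: list of dicts with keys 'id', 'pos' (x, y) and 'rate_15min'."""
    leader = max(chambers, key=lambda c: c["rate_15min"])
    for c in chambers:
        if c["id"] == leader["id"]:
            continue
        dx = leader["pos"][0] - c["pos"][0]
        dy = leader["pos"][1] - c["pos"][1]
        dist = math.hypot(dx, dy)
        if dist > MIN_SEPARATION_M:
            # Move toward the leader but stop at the separation radius.
            scale = (dist - MIN_SEPARATION_M) / dist
            c["pos"] = (c["pos"][0] + scale * dx, c["pos"][1] + scale * dy)
    return chambers

chambers = [
    {"id": 1, "pos": (0.0, 0.0), "rate_15min": 120.0},
    {"id": 2, "pos": (100.0, 40.0), "rate_15min": 900.0},
    {"id": 3, "pos": (-60.0, 80.0), "rate_15min": 300.0},
]
converge_step(chambers)
```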
Abstract:
Structural Health Monitoring (SHM) schemes are useful for proper management of the performance of structures and for preventing their catastrophic failures. Vibration-based SHM schemes have gained popularity during the past two decades, resulting in significant research. It is hence inevitable that future SHM schemes will include robust and automated vibration-based damage assessment techniques (VBDAT) to detect, localize and quantify damage. In this context, the Damage Index (DI) method, which is classified as a non-model or output-based VBDAT, has the ability to automate the damage assessment process without using a computer or numerical model alongside the actual measurements. Although damage assessment using DI methods has been able to achieve reasonable success for structures made of homogeneous materials such as steel, the same success level has not been reported with respect to Reinforced Concrete (RC) structures. The complexity of flexural cracks is claimed to be the main reason hindering the applicability of existing DI methods to RC structures. Past research also indicates that use of a constant baseline throughout the damage assessment process undermines the potential of the Modal Strain Energy based Damage Index (MSEDI). To address this situation, this paper presents a novel method that has been developed as part of a comprehensive research project carried out at Queensland University of Technology, Brisbane, Australia. This novel process, referred to as the baseline updating method, continuously updates the baseline and systematically tracks both crack formation and propagation, with the ability to automate the damage assessment process using output-only data. The proposed method is illustrated through examples and the results demonstrate the capability of the method to achieve the desired outcomes.
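A heavily simplified sketch of a modal-strain-energy damage index with baseline updating is given below, assuming the index at each location is the ratio of modal strain energy (approximated from mode-shape curvature) in the current state to that in the baseline, and that the baseline is replaced by the current state once a change is confirmed. This is an illustrative assumption about the general approach, not the authors' full formulation.

```python
import numpy as np

def modal_strain_energy(mode_shape, dx=1.0):
    """Element-wise modal strain energy approximated from mode-shape curvature."""
    curvature = np.gradient(np.gradient(mode_shape, dx), dx)
    return curvature ** 2

def damage_index(current_mode, baseline_mode, dx=1.0, eps=1e-12):
    """Ratio of current to baseline modal strain energy (>1 suggests damage)."""
    return (modal_strain_energy(current_mode, dx) + eps) / (
        modal_strain_energy(baseline_mode, dx) + eps)

class BaselineUpdatingMSEDI:
    """Track damage with a baseline that is replaced after each confirmed event,
    so subsequent indices reflect new cracking rather than accumulated history."""
    def __init__(self, baseline_mode, threshold=2.0):
        self.baseline = np.asarray(baseline_mode, dtype=float)
        self.threshold = threshold

    def assess(self, current_mode):
        current = np.asarray(current_mode, dtype=float)
        di = damage_index(current, self.baseline)
        damaged = np.where(di > self.threshold)[0]
        if damaged.size:              # confirmed change: update the baseline
            self.baseline = current
        return di, damaged

# Illustrative demo: first bending mode of a beam with a small local change.
x = np.linspace(0.0, 1.0, 101)
healthy = np.sin(np.pi * x)
tracker = BaselineUpdatingMSEDI(healthy)
damaged_mode = healthy + 0.02 * np.exp(-((x - 0.3) ** 2) / 0.001)
di, flagged = tracker.assess(damaged_mode)
```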
Abstract:
Many organizations realize that increasing amounts of data (“Big Data”) need to be dealt with intelligently in order to compete with other organizations in terms of efficiency, speed and services. The goal is not to collect as much data as possible, but to turn event data into valuable insights that can be used to improve business processes. However, data-oriented analysis approaches fail to relate event data to process models. At the same time, large organizations are generating piles of process models that are disconnected from the real processes and information systems. In this chapter we propose to manage large collections of process models and event data in an integrated manner. Observed and modeled behavior need to be continuously compared and aligned. This results in a “liquid” business process model collection, i.e. a collection of process models that is in sync with the actual organizational behavior. The collection should self-adapt to evolving organizational behavior and incorporate relevant execution data (e.g. process performance and resource utilization) extracted from the logs, thereby allowing insightful reports to be produced from factual organizational data.
Abstract:
Pilot and industrial scale dilute acid pretreatment data can be difficult to obtain due to the significant infrastructure investment required. Consequently, models of dilute acid pretreatment by necessity use laboratory scale data to determine kinetic parameters and make predictions about optimal pretreatment conditions at larger scales. In order for these recommendations to be meaningful, the ability of laboratory scale models to predict pilot and industrial scale yields must be investigated. A mathematical model of the dilute acid pretreatment of sugarcane bagasse has previously been developed by the authors. This model was able to successfully reproduce the experimental yields of xylose and short chain xylooligomers obtained at the laboratory scale. In this paper, the ability of the model to reproduce pilot scale yield and composition data is examined. It was found that, in general, the model over-predicted the pilot scale reactor yields by a significant margin. Models that appear very promising at the laboratory scale may have limitations when predicting yields on a pilot or industrial scale. It is difficult to comment on whether there are any consistent trends in optimal operating conditions between reactor scale and laboratory scale hydrolysis due to the limited reactor datasets available. Further investigation is needed to determine whether the model has some efficacy when the kinetic parameters are re-evaluated by parameter fitting to reactor scale data; however, this requires the compilation of larger datasets. Alternatively, laboratory scale mathematical models may have enhanced utility for predicting larger scale reactor performance if bulk mass transport and fluid flow considerations are incorporated into the fibre scale equations. This work reinforces the need for appropriate attention to be paid to pilot scale experimental development when moving from laboratory to pilot and industrial scales for new technologies.
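The laboratory-scale kinetic model is not reproduced in the abstract; a minimal sketch of the classical first-order series scheme commonly used for dilute acid hydrolysis (xylan → xylooligomers → xylose → degradation products) is shown below, with purely illustrative rate constants. This is an assumption about the model family, not the authors' fitted parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

def xylan_hydrolysis(t, y, k1=0.05, k2=0.08, k3=0.01):
    """First-order series kinetics (min^-1, illustrative values):
    xylan -> xylooligomers -> xylose -> degradation products."""
    xylan, oligomer, xylose = y
    return [-k1 * xylan,
            k1 * xylan - k2 * oligomer,
            k2 * oligomer - k3 * xylose]

# 60-minute pretreatment starting from 100% xylan (mass basis).
sol = solve_ivp(xylan_hydrolysis, (0.0, 60.0), [100.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 60.0, 61))
xylose_yield = sol.y[2][-1]   # % of initial xylan recovered as xylose at 60 min
```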
Abstract:
Although physical exercise in general is accepted to be protective, acute and strenuous exercise has been shown to induce oxidative stress. Enhanced formation of free radicals leads to oxidation of macromolecules and to DNA damage. On the other hand, ultra-endurance events which require strenuous exercise are very popular and the number of participants is continuously increasing worldwide. Since only few data exist on Ironman triathletes, who are prototypes of ultra-endurance athletes, this study aimed to assess the risk of oxidative stress and DNA damage after finishing a triathlon and to predict a possible health risk. Blood samples of 42 male athletes were taken 2 days before the race, within 20 min after the race, and 1, 5 and 19 days post-race. Oxidative stress markers increased only moderately after the race and returned to baseline after 5 days. Markers of DNA damage, measured by the SCGE assay with and without restriction enzymes as well as by the sister chromatid exchange assay, either showed no change or decreased within the first day after the race. Due to intake during the race and release by the cells, plasma concentrations of vitamin C and α-tocopherol increased after the event and returned to baseline 1 day later. This study indicates that despite a temporary increase in some oxidative stress markers, there is no persistent oxidative stress and no DNA damage in response to an Ironman triathlon in trained athletes, mainly due to an appropriate antioxidant intake and general protective alterations in the antioxidant defence system.
Abstract:
In The Climate Change Review, Ross Garnaut emphasised that ‘Climate change and climate change mitigation will bring about major structural change in the agriculture, forestry and other land use sectors’. He provides this overview of the effects of climate change on food demand and supply: ‘Domestic food production in many developing countries will be at immediate risk of reductions in agricultural productivity due to crop failure, livestock loss, severe weather events and new patterns of pests and diseases.’ He observes that ‘Changes to local climate and water availability will be key determinants of where agricultural production occurs and what is produced.’ Gert Würtenberger has commented that modern plant breeding is particularly concerned with addressing larger issues about nutrition, food security and climate change: ‘Modern plant breeding has an increasing importance with regard to the continuously growing demand for plants for nutritional and feeding purposes as well as with regard to renewable energy sources and the challenges caused by climate changes.’ Moreover, he notes that there is a wide array of scientific and technological means of breeding new plant varieties: ‘Apart from classical breeding, technologies have an important role in the development of plants that satisfy the various requirements that industrial and agricultural challenges expect to be fulfilled.’ He comments: ‘Plant variety rights, as well as patents which protect such results, are of increasingly high importance to the breeders and enterprises involved in plant development programmes.’ There has been broader interest in the intersections between sustainable agriculture, environmental protection and food security. The debate over agricultural intellectual property is a polarised one, particularly between plant breeders, agricultural biotechnology companies and a range of environmentalist groups. Susan Sell comments that there are complex intellectual property battles surrounding agriculture: ‘Seeds are at the centre of a complex political dynamic between stakeholders. Access to seeds concerns the balance between private rights and public obligations, private ownership and the public domain, and commercial versus humanitarian objectives.’ Part I of this chapter considers debates in respect of plant breeders’ rights, food security and climate change in relation to the UPOV Convention 1991. Part II explores efforts by agricultural biotechnology companies to patent climate-ready crops. Part III considers the report of the Special Rapporteur on the Right to Food, Olivier De Schutter. It looks at a variety of options to encourage access to plant varieties with climate adaptive or mitigating properties.
Abstract:
In the development of technological systems the focus of system analysis is often on the sub-system that delivers insufficient performance – i.e. the reverse salient – which subsequently limits the performance of the system in its entirety. The reverse salient is therefore a useful concept in the study of technological systems and, while the literature holds numerous accounts of its use, it is not known how often, in which streams of literature, and in what type of application the concept has been utilized by scholars since its introduction by Thomas Hughes in 1983. In this paper we employ bibliometric citation analysis between 1983 and 2008, inclusive, to study the impact of the reverse salient concept in the literature at large as well as to study the dissemination of the concept into different fields of research. The study results show a continuously growing number of concept citations in the literature over time as well as increasing concept diffusion into different research areas. The analysis of article contents additionally suggests the opportunity for scholars to engage in deeper conceptual application. Finally, the continuing increase in the number of citations highlights the importance of the reverse salient concept to scholars and practitioners.
Abstract:
The evolution of technological systems is hindered by systemic components, referred to as reverse salients, which fail to deliver the necessary level of technological performance, thereby inhibiting the performance delivery of the system as a whole. This paper develops a performance gap measure of reverse salience and applies this measurement in the study of the PC (personal computer) technological system, focusing first on the evolution of the CPU (central processing unit) and PC game sub-systems, and second on the GPU (graphics processing unit) and PC game sub-systems. The measurement of the temporal behavior of reverse salience indicates that the PC game sub-system is the reverse salient, continuously trailing behind the technological performance of the CPU and GPU sub-systems from 1996 through 2006. The technological performance of the PC game sub-system as a reverse salient trails that of the CPU sub-system by up to 2300 MHz, with a gradually decreasing performance disparity in recent years. In contrast, the PC game sub-system as a reverse salient trails the GPU sub-system with an ever-increasing performance gap throughout the timeframe of analysis. In addition, we further discuss the research and managerial implications of our findings.
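The performance-gap measure can be illustrated with a simple sketch: for each year, the gap is the difference in the shared performance metric (e.g. MHz) between the leading sub-system and the trailing (reverse salient) sub-system. The yearly figures below are illustrative placeholders, not the paper's dataset.

```python
# Illustrative yearly clock-speed figures (MHz); not the paper's data.
cpu_mhz = {1996: 200, 2000: 1000, 2006: 2930}
game_requirement_mhz = {1996: 120, 2000: 500, 2006: 1400}

def performance_gap(leading, trailing):
    """Reverse-salience gap per year: leading sub-system minus trailing sub-system."""
    return {year: leading[year] - trailing[year] for year in sorted(leading)}

gaps = performance_gap(cpu_mhz, game_requirement_mhz)
for year, gap in gaps.items():
    print(f"{year}: PC game sub-system trails the CPU by {gap} MHz")
```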
Abstract:
This work describes the development of a model of cerebral atrophic changes associated with the progression of Alzheimer's disease (AD). Linear registration, region-of-interest analysis, and voxel-based morphometry methods have all been employed to elucidate the changes observed at discrete intervals during a disease process. In addition to describing the nature of the changes, modeling disease-related changes via deformations can also provide information on temporal characteristics. In order to continuously model changes associated with AD, deformation maps from 21 patients were averaged across a novel z-score disease progression dimension based on Mini Mental State Examination (MMSE) scores. The resulting deformation maps are presented via three metrics: local volume loss (atrophy), volume (CSF) increase, and translation (interpreted as representing collapse of cortical structures). Inspection of the maps revealed significant perturbations in the deformation fields corresponding to the entorhinal cortex (EC) and hippocampus, orbitofrontal and parietal cortex, and regions surrounding the sulci and ventricular spaces, with earlier changes predominantly lateralized to the left hemisphere. These changes are consistent with results from post-mortem studies of AD.
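The z-score progression dimension mentioned above could be constructed along the following lines, assuming MMSE scores are standardized across the cohort and deformation maps are averaged with weights centred at points along that axis. The cohort values, Gaussian weighting and bandwidth are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

# Illustrative MMSE scores for a small cohort (not the study's data).
mmse = np.array([28, 25, 22, 19, 15, 12], dtype=float)
z = (mmse.mean() - mmse) / mmse.std()     # higher z = greater impairment

# Hypothetical per-patient deformation maps (flattened voxel arrays).
maps = np.random.randn(mmse.size, 1000)

def average_at(z_target, z_scores, deformation_maps, bandwidth=0.5):
    """Kernel-weighted average of deformation maps at a point on the
    z-score progression axis (Gaussian weights, illustrative bandwidth)."""
    weights = np.exp(-0.5 * ((z_scores - z_target) / bandwidth) ** 2)
    weights /= weights.sum()
    return weights @ deformation_maps

mid_stage_map = average_at(0.0, z, maps)
```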
Abstract:
We present a shape-space approach for analyzing genetic influences on the shapes of the sulcal folding patterns on the cortex. Sulci are represented as continuously parameterized functions in a shape space, and shape differences between sulci are obtained via geodesics between them. The resulting statistical shape analysis framework is used not only to construct population averages, but also to compute meaningful correlations within and across groups of sulcal shapes. More importantly, we present a new algorithm that extends the traditional Euclidean estimate of the intra-class correlation to the geometric shape space, thereby allowing us to study the heritability of sulcal shape traits for a population of 193 twin pairs. This new methodology reveals strong genetic influences on the sulcal geometry of the cortex.
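For reference, the classical Euclidean intra-class correlation for twin pairs, which the abstract describes extending to the shape space by replacing Euclidean distances with geodesic ones, can be sketched as below. The pairing structure and trait values are illustrative.

```python
import numpy as np

def intraclass_correlation(pairs):
    """One-way ANOVA intra-class correlation for twin pairs.
    pairs: array of shape (n_pairs, 2) holding a scalar trait per twin."""
    pairs = np.asarray(pairs, dtype=float)
    n = pairs.shape[0]
    grand_mean = pairs.mean()
    pair_means = pairs.mean(axis=1)
    ms_between = 2.0 * np.sum((pair_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((pairs - pair_means[:, None]) ** 2) / n
    return (ms_between - ms_within) / (ms_between + ms_within)

# Illustrative scalar shape trait (e.g. a sulcal length) for a few twin pairs.
traits = [[10.2, 10.5], [8.7, 8.9], [11.4, 11.1], [9.8, 10.0]]
print(intraclass_correlation(traits))
```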