944 results for Lab
Abstract:
Biofuel produced by fast pyrolysis of biomass is a promising candidate. The heart of the system is a reactor that is directly or indirectly heated to approximately 500°C by exhaust gases from a combustor that burns pyrolysis gas and some of the by-product char. In most cases an external biomass heater supplies the heat, although internal electrical heating has recently been implemented as a source of reactor heating. However, such heating consumes biomass or other conventional fuels in order to produce renewable energy, and it contributes to environmental pollution. To overcome these drawbacks, the feasibility of incorporating solar energy into fast pyrolysis has been investigated. The main advantages of solar reactor heating are a renewable source of energy, comparatively simpler devices, and no environmental pollution. A lab-scale pyrolysis setup was examined together with a 1.2 m diameter parabolic reflector concentrator that provides hot exhaust gas at up to 162°C. The study shows that incorporating the solar heating system reduces carbon dioxide (CO2) emissions by about 32.4% and cuts fuel cost by almost one third. Successful implementation of the proposed solar-assisted pyrolysis would open a prospective window for renewable energy.
Abstract:
Protein adsorption at solid-liquid interfaces is critical to many applications, including biomaterials, protein microarrays and lab-on-a-chip devices. Despite this general interest, and a large amount of research over the last half-century, protein adsorption cannot be predicted with an engineering-level, design-oriented accuracy. Here we describe a Biomolecular Adsorption Database (BAD), freely available online, which archives published protein adsorption data. Piecewise linear regression with breakpoint, applied to the data in the BAD, suggests that the input variables to protein adsorption, i.e., protein concentration in solution; protein descriptors derived from primary structure (number of residues, global protein hydrophobicity, range of amino acid hydrophobicity, and isoelectric point); surface descriptors (contact angle); and fluid environment descriptors (pH, ionic strength), correlate well with the output variable, the protein concentration on the surface. Furthermore, neural network analysis revealed that the size of the BAD makes it sufficiently representative, with a neural network-based predictive error of 5% or less. Interestingly, a consistently better fit is obtained if the BAD is divided into two separate sub-sets representing protein adsorption on hydrophilic and hydrophobic surfaces, respectively. Based on these findings, selected entries from the BAD have been used to construct neural network-based estimation routines, which predict the amount of adsorbed protein, the thickness of the adsorbed layer and the surface tension of the protein-covered surface. While the BAD is of general interest, the prediction of the thickness and surface tension of the protein-covered layers is of particular relevance to the design of microfluidic devices.
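The piecewise linear regression with breakpoint mentioned above can be sketched in a few lines. This is not the BAD analysis itself: the two-segment model, the synthetic data, and the grid-searched breakpoint are all illustrative assumptions.

```python
import numpy as np

def fit_piecewise_linear(x, y, n_candidates=50):
    """Fit a continuous two-segment model y = a + b*x + c*max(0, x - bp)
    by grid-searching the breakpoint bp and solving least squares for (a, b, c)."""
    best = None
    for bp in np.linspace(x.min(), x.max(), n_candidates)[1:-1]:
        # Design matrix: intercept, slope, and slope change past the breakpoint
        X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - bp)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((X @ coef - y) ** 2))
        if best is None or sse < best[0]:
            best = (sse, bp, coef)
    return best  # (sse, breakpoint, [intercept, slope, slope change])

# Synthetic data with a known slope change at x = 4 (hypothetical, not BAD data)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = np.where(x < 4, 1.0 + 0.5 * x, 3.0 + 2.0 * (x - 4)) + rng.normal(0, 0.05, x.size)
sse, bp, coef = fit_piecewise_linear(x, y)
print(f"estimated breakpoint: {bp:.2f}")
```

In practice the breakpoint here separates regimes in the data, analogous to the hydrophilic/hydrophobic split that the BAD analysis found to improve fits.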
Abstract:
Current developments in gene medicine and vaccination studies are utilizing plasmid DNA (pDNA) as the vector. For this reason, there has been a trend towards larger and larger doses of pDNA in human trials: from 100-1000 μg in 2002 to 500-5000 μg in 2005. The increasing demand for pDNA has created the need to revolutionise current production levels under optimum economy. In this work, different standard media (LB, TB and SOC) for culturing recombinant Escherichia coli DH5α harbouring pUC19 were compared to a medium optimised for pDNA production. Lab-scale fermentations using the standard media showed that the highest pDNA volumetric and specific yields were for TB (11.4 μg/ml and 6.3 μg/mg dry cell mass, respectively) and the lowest were for LB (2.8 μg/ml and 3.3 μg/mg dry cell mass, respectively). A fourth medium, PDMR, designed by modifying a stoichiometrically formulated medium with an optimised carbon source concentration and carbon-to-nitrogen ratio, displayed pDNA volumetric and specific yields of 23.8 μg/ml and 11.2 μg/mg dry cell mass, respectively. However, it is the economic advantages of the optimised medium that make it so attractive. Keeping all variables constant except the medium, and using LB as a base scenario (100 medium cost [MC] units/mg pDNA), the optimised PDMR medium yielded pDNA at a cost of only 27 MC units/mg pDNA. These results show that greater amounts of pDNA can be obtained more economically, with minimal extra effort, simply by using a medium optimised for pDNA production.
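The cost comparison above reduces to simple arithmetic: cost per mg of pDNA scales with the medium's price per litre divided by the volumetric yield. The yields and MC figures below come from the abstract; the back-calculated PDMR medium price ratio is our own illustration, not a reported number.

```python
# Volumetric yields (ug pDNA per ml culture) reported in the abstract
yields = {"LB": 2.8, "TB": 11.4, "PDMR": 23.8}

lb_cost = 100.0    # baseline: LB pDNA costs 100 MC units/mg
pdmr_cost = 27.0   # reported cost for the optimised PDMR medium

# If PDMR medium cost the same per litre as LB, its pDNA would cost:
equal_price_cost = lb_cost * yields["LB"] / yields["PDMR"]
# The reported 27 MC units/mg therefore implies PDMR medium is pricier
# per litre than LB by roughly this factor (our back-calculation):
implied_price_ratio = pdmr_cost / equal_price_cost
print(f"{equal_price_cost:.1f} MC/mg at equal medium price; "
      f"implied medium price ratio {implied_price_ratio:.1f}x")
```

The point of the exercise: even though PDMR appears to be a more expensive medium per litre, its ~8.5-fold yield advantage over LB still cuts the cost per mg of pDNA by almost three quarters.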
Abstract:
The maturing of the biotechnology industry and a focus on productivity have seen a shift from discovery science and small-scale bench-top research to higher-productivity, large-scale production. Health companies are aggressively expanding their biopharmaceutical interests, an expansion facilitated by biochemical and bioprocess engineering. An area of continuous growth is vaccines. Vaccination will be a key intervention in the case of an influenza pandemic. The global manufacturing capacity for fast-turnaround vaccines is currently woefully inadequate at around 300 million shots. As the prevention of epidemics requires >80% vaccination, in theory the world should currently be aiming for the ability to produce around 5.3 billion vaccine doses. Presented is a production method for the creation of a fast-turnaround DNA vaccine. A DNA vaccine could have a production timescale of as little as two weeks. This process has been harnessed into a pilot-scale production system for the creation of a pre-clinical-grade malaria vaccine in a collaborative project with the Coppel Lab, Department of Microbiology, Monash University. In particular, improvements to the fermentation, chromatography and delivery stages will be discussed. Consideration will then be given to how the fermentation stage affects the mid- and downstream processing stages.
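The 5.3 billion figure follows from straightforward coverage arithmetic; the world-population value below is our assumption, chosen to match the abstract's mid-2000s context, not a number stated in the text.

```python
world_population = 6.6e9  # approximate mid-2000s global population (assumption)
coverage = 0.80           # epidemic prevention requires >80% vaccination
doses_needed = world_population * coverage
print(f"{doses_needed / 1e9:.1f} billion doses")  # vs ~300 million current capacity
```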
Abstract:
In order to protect our planet and ourselves from the adverse effects of excessive CO2 emissions, and to prevent an imminent non-renewable fossil fuel shortage and energy crisis, there is a need to transform our current fossil-fuel-dependent energy systems into new, clean, renewable energy sources. The world has recognized hydrogen as an energy carrier that complies with all environmental quality and energy security demands. This research aimed at producing hydrogen through anaerobic fermentation, using food waste as the substrate. Four food waste substrates were used: rice, fish, vegetable and their mixture. Bio-hydrogen production was performed in lab-scale reactors, using 250 mL serum bottles. The food waste was first mixed with anaerobic sewage sludge and incubated at 37°C for 31 days (acclimatization). The anaerobic sewage sludge was then heat-treated at 80°C for 15 min. The experiment was conducted at an initial pH of 5.5 and temperatures of 27, 35 and 55°C. The maximum cumulative hydrogen produced by the rice, fish, vegetable and mixed food waste substrates was highest at 37°C (rice = 26.97±0.76 mL, fish = 89.70±1.25 mL, vegetable = 42.00±1.76 mL, mixed = 108.90±1.42 mL). A comparative study of acclimatized food waste substrate (mixed with anaerobic sewage sludge and incubated at 37°C for 31 days) and non-acclimatized food waste substrate (not incubated with anaerobic sewage sludge) showed that acclimatization enhanced bio-hydrogen production by 90-100%.
Abstract:
A three-hour, large-scale participatory installation/event that included live performance, video works, objects and fabric sculptures, and was the result of a three-month artist residency undertaken by Cam Lab (Jemima Wyman and Anna Mayer) at the Museum of Contemporary Art, Los Angeles, California. The exhibition transformed two adjoining spaces in the museum, taking design cues from permanent collection artworks then on view, and encouraged gallery visitors to oscillate between immersion and agency as they occupied the various perspectives proposed by the installation.
Abstract:
Final report for the Australian Government Office for Learning and Teaching. "This seed project ‘Design thinking frameworks as transformative cross-disciplinary pedagogy’ aimed to examine the way design thinking strategies are used across disciplines to scaffold the development of student attributes in the domain of problem solving and creativity in order to enhance the nation’s capacity for innovation. Generic graduate attributes associated with innovation, creativity and problem solving are considered to be amongst the most important of all targeted attributes (Bradley Review of Higher Education, 2009). The project also aimed to gather data on how academics across disciplines conceptualised design thinking methodologies and strategies. Insights into how design thinking strategies could be embedded at the subject level to improve student outcomes were of particular interest in this regard. A related aim was the investigation of how design thinking strategies could be used by academics when designing new and innovative subjects and courses." Case Study 3: QUT Community Engaged Learning Lab Design Thinking/Design Led Innovation Workshop by Natalie Wright Context "The author, from the discipline area of Interior Design in the QUT School of Design, Faculty of Creative Industries, is a contributing academic and tutor for The Community Engaged Learning Lab, which was initiated at Queensland University of Technology in 2012. The Lab facilitates university-wide service-learning experiences and engages students, academics, and key community organisations in interdisciplinary action research projects to support student learning and to explore complex and ongoing problems nominated by the community partners. 
In Week 3, Semester One 2013, with the assistance of co-lead Dr Cara Wrigley, Senior Lecturer in Design led Innovation, a Masters of Architecture research student and nine participating industry-embedded Masters of Research (Design led Innovation) facilitators, a Design Thinking/Design led Innovation workshop was conducted for the Community Engaged Learning Lab students, and action research outcomes published at 2013 Tsinghua International Design Management Symposium, December 2013 in Shenzhen, China (Morehen, Wright, & Wrigley, 2013)."
Abstract:
Twenty-first-century society presents critical challenges for higher education (Brew 2013, 2). The challenges facing modern communities require graduates to have skills that respond to issues at the boundaries of, and intersections between, disciplines. Mounting evidence suggests that interdisciplinary curriculum and pedagogies help students to develop boundary-crossing skills and a deeper awareness of their own domain-specific knowledge (Spelt et al. 2009; Strober 2011). Spelt et al. (2009) describe boundary-crossing skills as the ability to engage with different discourses, take account of multiple perspectives, synthesise knowledge of different disciplines, and cope with complexity. In this chapter we investigate emerging conditions, practical processes, and pedagogical strategies that are enabling the Lab stakeholders, the community, the university, and students to participate in interdisciplinary community-engaged learning. Aspects of the Lab considered in this chapter include building trust, sharing values, establishing learning goals that are reflected in learning experiences and assessment, and employing strategies that define and attend to relationships and roles. The case study, "The Recognition of Aboriginal and Torres Strait Islander Peoples in the Australian Constitution", a QUT collaborative project with the Social Justice Research Unit, Anglicare Southern Queensland, describes the collaborators, processes, outcomes, and lessons learned through one Lab project over three semesters. The issues illustrated in the case study are then further explored in a critical discussion of the strategies supporting interdisciplinarity in community-engaged learning across university/community collaboration, within and across the university, and for student participants.
Abstract:
We present our work on tele-operating a complex humanoid robot with the help of bio-signals collected from the operator. The frameworks for robot vision, collision avoidance and machine learning developed in our lab, when combined, allow for safe interaction with the environment. This works even with noisy control signals, such as the operator's hand acceleration and their electromyography (EMG) signals. These bio-signals are used to execute equivalent actions (such as reaching and grasping of objects) on the 7-DOF arm.
Practice-based learning in community contexts: A collaborative exploration of pedagogical principles
Abstract:
The primary focus of this chapter is an exploration of four pedagogical principles emerging from a practice-based learning lab. Following an overview of community-engaged learning and the Lab approach, the chapter is structured around a discussion of pedagogical principles related to (1) collaboration, (2) interdisciplinarity, (3) complexity and uncertainty, and (4) reflection. Through a participatory action research (PAR) framework, students, academics and community partners have worked to identify and refine what it takes to support students in negotiating the complexity and uncertainty inherent in problems facing communities. The chapter also examines the pedagogical strategies employed to facilitate collaboration across disciplines and professional contexts in ways that leverage difference and challenge values and practices.
Abstract:
Typically, the walking ability of individuals with a transfemoral amputation (TFA) is represented by the speed of walking (SofW) obtained in experimental settings. Recent developments in portable kinetic systems allow the level of activity of individuals with TFA to be assessed during actual daily living, outside the confined space of a gait lab. Unfortunately, only minimal spatio-temporal characteristics can be extracted from the kinetic data, including the cadence and the duration of gait cycles. Therefore, there is a need for a way to use these characteristics to assess the instantaneous speed of walking during daily living. The purpose of this study was to compare several methods of determining SofW using minimal spatial gait characteristics.
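For context, the simplest relations linking cadence or gait-cycle duration to speed can be sketched. The abstract does not specify the methods it compares, so the functions and numbers below are illustrative assumptions; step and stride length, in particular, are exactly the spatial quantities a portable kinetic system cannot measure directly, which is the gap the study addresses.

```python
def speed_from_cadence(cadence_steps_per_min: float, step_length_m: float) -> float:
    """Walking speed (m/s) = steps per second x metres per step."""
    return cadence_steps_per_min / 60.0 * step_length_m

def speed_from_cycle_duration(cycle_duration_s: float, stride_length_m: float) -> float:
    """Equivalent estimate from gait-cycle (stride) duration: one cycle covers one stride."""
    return stride_length_m / cycle_duration_s

# Hypothetical values, roughly typical of comfortable adult walking
print(f"{speed_from_cadence(110, 0.70):.2f} m/s")
print(f"{speed_from_cycle_duration(1.1, 1.40):.2f} m/s")
```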
Abstract:
The mining industry presents a number of ideal applications for sensor-based machine control because of the unstructured environment that exists within each mine. The aim of the research presented here is to increase the productivity of existing large compliant mining machines by retrofitting them with enhanced sensing and control technology. The current research focuses on the automatic control of the swing motion cycle of a dragline and on an automated roof-bolting system. We have achieved:
* closed-loop swing control of a one-tenth-scale model dragline;
* single-degree-of-freedom closed-loop visual control of an electro-hydraulic manipulator in the lab, developed from standard components.
Abstract:
Objective: This paper presents an automatic active learning-based system for the extraction of medical concepts from clinical free-text reports. Specifically, (1) the contribution of active learning in reducing the annotation effort, and (2) the robustness of an incremental active learning framework across different selection criteria and datasets, are determined.
Materials and methods: The comparative performance of an active learning framework and a fully supervised approach was investigated to study how active learning reduces the annotation effort while achieving the same effectiveness as a supervised approach. Conditional Random Fields were used as the supervised method, with least confidence and information density as the two selection criteria for the active learning framework. The effect of incremental learning vs. standard learning on the robustness of the models within the active learning framework, under different selection criteria, was also investigated. Two clinical datasets were used for evaluation: the i2b2/VA 2010 NLP challenge and the ShARe/CLEF 2013 eHealth Evaluation Lab.
Results: The annotation effort saved by active learning to achieve the same effectiveness as supervised learning is up to 77%, 57%, and 46% of the total number of sequences, tokens, and concepts, respectively. Compared to the random sampling baseline, the saving is at least doubled.
Discussion: Incremental active learning guarantees robustness across all selection criteria and datasets. The reduction of annotation effort is always above the random sampling and longest-sequence baselines.
Conclusion: Incremental active learning is a promising approach for building effective and robust medical concept extraction models, while significantly reducing the burden of manual annotation.
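The least-confidence criterion named above can be sketched for a simple classifier. The paper applies it to CRF sequence labelling, so the per-sample formulation below is an illustrative simplification, not the authors' exact token/sequence implementation.

```python
import numpy as np

def least_confidence_scores(probs: np.ndarray) -> np.ndarray:
    """Least-confidence score per sample: 1 - max class probability.
    Higher score = model is less certain = more informative to annotate."""
    return 1.0 - probs.max(axis=1)

def select_batch(probs: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k most uncertain samples to send for annotation."""
    return np.argsort(-least_confidence_scores(probs))[:k]

# Toy posterior probabilities for 4 unlabelled samples over 3 classes
probs = np.array([
    [0.90, 0.05, 0.05],  # confident
    [0.40, 0.35, 0.25],  # uncertain
    [0.34, 0.33, 0.33],  # most uncertain
    [0.70, 0.20, 0.10],
])
print(select_batch(probs, 2))  # → [2 1]
```

Each active-learning round would retrain the model on the newly annotated samples and re-score the remaining pool; information density additionally weights each score by the sample's similarity to the rest of the pool.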
Abstract:
In his 1987 book, The Media Lab: Inventing the Future at MIT, Stewart Brand provides an insight into the visions of the future of the media in the 1970s and 1980s. He notes that Nicholas Negroponte made a compelling case for the foundation of a media laboratory at MIT with diagrams detailing the convergence of three sectors of the media: the broadcast and motion picture industry; the print and publishing industry; and the computer industry. Stewart Brand commented: 'If Negroponte was right and communications technologies really are converging, you would look for signs that technological homogenisation was dissolving old boundaries out of existence, and you would expect an explosion of new media where those boundaries used to be'. Two decades later, technology developers, media analysts and lawyers have become excited about the latest phase of media convergence. In 2006, the faddish Time Magazine heralded the arrival of various Web 2.0 social networking services: You can learn more about how Americans live just by looking at the backgrounds of YouTube videos, those rumpled bedrooms and toy-strewn basement rec rooms, than you could from 1,000 hours of network television. And we didn't just watch, we also worked. Like crazy. We made Facebook profiles and Second Life avatars and reviewed books at Amazon and recorded podcasts. We blogged about our candidates losing and wrote songs about getting dumped. We camcordered bombing runs and built open-source software. America loves its solitary geniuses, its Einsteins, its Edisons, its Jobses, but those lonely dreamers may have to learn to play with others. Car companies are running open design contests. Reuters is carrying blog postings alongside its regular news feed. Microsoft is working overtime to fend off user-created Linux.
We’re looking at an explosion of productivity and innovation, and it’s just getting started, as millions of minds that would otherwise have drowned in obscurity get backhauled into the global intellectual economy. The magazine announced that Time’s Person of the Year was ‘You’, the everyman and everywoman consumer ‘for seizing the reins of the global media, for founding and framing the new digital democracy, for working for nothing and beating the pros at their own game’. This review essay considers three recent books, which have explored the legal dimensions of new media. In contrast to the unbridled exuberance of Time Magazine, this series of legal works displays an anxious trepidation about the legal ramifications associated with the rise of social networking services. In his tour de force, The Future of Reputation: Gossip, Rumor, and Privacy on the Internet, Daniel Solove considers the implications of social networking services, such as Facebook and YouTube, for the legal protection of reputation under privacy law and defamation law. Andrew Kenyon’s edited collection, TV Futures: Digital Television Policy in Australia, explores the intersection between media law and copyright law in the regulation of digital television and Internet videos. In The Future of the Internet and How to Stop It, Jonathan Zittrain explores the impact of ‘generative’ technologies and ‘tethered applications’—considering everything from the Apple Mac and the iPhone to the One Laptop per Child programme.
Abstract:
There is strong evidence to suggest that the combination of alcohol and chronic repetitive stress leads to long-lasting effects on brain function, specifically in areas associated with stress, motivation and decision-making such as the amygdala, nucleus accumbens and prefrontal cortex. Alcohol and stress together facilitate the imprinting of long-lasting memories. The molecular mechanisms and circuits involved are being studied but are not fully understood. Current evidence suggests that corticosterone (in animals) or cortisol (in humans), in addition to direct transcriptional effects on the genome, can directly regulate pre- and postsynaptic transmission through membrane-bound glucocorticoid receptors (GR). Indeed, corticosterone-sensitive synaptic receptors may be critical sites for stress regulation of synaptic responses. Direct modulation of synaptic transmission by corticosterone may contribute to the regulation of synaptic plasticity and memory during stress (Johnson et al., 2005; Prager et al., 2010). Specifically, previous data have shown that long-term alcohol exposure (1) increases the expression of NR2B-containing NMDA receptors at glutamate synapses, (2) changes receptor density, and (3) changes the morphology of dendritic spines (Prendergast and Mulholland, 2012). During alcohol withdrawal these changes are associated with increased glucocorticoid signalling and increased neuronal excitability. It has therefore been proposed that these synaptic changes lead to the anxiety and alcohol craving associated with withdrawal (Prendergast and Mulholland, 2012). My lab is targeting this receptor system and the amygdala in order to understand the effect of combining alcohol and stress on these pathways. Lastly, we are testing GR-specific compounds as potential new medications to promote the development of resilience to addiction.