975 results for Constrained


Relevance: 10.00%

Abstract:

AIMS: This paper reports on the implementation of a research project that trials an educational strategy over six months of an undergraduate third-year nursing curriculum. The project explores the effectiveness of ‘think aloud’ as a strategy for learning clinical reasoning for students in simulated clinical settings.

BACKGROUND: Nurses are required to apply critical thinking skills to enable clinical reasoning and problem solving in the clinical setting [1]. Nursing students are expected to develop and display clinical reasoning skills in practice, but may struggle to articulate the reasons behind decisions about patient care. For students learning to manage complex clinical situations, teaching approaches are required that make these instinctive cognitive processes explicit and clear [2-5]. In line with professional expectations, third-year nursing students at Queensland University of Technology (QUT) are expected to display clinical reasoning skills in practice. This can be a complex proposition for students in practice situations, particularly as the degree of uncertainty or decision complexity increases [6-7]. The ‘think aloud’ approach is an innovative learning/teaching method which can create an environment suitable for developing clinical reasoning skills in students [4, 8]. This project uses the ‘think aloud’ strategy within a simulation context to provide a safe learning environment in which third-year students are assisted to uncover the cognitive approaches that best assist them to make effective patient care decisions, and to improve their confidence, clinical reasoning and active critical reflection on their practice.

METHODS: In semester 2, 2011 at QUT, third-year nursing students will undertake high-fidelity simulation, some for the first time, commencing in September 2011. There will be two cohorts for strategy implementation (group 1 will use think aloud as a strategy within the simulation; group 2 will not be given a specific strategy beyond standard nursing assessment frameworks) in relation to problem-solving patient needs. Students will be briefed about the scenario, given a nursing handover and placed into a simulation group and an observer group; the facilitator/teacher will run the simulation from a control room and will have no contact (as a ‘teacher’) with students during the simulation. Debriefing will then occur as a whole group outside the simulation room, where the session can be reviewed on screen. The think aloud strategy will be described to students in their pre-simulation briefing, with an opportunity for clarification at that time. All other aspects of the simulations remain the same (resources, suggested nursing assessment frameworks, simulation session duration, size of simulation teams, preparatory materials).

RESULTS: The methodology of the project and the challenges of implementation will be the focus of this presentation. This will include ethical considerations in designing the project, recruitment of students and implementation of a voluntary research project within a busy educational curriculum which in third year targets 669 students over two campuses.

CONCLUSIONS: In an environment of increasingly constrained clinical placement opportunities, exploration of alternative strategies to improve critical thinking skills and develop clinical reasoning and problem solving for nursing students is imperative in preparing nurses to respond to changing patient needs.

References
1. Lasater, K. High-fidelity simulation and the development of clinical judgement: students' experiences. Journal of Nursing Education, 2007, 46(6): 269-276.
2. Lapkin, S., et al. Effectiveness of patient simulation manikins in teaching clinical reasoning skills to undergraduate nursing students: a systematic review. Clinical Simulation in Nursing, 2010, 6(6): e207-22.
3. Kaddoura, M. New graduate nurses' perceptions of the effects of clinical simulation on their critical thinking, learning, and confidence. The Journal of Continuing Education in Nursing, 2010, 41(11): 506.
4. Banning, M. The think aloud approach as an educational tool to develop and assess clinical reasoning in undergraduate students. Nurse Education Today, 2008, 28: 8-14.
5. Porter-O'Grady, T. Profound change: 21st century nursing. Nursing Outlook, 2001, 49(4): 182-186.
6. Andersson, A.K., Omberg, M. and Svedlund, M. Triage in the emergency department: a qualitative study of the factors which nurses consider when making decisions. Nursing in Critical Care, 2006, 11(3): 136-145.
7. O'Neill, E.S., Dluhy, N.M. and Chin, C. Modelling novice clinical reasoning for a computerized decision support system. Journal of Advanced Nursing, 2005, 49(1): 68-77.
8. Lee, J.E. and Ryan-Wenger, N. The "Think Aloud" seminar for teaching clinical reasoning: a case study of a child with pharyngitis. Journal of Pediatric Health Care, 1997, 11(3): 101-110.

Relevance: 10.00%

Abstract:

Approximately 20 years have now passed since the NTSB issued its original recommendation to expedite development, certification and production of low-cost proximity warning and conflict detection systems for general aviation [1]. While some systems are in place (TCAS [2]), ‘see-and-avoid’ remains the primary means of separation between light aircraft sharing the national airspace. The requirement for a collision avoidance or sense-and-avoid capability onboard unmanned aircraft has been identified by leading government, industry and regulatory bodies as one of the most significant challenges facing the routine operation of unmanned aerial systems (UAS) in the national airspace system (NAS) [3, 4]. In this thesis, we propose and develop a novel image-based collision avoidance system to detect and avoid an upcoming conflict scenario (with an intruder) without first estimating or filtering range. The proposed collision avoidance system (CAS) uses the relative bearing and the angular area subtended by the intruder, both estimated from the image, to form a test statistic. This test statistic is used in a thresholding technique to decide if a conflict scenario is imminent. If deemed necessary, the system commands the aircraft to perform a manoeuvre based on the relative bearing and constrained by the CAS sensor field-of-view. Using a simulation environment in which the UAS is mathematically modelled and a flight controller is developed, we show that Monte Carlo simulations can estimate the risk ratio of a mid-air collision (MAC) or a near mid-air collision (NMAC). We also show the performance gain this system has over a simplified, bearings-only version, demonstrated in the form of a standard operating characteristic curve. Finally, it is shown that the proposed CAS performs at a level comparable to current manned aviation's equivalent level of safety (ELOS) expectations for Class E airspace. In some cases, the CAS may be oversensitive, manoeuvring the owncraft when not necessary, but this constitutes a more conservative, and therefore safer, flying procedure in most instances.
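
The abstract describes the risk-ratio estimation only at a high level; a minimal Monte Carlo sketch of the idea, using a toy 2-D encounter model and an illustrative evasive-manoeuvre effect (all numbers and the manoeuvre model are assumptions, not values from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
NMAC_RADIUS = 150.0  # metres; illustrative near-mid-air-collision threshold

def miss_distance(avoid: bool) -> float:
    """Toy head-on encounter: the intruder's lateral offset is random.
    With the CAS enabled, the owncraft turns away once its (image-derived)
    test statistic would cross the alert threshold, modelled here as a
    noisy extra lateral offset."""
    offset = rng.normal(0.0, 100.0)
    if avoid:
        evade = rng.normal(200.0, 120.0)          # manoeuvre effectiveness varies
        offset += evade if offset >= 0 else -evade  # turn away from the intruder
    return abs(offset)

def p_nmac(avoid: bool, trials: int = 100_000) -> float:
    return np.mean([miss_distance(avoid) < NMAC_RADIUS for _ in range(trials)])

# Risk ratio: NMAC probability with the CAS relative to without it.
rr = p_nmac(avoid=True) / p_nmac(avoid=False)
print(f"estimated NMAC risk ratio: {rr:.3f}")
```

A risk ratio well below 1 indicates that the avoidance logic reduces NMAC likelihood relative to the unequipped baseline.
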

Relevance: 10.00%

Abstract:

Mandatory data breach notification laws are a novel and potentially important legal instrument for organisational protection of personal information. These laws require organisations that have suffered a data breach involving personal information to notify the persons who may be affected, and potentially government authorities, about the breach. The Australian Law Reform Commission (ALRC) has proposed the creation of a mandatory data breach notification scheme, implemented via amendments to the Privacy Act 1988 (Cth). However, the conceptual differences between data breach notification law and information privacy law are such that it is questionable whether a data breach notification scheme can be implemented solely via an information privacy law. Accordingly, this thesis by publications investigated, through six journal articles, the extent to which data breach notification law is conceptually and operationally compatible with information privacy law. The assessment of compatibility began with the identification of key issues related to data breach notification law. The first article, ‘Stakeholder Perspectives Regarding the Mandatory Notification of Australian Data Breaches’, started this stage of the research, which concluded in the second article, ‘The Mandatory Notification of Data Breaches: Issues Arising for Australian and EU Legal Developments’ (‘Mandatory Notification’). A key issue that emerged was whether data breach notification is itself an information privacy issue. This notion guided the remaining research and focused attention on the next stage: an examination of the conceptual and operational foundations of both laws. The second article, ‘Mandatory Notification’, and the third article, ‘Encryption Safe Harbours and Data Breach Notification Laws’, did so from the perspective of data breach notification law. The fourth article, ‘The Conceptual Basis of Personal Information in Australian Privacy Law’, and the fifth article, ‘Privacy Invasive Geo-Mashups: Privacy 2.0 and the Limits of First Generation Information Privacy Laws’, did so for information privacy law. The final article, ‘Contextualizing the Tensions and Weaknesses of Information Privacy and Data Breach Notification Laws’, synthesised the previous research findings within the framework of contextualisation, principally developed by Nissenbaum. The examination of conceptual and operational foundations revealed tensions between the two laws and weaknesses shared by both. First, the distinction between sectoral and comprehensive information privacy legal regimes was important, as it shaped the development of US data breach notification laws and their subsequent implementable scope in other jurisdictions. Second, the sectoral versus comprehensive distinction produced different emphases in relation to data breach notification, leading to different forms of remedy; the prime example is the distinction between the market-based initiatives found in US data breach notification laws and the rights-based protections found in the EU and Australia. Third, both laws are predicated on the regulation of personal information exchange processes, even though they regulate this process from different perspectives, namely a context-independent or a context-dependent approach. Fourth, both laws have limited notions of harm that are further constrained by restrictive accountability frameworks.
The findings of the research suggest that data breach notification is more compatible with information privacy law in some respects than in others. Apparent compatibilities clearly exist, as both laws have an interest in the protection of personal information. However, this thesis revealed that the ostensible similarities are founded on some significant differences. Data breach notification law is either a comprehensive facet of a sectoral approach or a sectoral adjunct to a comprehensive regime. Whilst there are fundamental differences between the two laws, they are not so great as to make the laws incompatible with each other. The similarities between both laws are sufficient to forge compatibilities, but it is likely that the distinctions between them will produce anomalies, particularly if both laws are applied from a perspective that negates contextualisation.

Relevance: 10.00%

Abstract:

While it is generally accepted in the learning and teaching literature that assessment is the single biggest influence on how students approach their learning, assessment methods within higher education are generally conservative and inflexible. Constrained by policy and accreditation requirements and the need for the explicit articulation of assessment standards for public accountability purposes, assessment tasks can fail to engage students or to reflect the tasks students will face in the world of practice. Innovative assessment design can simultaneously deliver program objectives and active learning through a knowledge transfer process which increases student participation. This social constructivist view highlights that acquiring an understanding of assessment processes, criteria and standards requires active student participation. Within this context, a peer-assessed weekly assessment task was introduced in the first “serious” accounting subject offered as part of an undergraduate degree. The positive outcomes of this assessment innovation were that student failure rates declined by 15%, tutorial participation increased fourfold, tutorial engagement increased sixfold and there was a 100% approval rating for the retention of the assessment task. In contributing to the core conference theme of “seismic” shifts within higher education, and in stark contrast to the positive student response, staff-related issues of assessment conservatism and the necessity of meeting increasing research commitments threatened the assessment task's survival. These opposing forces have the potential to weaken the ability of higher education assessment arrangements to adequately serve either a new generation of students or the sector's community stakeholders.

Relevance: 10.00%

Abstract:

Proteases regulate a spectrum of diverse physiological processes, and dysregulation of proteolytic activity drives a plethora of pathological conditions. Understanding protease function is essential to appreciating many aspects of normal physiology and progression of disease. Consequently, development of potent and specific inhibitors of proteolytic enzymes is vital to provide tools for the dissection of protease function in biological systems and for the treatment of diseases linked to aberrant proteolytic activity. The studies in this thesis describe the rational design of potent inhibitors of three proteases that are implicated in disease development. Additionally, key features of the interaction of proteases and their cognate inhibitors or substrates are analysed and a series of rational inhibitor design principles are expounded and tested. Rational design of protease inhibitors relies on a comprehensive understanding of protease structure and biochemistry. Analysis of known protease cleavage sites in proteins and peptides is a commonly used source of such information. However, model peptide substrate and protein sequences have widely differing levels of backbone constraint and hence can adopt highly divergent structures when binding to a protease’s active site. This may result in identical sequences in peptides and proteins having different conformations and diverse spatial distribution of amino acid functionalities. Regardless of this, protein and peptide cleavage sites are often regarded as being equivalent. One of the key findings in the following studies is a definitive demonstration of the lack of equivalence between these two classes of substrate and invalidation of the common practice of using the sequences of model peptide substrates to predict cleavage of proteins in vivo. Another important feature for protease substrate recognition is subsite cooperativity. This type of cooperativity is commonly referred to as protease or substrate binding subsite cooperativity and is distinct from allosteric cooperativity, where binding of a molecule distant from the protease active site affects the binding affinity of a substrate. Subsite cooperativity may be intramolecular where neighbouring residues in substrates are interacting, affecting the scissile bond’s susceptibility to protease cleavage. Subsite cooperativity can also be intermolecular where a particular residue’s contribution to binding affinity changes depending on the identity of neighbouring amino acids. Although numerous studies have identified subsite cooperativity effects, these findings are frequently ignored in investigations probing subsite selectivity by screening against diverse combinatorial libraries of peptides (positional scanning synthetic combinatorial library; PS-SCL). This strategy for determining cleavage specificity relies on the averaged rates of hydrolysis for an uncharacterised ensemble of peptide sequences, as opposed to the defined rate of hydrolysis of a known specific substrate. Further, since PS-SCL screens probe the preference of the various protease subsites independently, this method is inherently unable to detect subsite cooperativity. However, mean hydrolysis rates from PS-SCL screens are often interpreted as being comparable to those produced by single peptide cleavages. 
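
As a toy illustration (hypothetical rates, not data from the thesis) of why positional averaging can mask intermolecular cooperativity, consider a protease with two subsites where one residue pair is strongly favoured only in combination:

```python
import numpy as np

# Hypothetical hydrolysis rates (arbitrary units) for a two-subsite protease.
# The G-F pair is highly favoured only in combination: an intermolecular
# cooperativity effect.
rate = {("A", "K"): 40, ("A", "R"): 40, ("A", "F"): 2,
        ("G", "K"): 2,  ("G", "R"): 2,  ("G", "F"): 60}
p2_residues, p1_residues = ["A", "G"], ["K", "R", "F"]

# PS-SCL-style readout: fix one position, average over mixtures at the other.
p2_pick = max(p2_residues, key=lambda r: np.mean([rate[r, c] for c in p1_residues]))
p1_pick = max(p1_residues, key=lambda c: np.mean([rate[r, c] for r in p2_residues]))
print("PS-SCL prediction:", (p2_pick, p1_pick), "actual rate:", rate[p2_pick, p1_pick])

# SML-style readout: measure every individual sequence.
best = max(rate, key=rate.get)
print("best individual substrate:", best, "rate:", rate[best])
```

Here the positional averages point to (A, F), whose individual rate is nearly zero, while the true optimum (G, F) is invisible to the averaged screen; only the individually synthesised (SML-style) measurements reveal it.
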
Before this study, no large systematic evaluation had been made to determine the level of correlation between protease selectivity as predicted by screening against a library of combinatorial peptides and the cleavage of individual peptides. This subject is specifically explored in the studies described here. In order to establish whether PS-SCL screens can accurately determine the substrate preferences of proteases, a systematic comparison was carried out between data from PS-SCLs and libraries containing individually synthesised peptides (sparse matrix library; SML). The SML libraries were designed to include all possible sequence combinations of the residues suggested to be preferred by a protease using the PS-SCL method. SML screening against the three serine proteases kallikrein 4 (KLK4), kallikrein 14 (KLK14) and plasmin revealed highly preferred peptide substrates that could not have been deduced by PS-SCL screening alone. Comparing protease subsite preference profiles from screens of the two types of peptide libraries showed that the most preferred substrates were not detected by PS-SCL screening, as a consequence of intermolecular cooperativity being negated by the very nature of PS-SCL screening. Sequences that are highly favoured as a result of intermolecular cooperativity achieve optimal protease subsite occupancy, and thereby interact with very specific determinants of the protease. Identifying these substrate sequences is important since they may be used to produce potent and selective inhibitors of proteolytic enzymes. This study found that highly favoured substrate sequences that relied on intermolecular cooperativity allowed for the production of potent inhibitors of KLK4, KLK14 and plasmin. Peptide aldehydes based on preferred plasmin sequences produced high-affinity transition state analogue inhibitors for this protease. The most potent of these maintained specificity over plasma kallikrein (known to have a very similar substrate preference to plasmin). Furthermore, the efficiency of this inhibitor in blocking fibrinolysis in vitro was comparable to aprotinin, which previously saw clinical use to reduce perioperative bleeding. One substrate sequence particularly favoured by KLK4 was substituted into the 14-amino-acid circular sunflower trypsin inhibitor (SFTI). This resulted in a highly potent and selective inhibitor (SFTI-FCQR) which attenuated protease-activated receptor signalling by KLK4 in vitro. Moreover, SFTI-FCQR and paclitaxel synergistically reduced the growth of ovarian cancer cells in vitro, making this inhibitor a lead compound for further therapeutic development. Similar incorporation of a preferred KLK14 amino acid sequence into the SFTI scaffold produced a potent inhibitor for this protease. However, the conformationally constrained SFTI backbone enforced a different intramolecular cooperativity, which masked a KLK14-specific determinant. As a consequence, the level of selectivity achievable was lower than that found for the KLK4 inhibitor. Standard mechanism inhibitors such as SFTI rely on a stable acyl-enzyme intermediate for high-affinity binding. This is achieved by a conformationally constrained canonical binding loop that allows for reformation of the scissile peptide bond after cleavage. Amino acid substitutions within the inhibitor to target a particular protease may compromise the structural determinants that support the rigidity of the binding loop and thereby prevent the engineered inhibitor from reaching its full potential.
An in silico analysis was carried out to examine the potential for further improvements to the potency and selectivity of the SFTI-based KLK4 and KLK14 inhibitors. Molecular dynamics simulations suggested that the substitutions within SFTI required to target KLK4 and KLK14 had compromised the intramolecular hydrogen bond network of the inhibitor and caused a concomitant loss of binding loop stability. Furthermore, in silico amino acid substitution revealed a consistent correlation between lower inhibition constants and both a higher frequency of formation and a greater number of internal hydrogen bonds in SFTI variants. These predictions allowed for the production of second-generation inhibitors with enhanced binding affinity toward both targets, and they highlight the importance of considering intramolecular cooperativity effects when engineering proteins or circular peptides to target proteases. The findings from this study show that although PS-SCLs are a useful tool for high-throughput screening of approximate protease preference, later refinement by SML screening is needed to reveal optimal subsite occupancy due to cooperativity in substrate recognition. This investigation has also demonstrated the importance of maintaining the structural determinants of backbone constraint and conformation when engineering standard mechanism inhibitors for new targets. Combined, these results show that backbone conformation and amino acid cooperativity have more prominent roles than previously appreciated in determining substrate/inhibitor specificity and binding affinity. The three key inhibitors designed during this investigation are now being developed as lead compounds for cancer chemotherapy, control of fibrinolysis and cosmeceutical applications. These compounds form the basis of a portfolio of intellectual property which will be further developed in the coming years.

Relevance: 10.00%

Abstract:

With the growing number of XML documents on the Web, it becomes essential to organise these documents effectively in order to retrieve useful information from them. A possible solution is to apply clustering to XML documents to discover knowledge that promotes effective data management, information retrieval and query processing. However, many issues arise in discovering knowledge from these semi-structured documents due to their heterogeneity and structural irregularity. Most existing research on clustering techniques focuses on only one feature of XML documents, either their structure or their content, because of scalability and complexity problems. The knowledge gained in the form of clusters based on structure or content alone is not suitable for real-life datasets. It therefore becomes essential to include both the structure and the content of XML documents in order to improve the accuracy and meaning of the clustering solution. However, including both kinds of information in the clustering process results in a huge overhead for the underlying clustering algorithm because of the high dimensionality of the data. The overall objective of this thesis is to address these issues by: (1) proposing methods that utilise frequent pattern mining techniques to reduce the dimensionality; (2) developing models to effectively combine the structure and content of XML documents; and (3) utilising the proposed models in clustering. This research first determines structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine content similarity. A clustering framework with two types of models, implicit and explicit, is developed. The implicit model uses a Vector Space Model (VSM) to combine the structure and the content information. The explicit model uses a higher-order model, namely a 3rd-order Tensor Space Model (TSM), to explicitly combine the structure and the content information. This thesis also proposes a novel incremental technique to decompose large-sized tensor models and utilises the decomposed solution for clustering the XML documents. The proposed framework and its components were extensively evaluated on several real-life datasets exhibiting extreme characteristics to understand the usefulness of the proposed framework in real-life situations. Additionally, this research evaluates the outcome of the clustering process on the collection selection problem in information retrieval using the Wikipedia dataset. The experimental results demonstrate that the proposed frequent pattern mining and clustering methods outperform related state-of-the-art approaches. In particular, the proposed framework of utilising frequent structures to constrain the content shows an improvement in accuracy over content-only and structure-only clustering results. The scalability experiments conducted on large-scale datasets clearly show the strengths of the proposed methods over state-of-the-art methods. In particular, this thesis contributes to effectively combining the structure and the content of XML documents for clustering, in order to improve the accuracy of the clustering solution. In addition, it addresses research gaps in frequent pattern mining to generate efficient and concise frequent subtrees with various node relationships that can be used in clustering.
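
As a rough sketch of the implicit (VSM) model described above, one can concatenate structure features (here, hypothetical frequent-subtree indicators) with content term counts and cluster the combined vectors; the column meanings and the scikit-learn usage are illustrative assumptions, not the thesis's actual pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

# Toy corpus of four XML documents.
# Structure: which frequent subtrees each document contains (binary).
subtrees = np.array([[1, 0, 1],
                     [1, 0, 1],
                     [0, 1, 0],
                     [0, 1, 0]], dtype=float)
# Content: term counts restricted to text under those subtrees
# (the "constrained content").
terms = np.array([[3, 0, 1, 0],
                  [2, 0, 2, 0],
                  [0, 4, 0, 1],
                  [0, 3, 0, 2]], dtype=float)

# Implicit model: one combined vector space, with each feature set
# row-normalised so neither structure nor content dominates by scale.
X = np.hstack([normalize(subtrees), normalize(terms)])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster labels:", labels)  # expected grouping: docs {0,1} vs {2,3}
```
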

Relevance: 10.00%

Abstract:

Tabernacle is an experimental game world-building project which explores the relationship between the map and the 3-dimensional visualisation enabled by high-end game engines. The project is named after the 6th century tabernacle maps of Cosmas Indicopleustes in his Christian Topography. These maps articulate a cultural or metaphoric, rather than measured, view of the world, contravening Alper's distinction that "maps are measurement, art is experience". The project builds on previous research into the use of game engines and 3D navigable representation to enable cultural experience, particularly non-Western cultural experiences and ways of seeing. Like the earlier research, Tabernacle highlights the problematic disjuncture between the modern Cartesian map structures of the engine and the mapping traditions of non-Western cultures. Tabernacle represents a practice-based research provocation: the project exposes assumptions about the maps which underpin 3D game worlds, and the autocratic tendencies of world-construction software. This research is of critical importance as game engines and simulation technologies become more popular in the recreation of culture and history. A key lesson from the Tabernacle project was the way in which available game engines (technologies with roots in the Enlightenment) constrained the team's ability to represent a very different culture with a different conceptualisation of space and maps. Understanding the cultural legacies of the software itself is critical as we are tempted by the opportunities for representation of culture and history that these technologies seem to offer. The project was presented at Perth Digital Arts and Culture in 2007 and reiterated using a different game engine in 2009. Further reflections were discussed in a conference paper presented at OZCHI 2009 and a peer-reviewed journal article, and insights gained from the experience continue to inform the author's research.

Relevance: 10.00%

Abstract:

Subtropical south-east Queensland's expanding population is expected to create demand for an additional 754,000 dwellings by 2031. A legacy of poor housing design, minimal building regulations, an absence of building performance evaluation and various social and market factors has led to a high and growing penetration of, and reliance on, air conditioners to provide comfort in this relatively benign climate. This reliance has implications for policy goals of adapting to and mitigating global warming, for electricity infrastructure investment and for household resilience. Based on the concept of bioclimatic design, this field study scrutinizes eight non-air-conditioned homes to develop a deeper understanding of the role of contemporary passive solar architecture in the delivery of thermally comfortable and resilient homes in the subtropics. These homes were found to provide inhabitants with an acceptable level of thermal comfort (18-28°C) for 77-97% of the year. Family expectations and experiences of comfort, and the various design strategies utilized, were compared against the measured performance outcomes. This comparison revealed issues that limited the quantification and implementation of design intent, and highlighted factors that constrained system optimisation.
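
The comfort-hours figure quoted above comes from comparing logged indoor temperatures against the 18-28°C band; a minimal sketch of that calculation on synthetic data (the study's actual instrumentation and criteria may differ):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic hourly indoor temperatures for one home over a year:
# a seasonal swing around 22 degC plus random variation.
temps = pd.Series(
    22 + 6 * np.sin(np.linspace(0, 2 * np.pi, 8760)) + rng.normal(0, 1.5, 8760),
    index=pd.date_range("2011-01-01", periods=8760, freq="h"),
)

in_band = temps.between(18, 28)  # acceptable comfort band used in the study
print(f"time within 18-28 degC: {100 * in_band.mean():.1f}% of the year")
```
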

Relevance: 10.00%

Abstract:

Performance of locomotor pointing tasks (goal-directed locomotion) in sport is typically constrained by dynamic factors, such as the positioning of opponents and of objects for interception. In the team sport of association football, performers have to coordinate their gait with ball displacement when dribbling, and when running to kick a ball they must try to prevent interception by opponents. This thesis comprises two studies analysing the movement patterns of eight experienced youth football players during locomotor pointing under static and dynamic constraints, manipulating levels of ball displacement (ball stationary or moving) and defensive pressure (defenders absent, or positioned near or far during performance). ANOVA with repeated measures was used to analyse the effects of these task constraints on gait parameters during the run-up and cross-performance sub-phases. Experiment 1 revealed outcomes consistent with previous research on locomotor pointing: when under defensive pressure, participants performed the run-up more quickly, concurrently modifying footfall placements relative to the ball location over trials. In Experiment 2, players coordinated their gait relative to a moving ball significantly differently when under defensive pressure. Although no specific task instructions were provided beforehand, context-dependent constraints interacted to influence footfall placements over trials and the running velocity of participants in the different conditions. The data suggest that coaches need to manipulate task constraints carefully to facilitate emergent movement behaviours during practice in team games like football.
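
A hedged sketch of the kind of repeated-measures analysis described, using statsmodels on hypothetical long-format gait data (the column names and values are illustrative, not the thesis's dataset):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
conditions = ["absent", "near", "far"]  # defensive-pressure levels

# 8 players x 3 conditions, mean run-up velocity (m/s) per cell.
data = pd.DataFrame({
    "player": np.repeat(np.arange(1, 9), len(conditions)),
    "pressure": conditions * 8,
    "velocity": rng.normal(loc=[5.0, 5.6, 5.3] * 8, scale=0.2),
})

# One-way repeated-measures ANOVA: effect of pressure on run-up velocity.
print(AnovaRM(data, depvar="velocity", subject="player",
              within=["pressure"]).fit())
```
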

Relevance: 10.00%

Abstract:

Some uncertainties, such as the stochastic input/output power of a plug-in electric vehicle due to its stochastic charging and discharging schedule, the output of a wind unit, the output of a photovoltaic generation source, volatile fuel prices and uncertain future load growth, together introduce risks into determining the optimal siting and sizing of distributed generators (DGs) in distribution systems. Given this background, a new method is presented, under the chance constrained programming (CCP) framework, to handle these uncertainties in the optimal siting and sizing problem of DGs. First, a mathematical CCP model is developed with the minimization of the DG investment, operational and maintenance costs, together with the network loss cost, as the objective; security limitations as constraints; and the siting and sizing of DGs as the optimization variables. Then, a Monte Carlo simulation-embedded genetic algorithm approach is developed to solve the CCP model. Finally, the IEEE 37-node test feeder is employed to verify the feasibility and effectiveness of the developed model and method. This work is supported by an Australian Commonwealth Scientific and Industrial Research Organisation (CSIRO) project on intelligent grids under the Energy Transformed Flagship, and a project from Jiangxi Power Company.
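
A minimal sketch of the chance-constrained structure on a toy one-DG sizing problem, with Monte Carlo feasibility checking inside a crude random-search stand-in for the genetic algorithm (all distributions, sensitivities and costs are illustrative assumptions, not the IEEE 37-node model):

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHA = 0.95      # required probability of satisfying the security limit
V_LIMIT = 1.05    # per-unit voltage cap (illustrative)

def chance_constraint_ok(size_mw: float, samples: int = 2000) -> bool:
    """Monte Carlo check that Pr{voltage <= V_LIMIT} >= ALPHA under
    uncertain net injections (PV output, EV charging, load growth)."""
    injection = size_mw + rng.normal(0.0, 0.5, samples)  # uncertain output
    voltage = 1.0 + 0.01 * injection                     # toy sensitivity
    return np.mean(voltage <= V_LIMIT) >= ALPHA

def cost(size_mw: float) -> float:
    """Investment plus expected network-loss cost (toy convex proxy)."""
    return 1.2 * size_mw + 40.0 / (1.0 + size_mw)

# Stand-in for the GA: sample candidate sizes, keep the cheapest feasible one.
candidates = rng.uniform(0.1, 6.0, 200)
feasible = [s for s in candidates if chance_constraint_ok(s)]
best = min(feasible, key=cost)
print(f"best DG size: {best:.2f} MW, cost: {cost(best):.2f}")
```

A real implementation would replace the random search with GA crossover/mutation over siting-and-sizing chromosomes and replace the toy voltage model with a load-flow calculation, but the feasibility test retains the same Monte Carlo form.
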

Relevance: 10.00%

Abstract:

The participation of the community broadcasting sector in the development of digital radio provides a potentially valuable opportunity for non-market, end-user-driven experimentation in the development of these new services in Australia. However, this development path is constrained by various factors, some specific to the community broadcasting sector and others generic to the broader media and communications policy, industrial and technological context. This paper filters recent developments in digital radio policy and implementation through the perspectives of community radio stakeholders, obtained through interviews, to describe and analyse these constraints. The early stage of digital community radio presented here is intended as a baseline for tracking the development of the sector as digital radio broadcasting develops. We also draw upon insights from scholarly debates about citizens' media and participatory culture to identify and discuss two sets of opportunities for social benefit enabled by the inclusion of community radio in digital radio service development. The first arises from community broadcasting's involvement in the propagation of the multi-literacies that drive new digital economies, not only through formal and informal multi- and trans-media training, but also through the ‘co-creative’ forms of collaborative and participatory media production that are fostered in the sector. The second arises from the fact that community radio is uniquely placed, indeed charged with the responsibility, to facilitate social participation in the design and operation of media institutions themselves, not just their service outputs.

Relevance: 10.00%

Abstract:

A distributed fuzzy system is a real-time fuzzy system in which the input, output and computation may be located on different networked computing nodes. The ability of a distributed software application, such as a distributed fuzzy system, to adapt to changes in the computing network at runtime can provide real-time performance improvements and fault tolerance. This paper introduces an Adaptable Mobile Component Framework (AMCF) that provides a distributed dataflow-based platform with a fine-grained level of runtime reconfigurability. The execution location of small fragments (possibly as little as a few machine-code instructions) of an AMCF application can be moved between different computing nodes at runtime. A case study is included that demonstrates the applicability of the AMCF to a distributed fuzzy system scenario involving multiple physical agents (such as autonomous robots). Using the AMCF, fuzzy systems can now be developed such that they can be distributed automatically across multiple computing nodes and are adaptable to runtime changes in the networked computing environment. This provides the opportunity to improve the performance of fuzzy systems deployed in scenarios where the computing environment is resource-constrained and volatile, such as multiple autonomous robots, smart environments and sensor networks.
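
A much-simplified, hypothetical sketch of the relocation idea (the Node/Fragment names are illustrative, not the AMCF's real API): a dataflow fragment is bound to a host node, and migrating it is a runtime re-binding rather than a redeployment:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Node:
    """A networked computing node that can execute fragments."""
    name: str
    def run(self, fragment: "Fragment", value: float) -> float:
        print(f"{fragment.name} executing on {self.name}")
        return fragment.fn(value)

@dataclass
class Fragment:
    """A small dataflow fragment (here, one fuzzy membership function)."""
    name: str
    fn: Callable[[float], float]
    host: Optional[Node] = None
    def migrate(self, new_host: Node) -> None:
        self.host = new_host  # relocation decided at runtime

robot1, robot2 = Node("robot-1"), Node("robot-2")
fuzzify = Fragment("fuzzify-distance", lambda x: min(1.0, x / 10.0), robot1)
print(fuzzify.host.run(fuzzify, 7.0))  # executes on robot-1
fuzzify.migrate(robot2)                # e.g. robot-1 becomes overloaded
print(fuzzify.host.run(fuzzify, 7.0))  # same fragment, now on robot-2
```
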

Relevance: 10.00%

Abstract:

Sorghum (Sorghum bicolor (L.) Moench) is the world's fifth major cereal crop and holds importance as a construction material, food and fodder source. More recently, the potential of this plant as a biofuel source has been noted. Despite its agronomic importance, sorghum production is constrained by both biotic and abiotic factors. These challenges could be addressed by the use of genetic engineering strategies to complement conventional breeding techniques. However, sorghum is one of the most recalcitrant crops for genetic modification, with the lack of an efficient tissue culture system being amongst the chief reasons. Therefore, the aim of this study was to develop an efficient tissue culture system for establishing regenerable embryogenic cell lines, micropropagation and acclimatisation for Sorghum bicolor, and to use this to optimise parameters for genetic transformation via Agrobacterium-mediated transformation and microprojectile bombardment. Using five different sorghum cultivars, SA281, 296B, SC49, Wray and Rio, numerous parameters were investigated in an attempt to establish an efficient and reproducible tissue culture and transformation system. Using immature embryos (IEs) as explants, regenerable embryogenic cell lines (ECLs) could only be established from cultivars SA281 and 296B. Large amounts of phenolics were produced from IEs of cultivars SC49, Wray and Rio, and these compounds severely hindered callus formation and development. Cultivar SA281 also produced phenolics during regeneration. Attempts to suppress the production of these compounds in cultivars SA281 and SC49 using activated charcoal, PVP, ascorbic acid, citric acid and liquid filter paper bridge methods were either ineffective or had a detrimental effect on embryogenic callus formation, development and regeneration. Immature embryos sourced during summer were found to be far more responsive in vitro than those sourced during winter. In an attempt to overcome this problem, IEs were sourced from sorghum grown under summer conditions in either a temperature-controlled glasshouse or a growth chamber; however, the performance of these explants was still inferior to that of naturally summer-sourced explants. Leaf whorls, mature embryos, shoot tips and leaf primordia were found to be unsuitable as explants for establishing ECLs in sorghum cultivars SA281 and 296B. Using the florets of immature inflorescences (IFs) as explants, however, ECLs were established and regenerated for these cultivars, as well as for cultivar Tx430, using the callus induction medium SCIM and the regeneration medium VWRM. The best in vitro responses from the largest possible IFs were obtained using plants at the FL-2 stage (where the last fully opened leaf was two leaves away from the flag leaf). Immature inflorescences could be stored at 25°C for up to three days without affecting their in vitro responses. Compared to IEs, the IFs were more robust in tissue culture and showed responses that were independent of season and growth conditions. A micropropagation protocol for sorghum was developed in this study. The optimum plant growth regulator (PGR) combination for the micropropagation of in vitro regenerated plantlets was found to be 1.0 mg/L BAP in combination with 0.5 mg/L NAA. With this protocol, cultivars 296B and SA281 produced an average of 57 and 13 off-shoots per plantlet, respectively. The plantlets were successfully acclimatised and developed into phenotypically normal plants that set seed.
A simplified acclimatisation protocol for in vitro regenerated plantlets was also developed. This protocol involved deflasking in vitro plantlets with at least two fully opened healthy leaves and at least three roots longer than 1.5 cm, washing the media from the roots under running tap water, planting in 100 mm pots and placing the pots in plastic trays covered with a clear plastic bag in a plant growth chamber. After seven days, the corners of the plastic cover were opened, and the bags were completely removed after 10 days. All plantlets were successfully acclimatised regardless of whether 1:1 perlite:potting mix, potting mix, UC mix or vermiculite was used as the potting substrate. Parameters were optimised for Agrobacterium-mediated transformation (AMT) of cultivars SA281, 296B and Tx430. The optimal conditions were the use of Agrobacterium strain LBA4404 at an inoculum density (OD600) of 0.5, heat shock at 43°C for 3 min, use of the surfactant Pluronic F-68 (0.02% w/v) in inoculation media at pH 5.2, and a 3-day co-cultivation period in the dark at 22°C. Using these parameters, high frequencies of transient GFP expression were observed in IEs precultured on callus initiation media for 1-7 days, as well as in four-week-old IE- and IF-derived callus. Cultivar SA281 appeared very sensitive to Agrobacterium, since all tissue turned necrotic within two weeks post-exposure. For cultivar 296B, GFP expression was observed for up to 20 days post co-cultivation, but no stably transformed plants were regenerated. Using cultivar Tx430, GFP was expressed for up to 50 days post co-cultivation. Although no stably transformed plants of this cultivar were regenerated, this was most likely due to the use of unsuitable regeneration media. Parameters were also optimised for transformation by particle bombardment (PB) of cultivars SA281, 296B and Tx430. The optimal conditions were the use of 3-7-day-old IEs and 4-week-old IF callus, a 4-hour pre- and post-bombardment osmoticum treatment, 0.6 µm gold microparticles, a helium pressure of 1500 kPa and a target distance of 15 cm. Using these parameters for PB, transient GFP expression was observed for up to 14, 30 and 50 days for cultivars SA281, 296B and Tx430, respectively. Further, PB resulted in less tissue necrosis than AMT for the respective cultivars. Despite the presence of transient GFP expression, no stably transformed plants were regenerated. The establishment of regenerable ECLs and the optimisation of AMT and PB parameters in this study provide a platform for future efforts to develop an efficient transformation protocol for sorghum. The development of GM sorghum will be an important step towards improving its agronomic properties as well as its exploitation for biofuel production.

Relevance: 10.00%

Abstract:

For facial expression recognition systems to be applicable in the real world, they need to detect and track a previously unseen person's face and its facial movements accurately in realistic environments. A highly plausible solution involves performing a "dense" form of alignment, where 60-70 fiducial facial points are tracked with high accuracy. The problem is that, in practice, this type of dense alignment had so far been impossible to achieve in a generic sense, mainly due to poor reliability and robustness. Instead, many expression detection methods have opted for a "coarse" form of face alignment, followed by an application of a biologically inspired appearance descriptor such as the histogram of oriented gradients (HOG) or Gabor magnitudes. Encouragingly, recent advances in a number of dense alignment algorithms have demonstrated both high reliability and accuracy for unseen subjects [e.g., constrained local models (CLMs)]. This begs the question: aside from countering illumination variation, what do these appearance descriptors do that standard pixel representations do not? In this paper, we show that, when close to perfect alignment is obtained, there is no real benefit in employing these appearance-based representations (under consistent illumination conditions). In fact, when misalignment does occur, we show that these appearance descriptors do work well by encoding robustness to alignment error. For this work, we compared two popular methods for dense alignment (subject-dependent active appearance models versus subject-independent CLMs) on the task of action-unit detection. These comparisons were conducted through a battery of experiments across various publicly available datasets (i.e., CK+, Pain, M3 and GEMEP-FERA). We also report our performance in the recent 2011 Facial Expression Recognition and Analysis Challenge for the subject-independent task.
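
To make the descriptor comparison concrete, here is an illustrative sketch (not the paper's pipeline) contrasting a raw-pixel patch with a HOG descriptor around a tracked fiducial point, assuming scikit-image is available:

```python
import numpy as np
from skimage.feature import hog

rng = np.random.default_rng(0)
frame = rng.random((128, 128))  # stand-in for an aligned face image

def patch_at(img, y, x, size=32):
    """Crop a size x size patch centred on a tracked fiducial point."""
    h = size // 2
    return img[y - h:y + h, x - h:x + h]

def describe(patch):
    # Gradient-orientation histograms pooled over cells: this pooling is
    # what buys robustness to small alignment errors.
    return hog(patch, orientations=8, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

patch = patch_at(frame, 64, 64)
shifted = patch_at(frame, 66, 64)  # simulate a 2-pixel alignment error

raw_rel = np.linalg.norm(patch - shifted) / np.linalg.norm(patch)
hog_rel = np.linalg.norm(describe(patch) - describe(shifted)) / np.linalg.norm(describe(patch))
print(f"relative change under misalignment: raw pixels {raw_rel:.2f}, HOG {hog_rel:.2f}")
```
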