924 results for Iterative probing
Abstract:
In this paper we propose a new method for face recognition using fractal codes. Fractal codes represent local contractive affine transformations which, when iteratively applied to range-domain pairs in an arbitrary initial image, result in a fixed point close to a given image. The transformation parameters, such as brightness offset, contrast factor, orientation and the address of the corresponding domain for each range, are used directly as features in our method. Features of an unknown face image are compared with those pre-computed for images in a database. There is no need to iterate, or to use fractal neighbor distances or fractal dimensions, for comparison in the proposed method. This method is robust to scale change, frame size change and rotations, as well as to some noise, facial expressions and blur distortion in the image.
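A minimal sketch may make the encoding idea concrete. The block below is an assumption-laden toy, not the authors' implementation: fixed block sizes, a contrast/brightness-only map with no rotations or flips, and a naive code distance that treats the domain address as numeric.

```python
import numpy as np

def fractal_code(img, rsize=4):
    """Toy fractal encoder: for each range block, find the domain block
    (downsampled 2x) whose affine map s*D + o best matches it in a
    least-squares sense. The code is one (domain index, contrast s,
    brightness o) triple per range block."""
    H, W = img.shape
    domains = []
    for i in range(0, H - 2 * rsize + 1, 2 * rsize):
        for j in range(0, W - 2 * rsize + 1, 2 * rsize):
            d = img[i:i + 2 * rsize, j:j + 2 * rsize]
            # 2x2 averaging brings the domain down to range-block size
            d = d.reshape(rsize, 2, rsize, 2).mean(axis=(1, 3))
            domains.append(d.ravel())
    domains = np.array(domains)
    code = []
    for i in range(0, H - rsize + 1, rsize):
        for j in range(0, W - rsize + 1, rsize):
            r = img[i:i + rsize, j:j + rsize].ravel()
            best = None
            for k, d in enumerate(domains):
                # least-squares contrast s and brightness o for r ~ s*d + o
                dc = d - d.mean()
                s = (dc @ (r - r.mean())) / max(dc @ dc, 1e-12)
                o = r.mean() - s * d.mean()
                err = np.sum((s * d + o - r) ** 2)
                if best is None or err < best[0]:
                    best = (err, k, s, o)
            code.append(best[1:])
    return np.array(code)

def code_distance(c1, c2):
    """Compare two images directly in code space -- no iteration of the
    transformations is needed. A real system would treat the domain
    address as categorical rather than numeric."""
    return float(np.mean(np.abs(c1 - c2)))
```

Matching an unknown face then reduces to computing `code_distance` against each pre-computed code in the database, which is the "no need to iterate" property the abstract highlights.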
An approach to statistical lip modelling for speaker identification via chromatic feature extraction
Abstract:
This paper presents a novel technique for the tracking of moving lips for the purpose of speaker identification. In our system, a model of the lip contour is formed directly from chromatic information in the lip region. Iterative refinement of contour point estimates is not required. Colour features are extracted from the lips via concatenated profiles taken around the lip contour. Reduction of order in lip features is obtained via principal component analysis (PCA) followed by linear discriminant analysis (LDA). Statistical speaker models are built from the lip features based on the Gaussian mixture model (GMM). Identification experiments performed on the M2VTS database show encouraging results.
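A simplified sketch of the modelling pipeline, on synthetic data, is given below. It is not the paper's system: the synthetic vectors stand in for concatenated chromatic lip profiles, the LDA stage is omitted, and a single Gaussian per speaker stands in for a multi-component GMM.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for concatenated chromatic lip profiles:
# 3 speakers, 60 frames each, 40-dimensional feature vectors.
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(60, 40))
               for m in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 60)

# Order reduction: PCA via SVD (LDA would follow in the full pipeline)
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
Z = (X - mu) @ Vt[:8].T                       # keep 8 components

# One Gaussian per speaker: a one-component stand-in for the GMM
models = {}
for s in (0, 1, 2):
    Zs = Z[y == s]
    models[s] = (Zs.mean(axis=0), np.cov(Zs.T) + 1e-6 * np.eye(8))

def loglik(frames, mean, cov):
    """Total Gaussian log-likelihood of a sequence of frames."""
    d = frames - mean
    n, k = frames.shape
    _, logdet = np.linalg.slogdet(cov)
    maha = np.sum(d @ np.linalg.inv(cov) * d, axis=1)
    return -0.5 * (maha.sum() + n * (logdet + k * np.log(2 * np.pi)))

def identify(frames):
    """Return the speaker whose model best explains the frames."""
    return max(models, key=lambda s: loglik(frames, *models[s]))
```

Identification then follows the usual GMM recipe: score a test sequence of feature frames under every speaker model and pick the maximum-likelihood speaker.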
Abstract:
Objectives: This article reports on a culturally appropriate process of development of a smoke-free workplace policy within the peak Aboriginal Controlled Community Health Organisation in Victoria, Australia. Smoking is acknowledged as being responsible for at least 20% of all deaths in Aboriginal communities in Australia, and many Aboriginal health workers smoke. Methods: The smoke-free workplace policy was developed using the iterative, discursive and experience-based methodology of Participatory Action Research, combined with the culturally embedded concept of ‘having a yarn’. Results: Staff members initially identified smoking as a topic to be avoided within workplace discussions. This was due, in part, to grief (everyone had suffered a smoking-related bereavement). Further, there was anxiety that discussing smoking would result in culturally difficult conflict. The use of yarning opened up a safe space for discussion and debate, enabling development of a policy that was accepted across the organisation. Conclusions: Within Aboriginal organisations, it is not sufficient to focus on the outcomes of policy development. Rather, due attention must be paid to the process employed in development of policy, particularly when that policy is directly related to an emotionally and communally weighted topic such as smoking.
Abstract:
As English increasingly becomes one of the most commonly spoken languages in the world today for a variety of economic, social and cultural reasons, education is impacted by globalisation, the internationalisation of universities and the diversity of learners in classrooms. The challenge for educators is to find more effective ways of teaching English language so that students are better able to create meaning and communicate in the target language, as well as to transform knowledge and understanding into relevant skills for a rapidly changing world. This research focuses broadly on English language education underpinned by social constructivist principles informing communicative language teaching and, in particular, interactive peer learning approaches. An intervention of interactive peer-based learning in two case study contexts, English as a Foreign Language (EFL) undergraduates in a Turkish university and English as a Second Language (ESL) undergraduates in an Australian university, investigates what students gain from the intervention. The methodology, utilising qualitative data gathered from student reflective logs, focus group interviews and researcher field notes, emphasises student voice. The cross-case comparative study indicates that interactive peer-based learning enhances a range of learning outcomes for both cohorts, including engagement, communicative competence and diagnostic feedback, as well as assisting the development of inclusive social relationships, civic skills, confidence and self-efficacy. The learning outcomes facilitate better adaptation to a new learning environment and culture. An iterative instructional matrix tool is a useful product of the research for first-year university experiences, teacher training, raising awareness of diversity, building learning communities, and differentiating the curriculum.
The study demonstrates that English language learners can experience positive impact through peer-based learning, and thus holds important implications for Australian universities and the higher education sector.
Abstract:
This study examines the impact of utilising a Decision Support System (DSS) in a practical health planning study. Specifically, it presents a real-world case of a community-based initiative aiming to improve overall public health outcomes. Previous studies have emphasised that because of a lack of effective information systems and an absence of frameworks for making informed decisions in health planning, it has become imperative to develop innovative approaches and methods in health planning practice. Online Geographical Information Systems (GIS) have been suggested as one such innovative method to inform decision-makers and improve the overall health planning process. However, a number of gaps in knowledge have been identified within health planning practice: a lack of methods to develop these tools in a collaborative manner; a lack of capacity to use GIS applications among health decision-makers; and a lack of understanding of the potential impact of such systems on users. This study addresses the abovementioned gaps and introduces an online GIS-based Health Decision Support System (HDSS), which has been developed to improve collaborative health planning in the Logan-Beaudesert region of Queensland, Australia. The study demonstrates the participatory and iterative approach undertaken to design and develop the HDSS. It then explores the perceived user satisfaction and impact of the tool on a selected group of health decision-makers. Finally, it illustrates how decision-making processes have changed since its implementation. The overall findings suggest that the online GIS-based HDSS is an effective tool with the potential to play an important role in improving local community health planning practice in the future. However, the findings also indicate that the HDSS does not merely inform decision-making processes; it also appears to enhance the overall sense of collaboration in health planning practice.
Thus, to support the Healthy Cities approach, communities will need to encourage decision-making based on the use of evidence, participation and consensus, which subsequently translates into informed actions.
Abstract:
Conventional planning and decision making, with its sectoral and territorial emphasis and flat-map-based processes, is no longer adequate or appropriate for the increased complexity confronting airport/city interfaces. These crowded and often contested governance spaces demand a more iterative and relational planning and decision-making approach. Emergent GIS-based planning and decision-making tools provide a mechanism that integrates and visually displays an array of complex data, frameworks and scenarios/expectations, often in ‘real time’ computations. In so doing, these mechanisms provide a common ground for decision making and facilitate a more ‘joined-up’ approach to airport/city planning. This paper analyses the contribution of the Airport Metropolis Planning Support System (PSS) to sub-regional planning in the Brisbane Airport case environment.
Abstract:
Adults diagnosed with primary brain tumours often experience physical, cognitive and neuropsychiatric impairments and decline in quality of life. Although disease and treatment-related information is commonly provided to cancer patients and carers, newly diagnosed brain tumour patients and their carers report unmet information needs. Few interventions have been designed or proven to address these information needs. Accordingly, a three-study research program, incorporating both qualitative and quantitative research methods, was designed to: 1) identify and select an intervention to improve the provision of information, and meet the needs of patients with a brain tumour; 2) use an evidence-based approach to establish the content, language and format for the intervention; and 3) assess the acceptability of the intervention, and the feasibility of evaluation, with newly diagnosed brain tumour patients. Study 1: Structured concept mapping techniques were undertaken with 30 health professionals, who identified strategies or items for improving care, and rated each of 42 items for importance, feasibility, and the extent to which such care was provided. Participants also provided data to interpret the relationship between items, which were translated into ‘maps’ of relationships between information and other aspects of health care using multidimensional scaling and hierarchical cluster analysis. Results were discussed by participants in small groups and individual interviews to understand the ratings, and facilitators and barriers to implementation. A care coordinator was rated as the most important strategy by health professionals. Two items directly related to information provision were also seen as highly important: "information to enable the patient or carer to ask questions" and "for doctors to encourage patients to ask questions".
Qualitative analyses revealed that information provision was individualised, depending on patients’ information needs and preferences, demographic variables and distress, the characteristics of health professionals who provide information, and the relationship between the individual patient and health professional, and was influenced by the fragmented nature of the health care system. Based on quantitative and qualitative findings, a brain tumour specific question prompt list (QPL) was chosen for development and feasibility testing. A QPL consists of a list of questions that patients and carers may want to ask their doctors. It is designed to encourage the asking of questions in the medical consultation, allowing patients to control the content and amount of information provided by health professionals. Study 2: The initial structure and content of the brain tumour specific QPL developed was based upon thematic analyses of 1) patient materials for brain tumour patients, 2) QPLs designed for other patient populations, and 3) clinical practice guidelines for the psychosocial care of glioma patients. An iterative process of review and refinement of content was undertaken via telephone interviews with a convenience sample of 18 patients and/or carers. Successive drafts of QPLs were sent to patients and carers and changes made until no new topics or suggestions arose in four successive interviews (saturation). Once QPL content was established, readability analyses and redrafting were conducted to achieve a sixth-grade reading level. The draft QPL was also reviewed by eight health professionals, and shortened and modified based on their feedback. Professional design of the QPL was conducted and sent to patients and carers for further review. The final QPL contained questions in seven colour-coded sections: 1) diagnosis; 2) prognosis; 3) symptoms and problems; 4) treatment; 5) support; 6) after treatment finishes; and 7) the health professional team.
Study 3: A feasibility study was conducted to determine the acceptability of the QPL and the appropriateness of methods, to inform a potential future randomised trial to evaluate its effectiveness. A pre-test post-test design was used with a nonrandomised control group. The control group was provided with ‘standard information’, the intervention group with ‘standard information’ plus the QPL. The primary outcome measure was acceptability of the QPL to participants. Twenty patients from four hospitals were recruited a median of 1 month (range 0-46 months) after diagnosis, and 17 completed baseline and follow-up interviews. Six participants would have preferred to receive the information booklet (standard information or QPL) at a different time, most commonly at diagnosis. Seven participants reported on the acceptability of the QPL: all said that the QPL was helpful, and that it contained questions that were useful to them; six said it made it easier to ask questions. Compared with control group participants’ ratings of ‘standard information’, QPL group participants’ views of the QPL were more positive; the QPL had been read more times, was less likely to be reported as ‘overwhelming’ to read, and was more likely to prompt participants to ask questions of their health professionals. The results from the three studies of this research program add to the body of literature on information provision for brain tumour patients. Together, these studies suggest that a QPL may be appropriate for the neuro-oncology setting and acceptable to patients. The QPL aims to assist patients to express their information needs, enabling health professionals to better provide the type and amount of information that patients need to prepare for treatment and the future. This may help health professionals meet the challenge of giving patients sufficient information, without providing ‘too much’ or ‘unnecessary’ information, or taking away hope. 
Future studies with rigorous designs are now needed to determine the effectiveness of the QPL.
Abstract:
Emerging from the challenge to reduce energy consumption in buildings is a need for research and development into the more effective use of simulation as a decision-support tool. Despite significant research, persistent limitations in process and software inhibit the integration of energy simulation in early architectural design. This paper presents a Green Star case study to highlight the obstacles commonly encountered with current integration strategies. It then examines simulation-based design in the aerospace industry, which has overcome similar limitations. Finally, it proposes a design system based on this contrasting approach, coupling parametric modelling and energy simulation software for rapid and iterative performance assessment of early design options.
Abstract:
There is a need for decision support tools that integrate energy simulation into early design in the context of Australian practice. Despite the proliferation of simulation programs in the last decade, there are no ready-to-use applications that cater specifically for the Australian climate and regulations. Furthermore, the majority of existing tools focus on achieving interaction with the design domain through model-based interoperability, and largely overlook the issue of process integration. This paper proposes an energy-oriented design environment that both accommodates the Australian context and provides interactive and iterative information exchanges that facilitate feedback between domains. It then presents the structure for DEEPA, an openly customisable system that couples parametric modelling and energy simulation software as a means of developing a decision support tool to allow designers to rapidly and flexibly assess the performance of early design alternatives. Finally, it discusses the benefits of developing a dynamic and concurrent performance evaluation process that parallels the characteristics and relationships of the design process.
Abstract:
Many current HCI, social networking, ubiquitous computing, and context-aware designs, in order to function, have access to, or collect, significant personal information about the user. This raises concerns about privacy and security in both the research community and the mainstream media. From a practical perspective, in the social world, secrecy and security form an ongoing accomplishment rather than something that is set up and left alone. We explore how design can support privacy as practical action, and investigate the collective information-practice and the privacy and security concerns of participants in mobile social software for ride sharing. This paper contributes an understanding of HCI security and privacy tensions, discovered while “designing in use” using a Reflective, Agile, Iterative Design (RAID) method.
Abstract:
Purpose – The work presented in this paper aims to provide an approach to classifying web logs by personal properties of users. Design/methodology/approach – The authors describe an iterative system that begins with a small set of manually labeled terms, which are used to label queries from the log. A set of background knowledge related to these labeled queries is acquired by combining web search results on these queries. This background set is used to obtain many terms that are related to the classification task. The system then ranks each of the related terms, choosing those that best fit the personal properties of the users. These terms are then used to begin the next iteration. Findings – The authors identify the difficulties of classifying web logs by approaching this problem from a machine learning perspective. By applying the approach developed, the authors are able to show that many queries in a large query log can be classified. Research limitations/implications – Testing results in this type of classification work is difficult, as the true personal properties of web users are unknown. Evaluating the classification results by comparing classified queries to well-known age-related sites is a direction currently being explored. Practical implications – This research is background work that can be incorporated in search engines or other web-based applications, to help marketing companies and advertisers. Originality/value – This research enhances the current state of knowledge in short-text classification and query log learning. Keywords: Classification schemes; Computer networks; Information retrieval; Man-machine systems; User interfaces
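The iterative labelling loop can be sketched in a few lines. This is a toy stand-in for the paper's system: simple co-occurrence counts replace the web-search background-knowledge step, and all queries, seed terms and thresholds are invented.

```python
from collections import Counter

def classify_queries(queries, seed_terms, iterations=3, top_k=2):
    """Iteratively grow a term set from a small manually labeled seed.

    Each round: (1) label queries containing any known term, (2) build a
    crude "background set" by counting terms co-occurring in labelled
    queries, (3) promote the top-ranked new terms and iterate."""
    terms = set(seed_terms)
    labelled = set()
    for _ in range(iterations):
        labelled |= {q for q in queries if terms & set(q.split())}
        counts = Counter(w for q in labelled for w in q.split()
                         if w not in terms)
        terms |= {w for w, _ in counts.most_common(top_k)}
    return labelled, terms

# Invented mini-log: the first three queries share teenage-related terms.
queries = ["teen pop lyrics", "pop chart homework", "homework help maths",
           "mortgage rates", "retirement planning"]
labelled, terms = classify_queries(queries, {"teen"})
```

After three rounds the seed "teen" has propagated through shared terms to label the related queries, while the unrelated finance queries stay unlabelled, which mirrors the expansion behaviour the abstract describes.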
Abstract:
For the analysis of material nonlinearity, an effective shear modulus approach based on the strain control method is proposed in this paper using the point collocation method. Hencky’s total deformation theory is used to evaluate the effective shear modulus, Young’s modulus and Poisson’s ratio, which are treated as spatial field variables. These effective properties are obtained by the strain-controlled projection method in an iterative manner. To evaluate the second-order derivatives of the shape function at a field point, a radial basis function (RBF) over the local support domain is used. Several numerical examples are presented to demonstrate the efficiency and accuracy of the proposed method, and comparisons have been made with analytical solutions and the finite element method (ABAQUS).
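The RBF evaluation of second-order derivatives can be illustrated with a small 1-D example. The multiquadric basis, node layout and shape parameter below are illustrative assumptions rather than the paper's settings: the field sin(x) is interpolated at collocation nodes, and the interpolant's second derivative is evaluated analytically from the basis.

```python
import numpy as np

# Collocation nodes in a 1-D "support domain"; the multiquadric basis,
# node count and shape parameter c are illustrative choices only.
x = np.linspace(0.0, np.pi, 25)
c = 0.5
f = np.sin(x)                            # field variable sampled at nodes

phi = lambda r: np.sqrt(r**2 + c**2)     # multiquadric RBF
A = phi(x[:, None] - x[None, :])         # interpolation matrix
w = np.linalg.solve(A, f)                # RBF weights

def d2(xe):
    """Second derivative of the RBF interpolant at points xe, using
    d^2/dx^2 sqrt(r^2 + c^2) = c^2 / (r^2 + c^2)^(3/2)."""
    r = xe[:, None] - x[None, :]
    return (c**2 / (r**2 + c**2) ** 1.5) @ w

approx = d2(np.array([np.pi / 2]))[0]    # exact value: -sin(pi/2) = -1
```

In the meshfree setting the same construction, restricted to the nodes of a local support domain, supplies the second derivatives needed by the collocation equations at each field point.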
Abstract:
Proteases regulate a spectrum of diverse physiological processes, and dysregulation of proteolytic activity drives a plethora of pathological conditions. Understanding protease function is essential to appreciating many aspects of normal physiology and progression of disease. Consequently, development of potent and specific inhibitors of proteolytic enzymes is vital to provide tools for the dissection of protease function in biological systems and for the treatment of diseases linked to aberrant proteolytic activity. The studies in this thesis describe the rational design of potent inhibitors of three proteases that are implicated in disease development. Additionally, key features of the interaction of proteases and their cognate inhibitors or substrates are analysed and a series of rational inhibitor design principles are expounded and tested. Rational design of protease inhibitors relies on a comprehensive understanding of protease structure and biochemistry. Analysis of known protease cleavage sites in proteins and peptides is a commonly used source of such information. However, model peptide substrate and protein sequences have widely differing levels of backbone constraint and hence can adopt highly divergent structures when binding to a protease’s active site. This may result in identical sequences in peptides and proteins having different conformations and diverse spatial distribution of amino acid functionalities. Regardless of this, protein and peptide cleavage sites are often regarded as being equivalent. One of the key findings in the following studies is a definitive demonstration of the lack of equivalence between these two classes of substrate and invalidation of the common practice of using the sequences of model peptide substrates to predict cleavage of proteins in vivo. Another important feature for protease substrate recognition is subsite cooperativity. 
This type of cooperativity is commonly referred to as protease or substrate binding subsite cooperativity and is distinct from allosteric cooperativity, where binding of a molecule distant from the protease active site affects the binding affinity of a substrate. Subsite cooperativity may be intramolecular where neighbouring residues in substrates are interacting, affecting the scissile bond’s susceptibility to protease cleavage. Subsite cooperativity can also be intermolecular where a particular residue’s contribution to binding affinity changes depending on the identity of neighbouring amino acids. Although numerous studies have identified subsite cooperativity effects, these findings are frequently ignored in investigations probing subsite selectivity by screening against diverse combinatorial libraries of peptides (positional scanning synthetic combinatorial library; PS-SCL). This strategy for determining cleavage specificity relies on the averaged rates of hydrolysis for an uncharacterised ensemble of peptide sequences, as opposed to the defined rate of hydrolysis of a known specific substrate. Further, since PS-SCL screens probe the preference of the various protease subsites independently, this method is inherently unable to detect subsite cooperativity. However, mean hydrolysis rates from PS-SCL screens are often interpreted as being comparable to those produced by single peptide cleavages. Before this study no large systematic evaluation had been made to determine the level of correlation between protease selectivity as predicted by screening against a library of combinatorial peptides and cleavage of individual peptides. This subject is specifically explored in the studies described here. In order to establish whether PS-SCL screens could accurately determine the substrate preferences of proteases, a systematic comparison of data from PS-SCLs with libraries containing individually synthesised peptides (sparse matrix library; SML) was carried out. 
These SML libraries were designed to include all possible sequence combinations of the residues that were suggested to be preferred by a protease using the PS-SCL method. SML screening against the three serine proteases kallikrein 4 (KLK4), kallikrein 14 (KLK14) and plasmin revealed highly preferred peptide substrates that could not have been deduced by PS-SCL screening alone. Comparing protease subsite preference profiles from screens of the two types of peptide libraries showed that the most preferred substrates were not detected by PS-SCL screening, as a consequence of intermolecular cooperativity being negated by the very nature of PS-SCL screening. Sequences that are highly favoured as a result of intermolecular cooperativity achieve optimal protease subsite occupancy, and thereby interact with very specific determinants of the protease. Identifying these substrate sequences is important since they may be used to produce potent and selective inhibitors of proteolytic enzymes. This study found that highly favoured substrate sequences that relied on intermolecular cooperativity allowed for the production of potent inhibitors of KLK4, KLK14 and plasmin. Peptide aldehydes based on preferred plasmin sequences produced high-affinity transition state analogue inhibitors for this protease. The most potent of these maintained specificity over plasma kallikrein (known to have a very similar substrate preference to plasmin). Furthermore, the efficiency of this inhibitor in blocking fibrinolysis in vitro was comparable to aprotinin, which previously saw clinical use to reduce perioperative bleeding. One substrate sequence particularly favoured by KLK4 was substituted into the 14 amino acid, circular sunflower trypsin inhibitor (SFTI). This resulted in a highly potent and selective inhibitor (SFTI-FCQR) which attenuated protease activated receptor signalling by KLK4 in vitro.
Moreover, SFTI-FCQR and paclitaxel synergistically reduced growth of ovarian cancer cells in vitro, making this inhibitor a lead compound for further therapeutic development. Similar incorporation of a preferred KLK14 amino acid sequence into the SFTI scaffold produced a potent inhibitor for this protease. However, the conformationally constrained SFTI backbone enforced a different intramolecular cooperativity, which masked a KLK14-specific determinant. As a consequence, the level of selectivity achievable was lower than that found for the KLK4 inhibitor. Standard mechanism inhibitors such as SFTI rely on a stable acyl-enzyme intermediate for high-affinity binding. This is achieved by a conformationally constrained canonical binding loop that allows for reformation of the scissile peptide bond after cleavage. Amino acid substitutions within the inhibitor to target a particular protease may compromise structural determinants that support the rigidity of the binding loop and thereby prevent the engineered inhibitor from reaching its full potential. An in silico analysis was carried out to examine the potential for further improvements to the potency and selectivity of the SFTI-based KLK4 and KLK14 inhibitors. Molecular dynamics simulations suggested that the substitutions within SFTI required to target KLK4 and KLK14 had compromised the intramolecular hydrogen bond network of the inhibitor and caused a concomitant loss of binding loop stability. Furthermore, in silico amino acid substitution revealed a consistent correlation between a higher frequency and number of internal hydrogen bonds in SFTI variants and lower inhibition constants. These predictions allowed for the production of second-generation inhibitors with enhanced binding affinity toward both targets, and highlight the importance of considering intramolecular cooperativity effects when engineering proteins or circular peptides to target proteases.
The findings from this study show that although PS-SCLs are a useful tool for high-throughput screening of approximate protease preference, later refinement by SML screening is needed to reveal optimal subsite occupancy due to cooperativity in substrate recognition. This investigation has also demonstrated the importance of maintaining structural determinants of backbone constraint and conformation when engineering standard mechanism inhibitors for new targets. Combined, these results show that backbone conformation and amino acid cooperativity have more prominent roles than previously appreciated in determining substrate/inhibitor specificity and binding affinity. The three key inhibitors designed during this investigation are now being developed as lead compounds for cancer chemotherapy, control of fibrinolysis and cosmeceutical applications. These compounds form the basis of a portfolio of intellectual property which will be further developed in the coming years.
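A toy numerical illustration of the central screening point may be useful: positional averaging (PS-SCL-style) can miss a cooperative substrate that screening individual sequences (SML-style) finds. The rate table and residue labels below are invented for illustration, not the thesis's data.

```python
import numpy as np

# Invented dipeptide hydrolysis rates for two subsites
# (rows: P1 residue A or F; columns: P2 residue G or Y).
# The A-Y pair is a strongly cooperative, highly preferred substrate.
rates = np.array([[0.1, 6.0],    # A-G, A-Y
                  [5.0, 4.0]])   # F-G, F-Y

# PS-SCL-style screening sees only per-position averages...
ps_best = (int(rates.mean(axis=1).argmax()),
           int(rates.mean(axis=0).argmax()))

# ...while SML-style screening of individual sequences sees the table.
sml_best = tuple(int(i) for i in
                 np.unravel_index(rates.argmax(), rates.shape))
```

Here the positional averages point to F at P1 and Y at P2, yet the single best substrate is the cooperative A-Y pair: exactly the kind of preference that independent subsite profiling is structurally unable to detect.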
Abstract:
Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity from the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. Then we adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana.
By using the random sequential box-covering algorithm, we calculate the fractal dimensions for both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This implication will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since these fractals consist of a geometrical figure which repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterize the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and of a kind of real network, namely the PPI networks of different species. Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks.
This multifractal analysis then provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterizing complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent works indicate that complex network theory can be a powerful tool to analyse time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes by vectors of a certain length drawn from the time series, and weight the edge between any two nodes by the Euclidean distance between the corresponding two vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, hence larger Hurst exponent, tend to have smaller fractal dimension, hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest.
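The horizontal visibility graph construction mentioned above has a compact O(n²) sketch: each sample becomes a node, and two samples are connected whenever every sample strictly between them is lower than both. The function below is an illustrative implementation of that standard criterion, not code from the thesis.

```python
def horizontal_visibility_graph(series):
    """Edges (i, j) of the horizontal visibility graph of a time series:
    samples i and j see each other iff series[k] < min(series[i], series[j])
    for every k strictly between them (adjacent samples always connect)."""
    n = len(series)
    edges = []
    for i in range(n - 1):
        top = float("-inf")          # running max of intermediate samples
        for j in range(i + 1, n):
            if top < series[i] and top < series[j]:
                edges.append((i, j))
            top = max(top, series[j])
            if top >= series[i]:     # no later sample can see i any more
                break
    return edges

# toy example: the peak at index 1 blocks visibility between 0 and 3
edges = horizontal_visibility_graph([1, 3, 2, 4])
```

For the series [1, 3, 2, 4] this yields the chain edges (0,1), (1,2), (2,3) plus (1,3), since only the value 2 lies between the two peaks. Applied to a long fractional Brownian motion sample, the degree distribution of this graph is what the Hurst-exponent relationship is read from.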
As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., it follows a power law), meaning that a large percentage of nodes must be destroyed before the network collapses into isolated parts; for HVG networks of fractional Brownian motions, by contrast, the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
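The fragmentation argument behind this resilience comparison can be probed with a toy targeted-attack simulation: remove nodes in decreasing-degree order and watch the largest connected component shrink. This is an illustrative sketch only; the graph representation and the simple attack order are assumptions, not the analysis performed in the thesis.

```python
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component after ignoring `removed`."""
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        size, q = 0, deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        best = max(best, size)
    return best

def attack_curve(adj):
    """Remove nodes in decreasing-degree order, recording the largest
    component after each removal (a crude targeted-attack probe: scale-free
    networks hold together longer under random failure but hub removal
    fragments them quickly)."""
    order = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
    removed, curve = set(), []
    for u in order:
        removed.add(u)
        curve.append(largest_component(adj, removed))
    return curve

# toy example: a star graph shatters as soon as its hub is removed
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
curve = attack_curve(star)
```

For the star graph the curve is [1, 1, 1, 1, 0]: removing the single hub immediately isolates every leaf, the extreme case of the hub-dependence that makes heavy-tailed networks fragile under targeted attack.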
Resumo:
Background: Integrating 3D virtual world technologies into educational subjects continues to draw the attention of educators and researchers alike. The focus of this study is the use of a virtual world, Second Life, in higher education teaching. In particular, it explores the potential of using a virtual world experience as a learning component situated within a curriculum delivered predominantly through face-to-face teaching methods. Purpose: This paper reports on a research study into the development of a virtual world learning experience designed for marketing students taking a Digital Promotions course. The experience was a field trip into Second Life to allow students to investigate how business branding practices were used for product promotion in this virtual world environment. The paper discusses the issues involved in developing and refining the virtual course component over four semesters. Methods: The study used a pedagogical action research approach, with iterative cycles of development, intervention and evaluation over four semesters. The data analysed were quantitative and qualitative student feedback collected after each field trip, as well as lecturer reflections on each cycle. Sample: Small-scale convenience samples of second- and third-year students studying in a Bachelor of Business degree, majoring in marketing and taking the Digital Promotions subject at a metropolitan university in Queensland, Australia, participated in the study. The samples included students who had and had not experienced the field trip. The number of students taking part in the field trip ranged from 22 to 48 across the four semesters. Findings and Implications: The findings from the four iterations of the action research plan helped identify key considerations for incorporating technologies into learning environments. Feedback and reflections from the students and the lecturer suggested that an innovative learning opportunity had been developed.
However, the pedagogical potential was limited, in part, by technological difficulties and by student perceptions of relevance.