964 results for "display"
Abstract:
This project investigated ways in which the learning experience for students in Australian law schools could be enhanced by renewing the final year legal curriculum through the design of effective capstone experiences to close the loop on tertiary legal studies and better prepare students for a smooth transition into the world of work and professional practice. Key project outcomes are a set of final year curriculum design principles and a transferable model for an effective final year program – a final year Toolkit comprising a range of templates, models and specific capstone examples for adoption or adaptation by legal educators. The project found that the efficacy of capstone experiences is affected by the curriculum context within which they are offered. For this reason, a number of ‘favourable conditions’, which promote the effectiveness of capstone experiences, have also been identified. The project’s final year principles and Toolkit promote program coherence and integration, should increase student satisfaction and levels of engagement with their experience of legal education, and make a valuable contribution to assurance of learning in the new Tertiary Education Quality and Standards Agency (TEQSA) environment. From the point of view of the student experience, the final year principles and models address the current fragmented approach to final year legal curriculum design and delivery. The knowledge and research base acquired under the auspices of this project is of both discipline and national importance, as the project’s outcomes are transferable and have the potential to significantly influence the quality and coherence of the program experience of final year students in other tertiary disciplines, both within Australia and beyond. Project outcomes and deliverables are available both on the project’s website http://wiki.qut.edu.au/display/capstone/Home and on the Law Capstone Experience Forum website http://www.lawcapstoneexperience.com/. In the course of developing its deliverables, the project found that the design of capstone experiences varies significantly within and across disciplines; different frameworks may be used (for example, a disciplinary or inter-disciplinary focus, or to satisfy professional accreditation requirements), rationales and objectives may differ, and a variety of models may be utilised (for example, an integrated final year program, a single subject, a suite of subjects, or modules within several subjects). Broadly, however, capstone experiences should provide final year students with an opportunity both to look back over their academic learning, in an effort to make sense of what they have accomplished, and to look forward to their professional and personal futures that build on that foundational learning.
Abstract:
Gifted students who have a reading disability have learning characteristics that set them apart from their peers. The ability to read impacts upon all areas of the formal curriculum in which print-based texts are common. Therefore, the full intellectual development of gifted students with a reading disability can be repressed because their access to learning opportunities is reduced. When the different learning needs caused by concomitant giftedness and reading disability are not met, it can have serious implications for both academic achievement and the social-emotional wellbeing of these students. In order to develop a deeper understanding of this vulnerable group of students, this study investigated the learning characteristics of gifted students with a reading disability. Furthermore, it investigated how the learning characteristics of these students impact upon their lived experiences. Since achievement and motivation have been shown to be closely linked to self-efficacy, self-efficacy theory underpinned the conceptual framework of the study. The study used a descriptive case study approach to document the lived experiences of gifted students with a reading disability. Nine participants aged between 11 and 18, who were formally identified as gifted with a reading disability, took part in the study. Data sources in the case study database included: cognitive assessments, such as WISC assessments, Stanford Binet 5, or the Raven's Standard Progressive Matrices; the WIAT II reading assessment; the Reader Self-Perception Scale; document reviews; parent and teacher checklists designed to gain information about the students' learning characteristics; and semi-structured interviews with students. The study showed that gifted students with a reading disability display a complex profile of learning strengths and weaknesses. As a result, they face a daily struggle of trying to reconcile the confusion of being able to complete some tasks to a high level, while struggling to read. The study sheds light on the myriad of issues faced by the students at school. It revealed that when the particular learning characteristics and needs of gifted students with a reading disability are recognised and met, these students can experience academic success, and avoid the serious social-emotional complications cited in previous studies. Indeed, rather than suffering from depression, disengagement from learning, and demotivation, these students were described as resilient, independent, determined, goal oriented and motivated to learn and persevere. Notably, the students in the study had developed effective coping strategies for dealing with the daily challenges they faced. These strategies are outlined in the thesis together with the advice students offered for helping other gifted students with a reading disability to succeed. Their advice is significant for all teachers who wish to nurture the potential of those students who face the challenge of being gifted with a reading disability, and for the parents of these students. This research advances knowledge pertaining to the theory of self-efficacy, and self-efficacy in reading specifically, by showing that although gifted students with a reading disability have low self-efficacy, the level is not the same for all aspects of reading. Furthermore, despite low self-efficacy in reading these students remained motivated. 
The study also enhances existing knowledge in the areas of gifted education and special education because it documents the lived experience of gifted students with a specific learning disability in reading from the students' perspectives. Based on a synthesis of the literature and research findings, an Inclusive Pathway Model is proposed that describes a framework to support gifted students with a reading disability so that they might achieve, and remain socially and emotionally well-adjusted. The study highlights the importance of clear identification protocols (such as the use of a range of assessment sources, discussions with students and parents, and an awareness of the characteristics of gifted students with a reading disability) and support mechanisms for assisting students (for example, differentiated reading instruction and the use of assistive technology).
Abstract:
This study explores the accuracy and valuation implications of the application of a comprehensive list of equity multiples in the takeover context. Motivating the study is the prevalent use of equity multiples in practice, the observed long-run underperformance of acquirers following takeovers, and the scarcity of multiples-based research in the merger and acquisition setting. In exploring the application of equity multiples in this context, three research questions are addressed: (1) how accurate are equity multiples (RQ1); (2) which equity multiples are more accurate in valuing the firm (RQ2); and (3) which equity multiples are associated with greater misvaluation of the firm (RQ3). Following a comprehensive review of the extant multiples-based literature, it is hypothesised that the accuracy of multiples in estimating stock market prices in the takeover context will rank as follows (from best to worst): (1) forecasted earnings multiples, (2) multiples closer to bottom line earnings, (3) multiples based on Net Cash Flow from Operations (NCFO) and trading revenue. The relative inaccuracies in multiples are expected to flow through to equity misvaluation (as measured by the ratio of estimated market capitalisation to residual income value, or P/V). Accordingly, it is hypothesised that greater overvaluation will be exhibited for multiples based on Trading Revenue, NCFO, Book Value (BV) and earnings before interest, tax, depreciation and amortisation (EBITDA) versus multiples based on bottom line earnings; and that multiples based on Intrinsic Value will display the least overvaluation. The hypotheses are tested using a sample of 147 acquirers and 129 targets involved in Australian takeover transactions announced between 1990 and 2005. The results show that, first, the majority of computed multiples examined exhibit valuation errors within 30 percent of stock market values. Second, and consistent with expectations, the results provide support for the superiority of multiples based on forecasted earnings in valuing targets and acquirers engaged in takeover transactions. Although a gradual improvement in estimating stock market values is not entirely evident when moving down the Income Statement, historical earnings multiples perform better than multiples based on Trading Revenue or NCFO. Third, while multiples based on forecasted earnings have the highest valuation accuracy they, along with Trading Revenue multiples for targets, produce the most overvalued estimates for acquirers and targets. Consistent with predictions, greater overvaluation is exhibited for multiples based on Trading Revenue for targets, and NCFO and EBITDA for both acquirers and targets. Finally, as expected, multiples based on Intrinsic Value (along with BV) are associated with the least overvaluation. Given the widespread usage of valuation multiples in takeover contexts these findings offer a unique insight into their relative effectiveness. Importantly, the findings add to the growing body of valuation accuracy literature, especially within Australia, and should assist market participants to better understand the relative accuracy and misvaluation consequences of various equity multiples used in takeover documentation, and assist them in subsequent investment decision making.
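To make the error metric concrete, here is a minimal Python sketch of a conventional comparable-multiple valuation and the percentage valuation error it implies. The peer firms, numbers and the 30 percent check are invented for illustration; this is not the study's data or method, only the standard textbook calculation the abstract alludes to.

```python
# Illustrative sketch (not the paper's code): value a target with a
# comparable-firm earnings multiple and measure the valuation error.
# The 30% band mentioned in the abstract is |estimate - market| / market <= 0.30.

def multiple_estimate(peer_prices, peer_drivers, target_driver):
    """Value the target as (median peer price/driver multiple) x target driver."""
    multiples = sorted(p / d for p, d in zip(peer_prices, peer_drivers) if d > 0)
    mid = len(multiples) // 2
    median = multiples[mid] if len(multiples) % 2 else (multiples[mid - 1] + multiples[mid]) / 2
    return median * target_driver

def valuation_error(estimate, market_cap):
    """Signed percentage error; positive values indicate overvaluation."""
    return (estimate - market_cap) / market_cap

# Hypothetical numbers for illustration only (e.g. forecast earnings as the driver).
est = multiple_estimate(peer_prices=[500e6, 620e6, 410e6],
                        peer_drivers=[50e6, 66e6, 39e6],
                        target_driver=45e6)
print(round(valuation_error(est, market_cap=430e6), 3))  # ~0.047, well inside 30%
```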
Abstract:
The 2 hour game jam was performed as part of the State Library of Queensland 'Garage Gamer' series of events, summer 2013, at the SLQ exhibition. An aspect of the exhibition was the series of 'Level Up' game nights. We hosted the first of these - under the auspices of brIGDA, Game On. It was a party - but the focal point of the event was a live-streamed 2 hour game jam. Game jams have become popular amongst the game development and design community in recent years, particularly with the growth of the Global Game Jam, a yearly event which brings thousands of game makers together across different sites in different countries. Other established jams take place on-line, for example the Ludum Dare challenge which has been running since 2002. Other challenges follow the same model in more intimate circumstances, and it is now common to find institutions and groups holding their own small local game making jams. There are variations around the format (some jams are more competitive than others, for example), but a common aspect is the creation of an intense creative crucible centred around team work and ‘accelerated game development’. Works (games) produced during these intense events often display more experimental qualities than those undertaken as commercial projects. In part this is because the typical jam is started with a conceptual design brief, perhaps a single word, or in the case of the specific game jam described in this paper, three words. Teams have to envision the challenge key word/s as a game design using whatever skills and technologies they can and produce a finished working game in the time given. Game jams thus provide design researchers with extraordinary fodder, and recent years have also seen a number of projects which seek to illuminate the design process as seen in these events. For example, Gaydos, Harris and Martinez discuss the opportunity of the jam to expose students to principles of design process and design spaces (2011). Rouse muses on the game jam ‘as radical practice’ and a ‘corrective to game creation as it is normally practiced’. His observations about his own experience in a jam emphasise the same artistic endeavour forefronted earlier, where the experience is about creation that is divorced from the instrumental motivations of commercial game design (Rouse 2011) and where the focus is on process over product. Other participants remark on the social milieu of the event as a critical factor and the collaborative opportunity as a rich site to engage participants in design processes (Shin et al., 2012). Shin et al. are particularly interested in the notion of the site of the process and the ramifications of participants being in the same location. They applaud the more localized event where there is an emphasis on local participation and collaboration. For other commentators, it is specifically the social experience in the place of the jam that is the most important aspect (see Keogh 2011), not the material site but rather the physical embodied experience of ‘being there’ and being part of the event. Participants talk about game jams they have attended in a similar manner to those observations made by Dourish where the experience is layered on top of the physical space of the event (Dourish 2006). It is as if the event has taken on qualities of place where we find echoes of Tuan’s description of a particular site having an aura of history that makes it a very different place, redolent and evocative (Tuan 1977).
The 2 hour game jam held during the SLQ Garage Gamer program was all about social experience.
Abstract:
Asset service organisations often recognise asset management as a core competence to deliver benefits to their business. But how do organisations know whether their asset management processes are adequate? Asset management maturity models, which combine best practices and competencies, provide a useful approach to test the capacity of organisations to manage their assets. Asset management frameworks are required to meet the dynamic challenges of managing assets in contemporary society. Although existing models are subject to wide variations in their implementation and sophistication, they also display a distinct weakness in that they tend to focus primarily on the operational and technical level and neglect the levels of strategy, policy and governance as well as the social and human resources – the people elements. Moreover, asset management maturity models have to respond to external environmental factors, including climate change and sustainability, stakeholders, and community demand management. Drawing on five dimensions of effective asset management – spatial, temporal, organisational, statistical, and evaluation – as identified by Amadi Echendu et al. [1], this paper carries out a comprehensive comparative analysis of six existing maturity models to identify the gaps in key process areas. The results suggest incorporating these into an integrated approach to assess the maturity of asset-intensive organisations. It is contended that the adoption of an integrated asset management maturity model will enhance effective and efficient delivery of services.
Abstract:
Purpose. To compare the on-road driving performance of visually impaired drivers using bioptic telescopes with age-matched controls. Methods. Participants included 23 persons (mean age = 33 ± 12 years) with visual acuity of 20/63 to 20/200 who were legally licensed to drive through a state bioptic driving program, and 23 visually normal age-matched controls (mean age = 33 ± 12 years). On-road driving was assessed in an instrumented dual-brake vehicle along 14.6 miles of city, suburban, and controlled-access highways. Two backseat evaluators independently rated driving performance using a standardized scoring system. Vehicle control was assessed through vehicle instrumentation and video recordings used to evaluate head movements, lane-keeping, pedestrian detection, and frequency of bioptic telescope use. Results. Ninety-six percent (22/23) of bioptic drivers and 100% (23/23) of controls were rated as safe to drive by the evaluators. There were no group differences for pedestrian detection, or ratings for scanning, speed, gap judgments, braking, indicator use, or obeying signs/signals. Bioptic drivers received worse ratings than controls for lane position and steering steadiness and had lower rates of correct sign and traffic signal recognition. Bioptic drivers made significantly more right head movements, drove more often over the right-hand lane marking, and exhibited more sudden braking than controls. Conclusions. Drivers with central vision loss who are licensed to drive through a bioptic driving program can display proficient on-road driving skills. This raises questions regarding the validity of denying such drivers a license without the opportunity to train with a bioptic telescope and undergo on-road evaluation.
Abstract:
Custom designed for display on the Cube Installation situated in the new Science and Engineering Centre (SEC) at QUT, the ECOS project is a playful interface that uses real-time weather data to simulate how a five-star energy building operates in climates all over the world. In collaboration with the SEC building managers, the ECOS Project incorporates energy consumption and generation data of the building into an interactive simulation, which is both engaging to users and highly informative, and which invites play and reflection on the roles of green buildings. ECOS focuses on the principle that humans can have both a positive and negative impact on ecosystems, with both local and global consequences. The ECOS project draws on the practice of Eco-Visualisation, a term used to encapsulate the important merging of environmental data visualisation with the philosophy of sustainability. Holmes (2007) uses the term Eco-Visualisation (EV) to refer to data visualisations that ‘display the real time consumption statistics of key environmental resources for the goal of promoting ecological literacy’. EVs are commonly artifacts of interaction design, information design, interface design and industrial design, but are informed by various intellectual disciplines that have shared interests in sustainability. As a result of surveying a number of projects, Pierce, Odom and Blevis (2008) outline strategies for designing and evaluating effective EVs, including ‘connecting behavior to material impacts of consumption, encouraging playful engagement and exploration with energy, raising public awareness and facilitating discussion, and stimulating critical reflection.’ Similarly, Froehlich (2010) and his colleagues use the term ‘Eco-feedback technology’ to describe the same field. ‘Green IT’ is another variation, which Tomlinson (2010) describes as a ‘field at the juncture of two trends… the growing concern over environmental issues’ and ‘the use of digital tools and techniques for manipulating information.’ The ECOS Project team is guided by these principles but, more importantly, proposes an example of how these principles may be achieved. The ECOS Project presents a simplified interface to the very complex domain of thermodynamic and climate modeling. From a mathematical perspective, the simulation can be divided into two models, which interact and compete for balance – the comfort of ECOS’ virtual denizens and the ecological and environmental health of the virtual world. The comfort model is based on the study of psychrometrics, specifically as it relates to human comfort. This provides baseline micro-climatic values for what constitutes a comfortable working environment within the QUT SEC buildings. The difference between the ambient outside temperature (as determined by polling the Google Weather API for live weather data) and the internal thermostat of the building (as set by the user) allows us to estimate the energy required to either heat or cool the building. Once the energy requirements have been ascertained, this is then balanced against the ability of the building to produce enough power from green energy sources (solar, wind and gas) to cover its energy requirements. Calculating the relative amount of energy produced by wind and solar can be done by, in the case of solar for example, considering the size of the panel and the amount of solar radiation it is receiving at any given time, which in turn can be estimated from the temperature and conditions returned by the live weather API.
Some of these variables can be altered by the user, allowing them to attempt to optimize the health of the building. The variables that can be changed are the budget allocated to green energy sources, such as the solar panels and wind generator, and the air-conditioning setting that controls the internal building temperature. These variables influence the energy input and output values, modeled on the real energy usage statistics drawn from the SEC data provided by the building managers.
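As a rough illustration of the balance described above, the following Python sketch estimates heating/cooling demand from the gap between outside temperature and the thermostat setting and compares it with green generation. The coefficients, function names and readings are invented for demonstration and do not come from the ECOS implementation or the live weather feed.

```python
# Illustrative sketch of an ECOS-style energy balance (not the project's code).
# All coefficients and inputs are invented for demonstration.

def hvac_demand_kw(outside_temp_c, thermostat_c, kw_per_degree=2.5):
    """Energy needed to heat or cool toward the thermostat setting."""
    return abs(outside_temp_c - thermostat_c) * kw_per_degree

def solar_output_kw(panel_area_m2, irradiance_w_m2, efficiency=0.18):
    """Generation estimated from panel size and current solar radiation."""
    return panel_area_m2 * irradiance_w_m2 * efficiency / 1000.0

def building_balance_kw(outside_temp_c, thermostat_c, panel_area_m2,
                        irradiance_w_m2, wind_kw, gas_kw):
    """Positive result: green sources cover the building's demand; negative: shortfall."""
    demand = hvac_demand_kw(outside_temp_c, thermostat_c)
    supply = solar_output_kw(panel_area_m2, irradiance_w_m2) + wind_kw + gas_kw
    return supply - demand

# Hypothetical reading: a hot day with the thermostat set to 23 °C.
print(round(building_balance_kw(34.0, 23.0, panel_area_m2=120,
                                irradiance_w_m2=750, wind_kw=4.0, gas_kw=6.0), 1))
```

In the installation the user effectively moves the supply-side parameters (panel area, wind capacity) and the thermostat, and watches the sign of this balance.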
Abstract:
Spatial organisation of proteins according to their function plays an important role in the specificity of their molecular interactions. Emerging proteomics methods seek to assign proteins to sub-cellular locations by partial separation of organelles and computational analysis of protein abundance distributions among partially separated fractions. Such methods permit simultaneous analysis of unpurified organelles and promise proteome-wide localisation in scenarios wherein perturbation may prompt dynamic re-distribution. Resolving organelles that display similar behavior during a protocol designed to provide partial enrichment represents a possible shortcoming. We employ the Localisation of Organelle Proteins by Isotope Tagging (LOPIT) organelle proteomics platform to demonstrate that combining information from distinct separations of the same material can improve organelle resolution and assignment of proteins to sub-cellular locations. Two previously published experiments, whose distinct gradients are alone unable to fully resolve six known protein-organelle groupings, are subjected to a rigorous analysis to assess protein-organelle association via a contemporary pattern recognition algorithm. Upon straightforward combination of single-gradient data, we observe significant improvement in protein-organelle association via both a non-linear support vector machine algorithm and partial least-squares discriminant analysis. The outcome yields suggestions for further improvements to present organelle proteomics platforms, and a robust analytical methodology via which to associate proteins with sub-cellular organelles.
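For readers unfamiliar with this kind of analysis, here is a schematic Python sketch of the combined-gradient idea: per-protein abundance profiles from two separations are concatenated, and marker proteins of known location train a non-linear SVM that assigns the remaining proteins to organelles. The synthetic data, class counts and scikit-learn usage are illustrative assumptions, not the published LOPIT pipeline.

```python
# Schematic sketch (not the published pipeline): concatenate per-protein abundance
# profiles from two separate gradients, then classify against marker organelles
# with a non-linear SVM, mirroring the combined-gradient analysis described above.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_proteins, n_fractions = 200, 8

# Hypothetical distribution profiles from two independent separations.
gradient_a = rng.random((n_proteins, n_fractions))
gradient_b = rng.random((n_proteins, n_fractions))
combined = np.hstack([gradient_a, gradient_b])      # one row per protein

# Marker proteins with known organelle labels (hypothetical) train the classifier.
marker_idx = np.arange(60)
marker_labels = rng.integers(0, 6, size=60)          # six organelle classes

clf = SVC(kernel="rbf", probability=True).fit(combined[marker_idx], marker_labels)
predicted_organelle = clf.predict(combined)           # assignments for all proteins
```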
Abstract:
Background Foot ulcers are a leading cause of avoidable hospital admissions and lower extremity amputations. However, large clinical studies describing foot ulcer presentations in the ambulatory setting are limited. The aim of this descriptive observational paper is to report the characteristics of ambulatory foot ulcer patients managed across 13 of 17 Queensland Health & Hospital Services. Methods Data on all foot ulcer patients registered with a Queensland High Risk Foot Form (QHRFF) were collected at their first consult in 2012. Data are automatically extracted from each QHRFF into a Queensland high risk foot database. Descriptive statistics display age, sex, ulcer types and co-morbidities. Statewide clinical indicators of foot ulcer management are also reported. Results Overall, 2,034 people presented with a foot ulcer in 2012. Mean age was 63 (±14) years and 67.8% were male. Co-morbidities included diabetes (85%), hypertension (49.7%), dyslipidaemia (39.2%), cardiovascular disease (25.6%), kidney disease (13.7%) and smoking (12.2%). Foot ulcer types included neuropathic (51.6%), neuro-ischaemic (17.8%), ischaemic (7.2%), post-surgical (6.6%) and other (16.8%); 31% of ulcers were infected. Clinical indicator results revealed 98% had their wound categorised, 51% received non-removable offloading, median ulcer healing time was 6 weeks and 37% had ulcer recurrence. Conclusion This paper details the largest foot ulcer database reported in Australia. People presenting with foot ulcers appear to be predominantly older males with several co-morbidities. Encouragingly, it appears most patients are receiving best practice care. These results may be a factor in the significant reduction in Queensland diabetes foot-related hospitalisations and amputations recently reported.
Abstract:
Price-based techniques are one way to handle increases in peak demand and deal with voltage violations in residential distribution systems. This paper proposes an improved real-time pricing scheme for residential customers with a demand response option. Smart meters and in-home display units are used to broadcast the price and appropriate load adjustment signals. Customers are given an opportunity to respond to the signals and adjust their loads. This scheme helps distribution companies deal with overloading problems and voltage issues in a more efficient way. Also, variations in wholesale electricity prices are passed on to electricity customers, encouraging collective measures to reduce network peak demand. The scheme ensures that both customers and the utility benefit.
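To make the broadcast-and-respond loop concrete, here is a toy Python sketch in which the utility broadcasts a price and each customer sheds flexible load in proportion to how far the price exceeds a reference. The response rule, elasticity value and numbers are assumptions for illustration only, not the pricing or demand response algorithm proposed in the paper.

```python
# Toy sketch of a real-time-pricing loop (not the paper's algorithm).
# The utility broadcasts a price; each customer sheds a fraction of its
# flexible load in proportion to how far the price sits above a reference.

def customer_response(flexible_kw, base_kw, price, reference_price, elasticity=0.5):
    """Return the customer's adjusted demand after receiving the price signal."""
    overshoot = max(price - reference_price, 0.0) / reference_price
    shed = min(elasticity * overshoot, 1.0) * flexible_kw
    return base_kw + flexible_kw - shed

# Hypothetical feeder with three customers: (inflexible kW, flexible kW).
customers = [(1.2, 0.8), (0.9, 1.5), (2.0, 0.5)]
price, reference_price = 0.45, 0.30     # $/kWh, invented values

peak_before = sum(b + f for b, f in customers)
peak_after = sum(customer_response(f, b, price, reference_price) for b, f in customers)
print(round(peak_before, 2), round(peak_after, 2))   # demand falls when price is high
```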
Abstract:
Purpose Many contact lens (CL) manufacturers produce simultaneous-image lenses in which power varies either smoothly or discontinuously with zonal radius. We present in vitro measurements of some recent CLs and discuss how power profiles might be approximated in terms of nominal distance corrections and near additions, and their implications for on-eye visual performance. Methods Fully hydrated soft, simultaneous-image CLs from four manufacturers (Air Optix AQUA, Alcon; PureVision multifocal, Bausch & Lomb; Acuvue OASYS for Presbyopia, Vistakon; Biofinity multifocal "D" design, Cooper Vision) were measured with a Phase Focus Lens Profiler (Phase Focus Ltd., Sheffield, UK) in a wet cell, and powers were corrected to powers in air. All lenses had zero labeled power for distance. Results Sagittal power profiles revealed that the "low" add PureVision and Air Optix lenses exhibit smooth (parabolic) profiles, corresponding to negative spherical aberration. The "mid" and "high" add PureVision and Air Optix lenses have biaspheric designs, leading to different rates of power change for the central and peripheral portions. All OASYS lenses display a series of concentric zones, separated by abrupt discontinuities; individual profiles can be constrained between two parabolically decreasing curves, each giving a valid description of the power changes over alternate annular zones. Biofinity lenses have constant power over the central circular region of radius 1.5 mm, followed by an annular zone where the power increases approximately linearly, the gradient increasing with the add power, and finally an outer zone showing a slow, linear increase in power with a gradient that is almost independent of the add power. Conclusions The variation in power across the simultaneous-image lenses produces enhanced depth of focus. The through-focus nature of the image, which influences the "best focus" (distance correction) and the reading addition, will vary with several factors, including lens centration, the wearer's pupil diameter, and ocular aberrations, particularly spherical aberration; visual performance with some designs may show greater sensitivity to these factors.
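To illustrate the two broad profile shapes described (a smooth parabolic power change versus concentric zones with abrupt steps), the short Python sketch below evaluates generic sagittal power profiles. All radii, add powers and zone values are invented; they are not the measured coefficients of any of the lenses above.

```python
# Illustrative sketch of simultaneous-image power profiles (coefficients invented).
import numpy as np

def parabolic_profile(r_mm, distance_power_d=0.0, add_d=1.5, half_aperture_mm=3.0):
    """Centre-near design: power falls quadratically from (distance + add) at the
    centre to the distance power at the periphery, i.e. negative spherical aberration."""
    return distance_power_d + add_d * (1.0 - (r_mm / half_aperture_mm) ** 2)

def zoned_profile(r_mm, zone_edges_mm=(0.9, 1.6, 2.2, 2.8),
                  zone_powers_d=(0.0, 1.0, 0.25, 1.5, 0.5)):
    """Concentric-zone design: power is piecewise constant with abrupt discontinuities."""
    zone = np.searchsorted(zone_edges_mm, r_mm)
    return zone_powers_d[zone]

r = np.linspace(0.0, 3.0, 7)   # zonal radius in mm
print([round(float(parabolic_profile(x)), 2) for x in r])
print([round(float(zoned_profile(x)), 2) for x in r])
```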
Abstract:
An on-road study was conducted to evaluate the effect of a complementary tactile navigation signal on driving behaviour and eye movements for drivers with hearing loss (HL) compared with drivers with normal hearing (NH). Thirty-two participants (16 HL and 16 NH) performed two pre-programmed navigation tasks. In one, participants received only visual information, while the other also included a vibration in the seat to guide them in the correct direction. SMI glasses were used for eye tracking, recording the point of gaze within the scene. Analysis was performed on predefined regions. A questionnaire examined participants' experience of the navigation systems. Hearing loss was associated with lower speed, higher satisfaction with the tactile signal and more glances in the rear-view mirror. Additionally, tactile support led to less time spent viewing the navigation display.
Abstract:
Quality of experience (QoE) measures the overall perceived quality of mobile video delivery from subjective user experience and objective system performance. Current QoE computing models have two main limitations: 1) insufficient consideration of the factors influencing QoE; and 2) limited study of QoE models for acceptability prediction. In this paper, a set of novel acceptability-based QoE models, denoted as A-QoE, is proposed based on the results of comprehensive user studies on subjective quality acceptance assessments. The models are able to predict users’ acceptability and pleasantness in various mobile video usage scenarios. Statistical regression analysis has been used to build the models with a group of influencing factors as independent predictors, including encoding parameters and bitrate, video content characteristics, and mobile device display resolution. The performance of the proposed A-QoE models has been compared with three well-known objective Video Quality Assessment metrics: PSNR, SSIM and VQM. The proposed A-QoE models have high prediction accuracy and usage flexibility. Future user-centred mobile video delivery systems can benefit from applying the proposed QoE-based management to optimize video coding and quality delivery decisions.
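As an illustration of the regression-based approach (not the published A-QoE models, whose functional form and coefficients are not given here), the Python sketch below fits a logistic regression that maps encoding, content and display features to an acceptability probability. The features and data are synthetic stand-ins for the predictors listed in the abstract.

```python
# Minimal sketch (not the published A-QoE model): predict acceptability from
# encoding, content and device features with logistic regression on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
bitrate_kbps = rng.uniform(100, 4000, n)
frame_rate = rng.choice([15, 25, 30], n)
display_height_px = rng.choice([480, 720, 1080], n)
motion_level = rng.uniform(0, 1, n)                 # stand-in for content characteristics

X = np.column_stack([np.log(bitrate_kbps), frame_rate, display_height_px, motion_level])
# Synthetic ground truth: higher bitrate and larger displays are more often acceptable.
accept = (np.log(bitrate_kbps) + 0.002 * display_height_px - 0.5 * motion_level
          + rng.normal(0, 0.5, n)) > 7.0

model = LogisticRegression(max_iter=1000).fit(X, accept)
# Probability that a 1500 kbps, 25 fps clip on a 720 px display is acceptable.
print(model.predict_proba([[np.log(1500), 25, 720, 0.4]])[0, 1])
```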
Abstract:
Technological advances have led to an ongoing spread of public displays in urban areas. However, they still mostly show passive content such as commercials and digital signage. Researchers took notice of their potential to spark situated civic discourse in public space and have begun working on interactive public display applications. Attracting people’s attention and providing a low barrier for user participation have been identified as major challenges in their design. This thesis presents Vote With Your Feet, a hyperlocal public polling tool for urban screens allowing users to express their opinions. Similar to vox populi interviews on TV or polls on news websites, the tool is meant to reflect the mindset of the community on topics such as current affairs, cultural identity and local matters. It shows one Yes/No question at a time and enables users to vote by stepping on one of two tangible buttons on the ground. This user interface was introduced to attract people’s attention and to lower participation barriers. Vote With Your Feet was informed by a user-centred design approach that included a focus group, expert interviews and extensive preliminary user studies in the wild. Deployed at a bus stop, Vote With Your Feet was evaluated in a field study over the course of several days. Observations of people and interviews with 30 participants revealed that the novel interaction technology was perceived as inviting and that Vote With Your Feet can spark discussions among co-located people.
Abstract:
For the last seventy-five years Grafton has celebrated the Jacaranda Festival in late October. The festival commences in the town square with the crowning of the Jacaranda Queen and ends a week later with a parade through the town. The event is now a major regional tourist attraction that aims to bring locals and visitors together to celebrate everything purple. During this week one can attend the jacaranda children's party, the jacaranda maypole dancing, the jacaranda choral service or the jacaranda organ recital. Local businesses are encouraged to compete in the decorated window displays competition and everyone can join in the procession. The festival pays homage to the extraordinary display of beautiful jacaranda blooms which carpet the city during this time. The festival was inaugurated in 1935 when the slow growing jacarandas planted in the late nineteenth and early twentieth centuries were coming to maturity...