Abstract:
Biased estimation has the advantage of reducing the mean squared error (MSE) of an estimator. The question of interest is how biased estimation affects model selection. In this paper, we introduce biased estimation to a range of model selection criteria. Specifically, we analyze the performance of the minimum description length (MDL) criterion based on biased and unbiased estimation and compare it against modern model selection criteria such as Kay's conditional model order estimator (CME), the bootstrap and the more recently proposed hook-and-loop resampling-based model selection. The advantages and limitations of the considered techniques are discussed. The results indicate that, in some cases, biased estimators can slightly improve the selection of the correct model. We also give an example for which the CME with an unbiased estimator fails but regains its power when a biased estimator is used.
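The MSE advantage of biased estimation described above is easy to demonstrate in isolation. The following sketch (not from the paper; the estimator choice and parameters are illustrative) compares the unbiased sample variance (dividing by n−1) with a deliberately biased, shrunk version (dividing by n+1), which is known to minimise MSE for normally distributed data:

```python
import random

def mse_of_variance_estimators(n=10, true_var=4.0, trials=20000, seed=1):
    # Monte Carlo comparison of two variance estimators on normal data:
    # the unbiased one (sum of squares / (n-1)) versus a biased, shrunk
    # one (sum of squares / (n+1)) that trades bias for lower variance.
    rng = random.Random(seed)
    se_unbiased = se_biased = 0.0
    for _ in range(trials):
        xs = [rng.gauss(0.0, true_var ** 0.5) for _ in range(n)]
        m = sum(xs) / n
        ss = sum((x - m) ** 2 for x in xs)
        se_unbiased += (ss / (n - 1) - true_var) ** 2
        se_biased += (ss / (n + 1) - true_var) ** 2
    return se_unbiased / trials, se_biased / trials
```

For small n the biased estimator's MSE is noticeably lower (theoretically 2σ⁴/(n+1) versus 2σ⁴/(n−1)), which is the effect the paper feeds into the model selection criteria.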
Abstract:
Most online assessment systems now incorporate social networking features, and recent developments in social media spaces include protocols that allow the synchronisation and aggregation of data across multiple user profiles. In light of these advances, and the concomitant fear of data sharing in secondary school education, this paper provides important research findings about generic features of online social networking, which educators can use to make sound and efficient assessments in collaboration with their students and colleagues. This paper reports on a design experiment in flexible educational settings that challenges the dichotomous legacy of success and failure evident in many assessment activities for at-risk youth. Combining social networking practices with the sociology of education, the paper proposes that assessment activities are best understood as a negotiable field of exchange. In this design experiment students, peers and educators engage in explicit, "front-end" assessment (Wyatt-Smith, 2008) to translate digital artefacts into institutional, and potentially economic, capital without continually referring to paper-based pre-set criteria. This approach invites students and educators to use social networking functions to assess “work in progress” and final submissions in collaboration, and in doing so assessors refine their evaluative expertise and negotiate the value of students’ work, from which new criteria can emerge. The mobile advantages of web-based technologies aggregate, externalise and democratise this transparent assessment model for most, if not all, student work that can be digitally represented.
Abstract:
Purpose: To investigate speed regulation during overground running on undulating terrain. Methods: Following an initial laboratory session to calculate physiological thresholds, eight experienced runners completed a spontaneously paced time trial over 3 laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Results: Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners’ individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections. 89% of group-level speed was predicted using a modified gradient factor. Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Conclusions: Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners’ speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds. Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising pace on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain.
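The idea of a gradient factor weighted over prior and current gradients can be sketched as follows. This is a hypothetical reconstruction, not the study's actual model: the exponential carry-over, its decay constant and the per-gradient speed factors are illustrative stand-ins for the reported effects (~23% slower uphill, ~13.8% faster downhill, and a post-uphill influence lasting on the order of a minute):

```python
import math

def weighted_gradient(current_grad, prior_grad, seconds_since_change, decay_s=40.0):
    # Hypothetical weighting: the influence of the gradient just completed
    # decays exponentially after a transition, blending into the current one.
    carry = math.exp(-seconds_since_change / decay_s)
    return (1 - carry) * current_grad + carry * prior_grad

def predicted_speed(level_speed, grad_w):
    # Map a normalised weighted gradient in [-1, 1] to speed using the
    # group-level factors from the abstract (illustrative scaling only):
    # positive gradient slows the runner, negative gradient speeds them up.
    if grad_w > 0:
        return level_speed * (1 - 0.23 * min(grad_w, 1.0))
    return level_speed * (1 + 0.138 * min(-grad_w, 1.0))
```

Under this sketch, level-section speed shortly after an uphill is predicted to be lower than level speed long after the transition, reproducing the carry-over effect the study reports.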
Abstract:
Purpose: Multi-level diode-clamped inverters face the challenge of capacitor voltage balancing when the number of DC-link capacitors is three or more. On the other hand, asymmetrical DC-link voltage sources have been applied to increase the number of voltage levels without increasing the number of switches. The purpose of this paper is to show that an appropriate multi-output DC-DC converter can resolve the problem of capacitor voltage balancing and exploit the advantages of asymmetrical DC-link voltages. Design/methodology/approach: A family of multi-output DC-DC converters is presented in this paper. The application of these converters is to convert the output voltage of a photovoltaic (PV) panel to regulate the DC-link voltages of an asymmetrical four-level diode-clamped inverter intended for domestic applications. To verify the versatility of the presented topology, simulations have been conducted for different situations and results are presented. Related experiments have been carried out to examine the capabilities of the proposed converters. Findings: The three-output voltage-sharing converters presented in this paper have been mathematically analysed and shown to be appropriate for improving the quality of residential PV applications by means of a four-level asymmetrical diode-clamped inverter supplying highly resistive loads. Originality/value: This paper shows that an appropriate multi-output DC-DC converter can resolve the problem of capacitor voltage balancing and exploit the advantages of asymmetrical DC-link voltages, and that operation at high modulation index is possible despite reference voltage magnitude and power factor variations.
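The level-count benefit of asymmetrical DC links can be illustrated numerically. Assuming an idealised four-level diode-clamped leg whose pole voltage steps through the cumulative DC-link capacitor voltages (a simplification, not the paper's actual topology; the voltage ratios below are illustrative):

```python
from itertools import accumulate

def pole_voltage_levels(link_voltages):
    # Reachable pole voltages of one inverter leg: zero plus the running
    # sums of the DC-link capacitor voltages, bottom to top of the stack.
    return [0.0] + [float(v) for v in accumulate(link_voltages)]

def distinct_line_to_line_levels(link_voltages):
    # Line-to-line voltage between two identical legs can take any
    # pairwise difference of pole levels; asymmetrical capacitor voltages
    # yield more distinct values from the same number of switches.
    poles = pole_voltage_levels(link_voltages)
    return sorted({a - b for a in poles for b in poles})
```

With a symmetric stack of three 100 V capacitors the line-to-line waveform has 7 distinct levels, whereas an asymmetric 100/200/400 V stack gives 13 levels with the same switch count, which is the motivation for regulating asymmetrical DC-link voltages in the first place.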
Abstract:
This paper firstly presents an extended ambiguity resolution model that deals with an ill-posed problem and constraints among the estimated parameters. In the extended model, the regularization criterion is used instead of traditional least squares in order to estimate the float ambiguities better. The existing models can be derived from the general model. Secondly, the paper examines the existing ambiguity searching methods from four aspects: exclusion of nuisance integer candidates based on the available integer constraints; integer rounding; integer bootstrapping; and integer least squares estimation. Finally, this paper systematically addresses the similarities and differences between the generalized TCAR (three-carrier ambiguity resolution) and decorrelation methods from both theoretical and practical aspects.
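The contrast between the simplest two searching methods can be shown with a toy sketch. Integer rounding rounds each float ambiguity independently, ignoring their correlations, while integer bootstrapping (sequential conditional rounding, in the style of Teunissen's formulation) conditions each decision on the ambiguities already fixed. The L matrix below stands for the unit lower-triangular factor of the ambiguity covariance Q = L·D·Lᵀ; all values are illustrative:

```python
def integer_rounding(a_float):
    # Component-wise rounding: ignores correlations between ambiguities.
    return [round(a) for a in a_float]

def integer_bootstrapping(a_float, L):
    # Sequential conditional rounding: fix the last ambiguity first, then
    # adjust the remaining float values using the correlation structure
    # encoded in the unit lower-triangular factor L before rounding them.
    n = len(a_float)
    z = [0] * n
    cond = list(a_float)
    for i in range(n - 1, -1, -1):
        z[i] = round(cond[i])
        for k in range(i):
            # condition earlier components on the residual of this fix
            cond[k] -= L[i][k] * (cond[i] - z[i])
    return z
```

With uncorrelated ambiguities (L = identity) bootstrapping reduces to plain rounding; with correlation present the two can fix different integers, which is why decorrelation before searching matters.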
Abstract:
User-based intelligent systems are already commonplace in a student’s online digital life. Each time they browse, search, buy, join, comment, play, travel, upload or download, a system collects, analyses and processes data in an effort to customise content and further improve services. This panel session will explore how intelligent systems, particularly those that gather data from mobile devices, can offer new possibilities to assist in the delivery of customised, personal and engaging learning experiences. The value of intelligent systems for education lies in their ability to formulate authentic and complex learner profiles that bring together and systematically integrate a student’s personal world with a formal curriculum framework. As we well know, a mobile device can collect data relating to a student’s interests (gathered from search history, applications and communications), location, surroundings and proximity to others (GPS, Bluetooth). However, what has been less explored is the opportunity for a mobile device to map the movements and activities of a student from moment to moment and over time. This longitudinal data provides a holistic profile of a student, their state and surroundings. Analysing this data may allow us to identify patterns that reveal a student’s learning processes: when and where they work best and for how long. Through revealing a student’s state and surroundings outside of school hours, this longitudinal data may also highlight opportunities to transform a student’s everyday world into an inventory for learning, punctuating their surroundings with learning recommendations. This would in turn lead to new ways to acknowledge, validate and foster informal learning, making it legitimate within a formal curriculum.
Abstract:
Plants have been identified as promising expression systems for the commercial production of recombinant proteins. Plant-based protein production or “biofarming” offers a number of advantages over traditional expression systems in terms of scale of production, the capacity for post-translational processing, provision of a product free of contaminants, and cost effectiveness. A number of pharmaceutically important and commercially valuable proteins, such as antibodies, biopharmaceuticals and industrial enzymes, are currently being produced in plant expression systems. However, several challenges still remain to improve recombinant protein yield with no ill effect on the host plant. The ability of transgenic plants to produce foreign proteins at commercially viable levels can be directly related to the level and cell specificity of the selected promoter driving the transgene. The accumulation of recombinant proteins may be controlled by a tissue-specific, developmentally-regulated or chemically-inducible promoter, such that expression of recombinant proteins can be spatially or temporally controlled. The strict control of gene expression is particularly useful for proteins that are considered toxic and whose expression is likely to have a detrimental effect on plant growth. To date, the most commonly used promoter in plant biotechnology is the cauliflower mosaic virus (CaMV) 35S promoter, which is used to drive strong, constitutive transgene expression in most organs of transgenic plants. Of particular interest to researchers in the Centre for Tropical Crops and Biocommodities at QUT are tissue-specific promoters for the accumulation of foreign proteins in the roots, seeds and fruit of various plant species, including tobacco, banana and sugarcane. Therefore this Masters project aimed to isolate and characterise root- and seed-specific promoters for the control of genes encoding recombinant proteins in plant-based expression systems.
Additionally, the effects of matching cognate terminators with their respective gene promoters were assessed. The Arabidopsis root promoters ARSK1 and EIR1 were selected from the literature based on their reported limited root expression profiles. Both promoters were analysed using the PlantCARE database to identify putative motifs or cis-acting elements that may be associated with this activity. A number of motifs were identified in the ARSK1 promoter region, including WUN (wound-inducible), MBS (MYB binding site), Skn-1 and a RY core element (seed-specific), and in the EIR1 promoter region, including Skn-1 (seed-specific), Box-W1 (fungal elicitor), Aux-RR core (auxin response) and ABRE (ABA response). However, no previously reported root-specific cis-acting elements were observed in either promoter region. To confirm root specificity, both promoters, and truncated versions, were fused to the GUS reporter gene and the expression cassettes introduced into Arabidopsis via Agrobacterium-mediated transformation. Despite the reported tissue-specific nature of these promoters, both upstream regulatory regions directed constitutive GUS expression in all transgenic plants. Further, the ARSK1 promoter directed GUS expression at levels similar to those from the control CaMV 35S promoter. The truncated version of the EIR1 promoter (1.2 kb) showed some differences in the level of GUS expression compared with the full 2.2 kb promoter, suggesting that an enhancer element contained in the 2.2 kb upstream region increases transgene expression. The Arabidopsis seed-specific genes ATS1 and ATS3 were selected from the literature based on their seed-specific expression profiles, and their seed-specific expression was confirmed in this study by RT-PCR analysis. The selected promoter regions were analysed using the PlantCARE database in order to identify any putative cis elements.
The seed-specific motifs GCN4 and Skn-1, which are associated with elevated expression levels in the endosperm, were identified in both promoter regions. Additionally, the seed-specific RY element and the ABRE were located in the ATS1 promoter. Both promoters were fused to the GUS reporter gene and used to transform Arabidopsis plants. GUS expression from the putative promoters was constitutive in all transgenic Arabidopsis tissue tested. Importantly, the positive control FAE1 seed-specific promoter also directed constitutive GUS expression throughout transgenic Arabidopsis plants. The constitutive nature seen in all of the promoters used in this study was not anticipated. While variations in promoter activity can be caused by a number of influencing factors, the variation in promoter activity observed here would imply a major contributing factor common to all plant expression cassettes tested. All promoter constructs generated in this study were based on the binary vector pCAMBIA2300. This vector contains the plant selection gene (NPTII) under the transcriptional control of the duplicated CaMV 35S promoter. This CaMV 35S promoter contains two enhancer domains that confer strong, constitutive expression of the selection gene, and is located immediately upstream of the promoter-GUS fusion. During the course of this project, Yoo et al. (2005) reported that transgene expression is significantly affected when the expression cassette is located on the same T-DNA as the 35S enhancer. It was concluded that the trans-acting effects of the enhancer activate and control transgene expression, causing irregular expression patterns. This phenomenon seems the most plausible reason for the constitutive expression profiles observed with the root- and seed-specific promoters assessed in this study. The expression from some promoters can be influenced by their cognate terminator sequences.
Therefore, the Arabidopsis ARSK1, EIR1, ATS1 and ATS3 terminator sequences were isolated and incorporated into expression cassettes containing the GUS reporter gene under the control of their cognate promoters. Again, unrestricted GUS activity was displayed throughout transgenic plants transformed with these reporter gene fusions. As previously discussed, constitutive GUS expression was most likely due to the trans-acting effect of the upstream CaMV 35S promoter in the selection cassette located on the same T-DNA. The results obtained in this study make it impossible to assess the influence that matching terminators with their cognate promoters has on transgene expression profiles. The obvious future direction of research continuing from this study would be to transform pBIN-based promoter-GUS fusions (i.e. constructs containing no CaMV 35S promoter driving the plant selection gene) into Arabidopsis in order to determine the true tissue specificity of these promoters and evaluate the effects of their cognate 3’ terminator sequences. Further, promoter truncations based around the cis-elements identified here may assist in determining whether these motifs are in fact involved in the overall activity of the promoter.
Abstract:
The following exegesis will detail the key advantages and disadvantages of combining a traditional talk show genre with a linear documentary format using a small production team and a limited budget in a fast-turnaround weekly environment. It will deal with the Australian Broadcasting Corporation series Talking Heads, broadcast weekly in the early evening schedule for the network at 18.30 with the presenter Peter Thompson. As Executive Producer for the programme at its inception I was responsible for setting it up for the ABC in Brisbane, a role that included selecting most of the team to work on the series and commissioning the music, titles and all other aspects required to bring the show to the screen. What emerged when producing this generic hybrid will be examined at length, including:
- The talk show/documentary hybrid format needs longer than 26′30″ to be entirely successful.
- The type of presenter ideally suited to the talk show/documentary format is someone who is genuinely interested in their guests and flexible enough to maintain the format against tangential odds.
- The use of illustrative footage shot in a documentary-style narrative improves the talk show format.
- The fast turnaround of the talk show/documentary hybrid puts tremendous pressure on the time frames for archive research and copyright clearance and therefore needs to be well-resourced.
- In a fast-turnaround talk show/documentary format the field components are advantageous but require very low shooting ratios to be sustainable.
- An intimate set works best for a talk show hybrid like this.
Also submitted are two DVDs of recordings of programmes I produced and directed from the first and third series. These are for consideration in the practical component of this project and reflect the changes that I made to the series.
Abstract:
Wideband frequency synthesisers have application in many areas, including test instrumentation and defence electronics. Miniaturisation of these devices provides many advantages to system designers, particularly in applications where extra space and weight are expensive. The purpose of this project was to miniaturise a wideband frequency synthesiser and package it for operation in several different environmental conditions while satisfying demanding technical specifications. The four primary and secondary goals to be achieved were: 1. an operating frequency range from low MHz to greater than 40 GHz, with resolution better than 1 MHz; 2. typical RF output power of +10 dBm, with maximum DC supply of 15 W; 3. a synthesiser package of only 150 × 100 × 30 mm; and 4. operating temperatures from −20°C to +71°C, and vibration levels over 7 g RMS. This task was approached from multiple angles. Electrically, the system is designed to have as few functional blocks as possible. Off-the-shelf components are used for active functions instead of customised circuits. Mechanically, the synthesiser package is designed for efficient use of the available space. Two identical prototype synthesisers were manufactured to evaluate the design methodology and to show the repeatability of the design. Although further engineering development will improve the synthesiser’s performance, this project has successfully demonstrated a level of miniaturisation which sets a new benchmark for wideband synthesiser design. These synthesisers will meet the demands for smaller, lighter wideband sources. Potential applications include portable test equipment, and radar and electronic surveillance systems on unmanned aerial vehicles. They are also useful for reducing the overall weight and power consumption of other systems, even if small dimensions are not essential.
Abstract:
This project aims to develop a methodology for designing and conducting a systems engineering analysis of an airplane able to fly continuously, day and night, propelled solely by solar energy, for one week, carrying a 0.25 kg payload consuming 0.5 W, without fuel or pollution. An airplane able to fly autonomously for many days could find many applications, including coastal or border surveillance; atmospheric and weather research and prediction; environmental, forestry, agricultural and oceanic monitoring; and imaging for the media and real-estate industries. Additional advantages of solar airplanes are their low cost and the simplicity with which they can be launched. For example, in the case of potential forest fire risks during a warm and dry period, swarms of solar airplanes, easily launched by hand, could efficiently monitor a large area, rapidly reporting any fire starts. This would allow a fast intervention and thus reduce the cost of such disasters in terms of human and material losses. At a larger scale, solar HALE (high-altitude long-endurance) platforms are expected to play a major role as communication relays and could advantageously replace satellites in the near future.
Abstract:
The dynamic interaction between building systems and the external climate is extremely complex, involving a large number of difficult-to-predict variables. In order to study the impact of climate change on the built environment, the use of building simulation techniques together with forecast weather data is often necessary. Since most building simulation programs require hourly meteorological input data for their thermal comfort and energy evaluation, the provision of suitable weather data becomes critical. In this paper, the methods used to prepare future weather data for the study of the impact of climate change are reviewed. The advantages and disadvantages of each method are discussed, and the inherent relationship between these methods is illustrated. Based on these discussions and the analysis of Australian historic climatic data, an effective framework and procedure to generate future hourly weather data is presented. It is shown that this method is not only able to deal with different levels of available information regarding climate change, but also can retain the key characteristics of “typical” year weather data for a desired period.
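One widely reviewed approach in this context is "morphing" of historical weather files (shift and stretch, in the style of Belcher, Hacker and Powell, 2005). The sketch below, for hourly dry-bulb temperature, is illustrative only (the function and parameter names are assumptions, not the paper's own procedure): each historical hour is shifted by the projected monthly change and its deviation from the monthly mean is stretched, so the diurnal pattern of a "typical" year is retained while the climate signal is imposed.

```python
def morph_temperature(hourly_temps, monthly_mean, delta_t, alpha_t):
    # Morphing: shift each historical hourly value by the projected
    # monthly change (delta_t) and stretch its deviation from the
    # historical monthly mean by a scaling factor (alpha_t).
    return [t + delta_t + alpha_t * (t - monthly_mean) for t in hourly_temps]
```

For example, with a +2°C monthly shift and a 10% stretch about a 15°C monthly mean, hours at 10°C and 20°C morph to 11.5°C and 22.5°C respectively: the mean warms, and the diurnal swing widens slightly.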
Abstract:
The purpose of this chapter is to provide an overview of the development and use of clinical guidelines as a tool for decision making in clinical practice. Nurses have always developed and used tools to guide clinical decision making related to interventions in practice. Since Florence Nightingale (Nightingale 1860) gave us ‘notes’ on nursing in the late 1800s, nurses have continued to use tools, such as standards, policies and procedures, protocols, algorithms, clinical pathways and clinical guidelines, to assist them in making appropriate decisions about patient care that eventuate in the best desired patient outcomes. Clinical guidelines have enjoyed growing popularity as a comprehensive tool for synthesising clinical evidence and information into user-friendly recommendations for practice. Historically, clinical guidelines were developed by individual experts or groups of experts by consensus, with no transparent process for the user to determine the validity and reliability of the recommendations. The acceptance of the evidence-based practice (EBP) movement as a paradigm for clinical decision making underscores the imperative for clinical guidelines to be systematically developed and based on the best available research evidence. Clinicians are faced with the dilemma of choosing from an abundance of guidelines of variable quality, or developing new guidelines. Where do you start? How do you find an existing guideline to fit your practice? How do you know if a guideline is evidence-based, valid and reliable? Should you apply an existing guideline in your practice or develop a new guideline? How do you get clinicians to use the guidelines? How do you know if using the guideline will make any difference in care delivery or patient outcomes? Whatever the choice, the challenge lies in choosing or developing a clinical guideline that is credible as a decision-making tool for the delivery of quality, efficient and effective care. 
This chapter will address the posed questions through an exploration of the ins and outs of clinical guidelines, from development to application to evaluation.
Abstract:
We investigated the relative importance of vision and proprioception in estimating target and hand locations in a dynamic environment. Subjects performed a position estimation task in which a target moved horizontally on a screen at a constant velocity and then disappeared. They were asked to estimate the position of the invisible target under two conditions: passively observing and manually tracking. The tracking trials included three visual conditions with a cursor representing the hand position: always visible, disappearing simultaneously with target disappearance, and always invisible. The target’s invisible displacement was systematically underestimated during passive observation. In active conditions, tracking with the visible cursor significantly decreased the extent of underestimation. Tracking of the invisible target became much more accurate under this condition and was not affected by cursor disappearance. In a second experiment, subjects were asked to judge the position of their unseen hand instead of the target during tracking movements. Invisible hand displacements were also underestimated when compared with the actual displacement. Continuous or brief presentation of the cursor reduced the extent of underestimation. These results suggest that vision–proprioception interactions are critical for representing exact target–hand spatial relationships, and that such sensorimotor representation of hand kinematics serves a cognitive function in predicting target position. We propose a hypothesis that the central nervous system can utilize information derived from proprioception and/or efference copy for sensorimotor prediction of dynamic target and hand positions, but that effective use of this information for conscious estimation requires that it be presented in a form that corresponds to that used for the estimations.
Abstract:
The purpose of this study was to evaluate the comparative cost of treating alcohol dependence with either cognitive behavioral therapy (CBT) alone or CBT combined with naltrexone (CBT+naltrexone). Two hundred ninety-eight outpatients consecutively treated for alcohol dependence participated in this study. One hundred seven (36%) patients received adjunctive pharmacotherapy (CBT+naltrexone). The Drug Abuse Treatment Cost Analysis Program was used to estimate treatment costs. Adjunctive pharmacotherapy (CBT+naltrexone) introduced an additional treatment cost and was 54% more expensive than CBT alone. When treatment abstinence rates (36.1% CBT; 62.6% CBT+naltrexone) were applied to cost-effectiveness ratios, CBT+naltrexone demonstrated an advantage over CBT alone. There were no differences between groups on a preference-based health measure (SF-6D). In this treatment center, to achieve 100 abstainers over a 12-week program, 280 patients require CBT compared with 160 for CBT+naltrexone. The dominant choice was CBT+naltrexone, based on modest economic advantages and significant efficiencies in the numbers needed to treat.
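The numbers-needed-to-treat comparison is simple arithmetic and can be approximately reproduced from the reported abstinence rates. This is a back-of-envelope sketch: the 54% cost premium and the abstinence rates come from the abstract, while the rounding convention is an assumption.

```python
import math

def patients_needed(target_abstainers, abstinence_rate):
    # Patients to treat so the expected number of abstainers reaches the target.
    return math.ceil(target_abstainers / abstinence_rate)

def relative_cost_per_100_abstainers(cost_ratio=1.54):
    # Total-cost ratio of CBT+naltrexone versus CBT alone for 100 abstainers,
    # using the reported abstinence rates (62.6% vs 36.1%) and the reported
    # 54% per-patient cost premium for the combined treatment.
    n_combo = patients_needed(100, 0.626)
    n_cbt = patients_needed(100, 0.361)
    return (cost_ratio * n_combo) / n_cbt
```

This yields 160 patients for CBT+naltrexone and about 278 for CBT alone (the abstract reports 280, presumably rounded), and a total-cost ratio below 1, consistent with CBT+naltrexone being the dominant choice despite its higher per-patient cost.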
Abstract:
Principal Topic: Nascent entrepreneurship has drawn the attention of scholars in the last few years (Davidsson, 2006; Wagner, 2004). However, most studies have asked why firms are created, focussing on questions such as what are the characteristics (Delmar & Davidsson, 2000) and motivations (Carter, Gartner, Shaver & Reynolds, 2004) of nascent entrepreneurs, or what are the success factors in venture creation (Davidsson & Honig, 2003; Delmar & Shane, 2004). In contrast, the question of how companies emerge is still in its infancy. On the theoretical side, effectuation, developed by Sarasvathy (2001), offers one view of the strategies that may be at work during the venture creation process. Causation, the theorized inverse of effectuation, may be described as a rational reasoning method to create a company: after a comprehensive market analysis to discover opportunities, the entrepreneur will select the alternative with the highest expected return and implement it through the use of a business plan. In contrast, effectuation suggests that the future entrepreneur will develop her new venture in a more iterative way, selecting possibilities through flexibility and interaction with the market, affordability of loss of resources and time invested, and development of pre-commitments and alliances with stakeholders. Another contrasting point is that causation is ''goal driven'' while an effectual approach is ''means driven'' (Sarasvathy, 2001). One of the predictions of effectuation theory is that effectuation is more likely to be used by entrepreneurs early in the venture creation process (Sarasvathy, 2001). However, this temporal aspect and the impact of effectuation strategies on venture outcomes have so far not been systematically and empirically tested on large samples. The reason behind this research gap is twofold. Firstly, few studies collect longitudinal data on emerging ventures at an early enough stage of development to avoid severe survivor bias.
Second, the studies that collect such data have not included validated measures of effectuation. The research we are conducting attempts to partially fill this gap by combining an empirical investigation of a large sample of nascent and young firms with the effectuation/causation continuum as a basis (Sarasvathy, 2001). The objectives are to understand the strategies used by the firms during the creation process and to measure their impact on firm outcomes. Methodology/Key Propositions: This study draws its data from the first wave of the CAUSEE project, in which 28,383 Australian households were randomly contacted by phone using a specific methodology to capture emerging firms (Davidsson, Steffens, Gordon & Reynolds, 2008). This screening led to the identification of 594 nascent ventures (i.e., firms that are not operating yet) and 514 young firms (i.e., firms that have started operating since 2004) that were willing to participate in the study. Comprehensive phone interviews were conducted with these 1,108 ventures. In a likewise comprehensive follow-up 12 months later, 80% of the eligible cases completed the interview. The questionnaire contains specific sections designed to distinguish effectual and causal processes, innovation, gestation activities, business idea changes and venture outcomes. The effectuation questions are based on the components of effectuation strategy as described by Sarasvathy (2001), namely flexibility, affordable loss and pre-commitment from stakeholders. Results from two rounds of pre-testing informed the design of the instrument included in the main survey. The first two waves of data will be used to test and compare the use of effectuation in the venture creation process. To increase the robustness of the results, the temporal use of effectuation will be tested both directly and indirectly: 1.
By comparing the use of effectuation in nascent and young firms from wave 1 to wave 2, we will be able to find out how effectuation is affected by time over a 12-month period and whether the stage of venture development has an impact on its use. 2. By comparing nascent ventures early in the creation process with nascent ventures late in the creation process. Early versus late can be determined with the help of time-stamped gestation activity questions included in the survey. This will help us to determine change on a small time scale during the creation phase of the venture. 3. By comparing nascent firms to young (already operational) firms. 4. By comparing young firms becoming operational in 2006 with those first becoming operational in 2004. Results and Implications: Data collection for waves 1 and 2 has been completed, and wave 2 is currently being checked and 'cleaned'. Analysis work will commence in September 2009. This paper is expected to contribute to the body of knowledge on effectuation by quantitatively measuring its use and its impact on the activities of nascent and young firms at different stages of their development. In addition, this study will also increase the understanding of the venture creation process by comparing, over time, nascent and young firms from a large sample of randomly selected ventures. We acknowledge that the results from this study will be preliminary and will have to be interpreted with caution, as the changes identified may be due to several factors and may not only be attributed to the use or non-use of effectuation. Meanwhile, we believe that this study is important to the field of entrepreneurship as it provides some much-needed insights into the processes used by nascent and young firms during their creation and early operating stages.