Abstract:
Purpose: Increased physical activity in colorectal cancer patients is related to improved recurrence-free and overall survival. Psychological distress after cancer may place patients at risk of reduced physical activity but may, paradoxically, also act as a motivator for positive lifestyle change. The relationship between psychological distress and physical activity over time after cancer has not been described. Methods: A prospective survey of 1966 colorectal cancer survivors (57% response) assessed the psychological distress variables of anxiety, depression, somatisation and cancer threat appraisal as predictors of physical activity at 5, 12, 24 and 36 months post-diagnosis; 978 respondents had valid data for all time points. Results: Higher somatisation was associated with greater physical inactivity (relative risk ratio (RRR) = 1.12; 95% CI [1.1, 1.2]) and insufficient physical activity (RRR = 1.05; [0.90, 1.0]). Respondents with a more positive appraisal of their cancer were significantly (p = 0.031) less likely to be inactive (RRR = 0.95; [0.90, 1.0]) or insufficiently active (RRR = 0.96). Fatigued and obese respondents and current smokers were more likely to be inactive. Respondents whose somatisation increased between two time periods were less likely to increase their physical activity over the same period (p < 0.001). Respondents with higher anxiety at one time period were less likely to have increased their activity by the next assessment (p = 0.004). There was no association between depression and physical activity. Conclusions: Cancer survivors who experience somatisation and anxiety are at greater risk of physical inactivity. The lack of a clear relationship between higher psychological distress and increasing physical activity argues against distress acting as a motivator to exercise in these patients.
Abstract:
The epidemic of obesity is affecting an increasing proportion of children, adolescents and adults, with a common feature being low levels of physical activity (PA). Despite having more knowledge than ever before about the benefits of PA for health and the growth and development of youngsters, we are only paying lip-service to the development of motor skills in children. Fun, enjoyment and basic skills are the essential underpinnings of meaningful participation in PA. A concurrent problem is the reported increase in sitting time, with the most common sedentary behaviors being TV viewing and other screen-based games. Time constraints have contributed to the displacement of active behaviors by inactive pursuits, reducing activity energy expenditure. To redress the energy imbalance in overweight and obese children, we urgently need out-of-the-box, multisectoral solutions. There is little to be gained from a shame-and-blame mentality in which individuals, their parents, teachers and other groups are singled out as causes of the problem. Such an approach does little more than shift attention from the main game of prevention and management of the condition, which requires a concerted, whole-of-government approach in each country. Failure to support and encourage all young people to participate in regular PA will increase the chance that our children will live shorter and less healthy lives than their parents. In short, we need novel environmental approaches to foster a systematic increase in PA. This paper provides examples of opportunities and challenges for PA strategies to prevent obesity, with a particular emphasis on the school and home settings.
Abstract:
Purpose: The Australian Women’s Activity Survey (AWAS) was developed based on a systematic review and qualitative research on how to measure the activity patterns of women with young children (WYC). AWAS assesses activity performed across five domains (planned activities, employment, childcare, domestic responsibilities and transport) and five intensity levels (sitting, light intensity, brisk walking, moderate intensity and vigorous intensity) in a typical week in the past month. The purpose of this study was to assess the test-retest reliability and criterion validity of the AWAS. Methods: WYC completed the AWAS on two occasions 7 d apart (test-retest reliability protocol) and/or wore an MTI ActiGraph accelerometer for the 7 d in between (validity protocol). Forty WYC (mean age 35 ± 5 yrs) completed the test-retest reliability protocol and 75 WYC (mean age 33 ± 5 yrs) completed the validity protocol. Intraclass correlation coefficients (ICC) between AWAS administrations and Spearman’s correlation coefficients (rs) between AWAS and MTI data were calculated. Results: AWAS showed good test-retest reliability (ICC = 0.80 (0.65-0.89)) and acceptable criterion validity (rs = 0.28, p = 0.01) for measuring weekly health-enhancing physical activity. AWAS also provided repeatable and valid estimates of sitting time (test-retest reliability ICC = 0.42 (0.13-0.64); criterion validity rs = 0.32, p = 0.006). Conclusion: The measurement properties of the AWAS are comparable to those reported for existing self-report measures of physical activity. However, AWAS offers a more comprehensive and flexible alternative for accurately assessing the different domains and intensities of activity relevant to WYC. Future research should investigate whether the AWAS is a suitable measure of intervention efficacy by examining its sensitivity to change.
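To make the two statistics above concrete, here is a minimal Python sketch computing a test-retest ICC and a criterion-validity Spearman correlation on hypothetical data. The ICC(2,1) form and all data values are assumptions for illustration; the abstract does not state which ICC variant was used.

```python
import numpy as np
from scipy.stats import spearmanr

def icc_2_1(X):
    """Two-way random-effects, single-measure ICC(2,1).

    X: (n_subjects, k_occasions) array of scores.
    """
    n, k = X.shape
    grand = X.mean()
    ssr = k * np.sum((X.mean(axis=1) - grand) ** 2)   # between-subjects
    ssc = n * np.sum((X.mean(axis=0) - grand) ** 2)   # between-occasions
    sse = np.sum((X - grand) ** 2) - ssr - ssc        # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical data: weekly activity minutes from two AWAS administrations
rng = np.random.default_rng(0)
truth = rng.gamma(4.0, 60.0, size=40)
awas = np.column_stack([truth + rng.normal(0, 40, 40) for _ in range(2)])
print("test-retest ICC(2,1):", round(icc_2_1(awas), 2))

# Criterion validity: rank correlation of AWAS (admin 1) with MTI minutes
mti = truth + rng.normal(0, 80, 40)
rs, p = spearmanr(awas[:, 0], mti)
print("Spearman rs:", round(rs, 2), "p:", round(p, 3))
```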
Abstract:
Financial processes may possess long memory, and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA), which can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of MF-DFA. An interesting finding is that short memory is detected for stock prices on the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel of the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. Equations of this type are used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX).
The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of the data sets and then use cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market. The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and to simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on second-order properties, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
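A compact sketch of the MF-DFA procedure described above, assuming linear detrending and an illustrative set of segment sizes; the generalised Hurst exponent h(q) is read off as the slope of log F_q(s) against log s, with q = 2 recovering standard DFA.

```python
import numpy as np

def mfdfa(x, scales, q=2.0, order=1):
    """Return the fluctuation function F_q(s) for each scale s (Kantelhardt-style MF-DFA)."""
    profile = np.cumsum(x - np.mean(x))              # integrated (profile) series
    fq = []
    for s in scales:
        n_seg = len(profile) // s
        var = []
        for v in range(n_seg):
            seg = profile[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)   # local detrending
            var.append(np.mean((seg - trend) ** 2))
        var = np.asarray(var)
        if q == 0:                                   # limit case: logarithmic average
            fq.append(np.exp(0.5 * np.mean(np.log(var))))
        else:
            fq.append(np.mean(var ** (q / 2)) ** (1 / q))
    return np.asarray(fq)

# h(q) from the log-log slope; h(2) near 0.5 for white noise,
# above 0.5 suggesting long-range correlation in this framework
rng = np.random.default_rng(1)
x = rng.standard_normal(4096)
scales = np.array([16, 32, 64, 128, 256])
f = mfdfa(x, scales, q=2.0)
h2 = np.polyfit(np.log(scales), np.log(f), 1)[0]
print("h(2) =", round(h2, 2))
```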
Abstract:
Over the past decade, plants have been used as expression hosts for the production of pharmaceutically important and commercially valuable proteins. Plants offer many advantages over other expression systems, such as lower production costs, rapid scale-up of production, post-translational modifications similar to those of animals, and a low likelihood of contamination with animal pathogens, microbial toxins or oncogenic sequences. However, improving recombinant protein yield remains one of the greatest challenges to molecular farming. In-Plant Activation (InPAct) is a newly developed technology that offers activatable, high-level expression of heterologous proteins in plants. InPAct vectors contain the geminivirus cis elements essential for rolling circle replication (RCR) and are arranged such that the gene of interest is only expressed in the presence of the cognate viral replication-associated protein (Rep). The expression of Rep in planta may be controlled by a tissue-specific, developmentally regulated or chemically inducible promoter such that heterologous protein accumulation can be spatially and temporally controlled. One of the challenges for the successful exploitation of InPAct technology is the control of Rep expression, as even very low levels of this protein can reduce transformation efficiency, cause abnormal phenotypes and prematurely activate the InPAct vector in regenerated plants. Tight regulation of transgene expression is also essential when expressing cytotoxic products. Unfortunately, many tissue-specific and inducible promoters are unsuitable for controlling expression of Rep due to low basal activity in the absence of inducer or in tissues other than the target tissue. This PhD project aimed to control Rep activity through the production of single-chain variable fragments (scFvs) specific to motif III of Tobacco yellow dwarf virus (TbYDV) Rep. Given the important role played by the conserved motif III in RCR, it was postulated that such scFvs could be used to neutralise the activity of the low amount of Rep expressed from a “leaky” inducible promoter, thus preventing activation of the TbYDV-based InPAct vector until intentional induction. Such scFvs could also offer the potential to confer partial or complete resistance to TbYDV, and possibly to heterologous viruses, as motif III is conserved between geminiviruses. Studies were first undertaken to determine the levels of TbYDV Rep and TbYDV replication-associated protein A (RepA) required for optimal transgene expression from a TbYDV-based InPAct vector. Transient assays in a non-regenerable Nicotiana tabacum (NT-1) cell line were undertaken using a TbYDV-based InPAct vector containing the uidA reporter gene (encoding GUS) in combination with TbYDV Rep and RepA under the control of promoters with high (CaMV 35S) or low (Banana bunchy top virus DNA-R, BT1) activity. The replication enhancer protein of Tomato leaf curl begomovirus (ToLCV), REn, was also used in some co-bombardment experiments to examine whether RepA could be substituted by a replication enhancer from another geminivirus genus. GUS expression was assessed quantitatively and qualitatively by fluorometric and histochemical assays, respectively. GUS expression from the TbYDV-based InPAct vector was found to be greater when Rep was expected to be expressed at low levels (BT1 promoter) rather than high levels (35S promoter). GUS expression was further enhanced when Rep and RepA were co-bombarded with a low ratio of Rep to RepA.
Substituting TbYDV RepA with ToLCV REn also enhanced GUS expression, but most importantly, the highest GUS expression was observed when cells were co-transformed with expression vectors directing low levels of Rep and high levels of RepA, irrespective of the level of REn. In this case, GUS expression was approximately 74-fold higher than that from a non-replicating vector. The use of different terminators, namely the CaMV 35S and Nos terminators, in InPAct vectors was found to influence GUS expression. In the presence of Rep, GUS expression was greater using pInPActGUS-Nos rather than pInPActGUS-35S. The only instance of GUS expression being greater from vectors containing the 35S terminator was when comparing expression from cells transformed with Rep-, RepA- and REn-expressing vectors and either of the non-replicating vectors, p35SGS-Nos or p35SGS-35S. This difference was most likely caused by an interaction of the viral replication proteins with each other and with the terminators. These results indicated that (i) the level of replication-associated proteins is critical to high transgene expression, (ii) the choice of terminator within the InPAct vector may affect expression levels, and (iii) very low levels of Rep can activate InPAct vectors, hence controlling Rep activity is critical. Prior to generating recombinant scFvs, a recombinant TbYDV Rep was produced in E. coli to act as a control for screening for Rep-specific antibodies. A bacterial expression vector was constructed to express recombinant TbYDV Rep with an N-terminal His-tag (N-His-Rep). Despite investigating several purification techniques, including Ni-NTA, anion exchange, hydrophobic interaction and size exclusion chromatography, N-His-Rep could only be partially purified using a Ni-NTA column under native conditions. Although it was not certain that this recombinant N-His-Rep had the same conformation as native TbYDV Rep, results from an electrophoretic mobility shift assay (EMSA) showed that N-His-Rep was able to interact with the TbYDV LIR and was, therefore, possibly functional. Two hybridoma cell lines from mice, immunised with a synthetic peptide containing the TbYDV Rep motif III amino acid sequence, were generated by GenScript (USA). Monoclonal antibodies secreted by the two hybridoma cell lines were first screened against denatured N-His-Rep in Western analysis. After demonstrating their ability to bind N-His-Rep, two scFvs (scFv1 and scFv2) were generated using a PCR-based approach. Whereas the variable heavy chain (VH) from both cell lines could be amplified, only the variable light chain (VL) from cell line 1 was amplified. As a result, scFv1 contained the VH and VL from cell line 1, whereas scFv2 contained the VH from cell line 2 and the VL from cell line 1. Both scFvs were first expressed in E. coli in order to evaluate their affinity for the recombinant TbYDV N-His-Rep. Preliminary results demonstrated that both scFvs were able to bind denatured N-His-Rep. However, EMSAs revealed that only scFv2 was able to bind native N-His-Rep and prevent it from interacting with the TbYDV LIR. Each scFv was cloned into plant expression vectors and co-bombarded into NT-1 cells with the TbYDV-based InPAct GUS expression vector and pBT1-Rep to examine whether the scFvs could prevent Rep from mediating RCR. Although it was expected that the addition of the scFvs would result in decreased GUS expression, GUS expression was found to slightly increase.
This increase was even more pronounced when the scFvs were targeted to the cell nucleus by the inclusion of the Simian virus 40 (SV40) large T antigen nuclear localisation signal (NLS). It was postulated that the scFvs were binding to only a proportion of the Rep, leaving a small amount available to mediate RCR. The outcomes of this project provide evidence that very high levels of recombinant protein can, in principle, be expressed using InPAct vectors with judicious selection and control of viral replication proteins. However, whether the scFvs generated in this project have sufficient affinity for TbYDV Rep to prevent its activity in a stably transformed plant remains unknown. Other scFvs with different combinations of VH and VL may have greater affinity for TbYDV Rep. Such scFvs, when expressed at high levels in planta, might also confer resistance to TbYDV and possibly to heterologous geminiviruses.
Abstract:
The inquiry documented in this thesis is located at the nexus of technological innovation and traditional schooling. As we enter the second decade of a new century, few would argue against the increasingly urgent need to integrate digital literacies with traditional academic knowledge. Yet, despite substantial investments from governments and businesses, the adoption and diffusion of contemporary digital tools in formal schooling remain sluggish. To date, research on technology adoption in schools has tended to take a deficit perspective of schools and teachers, with the lack of resources and teacher ‘technophobia’ most commonly cited as barriers to digital uptake. Corresponding interventions that focus on increasing funding and upskilling teachers, however, have made little difference to adoption trends in the last decade. Empirical evidence that explicates the cultural and pedagogical complexities of innovation diffusion within long-established conventions of mainstream schooling, particularly from the standpoint of students, is wanting. To address this knowledge gap, this thesis inquires into how students evaluate and account for the constraints and affordances of contemporary digital tools when they engage with them as part of their conventional schooling. It documents the attempted integration of a student-led Web 2.0 learning initiative, known as the Student Media Centre (SMC), into the schooling practices of a long-established, high-performing independent senior boys’ school in urban Australia. The study employed an ‘explanatory’ two-phase research design (Creswell, 2003) that combined complementary quantitative and qualitative methods to achieve both breadth of measurement and richness of characterisation. In the initial quantitative phase, a self-report questionnaire was administered to the senior school student population to determine adoption trends and predictors of SMC usage (N=481). Measurement constructs included individual learning dispositions (learning and performance goals, cognitive playfulness and personal innovativeness) as well as social and technological variables (peer support, perceived usefulness and ease of use). Incremental predictive models of SMC usage were fitted using Classification and Regression Tree (CART) modelling: (i) individual-level predictors, (ii) individual and social predictors, and (iii) individual, social and technological predictors. Peer support emerged as the best predictor of SMC usage. Other salient predictors included perceived ease of use and usefulness, cognitive playfulness and learning goals. On the whole, an overwhelming proportion of students reported low usage levels, low perceived usefulness and a lack of peer support for engaging with the digital learning initiative. The small minority of frequent users reported high levels of peer support and robust learning goal orientations, rather than being predominantly driven by performance goals. These findings indicate that tensions around social validation, digital learning and academic performance pressures influence students’ engagement with the Web 2.0 learning initiative. The qualitative phase that followed provided insights into these tensions by shifting the analytic focus from individual attitudes and behaviours to the shared social and cultural reasoning practices that explain students’ engagement with the innovation. Six in-depth focus groups, comprising 60 students with different levels of SMC usage, were conducted, audio-recorded and transcribed.
Textual data were analysed using Membership Categorisation Analysis. Students’ accounts converged around a key proposition: the Web 2.0 learning initiative was useful-in-principle but useless-in-practice. While students endorsed the usefulness of the SMC for enhancing multimodal engagement, extending peer-to-peer networks and acquiring real-world skills, they also called attention to a number of constraints that hindered the realisation of these design affordances in practice. These constraints were cast in terms of three binary formulations of social and cultural imperatives at play within the school: (i) ‘cool/uncool’, (ii) ‘dominant staff/compliant student’, and (iii) ‘digital learning/academic performance’. The first formulation foregrounds the social stigma of the SMC among peers and its resultant lack of positive network benefits. The second relates to students’ perception of the school culture as authoritarian and punitive, with adverse effects on the very student agency required to drive the innovation. The third points to academic performance pressures in a crowded curriculum with tight timelines. Taken together, findings from both phases of the study provide the following key insights. First, students endorsed the learning affordances of contemporary digital tools such as the SMC for enhancing their current schooling practices. For the majority of students, however, these learning affordances were overshadowed by the performative demands of schooling, both social and academic. The student participants saw engagement with the SMC in school as distinct from, even oppositional to, the conventional social and academic performance indicators of schooling, namely (i) being ‘cool’ (or at least ‘not uncool’), (ii) being sufficiently ‘compliant’, and (iii) achieving good academic grades. Their reasoned response, therefore, was simply to resist engagement with the digital learning innovation. Second, a small minority of students seemed dispositionally inclined to negotiate the learning affordances and performance constraints of digital learning and traditional schooling more effectively than others. These students were able to engage more frequently and meaningfully with the SMC in school. Their ability to adapt to and traverse seemingly incommensurate social and institutional identities and norms is theorised as cultural agility – a dispositional construct that comprises personal innovativeness, cognitive playfulness and learning goals orientation. The logic for these individuals is ‘both-and’ rather than ‘either-or’: a capacity to accommodate both learning and performance in school, whether in terms of digital engagement and academic excellence, or successful brokerage across multiple social identities and institutional affiliations within the school. In sum, this study takes us beyond the familiar terrain of deficit discourses that tend to blame institutional conservatism, lack of resourcing and teacher resistance for the low uptake of digital technologies in schools. It does so by providing an empirical base for the development of a ‘third way’ of theorising technological and pedagogical innovation in schools, one which is more informed by students as critical stakeholders and thus more relevant to the lived culture within the school and its complex relationship to students’ lives outside of school. It is in this relationship that we find an explanation for how these individuals can, at one and the same time, be digital kids and analogue students.
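A sketch of the incremental CART modelling strategy described in the quantitative phase, using sklearn's DecisionTreeClassifier. The predictor blocks and the simulated outcome (driven mainly by peer support, echoing the reported finding) are illustrative stand-ins for the survey constructs, not the study's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 481  # senior-school sample size reported in the study

# Hypothetical predictor blocks (the real constructs were survey scales)
individual = rng.normal(size=(n, 4))      # goals, playfulness, innovativeness
social = rng.normal(size=(n, 1))          # peer support
technological = rng.normal(size=(n, 2))   # perceived usefulness, ease of use

# Simulated usage outcome driven mainly by peer support
logit = 1.5 * social[:, 0] + 0.5 * technological[:, 0] + 0.3 * individual[:, 0]
usage = (logit + rng.normal(size=n) > 0).astype(int)

# Incremental models: (i) individual, (ii) + social, (iii) + technological
blocks = {
    "individual": individual,
    "+ social": np.hstack([individual, social]),
    "+ technological": np.hstack([individual, social, technological]),
}
for name, X in blocks.items():
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    acc = cross_val_score(tree, X, usage, cv=5).mean()
    print(f"{name:16s} CV accuracy: {acc:.2f}")
```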
Abstract:
Sustainable development is about making societal investments. These investments should be in synchronization with the natural environment, trends of social development, and organisational and local economies over a long time span. Traditionally, in the eyes of clients, project development needs to produce the required profit margins, with some degree of consideration for other impacts. This is changing as citizens of our society become more aware of concepts and challenges such as climate change, greenhouse footprints and the social dimensions of sustainability, and in turn demand answers to these issues in built facilities. A large number of R&D projects have focused on the technical advancement and environmental assessment of products and built facilities. It is equally important to address the cost/benefit issue, as developers would not want to lose money by investing in built assets. For infrastructure projects, due to their significant cost of development and lengthy delivery time, presenting the full money story of going green is of vital importance. Traditional views of life-cycle costing tend to focus on the pure economics of a construction project, and sustainability concepts are not broadly integrated with current life-cycle costing analysis (LCCA) in the construction sector. To rectify this problem, this paper reports on the progress to date of developing and extending contemporary LCCA models for the evaluation of road infrastructure sustainability. The suggested new model development is based on sustainability indicators identified through previous research, incorporating industry-verified cost elements of sustainability measures. The ongoing project aims to design and develop a working model for sustainability life-cycle costing analysis for this type of infrastructure project.
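The extension argued for here amounts to folding sustainability cost elements into a conventional discounted life-cycle cost sum, LCC = Σ C_t/(1+r)^t. A minimal sketch with hypothetical cost streams and an assumed discount rate:

```python
def lcc(cash_flows, rate):
    """Present value of annual costs; cash_flows[t] is the cost in year t."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows))

years = 30
rate = 0.05  # illustrative discount rate

# Hypothetical road-project cost streams (per year, in $m)
conventional = [120.0] + [2.5] * (years - 1)   # build + routine maintenance
sustainability = [8.0] + [0.4] * (years - 1)   # e.g. noise/runoff controls
avoided = [0.0] + [1.1] * (years - 1)          # e.g. lower resurfacing costs

base = lcc(conventional, rate)
green = lcc([c + s - a for c, s, a in zip(conventional, sustainability, avoided)], rate)
print(f"baseline LCC:     ${base:,.1f}m")
print(f"with green items: ${green:,.1f}m")
```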
Abstract:
PURPOSE: To examine the association between neighborhood disadvantage and physical activity (PA). METHODS: We use data from the HABITAT multilevel longitudinal study of PA among mid-aged (40-65 years) men and women (n = 11,037; 68.5% response rate) living in 200 neighborhoods in Brisbane, Australia. PA was measured using three questions from the Active Australia Survey (general walking, moderate, and vigorous activity), one indicator of total activity, and two questions about walking and cycling for transport. The PA measures were operationalized using multiple categories based on time and estimated energy expenditure that were interpretable with reference to the latest PA recommendations. The association between neighborhood disadvantage and PA was examined using multilevel multinomial logistic regression and Markov chain Monte Carlo simulation. The contribution of neighborhood disadvantage to between-neighborhood variation in PA was assessed using the 80% interval odds ratio. RESULTS: After adjustment for sex, age, living arrangement, education, occupation, and household income, reported participation in all measures and levels of PA varied significantly across Brisbane's neighborhoods, and neighborhood disadvantage accounted for some of this variation. Residents of advantaged neighborhoods reported significantly higher levels of total activity, general walking, moderate, and vigorous activity; however, they were less likely to walk for transport. There was no statistically significant association between neighborhood disadvantage and cycling for transport. In terms of total PA, residents of advantaged neighborhoods were more likely to exceed PA recommendations. CONCLUSIONS: Neighborhoods may exert a contextual effect on residents' likelihood of participating in PA. The greater propensity of residents of advantaged neighborhoods to do high levels of total PA may contribute to lower rates of cardiovascular disease and obesity in these areas.
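For a multilevel model like the one above, the 80% interval odds ratio is computed as exp(β + √(2σ²_u)·Φ⁻¹(0.10)) to exp(β + √(2σ²_u)·Φ⁻¹(0.90)), where β is the cluster-level fixed effect and σ²_u the between-neighborhood variance (Larsen and Merlo's formulation). A worked sketch with hypothetical estimates, not the paper's fitted values:

```python
from math import exp, sqrt
from statistics import NormalDist

def ior_80(beta, sigma2_u):
    """80% interval odds ratio for a cluster-level covariate."""
    z10, z90 = NormalDist().inv_cdf(0.10), NormalDist().inv_cdf(0.90)
    spread = sqrt(2 * sigma2_u)  # SD of the difference of two random intercepts
    return exp(beta + spread * z10), exp(beta + spread * z90)

# Hypothetical fixed effect of disadvantage and random-intercept variance
lo, hi = ior_80(beta=-0.35, sigma2_u=0.12)
# An interval containing 1 means the covariate effect is small relative to
# residual between-neighborhood heterogeneity
print(f"IOR-80: [{lo:.2f}, {hi:.2f}]")
```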
Abstract:
The Achilles tendon has been shown to exhibit time-dependent conditioning when isometric muscle actions are of prolonged duration compared to those involved in dynamic activities such as walking. Since the effect of the short-duration muscle activation associated with dynamic activities is yet to be established, the present study aimed to investigate the effect of incidental walking activity on Achilles tendon diametral strain. Eleven healthy male participants refrained from physical activity in excess of the walking required to carry out necessary daily tasks and wore an activity monitor during the 24 h study period. Achilles tendon diametral strain, 2 cm proximal to the calcaneal insertion, was determined from sagittal sonograms. Baseline sonographic examinations were conducted at ∼08:00 h, followed by replicate examinations at 12 and 24 h. Walking activity was recorded as either present (1) or absent (0), and a linear weighting function was applied to account for the proximity of walking activity to tendon examination time. Over the course of the day, the median (min, max) Achilles tendon diametral strain was −11.4 (4.5, −25.4)%. A statistically significant relationship was evident between walking activity and diametral strain (P < 0.01), and this relationship improved when walking activity was temporally weighted (AIC 131 to 126). The results demonstrate that the short yet repetitive loads generated during activities of daily living, such as walking, are sufficient to induce appreciable time-dependent conditioning of the Achilles tendon. Implications arise for the in vivo measurement of Achilles tendon properties and the rehabilitation of tendinopathy.
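The abstract does not give the exact form of the linear weighting function, so the sketch below assumes a simple linear decay of weight with time from each walking epoch to the examination; the window length and decay are illustrative only.

```python
import numpy as np

def weighted_walking(activity, exam_idx, window=24):
    """Weight binary per-epoch walking (1/0) by linear proximity to exam time.

    activity: 1-D array of 1/0 walking indicators, one per epoch.
    exam_idx: index of the tendon examination epoch.
    Epochs closer to the examination count more; weights fall linearly to 0
    over `window` epochs (an assumed form, not the paper's function).
    """
    idx = np.arange(len(activity))
    dist = exam_idx - idx
    w = np.clip(1 - dist / window, 0, 1)
    w[dist < 0] = 0.0                      # ignore activity after the exam
    return float(np.sum(w * activity))

# Hypothetical hourly walking indicators over 24 h, exam at hour 23
rng = np.random.default_rng(3)
walking = (rng.random(24) < 0.3).astype(int)
print("raw walking hours:       ", int(walking.sum()))
print("temporally weighted dose:", round(weighted_walking(walking, exam_idx=23), 2))
```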
Abstract:
Principal Topic: Nascent entrepreneurship has drawn the attention of scholars in the last few years (Davidsson, 2006; Wagner, 2004). However, most studies have asked why firms are created, focusing on questions such as what are the characteristics (Delmar & Davidsson, 2000) and motivations (Carter, Gartner, Shaver & Reynolds, 2004) of nascent entrepreneurs, or what are the success factors in venture creation (Davidsson & Honig, 2003; Delmar & Shane, 2004). In contrast, the question of how companies emerge is still in its infancy. On the theoretical side, effectuation, developed by Sarasvathy (2001), offers one view of the strategies that may be at work during the venture creation process. Causation, the theorised inverse of effectuation, may be described as a rational reasoning method of creating a company: after a comprehensive market analysis to discover opportunities, the entrepreneur selects the alternative with the highest expected return and implements it through the use of a business plan. In contrast, effectuation suggests that the future entrepreneur will develop her new venture in a more iterative way, selecting possibilities through flexibility and interaction with the market, affordability of loss of the resources and time invested, and the development of pre-commitments and alliances with stakeholders. Another contrasting point is that causation is "goal driven" while an effectual approach is "means driven" (Sarasvathy, 2001). One of the predictions of effectuation theory is that effectuation is more likely to be used by entrepreneurs early in the venture creation process (Sarasvathy, 2001). However, this temporal aspect and the impact of effectuation strategy on venture outcomes have so far not been systematically and empirically tested on large samples. The reason behind this research gap is twofold. Firstly, few studies collect longitudinal data on emerging ventures at an early enough stage of development to avoid severe survivor bias. Secondly, the studies that do collect such data have not included validated measures of effectuation. The research we are conducting attempts to partially fill this gap by combining an empirical investigation of a large sample of nascent and young firms with the effectuation/causation continuum as a basis (Sarasvathy, 2001). The objectives are to understand the strategies used by firms during the creation process and to measure their impact on firm outcomes. Methodology/Key Propositions: This study draws its data from the first wave of the CAUSEE project, in which 28,383 Australian households were randomly contacted by phone using a specific methodology to capture emerging firms (Davidsson, Steffens, Gordon, & Reynolds, 2008). This screening led to the identification of 594 nascent ventures (i.e., firms that are not yet operating) and 514 young firms (i.e., firms that have been operating since 2004) that were willing to participate in the study. Comprehensive phone interviews were conducted with these 1,108 ventures. In a likewise comprehensive follow-up 12 months later, 80% of the eligible cases completed the interview. The questionnaire contains specific sections designed to distinguish effectual and causal processes, innovation, gestation activities, business idea changes and venture outcomes. The effectuation questions are based on the components of effectuation strategy as described by Sarasvathy (2001), namely flexibility, affordable loss and pre-commitment from stakeholders.
Results from two rounds of pre-testing informed the design of the instrument included in the main survey. The first two waves of data will be used to test and compare the use of effectuation in the venture creation process. To increase the robustness of the results, the temporal use of effectuation will be tested both directly and indirectly: 1. By comparing the use of effectuation in nascent and young firms from wave 1 to wave 2, we will be able to find out how effectuation is affected by time over a 12-month period and whether the stage of venture development has an impact on its use. 2. By comparing nascent ventures early in the creation process versus nascent ventures late in the creation process. Early versus late can be determined with the help of the time-stamped gestation activity questions included in the survey. This will help us to determine change on a small time scale during the creation phase of the venture. 3. By comparing nascent firms to young (already operational) firms. 4. By comparing young firms that became operational in 2006 with those that first became operational in 2004. Results and Implications: Data collection for waves 1 and 2 has been completed, and wave 2 data are currently being checked and 'cleaned'. Analysis work will commence in September 2009. This paper is expected to contribute to the body of knowledge on effectuation by quantitatively measuring its use and its impact on nascent and young firms' activities at different stages of their development. In addition, this study will increase our understanding of the venture creation process by comparing nascent and young firms over time in a large sample of randomly selected ventures. We acknowledge that the results from this study will be preliminary and will have to be interpreted with caution, as the changes identified may be due to several factors and not solely attributable to the use or non-use of effectuation. Nevertheless, we believe that this study is important to the field of entrepreneurship as it provides much-needed insight into the processes used by nascent and young firms during their creation and early operating stages.
Abstract:
The recently proposed data-driven background dataset refinement technique provides a means of selecting an informative background for support vector machine (SVM)-based speaker verification systems. This paper investigates the characteristics of the impostor examples in such highly informative background datasets. Data-driven dataset refinement individually evaluates the suitability of candidate impostor examples for the SVM background prior to selecting the highest-ranking examples as a refined background dataset. Further, the characteristics of the refined dataset were analysed to investigate the desired traits of an informative SVM background. The most informative examples of the refined dataset were found to consist of large amounts of active speech and distinctive language characteristics. The data-driven refinement technique was shown to filter the set of candidate impostor examples to produce a more dispersed representation of the impostor population in the SVM kernel space, thereby reducing the number of redundant and less informative examples in the background dataset. Furthermore, data-driven refinement was shown to provide performance gains when applied to the difficult task of refining a small candidate dataset that was mismatched to the evaluation conditions.
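One way to realise the per-example suitability ranking described above is to count how often each candidate impostor example is selected as a support vector across a set of development client SVMs, and keep the top-ranked examples. The sketch below follows that idea with random vectors standing in for speaker supervectors; the criterion, dimensions and counts are assumptions, not the paper's exact method.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
dim, n_candidates, n_clients = 32, 200, 25

candidates = rng.normal(size=(n_candidates, dim))   # candidate impostor examples
clients = rng.normal(size=(n_clients, dim)) + 0.5   # development client examples

# Rank candidates by how often each becomes a support vector when training
# one-vs-background client SVMs (a support-vector-frequency style criterion)
counts = np.zeros(n_candidates)
for c in clients:
    X = np.vstack([c[None, :], candidates])
    y = np.array([1] + [0] * n_candidates)
    svm = SVC(kernel="linear", C=1.0).fit(X, y)
    bg_svs = svm.support_[svm.support_ > 0] - 1     # map back to candidate indices
    counts[bg_svs] += 1

refined = np.argsort(counts)[::-1][:100]            # keep the 100 most informative
print("most informative candidate indices:", refined[:10])
```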
Abstract:
This approach to sustainable design explores the possibility of creating an architectural design process which can iteratively produce optimised and sustainable design solutions. Driven by an evolutionary process based on genetic algorithms, the system allows the designer to “design the building design generator” rather than to “design the building”. The design concept is abstracted into a digital design schema, which allows transfer of the human creative vision into the rational language of a computer. The schema is then elaborated through the use of genetic algorithms to evolve innovative, performative and sustainable design solutions. The prioritisation of the project’s constraints and the subsequent design solutions synthesised during design generation are expected to resolve most of the major conflicts in the evaluation and optimisation phases. Mosques are used as the example building typology to ground the research activity. The spatial organisations of various mosque typologies are graphically represented by adjacency constraints between spaces. Each configuration is represented by a planar graph, which is then translated into a non-orthogonal dual graph and fed into the genetic algorithm system with fixed constraints and expected performance criteria set to govern evolution. The resulting Hierarchical Evolutionary Algorithmic Design System is developed by linking the evaluation process with environmental assessment tools to rank the candidate designs. The proposed system generates the concept, the seed and the schema, and has environmental performance as one of the main criteria driving optimisation.
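A minimal sketch of the evolutionary loop underlying such a system: candidate layouts are encoded as genomes, scored by a fitness combining adjacency-constraint satisfaction with an environmental-performance term, and evolved by selection, crossover and mutation. The grid encoding and both fitness terms are toy stand-ins for the graph-based schema and environmental assessment tools described above.

```python
import random

random.seed(5)
N_SPACES = 8          # rooms/spaces in the layout
POP, GENS = 60, 40

# Toy genome: a grid position (x, y) for each space
def random_genome():
    return [(random.randint(0, 9), random.randint(0, 9)) for _ in range(N_SPACES)]

REQUIRED_ADJ = [(0, 1), (1, 2), (2, 3), (0, 4)]   # illustrative adjacency constraints

def fitness(g):
    # Constraint term: required pairs should be close together (Manhattan distance)
    adj = -sum(abs(g[a][0] - g[b][0]) + abs(g[a][1] - g[b][1]) for a, b in REQUIRED_ADJ)
    # Stand-in "environmental" term: reward placement toward low y
    env = -sum(y for _, y in g)
    return adj + 0.2 * env

def crossover(p1, p2):
    cut = random.randrange(1, N_SPACES)
    return p1[:cut] + p2[cut:]

def mutate(g, rate=0.1):
    return [(random.randint(0, 9), random.randint(0, 9)) if random.random() < rate else s
            for s in g]

pop = [random_genome() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP // 4]                         # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - len(elite))]

print("best fitness:", fitness(max(pop, key=fitness)))
```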
Abstract:
Background: Colorectal cancer survivors may suffer from a range of ongoing psychosocial and physical problems that negatively impact their quality of life. This paper presents the study protocol for a novel telephone-delivered intervention to improve lifestyle factors and health outcomes for colorectal cancer survivors. Methods/Design: Approximately 350 recently diagnosed colorectal cancer survivors will be recruited through the Queensland Cancer Registry and randomised to the intervention or control condition. The intervention focuses on symptom management, lifestyle and psychosocial support to assist participants to make improvements in lifestyle factors (physical activity, healthy diet, weight management, and smoking cessation) and health outcomes. Participants will receive up to 11 telephone-delivered sessions over a 6 month period from a qualified health professional or 'health coach'. Data collection will occur at baseline (Time 1), post-intervention at six months (Time 2), and at 12 months follow-up for longer-term effects (Time 3). Primary outcome measures will include physical activity, cancer-related fatigue and quality of life. A cost-effectiveness analysis of the costs and outcomes for survivors in the intervention and control conditions will be conducted from the perspective of health care costs to the government. Discussion: The study will provide valuable information about an innovative intervention to improve lifestyle factors and health outcomes for colorectal cancer survivors.
Abstract:
Background: It remains unclear whether it is possible to develop an epidemic forecasting model for the transmission of dengue fever in Queensland, Australia. Objectives: To examine the potential impact of the El Niño/Southern Oscillation on the transmission of dengue fever in Queensland, Australia, and to explore the possibility of developing a forecast model for dengue fever. Methods: Data on the Southern Oscillation Index (SOI), an indicator of El Niño/Southern Oscillation activity, were obtained from the Australian Bureau of Meteorology. The numbers of dengue fever cases notified and the numbers of postcode areas with dengue fever cases between January 1993 and December 2005 were obtained from Queensland Health, and relevant population data were obtained from the Australian Bureau of Statistics. A multivariate Seasonal Auto-regressive Integrated Moving Average (SARIMA) model was developed and validated by dividing the data file into two datasets: the data from January 1993 to December 2003 were used to construct the model and those from January 2004 to December 2005 were used to validate it. Results: A decrease in the average SOI (i.e., warmer conditions) during the preceding 3–12 months was significantly associated with an increase in the monthly number of postcode areas with dengue fever cases (β = −0.038; p = 0.019). Predicted values from the SARIMA model were consistent with the observed values in the validation dataset (root-mean-square percentage error: 1.93%). Conclusions: Climate variability is directly and/or indirectly associated with dengue transmission, and the development of an SOI-based epidemic forecasting system is possible for dengue fever in Queensland, Australia.
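A sketch of the construct-and-validate split described above, using statsmodels' SARIMAX with lagged SOI as an exogenous regressor. The model orders, the three-month lag and the synthetic series are placeholders, not the fitted model from the paper.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(6)
idx = pd.date_range("1993-01", "2005-12", freq="MS")
soi = pd.Series(rng.normal(0, 8, len(idx)), index=idx)

# Synthetic monthly counts of postcode areas with dengue cases, driven by
# seasonality and (negatively) by SOI three months earlier
season = 3 + 2 * np.sin(2 * np.pi * idx.month / 12)
y = pd.Series(np.maximum(0, season - 0.04 * soi.shift(3).fillna(0)
                         + rng.normal(0, 0.8, len(idx))), index=idx)

# Construct on 1993-2003, validate on 2004-2005, as in the abstract
train, test = y[:"2003-12"], y["2004-01":]
soi_lag = soi.shift(3).fillna(0)
x_train, x_test = soi_lag[:"2003-12"], soi_lag["2004-01":]

model = SARIMAX(train, exog=x_train, order=(1, 0, 1),
                seasonal_order=(1, 0, 0, 12)).fit(disp=False)
pred = model.forecast(steps=len(test), exog=x_test)

rmspe = np.sqrt(np.mean(((test - pred) / test.clip(lower=1e-6)) ** 2)) * 100
print(f"validation RMSPE: {rmspe:.1f}%")
```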
Abstract:
The detection of voice activity is a challenging problem, especially when the level of acoustic noise is high. Most current approaches utilise only the audio signal, making them susceptible to acoustic noise. An obvious way to overcome this is to use the visual modality. The current state-of-the-art visual feature extraction technique is one that uses a cascade of visual features (i.e., 2D-DCT, feature mean normalisation and interstep LDA). In this paper, we investigate the effectiveness of this technique for the task of visual voice activity detection (VAD), analysing each stage of the cascade and quantifying the relative improvement in performance gained by each successive stage. The experiments were conducted on the CUAVE database, and our results highlight that the dynamics of the visual modality can be used to good effect to improve visual voice activity detection performance.
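A sketch of the visual feature cascade named above: a 2D-DCT of the mouth region of interest, retention of low-order coefficients, per-utterance feature mean normalisation, and an LDA projection. The ROI frames and speech/non-speech labels are synthetic here, and the single LDA stage is a simplification of the cascade's interstep LDA.

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def dct2(block):
    """Orthonormal 2D-DCT of an image block."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def cascade_features(frames, keep=8):
    """2D-DCT -> keep low-order (top-left) coeffs -> feature mean normalisation."""
    feats = np.array([dct2(f)[:keep, :keep].ravel() for f in frames])
    return feats - feats.mean(axis=0)      # per-utterance mean normalisation

# Hypothetical 32x32 mouth-ROI frames with binary speech/non-speech labels
rng = np.random.default_rng(7)
frames = rng.random((500, 32, 32))
labels = (rng.random(500) < 0.5).astype(int)
frames[labels == 1] += 0.3                 # toy difference between the classes

X = cascade_features(frames)
lda = LinearDiscriminantAnalysis(n_components=1).fit(X, labels)
print("training accuracy:", round(lda.score(X, labels), 2))
```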