356 results for Random Number Generation


Relevance:

20.00%

Publisher:

Abstract:

Principal Topic The study of the origin and characteristics of venture ideas - or ''opportunities'' as they are often called - and their contextual fit are key research goals in entrepreneurship (Davidsson, 2004). For the purpose of this study, we define a venture idea as ''the core ideas of an entrepreneur about what to sell, how to sell, whom to sell to, and how the entrepreneur acquires or produces the product or service which he/she sells''. When realized, the venture idea becomes a ''business model''. Even though venture ideas are central to entrepreneurship, their characteristics and their effect on the entrepreneurial process remain poorly understood. According to Schumpeter (1934), entrepreneurs can creatively destroy existing market conditions by introducing new products/services, new production methods, new markets, new sources of supply, and reorganization of industries. The introduction, development and use of new ideas is generally called ''innovation'' (Damanpour & Wischnevsky, 2006); ''newness'' is a property of innovation, and is a relative term referring to the degree of unfamiliarity of a venture idea, either to a firm or to a market. Notably, Schumpeter (1934) discusses five different types of newness, indicating that the type of newness is an important issue. More recently, Shane and Venkataraman (2000) called for research taking into consideration not only the variation in characteristics of individuals but also the heterogeneity of venture ideas. Empirically, Samuelson (2001, 2004) investigated process differences between innovative and imitative venture ideas. However, he used only a crude dichotomy regarding venture idea newness. According to Davidsson (2004), entrepreneurs can introduce new economic activities ranging from pure imitation to being new to the entire world market, highlighting that newness is a matter of degree.
Dahlqvist (2007) examined venture idea newness and made an attempt at a more refined assessment of the degree and type of newness of venture ideas. Building on these predecessors, our study refines the assessment of venture idea newness by measuring the degree of newness (new to the world, new to the market, substantially improved while not entirely new, and imitation) for four different types of newness (product/service, method of production, method of promotion, and customer/target market). We then relate type and degree of newness to the pace of progress in the nascent venturing process. We hypothesize that newness will slow down the business creation process. Shane & Venkataraman (2000) introduced entrepreneurship as the nexus of opportunities and individuals. In line with this, some scholars have investigated the relationship between individuals and opportunities. For example, Shane (2000) investigated the relatedness between individuals' prior knowledge and the identification of opportunities. Shepherd & DeTienne (2005) identified a positive relationship between potential financial reward and the identification of innovative venture ideas. Sarasvathy's ''Effectuation Theory'' assumes a high degree of relatedness between venture ideas and founders' skills, knowledge and resources. However, the entrepreneurship literature offers few analyses of how this relatedness affects the progress of the venturing process. Therefore, we assess venture ideas' degree of relatedness to prior knowledge and resources, and relate these, too, to the pace of progress in the nascent venturing process. We hypothesize that relatedness will increase the speed of business creation. Methodology For this study we compare early findings from data collected through the Comprehensive Australian Study of Entrepreneurial Emergence (CAUSEE).
CAUSEE is a longitudinal study whose primary objective is to uncover the factors that initiate, hinder and facilitate the process of emergence and development of new firms. Data were collected from a representative sample of some 30,000 households in Australia using random digit dialling (RDD) telephone survey interviews. The first round of data collection identified 600 entrepreneurs who were currently involved in the business start-up process. The unit of analysis is the emerging venture, with the respondent acting as its spokesperson. The study methodology allows researchers to identify ventures in the early stages of creation and to follow their progression longitudinally through data collection periods over time. Our measures of newness build on previous work by Dahlqvist (2007); our adapted version was developed over two pre-tests with about 80 participants in each. The measures of relatedness were developed through the same two rounds of pre-testing. The pace of progress in the venture creation process is assessed with the help of time-stamped gestation activities, a technique developed in the Panel Study of Entrepreneurial Dynamics (PSED). Results and Implications We hypothesized that venture idea newness slows down the venturing process whereas relatedness facilitates it. Results for 600 nascent entrepreneurs in Australia indicate marginal support for the hypothesis that relatedness assists gestation progress. Newness is significant, but with the opposite sign to that hypothesized. The results have a number of implications for researchers, business founders, consultants and policy makers in terms of better knowledge of the venture creation process.


Principal Topic The Comprehensive Australian Study of Entrepreneurial Emergence (CAUSEE) represents the first Australian study to employ and extend the longitudinal, large-scale systematic research approach developed for the Panel Study of Entrepreneurial Dynamics (PSED) in the US (Gartner, Shaver, Carter and Reynolds, 2004; Reynolds, 2007). This research approach addresses several shortcomings of other data sets, including undercoverage; selection bias; memory decay and hindsight bias; and lack of time separation between the assessment of causes and their assumed effects (Johnson et al 2006; Davidsson 2006). However, a remaining problem is that any random sample of start-ups will be dominated by low-potential, imitative ventures. In recognition of this issue, CAUSEE supplemented PSED-type random samples with theoretically representative samples of 'high potential' emerging ventures, employing a unique methodology based on multiple novel screening criteria. We define ''high-potential'' ventures as innovative new entrepreneurial ventures with high aspirations and potential for growth. This distinguishes them from ''lifestyle'' imitative businesses that start small and remain intentionally small (Timmons, 1986). CAUSEE provides the opportunity to explore, for the first time, whether the processes and outcomes of high potentials differ from those of traditional lifestyle firms. This allows us to compare process and outcome attributes of the random sample with the high-potential oversample of new firms and young firms. The attributes in which we examine potential differences include sources of funding and internationalisation. This is interesting both for explaining why different outcomes occur and for informing future policymaking, given that high-growth-potential firms are increasingly becoming the focus of government intervention in economic development policies around the world.
The first wave of data of this four-year longitudinal study has been collected using these samples, allowing us to provide some initial analysis on which to base further research. The aim of this paper therefore is to present selected preliminary results from the first wave of data collection, comparing high potential with lifestyle firms. Owing to their greater resource requirements and higher risk profiles, we expect to see more use of venture capital and angel investment among high potentials, and more internationalisation activity to assist in recouping investment and to overcome Australia's smaller domestic market. Methodology/Key Propositions In order to develop the samples of 'high potentials' in the nascent firm (NF) and young firm (YF) categories, a set of qualification criteria was developed. Specifically, to qualify firms as nascent or young high potentials, we used multiple, partly compensating screening criteria related to the human capital and aspirations of the founders, the novelty of the venture idea, and the venture's use of high technology. A variety of techniques were employed to develop a multi-level dataset of sources for leads and firm details. The dataset was generated from a variety of websites of major stakeholders, including the Federal and State Governments, the Australian Chamber of Commerce, university commercialisation offices, patent and trademark attorneys, government and industry awards in entrepreneurship and innovation, leading industry associations, the Venture Capital Association, innovation directories including the Australian Technology Showcase, and business and entrepreneurship magazines including BRW and Anthill. In total, over 480 industry, association, government and award sources were generated in this process. Of these, 74 discrete sources generated high potentials that fulfilled the criteria. 1116 firms were contacted as high potential cases.
331 cases agreed to participate in the screener, with 279 firms (134 nascent and 140 young firms) successfully passing the high potential criteria. 222 firms (108 nascent and 113 young firms) completed the full interview. For the general sample, CAUSEE conducted screening phone interviews with a very large number of adult members of households randomly selected through random digit dialling, using screening questions to determine whether respondents qualify as 'nascent entrepreneurs'. CAUSEE additionally targeted 'young firms': those that commenced trading in 2004 or later. This process yielded 977 nascent firms (3.4%) and 1,011 young firms (3.6%). These were directed to the full-length interview (40-60 minutes), either directly following the screener or later by appointment. The full-length interviews were completed by 594 NF and 514 YF cases; these are the cases used in the comparative analysis in this report. Results and Implications The results for this paper are based on Wave One of the survey, which has been completed. It is expected that the findings will begin to develop an understanding of high potential nascent and young firms in Australia, how they differ from the larger lifestyle-entrepreneur group that makes up the vast majority of new firms created each year, and the elements that may contribute to turning high growth potential into high growth realities. The results have implications for government in designing better conditions for the creation of new businesses; for firms that assist high potentials, in developing advice programs informed by a better understanding of their needs and requirements; and for individuals who may be considering becoming entrepreneurs in high potential arenas, as well as helping existing entrepreneurs make better decisions.
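The sampling funnel described above reduces to simple stage-by-stage arithmetic; the counts below are taken from the abstract, while the stage labels and code are our own illustration:

```python
# Sketch of the high-potential screening funnel reported above (figures from
# the abstract; stage labels are our shorthand, not CAUSEE terminology).

def funnel_rates(stages):
    """Return the conversion rate between consecutive (label, count) stages."""
    rates = {}
    for (a_label, a), (b_label, b) in zip(stages, stages[1:]):
        rates[f"{a_label} -> {b_label}"] = b / a
    return rates

high_potential = [
    ("contacted", 1116),
    ("agreed to screener", 331),
    ("passed criteria", 279),
    ("completed interview", 222),
]

rates = funnel_rates(high_potential)
for step, r in rates.items():
    print(f"{step}: {r:.1%}")
```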


The implementation of 'good governance' in Indonesia's regional government sector became a central tenet in governance research following the introduction of the national code for governance in 2006. The code was originally drafted in 1999 as a response to the Asian financial crisis and the many unearthed cases of corruption, collusion, and nepotism. It was reviewed in 2001 and again in 2006 to incorporate relevant political, economic, and social developments. Even though the national code exists alongside many regional government decrees on good governance, the extent of implementation of the tenets of good governance in Indonesia's regional government is still questioned. Previous research on good governance implementation in Indonesian regional government (Mardiasmo, Barnes and Sakurai, 2008) identified differences in the nature and depth of implementation between various Indonesian regional governments. This paper analyses and extends this recent work and explores key factors that may impede the implementation and sustained application of governance practices across regional settings. The bureaucratic culture of Indonesian regional government is one that was shaped over approximately 30 years, in particular during the Soeharto regime. Previous research on this regime suggests a bureaucratic culture with a mix of positive and negative aspects. On one hand, Soeharto's regime produced strong development growth and strong economic fundamentals, resulting in Indonesia being recognised as one of the Asian economic tigers prior to the 1997 Asian financial crisis. The financial crisis, however, revealed a bureaucratic culture that was rife with corruption, collusion, and nepotism. Although subsequent Indonesian governments have been committed to eradicating entrenched practices, it seems apparent that this culture is ingrained within the bureaucracy and its eradication will take time.
Informants from regional government agree with this observation: they identify good governance as an innovative mechanism whose implementation means a deviation from the "old ways." Thus there is a need for a "changed" mindset in order to implement sustained governance practices. Such an exercise has proven challenging so far, as there is "hidden" resistance from within the bureaucracy to changing its ways. The inertia of such bureaucratic cultures forms a tension against the opportunity for the implementation of good governance. From this context, an emergent finding is the existence of a 'bureaucratic generation gap' as a variable impeding enhanced and more efficient implementation of governance systems. It was found that after the Asian financial crisis the Indonesian government (at both national and regional levels) drew upon a wider human resources pool to fill government positions, including entrants from academia, the private sector, international institutions, foreign nationals and new graduates. It is suggested that this change in human capital within government is at the core of this 'inter-generational divide.' The divergence is exemplified, at one extreme, by older bureaucrats who have been in position for long periods, having served during the extended Soeharto regime, and at the other by "new" bureaucrats who have held their positions only since the end of the Asian financial crisis and did not serve during Soeharto's regime. It is argued that the existence of this generation gap and associated aspects of organisational culture have significantly impeded the modernising of governance practices across regional Indonesia. This paper examines the experiences of government employees in five Indonesian regions: Solok, Padang, Gorontalo, Bali, and Jakarta. Each regional government is examined using a mixed methodology comprising on-site observation, document analysis, and iterative semi-structured interviewing.
Drawing from the experiences of these five regional governments in implementing good governance, this paper seeks to better understand the causal contexts of variable implementation of governance practices and to suggest enhancements to the development of policies for sustainable inter-generational change in governance practice across regional government settings.


We introduce a formal model for certificateless authenticated key exchange (CL-AKE) protocols. Contrary to what might be expected, we show that the natural combination of an ID-based AKE protocol with a public-key-based AKE protocol cannot provide strong security. We provide the first one-round CL-AKE scheme proven secure in the random oracle model. We introduce two variants of the Diffie-Hellman trapdoor test introduced by Cash, Kiltz and Shoup (EUROCRYPT 2008). The proposed key agreement scheme is secure as long as each party has at least one uncompromised secret. Thus, our scheme is secure even if the key generation centre learns the ephemeral secrets of both parties.
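The claim that the scheme stays secure while each party retains at least one uncompromised secret can be illustrated with a toy sketch (deliberately insecure, and not the paper's actual CL-AKE construction): the session key is derived from both a long-term and an ephemeral Diffie-Hellman exchange, so leaking either secret alone does not determine the key.

```python
# Toy illustration (NOT the paper's CL-AKE scheme): each party holds a
# long-term and an ephemeral Diffie-Hellman secret, and the session key
# hashes shared secrets derived from both, so compromising either secret
# alone does not reveal the key. Toy group parameters, chosen for readability.
import hashlib

P = 2**127 - 1        # toy prime modulus; real schemes use standardized groups
G = 3                 # toy generator

def keypair(secret):
    return secret, pow(G, secret, P)

# Long-term and ephemeral key pairs for Alice and Bob (hypothetical values).
a_lt, A_lt = keypair(123456789)
a_eph, A_eph = keypair(987654321)
b_lt, B_lt = keypair(555555555)
b_eph, B_eph = keypair(111111111)

def session_key(my_lt, my_eph, peer_lt_pub, peer_eph_pub):
    # Combine a long-term/long-term and an ephemeral/ephemeral shared secret.
    s1 = pow(peer_lt_pub, my_lt, P)
    s2 = pow(peer_eph_pub, my_eph, P)
    return hashlib.sha256(f"{s1}|{s2}".encode()).hexdigest()

k_alice = session_key(a_lt, a_eph, B_lt, B_eph)
k_bob = session_key(b_lt, b_eph, A_lt, A_eph)
assert k_alice == k_bob  # both sides derive the same session key
```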


Having flexible notions of the unit (e.g., 26 ones can be thought of as 2.6 tens, 1 ten 16 ones, 260 tenths, etc.) should be a major focus of elementary mathematics education. However, these powerful notions are often relegated to computations where the major emphasis is on "getting the right answer," so that procedural rather than conceptual knowledge becomes the primary focus. This paper reports on 22 high-performing students' reunitising processes, ascertained from individual interviews on tasks requiring unitising, reunitising and regrouping; errors were categorised to depict particular thinking strategies. The results show that, even for high-performing students, regrouping is a cognitively complex task. This paper analyses this complexity and draws inferences for teaching.
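The flexible-unit idea above (26 ones as 2.6 tens or 260 tenths) amounts to a simple change of counting unit; a minimal sketch, with unit names of our own choosing:

```python
# A minimal sketch of reunitising: the same quantity expressed as a count
# of different units (unit names and sizes are our own illustration).

def reunitise(value, from_unit, to_unit):
    """Re-express `value` counted in `from_unit` as a count of `to_unit`."""
    unit_size = {"hundreds": 100, "tens": 10, "ones": 1, "tenths": 0.1}
    return value * unit_size[from_unit] / unit_size[to_unit]

print(reunitise(26, "ones", "tens"))    # 2.6
print(reunitise(26, "ones", "tenths"))  # 260.0
```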


Summary Generalized Procrustes analysis and thin plate splines were employed to create an average 3D shape template of the proximal femur that was warped to the size and shape of a single 2D radiographic image of a subject. Mean absolute depth errors are comparable with previous approaches utilising multiple 2D input projections. Introduction Several approaches have been adopted to derive volumetric density (g cm⁻³) from a conventional 2D representation of areal bone mineral density (BMD, g cm⁻²). Such approaches have generally aimed at deriving an average depth across the areal projection rather than creating a formal 3D shape of the bone. Methods Generalized Procrustes analysis and thin plate splines were employed to create an average 3D shape template of the proximal femur that was subsequently warped to suit the size and shape of a single 2D radiographic image of a subject. CT scans of excised human femora (18 scanned at a pixel resolution of 1.08 mm and 24 at 0.674 mm) were equally split into a training cohort (used to create the 3D shape template) and a test cohort. Results The mean absolute depth errors of 3.4 mm and 1.73 mm, respectively, for the two CT pixel sizes are comparable with previous approaches based upon multiple 2D input projections. Conclusions This technique has the potential to derive volumetric density from BMD and to facilitate 3D finite element analysis for prediction of the mechanical integrity of the proximal femur. It may further be applied to other anatomical bone sites such as the distal radius and lumbar spine.
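The alignment step underlying the template construction can be sketched in 2D with ordinary Procrustes analysis (the study uses the generalized, multi-shape 3D version plus thin plate splines; the landmarks below are hypothetical):

```python
# Ordinary Procrustes alignment of 2D landmark sets: find the similarity
# transform (rotation, isotropic scale, translation) of Y best matching X.
import numpy as np

def procrustes_align(X, Y):
    """Rotate/scale/translate Y to best match X (both n-by-2 landmark arrays)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc.T @ Yc)
    R = U @ Vt                              # optimal rotation
    scale = s.sum() / (Yc ** 2).sum()       # optimal isotropic scale
    return scale * Yc @ R.T + X.mean(axis=0)

# Hypothetical landmarks: a unit square, and a rotated/scaled/shifted copy.
X = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
Y = 2.0 * X @ np.array([[0.0, -1.0], [1.0, 0.0]]) + 5.0
aligned = procrustes_align(X, Y)
print(np.allclose(aligned, X))  # True: the similarity transform is recovered
```

Generalized Procrustes analysis iterates this alignment over many shapes against their evolving mean to produce the average template.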


Motor vehicles are major emitters of gaseous and particulate pollution in urban areas, and exposure to particulate pollution can have serious health effects, ranging from respiratory and cardiovascular disease to mortality. Motor vehicle tailpipe particle emissions span a broad size range, from 0.003 to 10 µm, and are measured as different subsets of particle mass concentration or particle number count. However, no comprehensive inventories covering this wide size range currently exist in the international published literature. This paper presents the first published comprehensive inventory of motor vehicle tailpipe particle emissions covering the full size range of particles emitted. The inventory was developed for urban South-East Queensland by combining techniques from two distinctly different disciplines: aerosol science and transport modelling. A comprehensive set of particle emission factors was combined with traffic modelling, and tailpipe particle emissions were quantified for particle number (ultrafine particles), PM1, PM2.5 and PM10 for light duty vehicles, heavy duty vehicles and buses. A second aim of the paper was to use the data derived in this inventory for scenario analyses, to model the particle emission implications of different proportions of passengers travelling in light duty vehicles and buses in the study region, and to derive an estimate of fleet particle emissions in 2026. It was found that heavy duty vehicles (HDVs) in the study region were major emitters of particulate matter pollution: although they contributed only around 6% of total regional vehicle kilometres travelled, they contributed more than 50% of the region's particle number (ultrafine particle) and PM1 emissions. With the freight task in the region predicted to double over the next 20 years, this suggests that HDVs need to be a major focus of mitigation efforts. HDVs dominated particle number (ultrafine particle) and PM1 emissions, while light duty vehicles dominated PM2.5 and PM10 emissions.
Buses contributed approximately 1-2% of regional particle emissions.
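The inventory's core calculation, combining per-class emission factors with travel activity, can be sketched as follows; all factors and vehicle-kilometre figures here are hypothetical placeholders, not the study's data:

```python
# Sketch of an emission inventory: fleet emissions = emission factor x
# activity, summed per vehicle class. All numbers are hypothetical.

# Hypothetical emission factors (particles or grams per km, arbitrary units).
emission_factors = {
    "LDV": {"number": 1.0e14, "PM2.5": 0.02},
    "HDV": {"number": 4.0e15, "PM2.5": 0.30},
    "bus": {"number": 2.0e15, "PM2.5": 0.20},
}

# Hypothetical annual vehicle kilometres travelled per class.
vkt = {"LDV": 9.0e9, "HDV": 6.0e8, "bus": 1.0e8}

def inventory(metric):
    """Total fleet emissions of `metric` per class: factor x activity."""
    return {cls: emission_factors[cls][metric] * vkt[cls] for cls in vkt}

totals = inventory("number")
share = {cls: totals[cls] / sum(totals.values()) for cls in totals}
for cls, s in share.items():
    print(f"{cls}: {s:.0%} of particle number emissions")
```

With these placeholder figures the HDV class dominates particle number emissions despite its small share of travel, mirroring the qualitative pattern reported above.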


We assess the increase in particle number emissions from motor vehicles driving at steady speed when forced to stop and accelerate from rest. Considering the example of a signalized pedestrian crossing on a two-way single-lane urban road, we use a complex line source method to calculate the total emissions produced by a specific number and mix of light petrol cars and diesel passenger buses, and show that the total emissions during a red light are significantly higher than during the time when the light remains green. Replacing two cars with one bus increased the emissions by over an order of magnitude. Considering these large differences, we conclude that the importance attached to particle number emissions in traffic management policies should be reassessed in the future.
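A toy per-vehicle comparison in the spirit of the cruise versus stop-start argument above; the emission rates are hypothetical placeholders, not values measured in the study:

```python
# Toy comparison of cruise vs stop-start particle number emissions for one
# vehicle passing a signalized crossing. Rates are hypothetical placeholders.

cruise_rate = 1.0e11   # particles/s at steady speed (hypothetical)
idle_rate = 5.0e10     # particles/s while stopped (hypothetical)
accel_rate = 1.0e12    # particles/s while accelerating from rest (hypothetical)

def green_phase_emissions(seconds):
    """Vehicle passes through at steady speed."""
    return cruise_rate * seconds

def red_phase_emissions(idle_s, accel_s):
    """Vehicle idles at the light, then accelerates from rest."""
    return idle_rate * idle_s + accel_rate * accel_s

green = green_phase_emissions(10)
red = red_phase_emissions(idle_s=30, accel_s=10)
print(red / green)  # stop-start emits several times more in this toy setup
```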


We investigate the behaviour of Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) systems in indoor populated environments that have line-of-sight (LoS) between transmitter and receiver arrays. The in-house built MIMO-OFDM packet transmission demonstrator, equipped with four transmitters and four receivers, was utilized to perform channel measurements at 5.2 GHz. Measurements were performed with 0 to 3 pedestrians and different antenna arrays (2×2, 3×3 and 4×4). The maximum average capacity for the 2×2 deterministic fixed-SNR scenario is 8.5 dB, compared with a maximum average capacity of 16.2 dB for the 4×4 deterministic scenario; thus an increment of approximately 8 dB in average capacity was measured when the array size increased from 2×2 to 4×4. In addition, a more regular variation was observed for random scenarios compared with the deterministic scenarios. An increasing trend in average channel capacity for both deterministic and random pedestrian movements was observed with increasing numbers of pedestrians and antennas. In deterministic scenarios, the variations in average channel capacity are more noticeable than in the random scenarios, due to a more prolonged and controlled body-shadowing effect. Moreover, due to frequent LoS blocking and fixed transmission power, a slight decrease was observed in the spread between the maximum and minimum capacity in the random fixed Tx power scenario.
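Average-capacity figures like those above derive from the standard MIMO channel capacity formula C = log2 det(I + (SNR/Nt)·HHᴴ); a minimal sketch using a random i.i.d. Rayleigh channel rather than the demonstrator's measured channels:

```python
# MIMO capacity for an Nr x Nt channel matrix H at a given linear SNR,
# with equal power allocation across transmit antennas. The channel here
# is a synthetic random draw, not measurement data from the demonstrator.
import numpy as np

def mimo_capacity_bps_hz(H, snr_linear):
    """Capacity (bit/s/Hz) of channel H: log2 det(I + (SNR/Nt) H H^H)."""
    nr, nt = H.shape
    M = np.eye(nr) + (snr_linear / nt) * H @ H.conj().T
    sign, logdet = np.linalg.slogdet(M)   # numerically stable log-determinant
    return logdet / np.log(2)

rng = np.random.default_rng(0)
for n in (2, 3, 4):
    # i.i.d. Rayleigh channel with unit average power per entry.
    H = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    print(f"{n}x{n}: {mimo_capacity_bps_hz(H, snr_linear=100):.1f} bit/s/Hz")
```

Capacity grows with array size, matching the 2×2-to-4×4 trend reported in the abstract.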


In recent times, complaining about Generation Y and its perceived lack of work ethic has become standard dinner-party conversation amongst Baby Boomers. Discussions in the popular press (Salt, 2008) and amongst some social commentators (Levy, Carroll, Francoeur, & Logue, 2003) indicate that the group labelled Gen Y has distinct and different generational characteristics. Whether or not the differences are clearly delineated by age is still open to discussion, but in the introduction to "The Generational Mirage? A pilot study into the perceptions of leadership by Generation X and Y", Levy et al. argue that "the calibre of leadership in competing organisations and the way they value new and existing employees will play a substantial role in attracting or discouraging these workers regardless of generational labels". Kunreuther (2002) suggests that the difference between younger workers and their older counterparts may have more to do with situational phenomena and their position in the life cycle than with deeper generational difference. However, this remains an issue for leadership in schools.


Objective: To examine the reliability of work-related activity coding for injury-related hospitalisations in Australia. Method: A random sample of 4373 injury-related hospital separations from 1 July 2002 to 30 June 2004 was obtained from a stratified random sample of 50 hospitals across 4 states in Australia. From this sample, cases were identified as work-related if they contained an ICD-10-AM work-related activity code (U73) allocated by either: (i) the original coder; (ii) an independent auditor, blinded to the original code; or (iii) a research assistant, blinded to both the original and auditor codes, who reviewed narrative text extracted from the medical record. The concordance of activity coding and the number of cases identified as work-related using each method were compared. Results: Of the 4373 cases sampled, 318 were identified as work-related using any of the three methods. The original coder identified 217 and the auditor identified 266 work-related cases (68.2% and 83.6% of the total cases identified, respectively). Around 10% of cases were identified only through the text description review. The original coder and auditor agreed on the assignment of work-relatedness for 68.9% of cases. Conclusions and Implications: The current best estimates of the frequency of hospital admissions for occupational injury underestimate the burden by around 32%. This is a substantial underestimate with major implications for public policy, and it highlights the need for further work on improving the quality and completeness of routine administrative data sources for more complete identification of work-related injuries.
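The identification percentages and the ~32% underestimate follow directly from the counts reported in the abstract; a quick check:

```python
# Re-deriving the percentages reported above from the abstract's counts.
total_identified = 318   # work-related cases found by any of the three methods
by_original_coder = 217
by_auditor = 266

coder_share = by_original_coder / total_identified
auditor_share = by_auditor / total_identified
underestimate = (total_identified - by_original_coder) / total_identified

print(f"original coder: {coder_share:.1%}")               # 68.2%
print(f"auditor: {auditor_share:.1%}")                    # 83.6%
print(f"missed by original coding: {underestimate:.1%}")  # 31.8%, i.e. around 32%
```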


Background: The proportion of older individuals in the driving population is predicted to increase in the next 50 years. This has important implications for driving safety, as abilities which are important for safe driving, such as vision (which accounts for the majority of the sensory input required for driving), processing ability and cognition, have been shown to decline with age. The current methods employed for screening older drivers upon re-licensure are also vision based. This study, which investigated social, behavioural and professional aspects involved with older drivers, aimed to determine: (i) whether the current visual standards in place for testing upon re-licensure are effective in reducing the older driver fatality rate in Australia; (ii) whether the recommended visual standards are actually implemented as part of the testing procedures by Australian optometrists; and (iii) whether there are other, non-standardised tests which may be better at predicting on-road incident risk (including near misses and minor incidents) in older drivers than those tests recommended in the standards. Methods: For the first phase of the study, state-based age- and gender-stratified numbers of older driver fatalities for 2000-2003 were obtained from the Australian Transport Safety Bureau database. Poisson regression analyses of fatality rates were considered by renewal frequency and jurisdiction (as separate models), adjusting for the possible confounding variables of age, gender and year. For the second phase, all practising optometrists in Australia were surveyed on the vision tests they conduct in consultations relating to driving and their knowledge of vision requirements for older drivers. Finally, for the third phase of the study, to investigate determinants of on-road incident risk, a stratified random sample of 600 Brisbane residents aged 60 years and older were selected and invited to participate using an introductory letter explaining the project requirements.
In order to capture the number and type of road incidents occurring for each participant over 12 months (including near misses and minor incidents), an important component of the prospective research study was the development and validation of a driving diary. The diary was a tool in which incidents could be logged at (or very close to) the time at which they occurred; thus, in comparison with relying on participant memory over time, recall bias of incident occurrence was minimised. Associations between all visual tests, cognition and scores obtained for non-standard functional tests and retrospective and prospective incident occurrence were investigated. Results: In the first phase, drivers aged 60-69 years had a 25% lower fatality risk (Rate Ratio [RR] = 0.75, 95% CI 0.32-1.77) in states with vision testing upon re-licensure compared with states without; however, because the CIs are wide, crossing 1.00, this result should be regarded with caution. Overall fatality rates, and fatality rates for those aged 70 years and older (RR = 1.17, CI 0.64-2.13), did not differ between states with and without licence renewal procedures, indicating no apparent benefit of vision testing legislation. For the second phase of the study, nearly all optometrists measured visual acuity (VA) as part of a vision assessment for re-licensing; however, 20% of optometrists did not perform any visual field (VF) testing and only 20% routinely performed automated VF testing on older drivers, despite the licensing standards advocating automated VF as part of the vision standard. This demonstrates the need for more effective communication between the policy makers and those responsible for carrying out the standards. It may also indicate that the overall higher driver fatality rate in jurisdictions with vision testing requirements arises because the tests recommended by the standards are only partially being conducted by optometrists.
Hence a standardised protocol for the screening of older drivers for re-licensure across the nation must be established. The opinions of Australian optometrists regarding the responsibility of reporting older drivers who fail to meet the licensing standards highlighted the conflict between maintaining patient confidentiality and upholding public safety. Mandatory reporting requirements for those drivers who fail to reach the standards necessary for driving would minimise potential conflict between the patient and their practitioner, and help maintain patient trust and goodwill. The final phase of the PhD program investigated the efficacy of vision, functional and cognitive tests to discriminate between at-risk and safe older drivers. Nearly 80% of the participants experienced an incident of some form over the prospective 12 months, with the total incident rate being 4.65 per 10,000 km. Sixty-three percent reported having a near miss and 28% had a minor incident. The results from the prospective diary study indicate that the current vision screening tests (VA and VF) used for re-licensure do not accurately predict which older drivers are at increased odds of having an on-road incident. However, the variation in visual measurements within the cohort was narrow, which also affected the results seen with the visual function questionnaires; hence a larger cohort with greater variability should be considered for a future study. A slightly lower cognitive level (as measured with the Mini-Mental State Examination [MMSE]) showed an association with incident involvement, as did slower reaction time (RT); however, the Useful Field of View (UFOV) provided the most compelling results of the study. Cut-off values of UFOV processing (>23.3 ms), divided attention (>113 ms), selective attention (>258 ms) and overall score (moderate/high/very high risk) were effective in determining which older drivers were at increased odds of having any on-road incident, and of minor incidents in particular.
Discussion: The results show that for the 60-69 year age group there is a potential benefit in testing vision upon licence renewal. However, overall fatality rates and fatality rates for those aged 70 years and older indicated no benefit of vision testing legislation, suggesting a need to include screening tests which better predict on-road incidents. Although VA is routinely measured by Australian optometrists when older drivers renew their licences, VF is not. There is therefore a need to develop and administer a protocol establishing standardised methods for the screening of older drivers upon re-licensure throughout the nation. Communication between the community, policy makers and those conducting the protocol should be maximised. By implementing a standardised screening protocol which incorporates a level of mandatory reporting by the practitioner, the ethical dilemma of breaching patient confidentiality would also be resolved. The tests included in this screening protocol, however, cannot solely be ones which have been implemented in the past. In this investigation, RT, MMSE and UFOV were shown to be better determinants of on-road incidents in older drivers than VA and VF; however, as previously mentioned, there was a lack of variability in visual status within the cohort. Nevertheless, this investigation recommends that, subject to appropriate sensitivity and specificity being demonstrated in future using a cohort with wider variation in vision, functional performance and cognition, these tests of cognition and information processing be added to the current protocol for the screening of older drivers, which could be conducted at licensing centres across the nation.

Neurodegenerative disorders are heterogeneous in nature and include a range of ataxias with oculomotor apraxia, which are characterised by a wide variety of neurological and ophthalmological features. This family includes recessive and dominant disorders. A subfamily of autosomal recessive cerebellar ataxias is characterised by defects in the cellular response to DNA damage. These include the well characterised disorders Ataxia-Telangiectasia (A-T) and Ataxia-Telangiectasia-Like Disorder (A-TLD), as well as the recently identified diseases Spinocerebellar Ataxia with Axonal Neuropathy Type 1 (SCAN1), Ataxia with Oculomotor Apraxia Type 2 (AOA2) and the subject of this thesis, Ataxia with Oculomotor Apraxia Type 1 (AOA1). AOA1 is caused by mutations in the APTX gene, located at chromosomal locus 9p13, which codes for the 342 amino acid protein Aprataxin. Mutations in APTX destabilise Aprataxin; AOA1 is therefore the result of Aprataxin deficiency. Aprataxin has three functional domains: an N-terminal Forkhead-Associated (FHA) phosphoprotein interaction domain, a central Histidine Triad (HIT) nucleotide hydrolase domain and a C-terminal C2H2 zinc finger. Aprataxin's FHA domain has homology to the FHA domain of the DNA repair protein 5’ polynucleotide kinase 3’ phosphatase (PNKP). PNKP interacts with a range of DNA repair proteins via its FHA domain and plays a critical role in processing damaged DNA termini. The presence of this domain together with a nucleotide hydrolase domain and a DNA binding motif suggested that Aprataxin may be involved in DNA repair and that AOA1 may be caused by a DNA repair deficit. This was substantiated by the interaction of Aprataxin with proteins involved in the repair of both single- and double-strand DNA breaks (X-Ray Cross-Complementing 1 [XRCC1], XRCC4 and Poly-ADP Ribose Polymerase-1 [PARP-1]), and by the hypersensitivity of AOA1 patient cell lines to single- and double-strand break inducing agents.
At the commencement of this study little was known about the in vitro and in vivo properties of Aprataxin. Initially this study focused on the generation of recombinant Aprataxin proteins to facilitate examination of the in vitro properties of Aprataxin. Using recombinant Aprataxin proteins I found that Aprataxin binds to double-stranded DNA; consistent with a role for Aprataxin as a DNA repair enzyme, this binding is not sequence specific. I also report that the HIT domain of Aprataxin hydrolyses adenosine derivatives and, interestingly, that this activity is competitively inhibited by DNA, providing initial evidence that DNA binds to the HIT domain of Aprataxin. The interaction of DNA with the nucleotide hydrolase domain of Aprataxin suggested that Aprataxin may be a DNA-processing factor. Following these studies, Aprataxin was found to hydrolyse 5’-adenylated DNA, which can be generated by unscheduled ligation at DNA breaks with non-standard termini. I found that cell extracts from AOA1 patients do not have DNA-adenylate hydrolase activity, indicating that Aprataxin is the only DNA-adenylate hydrolase in mammalian cells. I further characterised this activity by examining the contribution of the zinc finger and FHA domains to DNA-adenylate hydrolysis by the HIT domain. Deletion of the zinc finger ablated the activity of the HIT domain against adenylated DNA, indicating that the zinc finger may be required for the formation of a stable enzyme-substrate complex. Deletion of the FHA domain stimulated DNA-adenylate hydrolysis, indicating that the activity of the HIT domain may be regulated by the FHA domain. Given that the FHA domain is involved in protein-protein interactions, I propose that the activity of Aprataxin's HIT domain may be regulated by proteins which interact with its FHA domain.
We examined this possibility by measuring the DNA-adenylate hydrolase activity of extracts from cells deficient in the Aprataxin-interacting DNA repair proteins XRCC1 and PARP-1. XRCC1 deficiency did not affect Aprataxin activity, but I found that Aprataxin is destabilised in the absence of PARP-1, resulting in a deficiency of DNA-adenylate hydrolase activity in PARP-1 knockout cells. This implies a critical role for PARP-1 in the stabilisation of Aprataxin. Conversely, I found that PARP-1 is destabilised in the absence of Aprataxin. PARP-1 is a central player in a number of DNA repair mechanisms, which implies that AOA1 cells not only lack Aprataxin but may also have defects in PARP-1-dependent cellular functions. Based on this, I identified a defect in a PARP-1-dependent DNA repair mechanism in AOA1 cells. Additionally, I identified elevated levels of oxidised DNA in AOA1 cells, indicative of a defect in Base Excision Repair (BER). I attribute this to the reduced level of the BER protein Apurinic Endonuclease 1 (APE1) that I identified in Aprataxin-deficient cells. This study has identified and characterised multiple DNA repair defects in AOA1 cells, indicating that Aprataxin deficiency has far-reaching cellular consequences. Consistent with the literature, I show that Aprataxin is a nuclear protein with nucleoplasmic and nucleolar distribution. Previous studies have shown that Aprataxin interacts with the nucleolar rRNA processing factor nucleolin and that AOA1 cells appear to have a mild defect in rRNA synthesis. Given the nucleolar localisation of Aprataxin, I examined its protein-protein interactions and found that Aprataxin interacts with a number of rRNA transcription and processing factors. On this basis I propose that Aprataxin may have an alternative role in the nucleolus.
I therefore examined the transcriptional activity of Aprataxin-deficient cells using nucleotide analogue incorporation. AOA1 cells do not display a defect in basal levels of RNA synthesis; however, they display defective transcriptional responses to DNA damage. In summary, this thesis demonstrates that Aprataxin is a DNA repair enzyme responsible for the repair of adenylated DNA termini and that it is required for the stabilisation of at least two other DNA repair proteins. Thus AOA1 cells not only lack Aprataxin protein and activity, they also have deficiencies in PARP-1- and APE1-dependent DNA repair mechanisms. I additionally demonstrate DNA-damage-inducible transcriptional defects in AOA1 cells, indicating that Aprataxin deficiency confers a broad range of cellular defects and highlighting the complexity of the cellular response to DNA damage. My detailed characterisation of the cellular consequences of Aprataxin deficiency provides an important contribution to our understanding of interlinking DNA repair processes.

INTRODUCTION In their target article, Yuri Hanin and Muza Hanina outlined a novel multidisciplinary approach to performance optimisation for sport psychologists called the Identification-Control-Correction (ICC) programme. According to the authors, this empirically verified, psycho-pedagogical strategy is designed to improve the quality of coaching and the consistency of performance in highly skilled athletes, and involves a number of steps: (i) identifying and increasing self-awareness of ‘optimal’ and ‘non-optimal’ movement patterns for individual athletes; (ii) learning to deliberately control the process of task execution; and (iii) correcting habitual and random errors and managing radical changes of movement patterns. Although no specific examples were provided, the ICC programme has apparently been successful in enhancing the performance of Olympic-level athletes. In this commentary, we address what we consider to be some important issues arising from the target article. We focus specifically on the contentious topic of optimisation in neurobiological movement systems, the role of constraints in shaping emergent movement patterns, and the functional role of movement variability in producing stable performance outcomes. In our view, the target article and, indeed, the proposed ICC programme would benefit from a dynamical systems theoretical backdrop rather than the cognitive scientific approach that appears to be advocated. Although Hanin and Hanina made reference to, and attempted to integrate, constructs typically associated with dynamical systems accounts of motor control and learning (e.g., Bernstein’s problem, movement variability), these ideas require more detailed elaboration, which we provide in this commentary.