907 results for SENSORLESS DRIVES
Abstract:
Muscle physiologists often describe fatigue simply as a decline of muscle force and infer that this causes an athlete to slow down. In contrast, exercise scientists describe fatigue during sport competition more holistically as an exercise-induced impairment of performance. The aim of this review is to reconcile the different views by evaluating the many performance symptoms/measures and mechanisms of fatigue. We describe how fatigue is assessed with muscle, exercise or competition performance measures. Muscle performance (single muscle test measures) declines due to peripheral fatigue (reduced muscle cell force) and/or central fatigue (reduced motor drive from the CNS). Peak muscle force seldom falls by more than 30% during sport, but the decline is often greater during electrical stimulation and laboratory exercise tasks. Exercise performance (whole-body exercise test measures) reveals impaired physical/technical abilities and subjective fatigue sensations. Exercise intensity is initially sustained by recruitment of new motor units and help from synergistic muscles before it declines. Technique/motor skill execution deviates as exercise proceeds to maintain outcomes before they deteriorate, e.g. reduced accuracy or velocity. The sensation of fatigue incorporates an elevated rating of perceived exertion (RPE) during submaximal tasks, due to a combination of peripheral and higher CNS inputs. Competition performance (sport symptoms) is affected more by decision-making and psychological aspects, since there are opponents and greater importance is placed on the result. Laboratory-based decision-making is generally faster or unimpaired. Motivation, self-efficacy and anxiety can change during exercise to modify RPE and, hence, alter physical performance. Symptoms of fatigue during racing, team-game or racquet sports are largely anecdotal, but sometimes assessed with time-motion analysis.
Fatigue during brief all-out racing is described biomechanically as a decline of peak velocity, along with altered kinematic components. Longer sport events involve pacing strategies, central and peripheral fatigue contributions and elevated RPE. During match play, the work rate can decline late in a match (or tournament) and/or transiently after intense exercise bursts. Repeated-sprint ability, agility and leg strength become slightly impaired. Technique outcomes, such as velocity and accuracy for throwing, passing, hitting and kicking, can deteriorate. Physical and subjective changes are both less severe in real than in simulated sport activities. Little objective evidence exists to support exercise-induced mental lapses during sport. A model depicting mind-body interactions during sport competition shows that the RPE centre-motor cortex-working muscle sequence drives overall performance levels and, hence, fatigue symptoms. The sporting outputs from this sequence can be modulated by interactions with muscle afferent and circulatory feedback, and with psychological and decision-making inputs. Importantly, compensatory processes exist at many levels to protect against performance decrements. Small changes of putative fatigue factors can also be protective. We show that individual fatigue factors, including diminished carbohydrate availability, elevated serotonin, hypoxia, acidosis, hyperkalaemia, hyperthermia, dehydration and reactive oxygen species, each contribute to several fatigue symptoms. Thus, multiple symptoms of fatigue can occur simultaneously, and the underlying mechanisms overlap and interact. Based on this understanding, we reinforce the proposal that fatigue is best described globally as an exercise-induced decline of performance, as this is inclusive of all viewpoints.
Abstract:
Statement: Jams, Jelly Beans and the Fruits of Passion Let us search, instead, for an epistemology of practice implicit in the artistic, intuitive processes which some practitioners do bring to situations of uncertainty, instability, uniqueness, and value conflict. (Schön 1983, p40) Game On was born out of the idea of creative community; finding, networking, supporting and inspiring the people behind the face of an industry, those in the midst of the machine and those intending to join. We understood this moment to be a pivotal opportunity to nurture a new emerging form of game making, in an era of change, where the old industry models were proving to be unsustainable. As soon as we started putting people into a room under pressure, to make something in 48hrs, a whole pile of evolutionary creative responses emerged. People refashioned their craft in a moment of intense creativity that demanded different ways of working, an adaptive approach to the craft of making games – small – fast – indie. An event like the 48hrs forces participants’ attention onto the process as much as the outcome. As one game industry professional taking part in a challenge for the first time observed: there are three paths in the genesis from idea to finished work: the path that focuses on mechanics, the path that focuses on team structure and roles, and the path that focuses on the idea, the spirit – and the more successful teams put the spirit of the work first and foremost. The spirit drives the adaptation, it becomes improvisation. As Schön says: “Improvisation consists in varying, combining and recombining a set of figures within the schema which bounds and gives coherence to the performance.” (1983, p55). This improvisational approach is all about those making the games: the people and the principles of their creative process.
This documentation evidences the intensity of their passion, determination and the shit that they are prepared to put themselves through to achieve their goal – to win a cup full of jellybeans and make a working game in 48hrs. 48hrs is a project where, on all levels, analogue meets digital. This concept was further explored through the documentation process. All of these pictures were taken with a 1945 Leica III camera. The use of this classic, film-based camera gives the images a granularity and depth; this older, slower technology exposes the very human moments of digital creativity. ____________________________ Schön, D. A. 1983, The Reflective Practitioner: How Professionals Think in Action, Basic Books, New York
Abstract:
Computer resource allocation represents a significant challenge, particularly for multiprocessor systems, which consist of shared computing resources to be allocated among co-runner processes and threads. While efficient resource allocation results in a highly efficient and stable overall multiprocessor system and good individual thread performance, poor resource allocation causes significant performance bottlenecks even in systems with abundant computing resources. This thesis proposes a cache-aware adaptive closed-loop scheduling framework as an efficient resource allocation strategy for this highly dynamic resource management problem, which requires instant estimation of highly uncertain and unpredictable resource patterns. Many different approaches to this highly dynamic resource allocation problem have been developed, but neither the dynamic nature nor the time-varying and uncertain characteristics of the resource allocation problem are well considered. These approaches employ either static or dynamic optimization methods, or advanced scheduling algorithms such as the Proportional Fair (PFair) scheduling algorithm. Some of these approaches, which consider the dynamic nature of multiprocessor systems, apply only a basic closed-loop system; hence, they fail to take the time-varying and uncertain nature of the system into account. Therefore, further research into multiprocessor resource allocation is required. Our closed-loop cache-aware adaptive scheduling framework takes resource availability and resource usage patterns into account by measuring time-varying factors such as cache miss counts, stalls and instruction counts. More specifically, the cache usage pattern of a thread is identified using the QR recursive least squares (RLS) algorithm and cache miss count time-series statistics. For the identified cache resource dynamics, our closed-loop cache-aware adaptive scheduling framework enforces instruction fairness for the threads.
Fairness in the context of our research project is defined as resource allocation equity, which reduces co-runner thread dependence in a shared resource environment. In this way, instruction count degradation due to shared cache resource conflicts is overcome. In this respect, our closed-loop cache-aware adaptive scheduling framework contributes to the research field in two major and three minor aspects. The two major contributions lead to the cache-aware scheduling system. The first major contribution is the development of the execution fairness algorithm, which reduces the co-runner cache impact on thread performance. The second major contribution is the development of the relevant mathematical models, such as the thread execution pattern and cache access pattern models, which formulate the execution fairness algorithm in terms of mathematical quantities. Following the development of the cache-aware scheduling system, our adaptive self-tuning control framework is constructed to add an adaptive closed-loop aspect to the cache-aware scheduling system. This control framework consists of two main components: the parameter estimator and the controller design module. The first minor contribution is the development of the parameter estimators; the QR recursive least squares (RLS) algorithm is applied in our closed-loop cache-aware adaptive scheduling framework to estimate the highly uncertain and time-varying cache resource patterns of threads. The second minor contribution is the design of the controller design module; an algebraic controller design algorithm, pole placement, is utilized to design the relevant controller, which is able to provide the desired time-varying control action. The adaptive self-tuning control framework and the cache-aware scheduling system together constitute our final framework, the closed-loop cache-aware adaptive scheduling framework.
The third minor contribution is the validation of this cache-aware adaptive closed-loop scheduling framework's efficiency in overcoming co-runner cache dependency. Time-series statistical counters are developed for the M-Sim multi-core simulator, and the theoretical findings and mathematical formulations are implemented as MATLAB m-file code. In this way, the overall framework is tested and the experimental outcomes are analyzed. According to our experimental outcomes, we conclude that our closed-loop cache-aware adaptive scheduling framework successfully drives the co-runner-cache-dependent thread instruction count to the co-runner-independent instruction count, with an error margin of up to 25% when the cache is highly utilized. In addition, the thread cache access pattern is estimated with 75% accuracy.
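The estimation step at the heart of this framework can be illustrated with a plain recursive least squares update with a forgetting factor. This is only a minimal sketch: the thesis uses a QR-factorised RLS variant, and the AR(2) model, forgetting factor and synthetic data below are illustrative assumptions, not parameters or measurements from the thesis.

```python
import numpy as np

def rls_step(theta, P, x, y, lam=0.99):
    """One recursive least squares update with forgetting factor lam.
    theta: current parameter estimate, P: inverse covariance matrix,
    x: regressor vector (here, recent cache-miss counts), y: new observation."""
    Px = P @ x
    k = Px / (lam + x @ Px)               # gain vector
    theta = theta + k * (y - x @ theta)   # correct estimate by prediction error
    P = (P - np.outer(k, Px)) / lam       # update inverse covariance
    return theta, P

# Synthetic AR(2) "cache-miss" series (illustrative, not simulator output):
# y[t] = 0.6*y[t-1] + 0.3*y[t-2] + noise
rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 1] + 0.3 * y[t - 2] + 0.05 * rng.standard_normal()

theta, P = np.zeros(2), np.eye(2) * 100.0
for t in range(2, 500):
    x = np.array([y[t - 1], y[t - 2]])
    theta, P = rls_step(theta, P, x, y[t])

print(np.round(theta, 2))   # estimates should approach [0.6, 0.3]
```

The forgetting factor below 1 discounts old samples, which is what lets the estimator track the time-varying cache patterns the thesis describes.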
Abstract:
This paper introduces our research on influencing the experience of people in urban public places through mobile-mediated interactions. Information and communication technology (ICT) devices are sometimes used to create personal space while in public. ICT devices could also be utilised to digitally augment the urban space with non-privacy-sensitive data, enabling mobile-mediated interactions in an anonymous way between collocated strangers. We present what motivates research on digital augmentations and mobile-mediated interactions between unknown urban dwellers, define the research problem that drives this study, and explain why it is significant research in the field of pervasive social networking. The paper illustrates three design interventions enabling social pervasive content sharing and employing pervasive presence, awareness and anonymous social user interaction in urban public places. The paper concludes with an outlook and summarises the research effort.
Abstract:
Proteases regulate a spectrum of diverse physiological processes, and dysregulation of proteolytic activity drives a plethora of pathological conditions. Understanding protease function is essential to appreciating many aspects of normal physiology and progression of disease. Consequently, development of potent and specific inhibitors of proteolytic enzymes is vital to provide tools for the dissection of protease function in biological systems and for the treatment of diseases linked to aberrant proteolytic activity. The studies in this thesis describe the rational design of potent inhibitors of three proteases that are implicated in disease development. Additionally, key features of the interaction of proteases and their cognate inhibitors or substrates are analysed and a series of rational inhibitor design principles are expounded and tested. Rational design of protease inhibitors relies on a comprehensive understanding of protease structure and biochemistry. Analysis of known protease cleavage sites in proteins and peptides is a commonly used source of such information. However, model peptide substrate and protein sequences have widely differing levels of backbone constraint and hence can adopt highly divergent structures when binding to a protease’s active site. This may result in identical sequences in peptides and proteins having different conformations and diverse spatial distribution of amino acid functionalities. Regardless of this, protein and peptide cleavage sites are often regarded as being equivalent. One of the key findings in the following studies is a definitive demonstration of the lack of equivalence between these two classes of substrate and invalidation of the common practice of using the sequences of model peptide substrates to predict cleavage of proteins in vivo. Another important feature for protease substrate recognition is subsite cooperativity. 
This type of cooperativity is commonly referred to as protease or substrate binding subsite cooperativity and is distinct from allosteric cooperativity, where binding of a molecule distant from the protease active site affects the binding affinity of a substrate. Subsite cooperativity may be intramolecular where neighbouring residues in substrates are interacting, affecting the scissile bond’s susceptibility to protease cleavage. Subsite cooperativity can also be intermolecular where a particular residue’s contribution to binding affinity changes depending on the identity of neighbouring amino acids. Although numerous studies have identified subsite cooperativity effects, these findings are frequently ignored in investigations probing subsite selectivity by screening against diverse combinatorial libraries of peptides (positional scanning synthetic combinatorial library; PS-SCL). This strategy for determining cleavage specificity relies on the averaged rates of hydrolysis for an uncharacterised ensemble of peptide sequences, as opposed to the defined rate of hydrolysis of a known specific substrate. Further, since PS-SCL screens probe the preference of the various protease subsites independently, this method is inherently unable to detect subsite cooperativity. However, mean hydrolysis rates from PS-SCL screens are often interpreted as being comparable to those produced by single peptide cleavages. Before this study no large systematic evaluation had been made to determine the level of correlation between protease selectivity as predicted by screening against a library of combinatorial peptides and cleavage of individual peptides. This subject is specifically explored in the studies described here. In order to establish whether PS-SCL screens could accurately determine the substrate preferences of proteases, a systematic comparison of data from PS-SCLs with libraries containing individually synthesised peptides (sparse matrix library; SML) was carried out. 
These SML libraries were designed to include all possible sequence combinations of the residues that were suggested to be preferred by a protease using the PS-SCL method. SML screening against the three serine proteases kallikrein 4 (KLK4), kallikrein 14 (KLK14) and plasmin revealed highly preferred peptide substrates that could not have been deduced by PS-SCL screening alone. Comparing protease subsite preference profiles from screens of the two types of peptide libraries showed that the most preferred substrates were not detected by PS-SCL screening as a consequence of intermolecular cooperativity being negated by the very nature of PS-SCL screening. Sequences that are highly favoured as a result of intermolecular cooperativity achieve optimal protease subsite occupancy, and thereby interact with very specific determinants of the protease. Identifying these substrate sequences is important since they may be used to produce potent and selective inhibitors of proteolytic enzymes. This study found that highly favoured substrate sequences that relied on intermolecular cooperativity allowed for the production of potent inhibitors of KLK4, KLK14 and plasmin. Peptide aldehydes based on preferred plasmin sequences produced high-affinity transition state analogue inhibitors for this protease. The most potent of these maintained specificity over plasma kallikrein (known to have a very similar substrate preference to plasmin). Furthermore, the efficiency of this inhibitor in blocking fibrinolysis in vitro was comparable to aprotinin, which previously saw clinical use to reduce perioperative bleeding. One substrate sequence particularly favoured by KLK4 was substituted into the 14 amino acid, circular sunflower trypsin inhibitor (SFTI). This resulted in a highly potent and selective inhibitor (SFTI-FCQR) which attenuated protease activated receptor signalling by KLK4 in vitro.
Moreover, SFTI-FCQR and paclitaxel synergistically reduced growth of ovarian cancer cells in vitro, making this inhibitor a lead compound for further therapeutic development. Similar incorporation of a preferred KLK14 amino acid sequence into the SFTI scaffold produced a potent inhibitor for this protease. However, the conformationally constrained SFTI backbone enforced a different intramolecular cooperativity, which masked a KLK14-specific determinant. As a consequence, the level of selectivity achievable was lower than that found for the KLK4 inhibitor. Standard mechanism inhibitors such as SFTI rely on a stable acyl-enzyme intermediate for high-affinity binding. This is achieved by a conformationally constrained canonical binding loop that allows for reformation of the scissile peptide bond after cleavage. Amino acid substitutions within the inhibitor to target a particular protease may compromise structural determinants that support the rigidity of the binding loop and thereby prevent the engineered inhibitor reaching its full potential. An in silico analysis was carried out to examine the potential for further improvements to the potency and selectivity of the SFTI-based KLK4 and KLK14 inhibitors. Molecular dynamics simulations suggested that the substitutions within SFTI required to target KLK4 and KLK14 had compromised the intramolecular hydrogen bond network of the inhibitor and caused a concomitant loss of binding loop stability. Furthermore, in silico amino acid substitution revealed a consistent correlation between a higher frequency of formation and number of internal hydrogen bonds in SFTI variants and lower inhibition constants. These predictions allowed for the production of second-generation inhibitors with enhanced binding affinity toward both targets and highlight the importance of considering intramolecular cooperativity effects when engineering proteins or circular peptides to target proteases.
The findings from this study show that although PS-SCLs are a useful tool for high-throughput screening of approximate protease preference, later refinement by SML screening is needed to reveal optimal subsite occupancy due to cooperativity in substrate recognition. This investigation has also demonstrated the importance of maintaining structural determinants of backbone constraint and conformation when engineering standard mechanism inhibitors for new targets. Combined, these results show that backbone conformation and amino acid cooperativity have more prominent roles than previously appreciated in determining substrate/inhibitor specificity and binding affinity. The three key inhibitors designed during this investigation are now being developed as lead compounds for cancer chemotherapy, control of fibrinolysis and cosmeceutical applications. These compounds form the basis of a portfolio of intellectual property which will be further developed in the coming years.
Abstract:
The report card for the introductory programming unit at our university has historically been unremarkable in terms of attendance rates, student success rates and student retention in both the unit and the degree course. After a recent course restructure involving a fresh approach to introducing programming, we reported high retention in the unit, with consistently high attendance and a very low failure rate. Following those encouraging results, we collected student attendance data for several semesters and compared attendance rates to student results. We have found that interesting workshop material that directly relates to course-relevant assessment items, and therefore drives the learning, delivered in an engaging collaborative learning environment, has improved attendance to an extraordinary extent, with student failure rates plummeting to the lowest in our university's recorded history.
Abstract:
Recent years have seen a rapid increase in SMEs working collaboratively in inter-organizational projects. But what drives the emergence of such projects, and what types of industries breed them the most? To address these questions, this paper extends the long-running literature on the firm and industry antecedents of new venturing and alliance formation to the domain of project-based organization by SMEs. Based on survey data collected among 1,725 small and medium-sized organizations and longitudinal industry data, we find an overall pattern indicating that IOPV participation is primarily determined by a focal SME’s scope of innovative activities, and by the munificence, dynamism and complexity of its environment. Unexpectedly, these variables have different effects on whether SMEs are likely to engage in IOPVs compared with how many they hold in their portfolio at a time. Implications for theory development are discussed.
Abstract:
This chapter proposes a conceptual model for optimal development of needed capabilities for the contemporary knowledge economy. We commence by outlining key capability requirements of the 21st century knowledge economy, distinguishing these from those suited to the earlier stages of the knowledge economy. We then discuss the extent to which higher education currently caters to these requirements and then put forward a new model for effective knowledge economy capability learning. The core of this model is the development of an adaptive and adaptable career identity, which is created through a reflective process of career self-management, drawing upon data from the self and the world of work. In turn, career identity drives the individual’s process of skill and knowledge acquisition, including deep disciplinary knowledge. The professional capability learning thus acquired includes disciplinary skill and knowledge sets, generic skills, and also skills for the knowledge economy, including disciplinary agility, social network capability, and enterprise skills. In the final part of this chapter, we envision higher education systems that embrace the model, and suggest steps that could be taken toward making the development of knowledge economy capabilities an integral part of the university experience.
Abstract:
This paper investigates the role of cultural factors as a possible partial explanation of the disparity in project management deployment observed between various studied countries. The topic of culture has received increasing attention in the management literature in general during recent decades, and in the project management literature in particular during the last few years. The globalization of business and the growth of worldwide governmental and international organizational collaborations continue to drive this interest in national culture. Based on Hofstede's national culture framework, the study hypothesizes and tests the impact of a country's culture and development on PM deployment. Seventy-four countries are selected to conduct a correlation and regression analysis between Hofstede’s national culture dimensions and the PM deployment indicator used. The results show the effects of various national culture dimensions and a development indicator (GDP per capita) on the project management deployment levels of the countries considered.
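The correlation step of such an analysis can be sketched in a few lines of Python. The country scores and deployment values below are invented for illustration only; they are not the paper's data, and the Hofstede dimension shown (individualism, IDV) is just one of the dimensions the study tests.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Illustrative data: Hofstede individualism (IDV) scores and a hypothetical
# PM-deployment indicator for five countries (values are made up).
idv        = [91, 80, 67, 46, 20]
deployment = [8.1, 7.4, 6.9, 5.2, 3.8]
print(f"r = {pearson(idv, deployment):.2f}")
```

A regression of the deployment indicator on all six dimensions plus GDP per capita would follow the same pattern with a multivariate fit.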
Abstract:
The current paper compares and investigates the discrepancies in motivational drives of project team members with respect to their project environment in collocated and distributed (virtual) project teams. The set of factors, which in this context are called ‘Sense of Ownership’, is used as a scale to measure these discrepancies using one-tailed t-tests. These factors are abstracted from theories of motivation, team performance, and team effectiveness, and are related to ‘Nature of Work’, ‘Rewards’, and ‘Communication’. It has been observed that ‘virtualness’ does not seem to impact the motivational drives of the project team members or the way the project environments provide or support those motivational drives in collocated and distributed projects. At a more specific level, in terms of the motivational drives of the project team (‘WANT’) and the ability of the project environment to provide or support those factors (‘GET’), in collocated project teams significant discrepancies were observed with respect to financial and non-financial rewards, learning opportunities, nature of work and project-specific communication, while in distributed teams significant discrepancies were observed with respect to project-centric communication, followed by financial rewards and nature of work. Further, distributed project environments seem to better support team member motivation than collocated project environments. The study concludes that both the collocated and distributed project environments may not be adequately supporting the motivational drives of their project team members, which may be frustrating to them. However, members working in virtual team environments may be less frustrated than their collocated counterparts, as virtual project environments are better aligned with the motivational drives of their team members vis-à-vis the collocated project environments.
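The WANT/GET comparison described above can be sketched as a paired one-tailed t-test on the difference scores. The Likert-style ratings below are hypothetical illustrations, not the study's data, and the critical value is the standard table value for this sample size.

```python
import math
from statistics import mean, stdev

# Hypothetical ratings for one factor (e.g. 'Rewards') from a single team:
# what members WANT from the environment vs what they GET (illustrative data).
want = [4.5, 4.2, 4.8, 4.0, 4.6, 4.3, 4.7, 4.1]
get  = [3.8, 3.9, 4.0, 3.5, 4.1, 3.7, 3.9, 3.6]

diffs = [w - g for w, g in zip(want, get)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))   # paired t statistic

# One-tailed critical value for df = n - 1 = 7 at alpha = 0.05 is 1.895;
# t above it indicates a significant WANT > GET discrepancy.
print(f"t = {t:.2f}, significant: {t > 1.895}")
```

A one-tailed test is appropriate here because the hypothesis is directional: the environment is suspected to under-supply (not over-supply) what members want.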
Abstract:
The opening phrase of the title is from Charles Darwin’s notebooks (Schweber 1977). It is a double reminder: firstly, that mainstream evolutionary theory is not just about describing nature but is particularly looking for mechanisms or ‘causes’, and secondly, that there will usually be several causes affecting any particular outcome. The second part of the title reflects our concern at the almost universal rejection of the idea that biological mechanisms are sufficient for macroevolutionary changes, thus rejecting a cornerstone of Darwinian evolutionary theory. Our primary aim here is to consider ways of making it easier to develop and to test hypotheses about evolution. Formalizing hypotheses can help generate tests. In an absolute sense, some of the discussion by scientists about evolution is little better than the lack of reasoning used by those advocating intelligent design. Our discussion here is in a Popperian framework, where science is defined as that area of study where it is possible, in principle, to find evidence against hypotheses – they are in principle falsifiable. However, with time, the boundaries of science keep expanding. In the past, some aspects of evolution were outside the current boundaries of falsifiable science, but increasingly new techniques and ideas are expanding the boundaries of science, and it is appropriate to re-examine some topics. It often appears that over the last few decades there has been an increasingly strong assumption to look first (and only) for a physical cause. This decision is virtually never formally discussed; an assumption is simply made that some physical factor ‘drives’ evolution. It is necessary to examine our assumptions much more carefully. What is meant by physical factors ‘driving’ evolution, or by an ‘explosive radiation’? Our discussion focuses on two of the six mass extinctions, the fifth being events in the Late Cretaceous, and the sixth starting at least 50,000 years ago (and ongoing).
Cretaceous/Tertiary boundary; the rise of birds and mammals. We have had a long-term interest (Cooper and Penny 1997) in designing tests to help evaluate whether the processes of microevolution are sufficient to explain macroevolution. The real challenge is to formulate hypotheses in a testable way. For example, the number of lineages of birds and mammals that survive from the Cretaceous to the present provides one test. Our first estimate was 22 for birds, and current work is tending to increase this value. This still does not consider lineages that survived into the Tertiary and then went extinct later. Our initial suggestion was probably too narrow in that it lumped four models from Penny and Phillips (2004) into one model. This reduction is too simplistic, in that we need to know about survival and about ecological and morphological divergences during the Late Cretaceous, and whether crown groups of avian or mammalian orders may have existed back into the Cretaceous. More recently (Penny and Phillips 2004) we have formalized hypotheses about dinosaurs and pterosaurs, with the prediction that interactions between mammals (and ground-feeding birds) and dinosaurs would be most likely to affect the smallest dinosaurs, and similarly that interactions between birds and pterosaurs would particularly affect the smaller pterosaurs. There is now evidence for both classes of interactions, with the smallest dinosaurs and pterosaurs declining first, as predicted. Thus, testable models are now possible. Mass extinction number six: human impacts. On a broad scale, there is a good correlation between the time of human arrival and increased extinctions (Hurles et al. 2003; Martin 2005; Figure 1). However, it is necessary to distinguish different time scales (Penny 2005), and on a finer scale there are still large numbers of possibilities. In Hurles et al.
(2003) we mentioned habitat modification (including the use of fire) and introduced plants and animals (including kiore), in addition to direct predation (the ‘overkill’ hypothesis). We also need to consider the prey switching that occurs in early human societies, as evidenced by the results of Wragg (1995) on the middens of different ages on Henderson Island in the Pitcairn group. In addition, the presence of human-wary or human-adapted animals will affect their distribution in the subfossil record. A better understanding of human impacts world-wide, in conjunction with pre-scientific knowledge, will make it easier to discuss the issues by removing ‘blame’. While spontaneous generation was still universally accepted, there was the expectation that animals would simply continue to reappear. New Zealand is one of the very best locations in the world to study many of these issues: apart from the marine fossil record, some human impact events are extremely recent and the remains are less disrupted by time.
Abstract:
Building Web 2.0 sites does not in itself ensure a site's success. We aim to better understand what makes a site succeed by drawing insight from biologically inspired design patterns. Web 2.0 sites provide a mechanism for human interaction, enabling powerful intercommunication between massive volumes of users. Early Web 2.0 site providers that were previously dominant are being succeeded by newer sites providing innovative social interaction mechanisms. Understanding which site traits contribute to this success drives research into Web site mechanics, using models to describe the associated social networking behaviour. Some of these models attempt to show how the volume of users provides self-organisation and self-contextualisation of content. One model describing such coordinated environments is stigmergy, a term originally describing coordinated insect behaviour. This paper explores how exploiting stigmergy can provide a valuable mechanism for identifying and analysing online user behaviour, specifically when considering that user freedom of choice is restricted by the provided Web site functionality. This will aid us in building better collaborative Web sites and in improving the collaborative processes they support.
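The stigmergic dynamic the abstract alludes to can be illustrated with a toy simulation (entirely our own sketch, not taken from the paper; the function name and parameters such as `evaporation` are illustrative assumptions): users choose content items with probability proportional to the accumulated "trace" left by earlier users, and each choice reinforces that trace, so initially identical items self-organise into a skewed popularity ranking, as in ant-colony pheromone models.

```python
import random

def simulate_stigmergy(n_items=5, n_users=1000, evaporation=0.99, seed=42):
    """Toy stigmergic reinforcement: each simulated user picks an item with
    probability proportional to its accumulated trace, then reinforces it.
    Traces decay (evaporate) each step, as in ant-colony models."""
    rng = random.Random(seed)
    trace = [1.0] * n_items  # all items start equally attractive
    for _ in range(n_users):
        # roulette-wheel selection weighted by current trace
        r = rng.uniform(0, sum(trace))
        cum = 0.0
        for i, t in enumerate(trace):
            cum += t
            if r <= cum:
                trace[i] += 1.0  # positive feedback: the choice reinforces the trace
                break
        trace = [t * evaporation for t in trace]  # evaporation / decay
    return trace

trace = simulate_stigmergy()
# Positive feedback typically concentrates most of the trace on a few items,
# a crude analogue of content "winning" through accumulated user interaction.
```

The evaporation term caps the total trace (here the sum converges toward `evaporation / (1 - evaporation)` ≈ 99), so rankings remain responsive to new behaviour rather than locking in forever, which is one reason stigmergy-style models are attractive for describing evolving Web 2.0 content.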
Abstract:
This report maps the current state of entrepreneurship in Australia using data from the Global Entrepreneurship Monitor (GEM) for the year 2011. Entrepreneurship is regarded as a crucial driver of economic well-being. Entrepreneurial activity in new and established firms drives innovation and creates jobs. Entrepreneurs also fuel competition, thereby contributing indirectly to market and productivity growth and to the improved competitiveness of the national economy. Given the economic landscape left by the global financial crisis (GFC), it is probably more important than ever to understand the effects and drivers of entrepreneurial activity and attitudes in Australia. The central finding of this report is that entrepreneurship is certainly alive and well in Australia. With 10.5 per cent of the adult population involved in setting up a new business or owning a newly founded business, as measured by the total entrepreneurial activity (TEA) rate in 2011, Australia ranks second only to the United States among the innovation-driven (developed) economies. Compared with 2010, the TEA rate has increased by 2.7 percentage points. Furthermore, with regard to the employee entrepreneurial activity (EEA) rate in established firms, Australia ranks above average. According to GEM data, 5 per cent of the adult population is engaged in developing or launching new products, a new business unit or a subsidiary for their employer. Further analysis of the GEM data also clearly shows that Australia compares well with other major economies in terms of the ‘quality’ of the entrepreneurial activities being pursued. Indeed, it is not only the quantity of entrepreneurs but also the level of their aspirations and business goals that are important drivers of economic growth.
On average, for every business started in Australia because the founder lacked any alternative way to generate income, five others are started by founders who specifically want to take advantage of a business opportunity that they believe will increase their personal income or independence. With respect to innovativeness, 31 per cent of Australian new businesses offer products or services which they consider to be new to customers, or where very few, or in some cases no, other businesses offer the same product or service. Both these indicators are higher than the average for innovation-driven economies. Somewhat below average is the international orientation of Australian entrepreneurs, with only 12 per cent aiming to have a substantial share of customers from international markets. So what drives this high quantity and quality of entrepreneurship in Australia? The analysis of the data suggests it is a combination of business opportunities and entrepreneurial skills. Around 50 per cent of the Australian population identify opportunities for a start-up venture and believe that they have the necessary skills to start a business. Furthermore, a large majority of the Australian population report that high media attention for entrepreneurship provides successful role models for prospective entrepreneurs. As a result, 12 per cent of our respondents have expressed the intention to start a business within the next three years. These numbers are all well above average when compared with the other major economies. With regard to gender, the GEM survey shows a high proportion of female entrepreneurs: approximately 8.4 per cent of adult females are involved in setting up a business or have recently done so. Although this female TEA rate is slightly down from 2010, Australia ranks second among the innovation-driven economies.
This paints a healthy picture of access to entrepreneurial opportunities for Australian women.