Abstract:
Nuclear Factor Y (NF-Y) is a trimeric complex that binds to the CCAAT box, a ubiquitous eukaryotic promoter element. The three subunits, NF-YA, NF-YB and NF-YC, are each represented by single genes in yeast and mammals. However, in model plant species (Arabidopsis and rice) multiple genes encode each subunit, providing the impetus for the investigation of the NF-Y transcription factor family in wheat. In this study, a total of 37 NF-Y and Dr1 genes (10 NF-YA, 11 NF-YB, 14 NF-YC and 2 Dr1) in Triticum aestivum were identified in global DNA databases by computational analysis. Each of the wheat NF-Y subunit families could be further divided into 4-5 clades based on their conserved core region sequences. Several conserved motifs outside the NF-Y core regions were also identified by comparison of NF-Y members from wheat, rice and Arabidopsis. Quantitative RT-PCR analysis revealed that some of the wheat NF-Y genes were expressed ubiquitously, while others were expressed in an organ-specific manner. In particular, each TaNF-Y subunit family had members that were expressed predominantly in the endosperm. The expression of nine NF-Y and two Dr1 genes in wheat leaves appeared to be responsive to drought stress. Three of these genes were up-regulated under drought conditions, indicating that these members of the NF-Y and Dr1 families are potentially involved in plant drought adaptation. The combined expression and phylogenetic analyses revealed that members within the same phylogenetic clade generally shared a similar expression profile. Organ-specific expression and differential response to drought indicate plant-specific biological roles for various members of this transcription factor family.
Abstract:
The process of becoming numerate begins in the early years. According to Vygotsky's (1978) theory, teachers are More Knowledgeable Others who provide and support learning experiences that influence children's mathematical learning. This paper reports on research that investigated three early childhood teachers' mathematics content knowledge. An exploratory, single case study utilised data collected from interviews and email correspondence to investigate the teachers' mathematics content knowledge. The data were reviewed according to three analytical strategies: content analysis, pattern matching, and comparative analysis. Findings indicated that there was variation in teachers' content knowledge across the five mathematical strands, and that teachers might not demonstrate the depth of content knowledge expected of four-year trained, early years specialist teachers. A significant factor that appeared to influence these teachers' content knowledge was their teaching experience. Therefore, an avenue for future research is the investigation of factors that influence teachers' numeracy content knowledge.
Abstract:
The term literacy remains highly contested and debates continue about how literacy might best be researched and to what ends. For some, literacy is simply a matter of acquiring the technical competence which enables people to read and write. Literacy research conducted from this point of view does not usually concern itself with the new media but rather focuses on how people learn to code and decode print text. For others, however, literacy is more complex and involves learning a repertoire of practices for communicating and getting things done in particular social and cultural contexts. Literacy research conducted from this sociocultural point of view accepts that the new media are central to the field because in everyday cultural practice people are using the new media to make meaning, to express themselves and to communicate and work with others. Sociocultural approaches to literacy research have already provided rich material which has assisted educators to understand literacy practices in everyday use (e.g. Barton & Hamilton, 1998; Barton, Hamilton, & Ivanic, 2000) including children's appropriation of the media in school-based writing (Dyson, 1997). However, the changing semiotic and cultural practices associated with new media and online participation have less frequently been the object of study...
Abstract:
The magneto-rheological (MR) fluid damper is a semi-active control device that has recently received increased attention from the vibration control community. However, the inherent nonlinear hysteretic character of MR fluid dampers is one of the challenging aspects of utilizing these devices to achieve high system performance, so the development of an accurate model is necessary to take advantage of their unique characteristics. Research by others [3] has shown that a system of nonlinear differential equations can successfully be used to describe the hysteresis behavior of the MR damper. The focus of this paper is to develop an alternative method for modeling the damper in the form of a centre-average fuzzy inference system, where back-propagation learning rules are used to adjust the weights of the network. The inputs for the model are taken from experimental data. The resulting fuzzy inference system represents the behavior of the MR fluid damper satisfactorily, with reduced computational requirements. Use of the neuro-fuzzy model increases the feasibility of real-time simulation.
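To make the modelling idea concrete, here is a minimal sketch of a centre-average fuzzy inference system whose consequent weights are adjusted by back-propagation-style gradient updates, as the abstract describes. The rule count, Gaussian membership functions, learning rate, and synthetic displacement/velocity data are all illustrative assumptions, not the authors' actual model or experimental dataset.

```python
# Minimal sketch of a centre-average fuzzy inference system trained by
# gradient descent. All names, rule counts, and training data here are
# illustrative assumptions, not the authors' model or dataset.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for experimental damper data: inputs are
# displacement and velocity, target is a hysteretic-looking force.
t = np.linspace(0, 4 * np.pi, 400)
X = np.column_stack([np.sin(t), np.cos(t)])    # [displacement, velocity]
y = 1.5 * np.tanh(3 * X[:, 1]) + 0.5 * X[:, 0]  # toy "measured" force

n_rules = 9
centres = rng.uniform(-1, 1, size=(n_rules, 2))  # Gaussian MF centres
sigma = 0.6                                      # shared MF width
weights = rng.normal(0, 0.1, size=n_rules)       # rule consequents

def firing_strengths(x):
    """Gaussian membership of x under each rule (product over inputs)."""
    d2 = ((x - centres) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

def predict(x):
    """Centre-average defuzzification: weighted mean of rule outputs."""
    w = firing_strengths(x)
    return w @ weights / (w.sum() + 1e-12)

# Back-propagation-style updates of the consequent weights.
lr = 0.5
for epoch in range(200):
    for xi, yi in zip(X, y):
        w = firing_strengths(xi)
        wn = w / (w.sum() + 1e-12)   # normalised firing strengths
        err = predict(xi) - yi
        weights -= lr * err * wn     # gradient step on squared error

preds = np.array([predict(xi) for xi in X])
print("RMS error:", np.sqrt(np.mean((preds - y) ** 2)))
```

In a fuller treatment the membership centres and widths would also be tuned by back-propagation; the sketch adjusts only the consequent weights to keep the mechanics visible.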
Abstract:
Bullying and victimisation among school-aged children are recognised as a major public health problem. The Australian Covert Bullying Prevalence Study (ACBPS) reports that just over one quarter (27%) of school students aged 8 to 14 years were bullied, and 9% bullied others, on a frequent basis (every few weeks or more often) (Cross et al., 2009). Bullying is associated with a host of detrimental effects, including loneliness (Nansel, Overpeck, Pilla, & Ruan, 2001), low self-esteem (Jankauskiene, Kardelis, Sukys, & Kardeliene, 2008; Salmivalli, Kaukiainen, Kaistaniemi, & Lagerspetz, 1999), anxiety and depression (Kaltiala-Heino, Rimpela, Rantanen, & Rimpela, 2000), suicide ideation (Kaltiala-Heino, Rimpela, Marttunen, Rimpela, & Rantanen, 1999), impaired academic achievement (Nansel et al., 2001), and poorer physical health (Wolke, Woods, Bloomfield, & Karstadt, 2001).
Abstract:
Stigmergy is a biological term used when discussing insect or swarm behaviour, and describes a model in which communication occurs through the environment rather than directly between agents. This phenomenon is demonstrated in the behaviour of ants, which follow pheromone trails during food gathering, and similarly in the mound-building process of termites. What is interesting about this mechanism is that highly organized societies are achieved without any apparent management structure. Stigmergic behaviour is implicit in the Web, where the volume of users provides self-organization and self-contextualization of content on sites that facilitate collaboration. However, the majority of content is generated by a minority of Web participants. A significant contribution from this research would be to create a model of Web stigmergy, identifying virtual pheromones and their importance in the collaborative process. This paper explores how exploiting stigmergy has the potential to provide a valuable mechanism for identifying and analyzing online user behaviour, recording actionable knowledge otherwise lost in existing web interaction dynamics. Ultimately, this might assist in building better collaborative Web sites.
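As a concrete illustration of the idea, the following is a toy sketch of how "virtual pheromones" on web content might be modelled with a simple deposit-and-evaporate rule, analogous to ant trails. The rates, page names, and event stream are invented for illustration and are not taken from the paper.

```python
# Toy sketch of "virtual pheromone" accumulation on web content, assuming
# a deposit-and-evaporate rule analogous to ant trails. All names, rates,
# and events are illustrative assumptions, not the paper's model.
from collections import defaultdict

EVAPORATION = 0.9   # fraction of pheromone retained per time step
DEPOSIT = 1.0       # pheromone added per user interaction

pheromone = defaultdict(float)

def step(interactions):
    """Evaporate all trails, then deposit pheromone for each interaction."""
    for page in list(pheromone):
        pheromone[page] *= EVAPORATION
    for page in interactions:
        pheromone[page] += DEPOSIT

# Simulated interaction stream: a minority of users generate most activity,
# so their pages accumulate the strongest trails.
events = [["wiki/Home", "wiki/FAQ"], ["wiki/Home"], ["wiki/Home", "wiki/News"]]
for batch in events:
    step(batch)

# Pages ranked by trail strength approximate where collaboration self-organizes.
print(sorted(pheromone.items(), key=lambda kv: -kv[1]))
```

The evaporation term is what makes the signal behave like a pheromone rather than a simple hit counter: stale content fades unless the collective keeps reinforcing it.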
Abstract:
As computer applications become more available, both technically and economically, construction project managers are increasingly able to access advanced computer tools capable of transforming the role that project managers have typically performed. Competence in using these tools requires a dual commitment to training, from the individual and from the firm. Improving the computer skills of project managers can provide construction firms with a competitive advantage, differentiating them from others in an increasingly competitive international market. Yet few published studies have quantified the existing level of competence of construction project managers. Identification of project managers' existing computer application skills is a necessary first step in developing more directed training to better capture the benefits of computer applications. This paper discusses the yet-to-be-released results of a series of surveys undertaken in Malaysia, Singapore, Indonesia, Australia and the United States through QUT's School of Construction Management and Property and the M.E. Rinker, Sr. School of Building Construction at the University of Florida. This international survey reviews the use of, and reported competence in using, a series of commercially available computer applications by construction project managers. The five different country locations of the survey allow cross-national comparisons to be made between project managers undertaking continuing professional development programs. The results highlight a shortfall in the ability of construction project managers to capture the potential benefits of advanced computer applications, and provide directions for targeted industry training programs. The survey also provides a unique insight into the cross-national usage of advanced computer applications and forms an important step in this ongoing joint review of technology and the construction project manager.
Abstract:
Video surveillance technology, based on Closed Circuit Television (CCTV) cameras, is one of the fastest growing markets in the field of security technologies. However, existing video surveillance systems are still not at a stage where they can be used for crime prevention. The systems rely heavily on human observers and are therefore limited by factors such as fatigue and monitoring capabilities over long periods of time. To overcome this limitation, it is necessary to have "intelligent" processes which are able to highlight the salient data and filter out normal conditions that do not pose a threat to security. In order to create such intelligent systems, an understanding of human behaviour, specifically suspicious behaviour, is required. One of the challenges in achieving this is that human behaviour can only be understood correctly in the context in which it appears. Although context has been exploited in the general computer vision domain, it has not been widely used in the automatic suspicious behaviour detection domain. It is therefore essential that context be formulated, stored and used by the system in order to understand human behaviour. Finally, since surveillance systems can be modeled as large-scale data stream systems, it is difficult to have a complete knowledge base; such systems need not only to continuously update their knowledge but also to be able to retrieve the extracted information related to a given context. To address these issues, a context-based approach for detecting suspicious behaviour is proposed, in which contextual information is exploited to make a better detection. The proposed approach utilises a data stream clustering algorithm to discover the behaviour classes and their frequency of occurrence from incoming behaviour instances. Contextual information is then used in addition to this information to detect suspicious behaviour. The proposed approach is able to detect observed, unobserved and contextual suspicious behaviour. Two case studies, using video feeds taken from the CAVIAR dataset and from the Z Block building at Queensland University of Technology, are presented to test the proposed approach. These experiments show that, by using information about context, the proposed system makes more accurate detections, especially of behaviours which are suspicious only in some contexts while being normal in others. Moreover, this information gives critical feedback to system designers to refine the system. Finally, the proposed modified CluStream algorithm enables the system both to continuously update its knowledge and to effectively retrieve the information learned in a given context. The outcomes from this research are: (a) a context-based framework for automatically detecting suspicious behaviour which can be used by an intelligent video surveillance system in making decisions; (b) a modified CluStream data stream clustering algorithm which continuously updates the system knowledge and is able to retrieve contextually related information effectively; and (c) an update-describe approach which extends the capability of existing interest-point-based human local motion features to the data stream environment.
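For readers unfamiliar with the underlying technique, here is a minimal sketch of the kind of online micro-cluster maintenance that CluStream-style data stream clustering performs: each incoming instance is absorbed by the nearest micro-cluster if close enough, otherwise a new cluster is started, merging old ones when at capacity. The feature vectors, radius threshold, and cluster cap are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch of CluStream-style micro-cluster maintenance. The data,
# radius threshold, and cluster cap are illustrative assumptions, not the
# thesis code.
import numpy as np

MAX_CLUSTERS = 5
RADIUS = 1.0  # maximum distance for a point to be absorbed

class MicroCluster:
    def __init__(self, x):
        # count, linear sum, squared sum: the sufficient statistics
        self.n, self.ls, self.ss = 1, x.copy(), x * x

    def centre(self):
        return self.ls / self.n

    def absorb(self, x):
        self.n += 1
        self.ls += x
        self.ss += x * x

clusters = []

def update(x):
    """Absorb x into the nearest micro-cluster, or start a new one,
    merging the two closest clusters first if at capacity."""
    if clusters:
        dists = [np.linalg.norm(x - c.centre()) for c in clusters]
        i = int(np.argmin(dists))
        if dists[i] <= RADIUS:
            clusters[i].absorb(x)
            return
    if len(clusters) >= MAX_CLUSTERS:
        pairs = [(np.linalg.norm(a.centre() - b.centre()), ia, ib)
                 for ia, a in enumerate(clusters)
                 for ib, b in enumerate(clusters) if ia < ib]
        _, ia, ib = min(pairs)
        a, b = clusters[ia], clusters[ib]
        a.n += b.n; a.ls += b.ls; a.ss += b.ss  # merge statistics
        del clusters[ib]
    clusters.append(MicroCluster(x))

rng = np.random.default_rng(1)
for point in rng.normal(0, 3, size=(200, 2)):
    update(point)
print([(c.n, np.round(c.centre(), 2)) for c in clusters])
```

Because the sufficient statistics are additive, clusters can be merged, aged, or queried for a given time window, which is what allows the approach described above to keep updating its knowledge while still retrieving what was learned in a particular context.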
Abstract:
Objective. To determine the effects of the articular cartilage surface, as well as synovial fluid (SF) and its components, specifically proteoglycan 4 (PRG4) and hyaluronic acid (HA), on integrative cartilage repair in vitro. Methods. Blocks of calf articular cartilage were harvested, some with the articular surface intact and others without. Some of the latter blocks were pretreated with trypsin, and then with bovine serum albumin, SF, PRG4, or HA. Immunolocalization of PRG4 on cartilage surfaces was performed after treatment. Pairs of similarly treated cartilage blocks were incubated in partial apposition for 2 weeks in medium supplemented with serum and ³H-proline. Following culture, mechanical integration between apposed cartilage blocks was assessed by measuring adhesive strength, and protein biosynthesis and deposition were determined from incorporated ³H-proline. Results. Samples with articular surfaces in apposition exhibited little integrative repair compared with samples with cut surfaces in apposition. PRG4 was immunolocalized at the articular cartilage surface, but not at deeper, cut surfaces (without treatment). Cartilage samples treated with trypsin and then with SF or PRG4 exhibited inhibited integrative repair and positive immunostaining for PRG4 at treated surfaces compared with normal cut cartilage samples, while samples treated with HA exhibited neither inhibited integrative repair nor PRG4 at the tissue surfaces. Deposition of newly synthesized protein was relatively similar under conditions in which integration differed significantly. Conclusion. These results support the concept that PRG4 in SF, which normally contributes to cartilage lubrication, can inhibit integrative cartilage repair. This has the desirable effect of preventing fusion of apposing surfaces of articulating cartilage, but the undesirable effect of inhibiting integrative repair.
Abstract:
One of the prominent topics in Business Service Management is business models for (new) services. Business models are useful for service management and engineering as they provide a broader and more holistic perspective on services. They are particularly relevant for service innovation, as this requires paying attention to the business models that make new services viable, and business model innovation can drive the innovation of new and established services. Before we can look at business models for services, we first need to understand what business models are. This is not straightforward, as business models are still not well comprehended and the knowledge about them is fragmented across different disciplines, such as information systems, strategy, innovation, and entrepreneurship. This whitepaper, ‘Understanding business models’, introduces readers to business models. It contributes to enhancing the understanding of business models, in particular their conceptualisation, by discussing and integrating business model definitions, frameworks and archetypes from different disciplines. After reading this whitepaper, the reader will have a well-developed understanding of what business models are and how the concept is sometimes interpreted and used in different ways. It will help the reader in assessing their own understanding of business models and that of others. This will contribute to a better and more beneficial use of business models, an increase in shared understanding, and making it easier to work with business model techniques and tools.
Abstract:
Copyright protects much of the creative, cultural, educational, scientific and informational material generated by federal, State/Territory and local governments and their constituent departments and agencies. Governments at all levels develop, manage and distribute a vast array of materials in the form of documents, reports, websites, datasets and databases on CD or DVD and files that can be downloaded from a website. Under the Copyright Act 1968 (Cth), with few exceptions government copyright is treated the same as copyright owned by non-government parties insofar as the range of protected materials and the exclusive proprietary rights attaching to them are concerned. However, the rationale for recognizing copyright in public sector materials and vesting ownership of copyright in governments is fundamentally different to the main rationales underpinning copyright generally. The central justification for recognizing Crown copyright is to ensure that government documents and materials created for public administrative purposes are disseminated in an accurate and reliable form. Consequently, the exclusive rights held by governments as copyright owners must be exercised in a manner consistent with the rationale for conferring copyright ownership on them. Since Crown copyright exists primarily to ensure that documents and materials produced for use in the conduct of government are circulated in an accurate and reliable form, governments should exercise their exclusive rights to ensure that their copyright materials are made available for access and reuse, in accordance with any laws and policies relating to access to public sector materials. While copyright law vests copyright owners with extensive bundles of exclusive rights which can be exercised to prevent others making use of the copyright material, in the case of Crown copyright materials these rights should rarely be asserted by government to deviate from the general rule that Crown copyright materials will be available for “full and free reproduction” by the community at large.
Abstract:
Since the mid-1990s, government policies in the USA, Canada, England, and Australia have promoted the need to produce an ICT-skilled workforce in order to ensure national competitiveness in globalised economic conditions. In this article, we examine the ways in which these policy intentions in one state in Australia were translated into a techno-determinist and technocentric plan which focused primarily on getting wired up and connected. We summarise the findings from two projects: an investigation of a state-wide principals' professional development programme, and an action research study investigating literacy, educational disadvantage, and information technologies. We found significant differences in the distribution of physical and human capabilities between schools, which made the task of engaging with ICT harder for some than for others. Nevertheless, we suggest that some school leaders did develop innovative practice. We also suggest that policy deficits made it difficult for school leaders to grapple with the dimensions of, and debates about, the kinds of educational changes that schools and school systems should be making. © 2006 Taylor & Francis.
Abstract:
If copyright law does not liberate us from restrictions on the dissemination of knowledge, if it does not encourage expressive freedom, what is its purpose? This volume offers the thinking and suggestions of some of the finest minds grappling with the future of copyright regulation. The Copyright Future Copyright Freedom conference held in 2009 at Old Parliament House Canberra brought together Lawrence Lessig, Julie Cohen, Leslie Zines, Adrian Sterling, Sam Ricketson, Graham Greenleaf, Anne Fitzgerald, Susy Frankel, John Gilchrist, Michael Kirby and others to share the rich fruits of their experience and analysis. Zines, Sterling and Gilchrist outline their roles in the genesis and early growth of Australian copyright legislation, enriching the knowledge of anyone asking urgent questions about the future of information regulation.
Abstract:
Objective: Given the increasing popularity of motorcycle riding and the heightened risk of injury or death associated with being a rider, this study explored rider behaviour as a determinant of rider safety and, in particular, the key beliefs and motivations which influence such behaviour. To enhance the effectiveness of future education and training interventions, it is important to understand riders' own views about what influences how they ride. Specifically, this study sought to identify key determinants of riders' behaviour in relation to the social context of riding, including social and identity-related influences relating to the group (group norms and group identity) as well as the self (moral/personal norm and self-identity).

Method: Qualitative research was undertaken via group discussions with motorcycle riders (n = 41).

Results: The findings revealed that those in the group with which one rides represent an important source of social influence. Also, the motorcyclist (group) identity was associated with a range of beliefs, expectations, and behaviours considered to be normative. Exploration of the construct of personal norm revealed that riders were most cognizant of the "wrong things to do" when riding; among the issues raised was the importance of protective clothing (albeit for the protection of others and, in particular, pillion passengers). Finally, self-identity as a motorcyclist appeared to be important to a rider's self-concept and was likely to influence their on-road behaviour.

Conclusion: Overall, the insight provided by the current study may facilitate the development of interventions including rider training as well as public education and mass media messages. The findings suggest that these interventions should incorporate factors associated with the social nature of riding in order to best align them with some of the key beliefs and motivations underpinning riders' on-road behaviours.
Abstract:
A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, a lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with machine learning algorithms which use examples of fault-prone and not fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources: the NASA Metrics Data Program and the open-source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms, Naive Bayes and the Support Vector Machine, are applied to the data, and the predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features. A novel extension of this method is also described, based on an observed polarising of points by class when Rank Sum is applied to training data to convert it into 2D rank sum space. SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
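Based on the description above, here is a minimal sketch of how the Rank Sum idea might look in practice: per-feature class densities are binned, ranks are laid over the bin densities, and a module is classified by the class with the largest total rank across features. The bin count, two-class setup, and synthetic "metrics" data are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch of a Rank Sum classifier as described above. Bin counts
# and the synthetic software-metric data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_BINS = 10

# Synthetic metric data: class 1 = fault-prone, class 0 = not fault-prone.
X0 = rng.normal(0.3, 0.15, size=(100, 3))  # e.g. normalised size, complexity
X1 = rng.normal(0.7, 0.15, size=(100, 3))
X = np.clip(np.vstack([X0, X1]), 0, 1)
y = np.array([0] * 100 + [1] * 100)

edges = np.linspace(0, 1, N_BINS + 1)

# For each feature and bin, rank the classes by their density in that bin
# (rank 2 for the denser class, rank 1 for the sparser one).
ranks = np.zeros((X.shape[1], N_BINS, 2))
for f in range(X.shape[1]):
    for c in (0, 1):
        hist, _ = np.histogram(X[y == c, f], bins=edges, density=True)
        ranks[f, :, c] = hist
    order = ranks[f].argsort(axis=1).argsort(axis=1)  # 0 = sparser, 1 = denser
    ranks[f] = order + 1                              # ranks 1 and 2

def classify(x):
    """Sum each class's ranks over features; predict the class with the
    larger total."""
    bins = np.clip(np.digitize(x, edges) - 1, 0, N_BINS - 1)
    totals = ranks[np.arange(len(x)), bins].sum(axis=0)
    return int(np.argmax(totals))

preds = np.array([classify(xi) for xi in X])
print("training accuracy:", (preds == y).mean())
```

Summing the two per-class rank totals for a given module also yields the pair of coordinates behind the 2D rank sum space mentioned above, to which an SVM can then be applied.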