679 results for develop


Abstract:

Background: Recent studies show that advanced paternal age (APA) is associated with an increased risk of neurodevelopmental disorders such as autism, bipolar disorder and schizophrenia. A body of evidence also suggests that individuals who develop schizophrenia show subtle deviations in a range of behavioural domains during their childhood. The aim of this study was to examine the relationship between paternal and maternal ages and selected behavioural measures in children, using a large birth cohort.

Method: Participants were singleton children (n = 21,753) drawn from the US Collaborative Perinatal Project. The outcome measures were assessed at 7 years. The main analyses examined the relationship between parental age and behavioural measures, adjusted for a range of potentially confounding variables, including age of the other parent, maternal race, socio-economic measures, sex, gestation length, maternal marital status, parental mental illness, and the child's age at testing.

Results: Advanced paternal age was associated with a significantly increased risk of adverse ‘externalizing’ behaviours at age seven years. For every five-year increase in paternal age, the odds of higher ‘externalizing’ behaviours were increased by 12% (OR = 1.12; 95% CI = 1.03, 1.21, p < 0.0001). The relationship persisted after adjusting for potential confounding factors. ‘Internalizing’ behavioural outcomes were not associated with advanced paternal age. In contrast, advanced maternal age was significantly protective against adverse ‘externalizing’ behavioural outcomes, but associated with an increased risk of adverse ‘internalizing’ behavioural outcomes.

Discussion: The offspring of older fathers show a distinctly different pattern of behaviours compared to the offspring of older mothers. The diverse socio-cultural and biologically mediated factors that underpin these findings remain to be clarified. In light of secular trends towards delayed parenthood, the mechanisms underlying these findings warrant closer scrutiny.
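As an illustrative aside (not a result reported by the study), a per-five-year odds ratio compounds multiplicatively over larger age gaps, which is the usual way to read this kind of estimate:

```python
# Illustrative only: compounding a per-5-year odds ratio over larger paternal age gaps.
# OR = 1.12 per 5 years is taken from the abstract; the 10- and 20-year figures are
# simple extrapolations under this model, not results reported by the study.
or_per_5_years = 1.12

for years in (5, 10, 20):
    compounded = or_per_5_years ** (years / 5)
    print(f"{years}-year increase in paternal age -> OR ~ {compounded:.2f}")
# e.g. a 20-year increase corresponds to roughly 1.12**4, i.e. about 1.57
```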

Abstract:

Since the 1980s, industries and researchers have sought to better understand the quality of services because of the rising importance of services (Brogowicz, Delene and Lyth 1990). More recent developments with online services, coupled with growing recognition of service quality (SQ) as a key contributor to national economies and as an increasingly important competitive differentiator, amplify the need to revisit our understanding of SQ and its measurement. Although ‘SQ’ can be broadly defined as “a global overarching judgment or attitude relating to the overall excellence or superiority of a service” (Parasuraman, Berry and Zeithaml 1988), the term has many interpretations. There has been considerable progress on how to measure SQ perceptions, but little consensus has been achieved on what should be measured. There is agreement that SQ is multi-dimensional, but little agreement as to the nature or content of these dimensions (Brady and Cronin 2001). For example, within the banking sector there exist multiple SQ models, each consisting of varying dimensions. The existence of multiple conceptions and the lack of a unifying theory bring the credibility of existing conceptions into question, and raise the question of whether it is possible, at some higher level, to define SQ broadly such that it spans all service types and industries. This research aims to explore the viability of a universal conception of SQ, primarily through a careful re-visitation of the services and SQ literature. The study analyses the strengths and weaknesses of the highly regarded and widely used global SQ model (SERVQUAL), which reflects a single-level approach to SQ measurement. The SERVQUAL model states that customers evaluate the SQ of each service encounter based on five dimensions, namely reliability, assurance, tangibles, empathy and responsiveness. SERVQUAL, however, failed to address what needs to be reliable, assured, tangible, empathetic and responsive. This research also addresses a more recent global SQ model from Brady and Cronin (2001), the B&C (2001) model, which has the potential to be the successor of SERVQUAL in that it encompasses other global SQ models and addresses the ‘what’ questions that SERVQUAL did not. The B&C (2001) model conceives SQ as being multi-dimensional and multi-level; this hierarchical approach to SQ measurement better reflects human perceptions. In line with the initial intention of SERVQUAL, which was developed to be generalizable across industries and service types, this research aims to develop a conceptual understanding of SQ, via literature and reflection, that encompasses the content and nature of factors related to SQ, and to address the benefits and weaknesses of various SQ measurement approaches (i.e. disconfirmation versus perceptions-only). Such an understanding of SQ seeks to transcend industries and service types, with the intention of extending our knowledge of SQ and assisting practitioners in understanding and evaluating SQ. The candidate’s research has been conducted within, and seeks to contribute to, the ‘IS-Impact’ research track of the IT Professional Services (ITPS) Research Program at QUT. The vision of the track is “to develop the most widely employed model for benchmarking Information Systems in organizations for the joint benefit of research and practice.” The ‘IS-Impact’ research track has developed an Information Systems (IS) success measurement model, the IS-Impact Model (Gable, Sedera and Chan 2008), which seeks to fulfill the track’s vision.
Results of this study will help future researchers in the ‘IS-Impact’ research track address questions such as:
• Is SQ an antecedent or consequence of the IS-Impact model, or both?
• Has SQ already been addressed by existing measures of the IS-Impact model?
• Is SQ a separate, new dimension of the IS-Impact model?
• Is SQ an alternative conception of the IS?
Results from the candidate’s research suggest that SQ dimensions can be classified at a higher level which is encompassed by the B&C (2001) model’s three primary dimensions (interaction, physical environment and outcome). The candidate also notes that it might be viable to re-word the ‘physical environment quality’ primary dimension to ‘environment quality’ so as to better encompass both physical and virtual scenarios (e.g. websites). The candidate does not rule out the global feasibility of the B&C (2001) model’s nine sub-dimensions, but acknowledges that more work has to be done to better define them. The candidate observes that the ‘expertise’, ‘design’ and ‘valence’ sub-dimensions are supportive representations of the ‘interaction’, ‘physical environment’ and ‘outcome’ primary dimensions respectively; that is, customers evaluate each primary dimension (or each higher level of SQ classification), namely ‘interaction’, ‘physical environment’ and ‘outcome’, based on the ‘expertise’, ‘design’ and ‘valence’ sub-dimensions respectively. The ability to classify SQ dimensions at a higher level, coupled with support for the measures that make up this higher level, leads the candidate to propose the B&C (2001) model as a unifying theory that acts as a starting point for measuring SQ and the SQ of IS. The candidate also notes, in parallel with the continuing validation and generalization of the IS-Impact model, that there is value in alternatively conceptualizing the IS as a ‘service’ and ultimately triangulating measures of IS SQ with the IS-Impact model. These further efforts are beyond the scope of the candidate’s study. Results from the candidate’s research also suggest that both the disconfirmation and perceptions-only approaches have their merits, and that the choice of approach depends on the objective(s) of the study. Should the objective be an overall evaluation of SQ, the perceptions-only approach is more appropriate, as it is more straightforward and reduces administrative overhead. However, should the objective be to identify SQ gaps (shortfalls), the (measured) disconfirmation approach is more appropriate, as it can identify the areas that need improvement.
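To make the contrast between the two measurement approaches concrete, here is a minimal, hypothetical sketch. The dimension names follow SERVQUAL, but the item scores are invented for illustration: a perceptions-only score simply averages perception ratings, while a (measured) disconfirmation score averages the perception-minus-expectation gaps.

```python
# Hypothetical illustration of perceptions-only vs. (measured) disconfirmation scoring.
# Ratings are on a 1-7 scale; dimension names follow SERVQUAL, values are invented.
perceptions = {"reliability": 5.8, "assurance": 6.1, "tangibles": 5.2,
               "empathy": 5.5, "responsiveness": 4.9}
expectations = {"reliability": 6.5, "assurance": 6.0, "tangibles": 5.0,
                "empathy": 6.2, "responsiveness": 6.4}

# Perceptions-only: overall SQ is the mean perception score.
perceptions_only = sum(perceptions.values()) / len(perceptions)

# Disconfirmation: per-dimension gap = perception - expectation; negative gaps
# flag dimensions that fall short of expectations and need improvement.
gaps = {dim: perceptions[dim] - expectations[dim] for dim in perceptions}
overall_gap = sum(gaps.values()) / len(gaps)

print(f"Perceptions-only SQ score: {perceptions_only:.2f}")
print(f"Overall disconfirmation gap: {overall_gap:.2f}")
for dim, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"  {dim}: gap = {gap:+.2f}")
```

The perceptions-only score gives a single overall evaluation, whereas the ranked gap list is what makes the disconfirmation approach useful for identifying shortfalls.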

Abstract:

Fatigue in the postnatal period is such a common experience for most mothers that the term ‘postpartum fatigue’ (PPF) has been coined to describe it. When new mothers experience extreme fatigue, their physical health, mental health and social wellbeing are negatively affected. There is, however, a distinct lack of empirical investigation focusing on the link between PPF and increased risk of injury, particularly given that the links between fatigue and increased risk of road crashes are well documented. The purpose of this investigation was to undertake pilot research to develop an understanding of the duration of PPF and the performance impairments experienced by new mothers when involved in safety-sensitive activities, such as driving a motor vehicle. Semi-structured interviews were undertaken with women (N = 24) at 12 weeks postpartum living in South-east Queensland, Australia. Key themes were identified, with a particular emphasis on understanding the link between the participants’ experience of postpartum fatigue and its impact on their overall cognitive and physiological functioning, as well as their experience of the driving task. Further, sleep/wake data were collected, and the potential crash risk for this group of mothers is discussed using the Karolinska Sleepiness Scale (KSS). It is proposed that the findings of this investigation could be used to improve current knowledge among new mothers and practitioners regarding the mechanisms and consequences of fatigue, and to inform interventions that lead to a decreased risk of injury associated with postpartum fatigue.

Abstract:

Cooperative collision warning systems for road vehicles, enabled by recent advances in positioning systems and wireless communication technologies, can potentially reduce traffic accidents significantly. To improve such systems, we propose a graph model that represents the interactions between multiple road vehicles in a specific region at a specific time. Given a list of vehicles in the vicinity, we can generate the interaction graph using several rules that consider vehicle properties such as position, speed and heading. Safety applications can use the model to improve emergency warning accuracy and optimize wireless channel usage. The model also allows us to develop congestion control strategies for an efficient multi-hop broadcast protocol.
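The abstract does not give the exact rules, but a minimal sketch of the idea might look like the following: each vehicle is a node, and a directed edge is added when another vehicle is within range and roughly ahead of the first. The distance and heading thresholds here are invented for illustration, not the authors' rule set.

```python
# Hypothetical sketch of an interaction graph between nearby vehicles.
# The rules (range and heading thresholds) are illustrative only.
import math
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: str
    x: float        # position, metres (local east)
    y: float        # position, metres (local north)
    speed: float    # m/s
    heading: float  # radians, 0 = east, counter-clockwise

def interacts(a: Vehicle, b: Vehicle, max_range=150.0, max_bearing=math.pi / 3) -> bool:
    """Edge a -> b if b is within range and roughly in front of a."""
    dx, dy = b.x - a.x, b.y - a.y
    distance = math.hypot(dx, dy)
    if distance > max_range:
        return False
    bearing_to_b = math.atan2(dy, dx)
    off_heading = abs((bearing_to_b - a.heading + math.pi) % (2 * math.pi) - math.pi)
    return off_heading <= max_bearing

def build_interaction_graph(vehicles):
    """Return adjacency lists: which vehicles each vehicle should warn or listen to."""
    return {a.vid: [b.vid for b in vehicles if b is not a and interacts(a, b)]
            for a in vehicles}

if __name__ == "__main__":
    fleet = [Vehicle("A", 0, 0, 20, 0.0),
             Vehicle("B", 80, 5, 18, math.pi),   # oncoming vehicle
             Vehicle("C", 0, 300, 15, 0.0)]      # too far away to interact
    print(build_interaction_graph(fleet))        # {'A': ['B'], 'B': ['A'], 'C': []}
```

A safety application could then restrict warning rebroadcasts to the edges of this graph rather than flooding the whole channel.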

Abstract:

An informed citizenry is essential to the effective functioning of democracy. In most modern liberal democracies, citizens have traditionally looked to the media as the primary source of information about socio-political matters. In our increasingly mediated world, it is critical that audiences be able to effectively and accurately use the media to meet their information needs. Media literacy, the ability to access, understand, evaluate and create media content, is therefore a vital skill for a healthy democracy. The past three decades have seen the rapid expansion of the information environment, particularly through Internet technologies, and media usage patterns have changed dramatically as a result. Blogs and websites are now popular sources of news and information, and are, for some sections of the population, likely to be the first, and possibly only, information source accessed when information is required. What are the implications for media literacy in such a diverse and changing information environment? The Alexandria Manifesto stresses the link between libraries, a well-informed citizenry and effective governance, so how do these changes affect libraries? This paper considers the role libraries can play in developing media-literate communities, and explores the ways in which traditional media literacy training may be expanded to better equip citizens for new media technologies. Drawing on original empirical research, this paper highlights a key shortcoming of existing media literacy approaches: overlooking the importance of needs identification as an initial step in media selection. Self-awareness of one’s actual information need is not automatic, as can be witnessed daily at reference desks in libraries the world over. Citizens very often do not know what it is that they need when it comes to information. Without this knowledge, selecting the most appropriate information source from the vast range available becomes an uncertain, possibly even random, enterprise. Incorporating reference interview-style training into media literacy education, whereby individuals develop the skills to interrogate their own underlying information needs, will enhance media literacy approaches. This increased focus on the needs of the individual will also push media literacy education towards a more constructivist methodology. The paper also stresses the importance of media literacy training for adults. Media literacy education received in school, or even university, cannot be expected to retain its relevance over time in our rapidly evolving information environment. Further, constructivist teaching approaches highlight the importance of context to the learning process, so it may be more effective to offer media literacy education relating to news media use to adults, while school-based approaches focus on the types of media more relevant to young people, such as entertainment media. Librarians are ideally placed to offer such community-based media literacy education for adults. They already understand, through their training and practice of the reference interview, how to identify underlying information needs. Further, libraries are situated within community contexts, where the everyday practice of media literacy occurs. It is clear, then, that libraries have a role to play in fostering media literacy within their communities.

Abstract:

The paper seeks to continue the debate about the need for professionals in the library and information services (LIS) sector to continually engage in career-long learning to sustain and develop their knowledge and skills in a dynamic industry.

Aims: The neXus2 workforce study was funded by ALIA and the consortium of National and State Libraries Australasia (NSLA). It builds on earlier research work (the neXus census), which looked at the demographic, educational and career perspectives of individual library and information professionals, to critically examine institutional policies and practices associated with the LIS workforce. The research aims to develop a clearer understanding of the issues impacting on workforce sustainability, workforce capability and workforce optimisation.

Methods: The research methodology involved an extensive online survey, conducted in March 2008, which collected data on organisational and general staffing; recruitment and retention; staff development and continuing professional education; and succession planning. Encouragement to participate was provided by key industry groups, including academic, public, health, law and government library and information agencies, with the result that around 150 institutions completed the questionnaire.

Results: The paper will specifically discuss the research findings relating to training and professional development: measuring the scope and distribution of training activities across the workforce, considering the interrelationship between the strategic and operational dimensions of staff development in individual institutions, and analysing the common and distinctive factors evident in the different sectors of the profession.

Conclusion: The neXus2 project has successfully engaged LIS institutions in the collection of complex industry data that is relevant to future education and workforce strategies for all areas of the profession. Cross-sector forums such as Information Online 2009 offer the opportunity for stimulating professional dialogue on the key issues.

Abstract:

Despite changes in surgical techniques, radiotherapy targeting and the apparent earlier detection of cancers, secondary lymphoedema is still a significant problem for about 20–30% of those who receive treatment for cancer, although the incidence and prevalence do seem to be falling. These figures generally relate to detection of an enlarged limb or other area, but it seems that about 60% of all patients also experience other problems: how the limb feels, what can or cannot be done with it, and a range of social or psychological issues. Often these ‘subjective’ changes occur before the objective ones, such as a change in arm volume or circumference. For most of those treated for cancer, lymphoedema does not develop immediately; while about 60–70% develop it within the first few years, some do not develop it for up to 15 or 20 years. Those who will develop clinically manifest lymphoedema in the future are, for some time, in a latent or hidden phase of the condition. There also seem to be some risk factors that indicate a higher likelihood of lymphoedema after treatment, including oedema at the surgical site, arm dominance, age, skin conditions, and body mass index (BMI).

Abstract:

The over-representation of novice drivers in crashes is alarming. Driver training is one of the interventions aimed at reducing the number of crashes that involve young drivers. Experienced drivers have better hazard perception ability than inexperienced drivers, and eye gaze patterns have been found to be an indicator of a driver's competency level. The aim of this paper is to develop an in-vehicle system that correlates information about the driver's gaze with vehicle dynamics, and that can then be used to assist driver trainers in assessing driving competency. The system allows visualization of complete driving manoeuvre data on interactive maps. It uses an eye tracker and perspective projection algorithms to compute the depth of gaze, and plots it on Google Maps. The interactive map also shows the trajectory of the vehicle and turn indicator usage. The system allows efficient and user-friendly analysis of the driving task, and can be used by driver trainers and trainees to understand objectively the risks encountered during driving manoeuvres. This paper presents a prototype that plots the driver's eye gaze depth and direction on an interactive map along with the vehicle dynamics information. The prototype will be used in future work to study differences in gaze patterns between novice and experienced drivers prior to particular manoeuvres.
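The abstract does not describe the projection maths, but one common way to map gaze onto the road is to intersect the gaze ray (built from the tracked gaze angles and the vehicle's pose) with the ground plane and convert the resulting offset to latitude/longitude. The sketch below is a hypothetical illustration of that idea, not the authors' algorithm; the eye height, angles and coordinates are made up.

```python
# Hypothetical sketch: project a gaze direction onto the ground plane and
# convert the fixation point to approximate latitude/longitude for map plotting.
import math

EARTH_RADIUS_M = 6_371_000.0

def gaze_point_on_ground(eye_height_m, gaze_yaw_deg, gaze_pitch_deg, vehicle_heading_deg):
    """Return the (east_m, north_m) offset of the gaze/ground intersection from the driver."""
    pitch = math.radians(gaze_pitch_deg)                      # negative = looking down
    if pitch >= 0:
        return None                                           # gaze never meets the ground
    distance = eye_height_m / math.tan(-pitch)                # forward distance to intersection
    yaw = math.radians(vehicle_heading_deg + gaze_yaw_deg)    # absolute bearing of the gaze
    return distance * math.sin(yaw), distance * math.cos(yaw) # east, north in metres

def offset_to_lat_lon(lat_deg, lon_deg, east_m, north_m):
    """Small-offset (equirectangular) conversion of metre offsets to degrees."""
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    dlon = math.degrees(east_m / (EARTH_RADIUS_M * math.cos(math.radians(lat_deg))))
    return lat_deg + dlat, lon_deg + dlon

if __name__ == "__main__":
    # Driver at an example position, looking 10 degrees right and 5 degrees down.
    offset = gaze_point_on_ground(1.2, 10.0, -5.0, vehicle_heading_deg=90.0)
    if offset is not None:
        print(offset_to_lat_lon(-27.47, 153.02, *offset))  # point to plot on the map
```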

Abstract:

This paper describes the current status of a program to develop an automated forced landing system for a fixed-wing Unmanned Aerial Vehicle (UAV). The automated system seeks to emulate human pilot thought processes when planning for and conducting an engine-off emergency landing. Firstly, a path planning algorithm that extends Dubins curves to 3D space is presented. This planning element is then combined with a nonlinear guidance and control logic, and simulated test results demonstrate the robustness of this approach to strong winds during a glided descent. The average path deviation errors incurred are comparable to, or even better than, those of manned, powered aircraft. Secondly, a study into suitable multi-criteria decision-making approaches and the problems that confront the decision-maker is presented. From this study, it is believed that decision processes that utilize human expert knowledge and fuzzy logic reasoning are best suited to the problem at hand, and further investigations will be conducted to identify the particular technique(s) to be implemented in simulations and field tests. The automated UAV forced landing approach presented in this paper is promising, and will allow the progression of this technology from the development and simulation stages through to a prototype system.
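The paper's own 3D extension of Dubins curves is not reproduced here. As a hedged illustration only, one simple way to lift a planar Dubins-style path into 3D for an engine-off glide is to distribute altitude loss along the path at the aircraft's glide ratio and check that the available height is sufficient; the waypoints and glide ratio below are invented.

```python
# Hypothetical sketch: lift a planar (Dubins-style) path into 3D for an engine-off
# glide by spending altitude along the path at a fixed glide ratio.
import math

def lift_path_to_3d(waypoints_2d, start_alt_m, field_alt_m, glide_ratio):
    """waypoints_2d: [(x, y), ...] from a 2D planner. Returns [(x, y, z), ...]
    or None if the aircraft cannot reach the field from start_alt_m."""
    lengths = [math.dist(a, b) for a, b in zip(waypoints_2d, waypoints_2d[1:])]
    total = sum(lengths)
    alt_needed = total / glide_ratio              # height lost over the whole path
    if start_alt_m - alt_needed < field_alt_m:
        return None                               # path too long for a pure glide
    path_3d = [(*waypoints_2d[0], start_alt_m)]
    alt = start_alt_m
    for (x, y), seg in zip(waypoints_2d[1:], lengths):
        alt -= seg / glide_ratio                  # constant glide-slope descent
        path_3d.append((x, y, alt))
    return path_3d

if __name__ == "__main__":
    # Invented numbers: 700 m AGL start, landing field at 0 m, glide ratio 9:1.
    plan = lift_path_to_3d([(0, 0), (1500, 0), (1500, 1200), (300, 1200)],
                           start_alt_m=700, field_alt_m=0, glide_ratio=9.0)
    print(plan)
```

A real planner would also shape the final approach and handle wind, which is where the nonlinear guidance logic described in the paper comes in.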

Abstract:

Building-integrated living systems (BILS), such as green roofs and living walls, could mitigate many of the challenges presented by climate change and biodiversity protection. However, few if any such systems have been constructed, and current tools for evaluating them are limited, especially under Australian subtropical conditions. BILS are difficult to assess because living systems interact with complex, changing and site-specific social and environmental conditions. Our past research in design for eco-services has confirmed the need for better means of assessing the ecological values of BILS, let alone better models for assessing their thermal and hydrological performance. To address this problem, a research project is being developed jointly by researchers at Central Queensland University (CQ University) and the Queensland University of Technology (QUT), along with industry collaborators. A mathematical model under development at CQ University will be applied and tested to determine its potential for predicting the complex, dynamic behaviour of BILS in different contexts. This paper, however, focuses on the work at QUT. The QUT School of Design is generating designs for living walls and roofs that provide a range of ecosystem goods and services, or ‘eco-services’, for a variety of micro-climates and functional contexts. The research at QUT aims to develop appropriate designs, virtual prototypes and quantitative methods for assessing the potential multiple benefits of BILS in subtropical climates. It is anticipated that the CQ University model for predicting the thermal behaviour of living systems will provide a platform for the integration of ecological criteria and indicators. QUT will also explore means to predict and measure the value of eco-services provided by these systems, which is still largely uncharted territory. This research is ultimately intended to facilitate the eco-retrofitting of cities to increase natural capital and urban resource security, an essential component of sustainability. The talk will present the latest range of multifunctional, eco-productive living walls, roofs and urban space frames and their eco-services.

Abstract:

Innovation Management (IM) in most knowledge-based firms is applied on an ad hoc basis, with senior managers using the term to claim a competitive edge without understanding its true meaning or how its robust application affects organisational performance. There have been attempts in the manufacturing industry to harness the innovative potential of the business and to use it as a point of difference to improve financial and non-financial outcomes. However, further work is required to extrapolate the lessons learnt and to introduce incremental and/or radical innovation to knowledge-based firms. An international structural engineering firm has been proactive in exploring and implementing this idea and has forged an alliance with the Queensland University of Technology to start the Innovation Management Program (IMP). The aim was to develop a permanent and sustainable program through which innovation can be woven through the fabric of the organisation, reinforcing the firm’s vision and reinvigorating ideas and creating new options that help in its realisation. This paper outlines the need for innovation in knowledge-based firms and how this consulting engineering firm responded to that need. The development of the Innovation Management Program, its different themes (and associated projects) and how they integrate to form a holistic model is also discussed. The model is designed around the need to provide professional qualification improvement opportunities for staff, to set up organised, structured and easily accessible knowledge repositories that capture tacit and explicit knowledge, and to implement efficient project management strategies with a view to enhancing client satisfaction. A Delphi-type workshop was used to confirm the themes and projects, and a questionnaire and interviews were used to collect data to select appropriate candidates responsible for leading these projects. Following an in-depth analysis of preliminary research results, some recommendations on the selection process will also be presented.

Abstract:

One of the ways in which university departments and faculties can enhance the quality of learning and assessment is to develop a ‘well thought out criterion-referenced assessment system’ (Biggs, 2003, p. 271). In designing undergraduate degrees (courses), this entails making decisions about the levelling of expectations across different years through devising objectives and their corresponding criteria and standards: a process of alignment analogous to what happens in unit (subject) design. These decisions about levelling have important repercussions for supporting students’ work-related learning, especially in relation to their ability to cope with the increasing cognitive and skill demands made on them as they progress through their studies. They also affect the accountability of teacher judgments of students’ responses to assessment tasks and achievement of unit objectives and, ultimately, whether students are awarded their degrees and are sufficiently prepared for the world of work. Research reveals that this decision-making process is rarely underpinned by an explicit educational rationale (Morgan et al., 2002). The decision to implement criterion-referenced assessment in an undergraduate microbiology degree was the impetus for developing such a rationale, because of the implications for alignment and, therefore, the levelling of expectations across different years of the degree. This paper provides supporting evidence for a multi-pronged approach to levelling, through backward mapping of two revised units (foundation and exit year). This approach adheres to the principles of alignment while combining a work-related approach (via industry input) with the blended disciplinary and learner-centred approaches proposed by Morgan et al. (2002). It is suggested that this multi-pronged approach has the potential to make expectations, especially work-related ones across different year levels of degrees, more explicit to students and future employers.

Abstract:

Query reformulation is a key user behavior during Web search. Our research goal is to develop predictive models of query reformulation during Web searching. This article reports results from a study in which we automatically classified the query-reformulation patterns for 964,780 Web searching sessions, composed of 1,523,072 queries, in order to predict the next query reformulation. We employed an n-gram modeling approach to describe the probability of users transitioning from one query-reformulation state to another and to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each model for accuracy of prediction, coverage of the dataset, and complexity of the possible pattern set. The results show that Reformulation and Assistance account for approximately 45% of all query reformulations; furthermore, the first- and second-order models provide the best predictability, between 28 and 40% overall and higher than 70% for some patterns. The implications are that the n-gram approach can be used to improve searching systems and searching assistance.
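As a hypothetical illustration of the n-gram idea (the state labels and training sessions below are invented, not taken from the study's dataset), a first-order model simply counts transitions between consecutive reformulation states and predicts the most frequent successor:

```python
# Hypothetical first-order (bigram) model over query-reformulation states.
# State labels and training sessions are invented for illustration.
from collections import Counter, defaultdict

def train_first_order(sessions):
    """Count state -> next-state transitions across all sessions."""
    transitions = defaultdict(Counter)
    for session in sessions:
        for current, nxt in zip(session, session[1:]):
            transitions[current][nxt] += 1
    return transitions

def predict_next(transitions, current_state):
    """Return the most likely next state and its estimated probability."""
    counts = transitions.get(current_state)
    if not counts:
        return None, 0.0
    state, count = counts.most_common(1)[0]
    return state, count / sum(counts.values())

if __name__ == "__main__":
    sessions = [["New", "Reformulation", "Assistance", "Content"],
                ["New", "Reformulation", "Reformulation", "Content"],
                ["New", "Assistance", "Content"]]
    model = train_first_order(sessions)
    print(predict_next(model, "New"))            # ('Reformulation', 0.666...)
    print(predict_next(model, "Reformulation"))  # ('Assistance', 0.333...), ties broken by insertion order
```

Higher-order models condition on the last two, three or four states instead of one, which is the trade-off between predictability, coverage and pattern-set complexity that the article evaluates.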

Abstract:

In this paper, we use time series analysis to evaluate predictive scenarios using search engine transactional logs. Our goal is to develop models for the analysis of searchers’ behaviors over time and to investigate whether time series analysis is a valid method for predicting relationships between searcher actions. Time series analysis is a method often used to understand the underlying characteristics of temporal data in order to make forecasts. In this study, we used a Web search engine transactional log and time series analysis to investigate users’ actions. We conducted our analysis in two phases. In the initial phase, we employed a basic analysis and found that 10% of searchers clicked on sponsored links; however, from 22:00 to 24:00, searchers almost exclusively clicked on organic links, with almost no clicks on sponsored links. In the second and more extensive phase, we used a one-step prediction time series analysis method along with a transfer function method. The period rarely affects navigational and transactional queries, while rates for transactional queries vary during different periods. Our results show that the average length of a searcher session is approximately 2.9 interactions and that this average is consistent across time periods. Most importantly, our findings show that searchers who submit the shortest queries (i.e., fewest terms) click on the highest-ranked results. We discuss implications, including predictive value, and future research.
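The paper's specific models (including the transfer function analysis) are not reproduced here. As a minimal, hypothetical illustration of one-step-ahead prediction, the sketch below fits an AR(1) model to an invented hourly series of query counts and forecasts the next hour.

```python
# Hypothetical one-step-ahead forecast with a simple AR(1) model.
# The hourly query counts below are invented; they are not the paper's data.

def ar1_one_step_forecast(series):
    """Fit x_t - mean = phi * (x_{t-1} - mean) + noise by least squares,
    then forecast the next value in the series."""
    mean = sum(series) / len(series)
    centered = [x - mean for x in series]
    num = sum(a * b for a, b in zip(centered[1:], centered[:-1]))
    den = sum(a * a for a in centered[:-1])
    phi = num / den if den else 0.0
    return mean + phi * (series[-1] - mean), phi

if __name__ == "__main__":
    hourly_queries = [120, 135, 150, 160, 152, 140, 128, 118, 110, 115, 130, 145]
    forecast, phi = ar1_one_step_forecast(hourly_queries)
    print(f"Estimated AR(1) coefficient: {phi:.2f}")
    print(f"Forecast for next hour: {forecast:.1f} queries")
```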

Abstract:

A continuing challenge for pre-service teacher education is learning transfer between the university-based components and the practical school-based components of teacher training. It is not clear how easily pre-service teachers can transfer university learnings into in-school practice. Similarly, it is not clear how easily knowledge learned in the school context can be disembedded from that particular context and understood more generally by the pre-service teacher. This paper examines the effect of a community of practice formed specifically to explore learning transfer via collaboration and professional enquiry, in ‘real time’, across the globe. “Activity Theory” (Engestrom, 1999) provided the theoretical framework through which the cognitive, physical and social processes involved could be understood. For the study, three activity systems formed the community of practice network. The first activity system involved pre-service teachers at a large university in Queensland, Australia. The second activity system was introduced by the pre-service teachers and involved Year 12 students and teachers at a private secondary school, also in Queensland, Australia. The third activity system involved university staff engineers at a large university in Pennsylvania, USA. The common object among the three activity systems was to explore the principles and applications of nanotechnology. The participants in the two Queensland activity systems controlled laboratory equipment (a high-powered Atomic Force Microscope – CPII) in Pennsylvania, USA, with the aim of investigating surface topography and the properties of nanoparticles. The pre-service teachers were to develop their remote ‘real time’ experience into school classroom tasks, implement these tasks, and later report their findings to other pre-service teachers in the university activity system. As an extension to the project, the pre-service teachers were invited to co-author papers relating to the project. Data were collected from (a) reflective journals; (b) participant field notes – a pre-service teacher initiative; (c) surveys – a pre-service teacher initiative; (d) lesson reflections and digital recordings – a pre-service teacher initiative; and (e) interviews with participants. The findings are reported in terms of three major themes: boundary crossing, the philosophy of teaching, and professional relationships. The findings have implications for teacher education. The researchers feel that deliberate planning for networking between activity systems may well be a solution to the apparent theory/practice gap. Proximity of activity systems need not be a hindering issue.