937 results for Separate Continuity
Abstract:
This study aimed to identify how school leaders’ practices influence department activities during school transformation. The method used to explore emerging disturbances and contradictions within and between school departments was based on Cultural Historical Activity Theory (CHAT). The findings show that in order to implement educational changes in schools successfully, leaders should promote the change they envision as being highly consistent with the current collective identity (shared object) of the departments. From this perspective, the systemic components of the school departments are given a sense of preservation and continuity, rather than loss.
Abstract:
PURPOSE Current research on errors in health care focuses almost exclusively on system and clinician error. It tends to exclude how patients may create errors that influence their health. We aimed to identify the types of errors that patients can contribute and help manage, especially in primary care. METHODS Eleven nominal group interviews of patients and primary health care professionals were held in Auckland, New Zealand, during late 2007. Group members reported and helped to classify types of potential error by patients. We synthesized the ideas that emerged from the nominal groups into a taxonomy of patient error. RESULTS Our taxonomy is a 3-level system encompassing 70 potential types of patient error. The first level classifies 8 categories of error into 2 main groups: action errors and mental errors. The action errors, which result in part or whole from patient behavior, are attendance errors, assertion errors, and adherence errors. The mental errors, which are errors in patient thought processes, comprise memory errors, mindfulness errors, misjudgments, and—more distally—knowledge deficits and attitudes not conducive to health. CONCLUSION The taxonomy is an early attempt to understand and recognize how patients may err and what clinicians should aim to influence so they can help patients act safely. This approach begins to balance perspectives on error but requires further research. There is a need to move beyond seeing patient, clinician, and system errors as separate categories of error. An important next step may be research that attempts to understand how patients, clinicians, and systems interact to cocreate and reduce errors.
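To make the reported structure concrete, the two upper levels named in the abstract can be encoded as a nested mapping. This is only an illustrative Python sketch: the grouping follows the abstract, but the roughly 70 leaf-level error types are not listed there, so the third level is left empty.

```python
# Illustrative encoding of the top two levels of the reported patient-error
# taxonomy (2 groups, 8 categories). The ~70 specific error types are not
# enumerated in the abstract, so each category's list of types is left empty.
PATIENT_ERROR_TAXONOMY = {
    "action errors": {          # arise in part or whole from patient behaviour
        "attendance errors": [],
        "assertion errors": [],
        "adherence errors": [],
    },
    "mental errors": {          # errors in patient thought processes
        "memory errors": [],
        "mindfulness errors": [],
        "misjudgments": [],
        "knowledge deficits": [],
        "attitudes not conducive to health": [],
    },
}

def categories(group: str) -> list[str]:
    """Return the error categories belonging to a top-level group."""
    return list(PATIENT_ERROR_TAXONOMY[group])

print(categories("mental errors"))
```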
Abstract:
Based on regional-scale studies, aboveground production and litter decomposition are thought to positively covary, because they are driven by shared biotic and climatic factors. Until now we have been unable to test whether production and decomposition are generally coupled across climatically dissimilar regions, because we lacked replicated data collected within a single vegetation type across multiple regions, obfuscating the drivers and generality of the association between production and decomposition. Furthermore, our understanding of the relationships between production and decomposition rests heavily on separate meta-analyses of each response, because no studies have simultaneously measured production and the accumulation or decomposition of litter using consistent methods at globally relevant scales. Here, we use a multi-country grassland dataset collected using a standardized protocol to show that live plant biomass (an estimate of aboveground net primary production) and litter disappearance (represented by mass loss of aboveground litter) do not strongly covary. Live biomass and litter disappearance varied at different spatial scales. There was substantial variation in live biomass among continents, sites and plots whereas among continent differences accounted for most of the variation in litter disappearance rates. Although there were strong associations among aboveground biomass, litter disappearance and climatic factors in some regions (e.g. U.S. Great Plains), these relationships were inconsistent within and among the regions represented by this study. These results highlight the importance of replication among regions and continents when characterizing the correlations between ecosystem processes and interpreting their global-scale implications for carbon flux. We must exercise caution in parameterizing litter decomposition and aboveground production in future regional and global carbon models as their relationship is complex.
Abstract:
Decision makers frequently use separate participatory activities to involve marginalised groups. This approach can generate valuable insights, but it has limitations. We discuss the benefits and limits through two examples involving young people, and outline how the approach can be modified to help build citizens who are responsive to other perspectives.
Abstract:
Portable water-filled road barriers (PWFB) are roadside structures placed in temporary construction zones to separate the work site from moving traffic. Recent changes in governing standards require PWFB to adhere to strict compliance in terms of lateral displacement of the road barriers and vehicle redirectionality. Actual road safety barrier tests can be very costly, so researchers resort to Finite Element Analysis (FEA) in the initial design phase prior to real vehicle tests. Much research has been conducted on concrete barriers and flexible steel barriers using FEA, but little has been done pertaining to PWFB. This research probes a new method of modelling the joint mechanism in PWFB. Two methods of modelling the joining mechanism are presented and discussed in relation to their practicality and accuracy for real-world applications. Moreover, the effects of the physical gap and mass of the barrier were investigated. Outcomes from this research will benefit PWFB research and give road barrier designers better knowledge for developing the next generation of road safety structures.
Abstract:
In this research, we suggest appropriate information technology (IT) governance structures to manage cloud computing resources. Interest in acquiring IT resources as a utility is gaining momentum. Cloud computing resources present organizations with opportunities to manage their IT expenditure on an ongoing basis, and provide organizations with access to modern IT resources to innovate and manage their continuity. However, cloud computing resources are no silver bullet. Organizations need appropriate governance structures and policies in place to ensure effective management of these resources and their fit into existing business processes, so that the promised opportunities can be leveraged. Using a mixed method design, we identified four possible governance structures for managing cloud computing resources. These structures are a chief cloud officer, a cloud management committee, a cloud service facilitation centre, and a cloud relationship centre. These governance structures ensure appropriate direction of cloud computing resources from their acquisition to their fit into the organization's business processes.
Abstract:
This research suggests information technology (IT) governance structures to manage cloud computing resources. Interest in acquiring IT resources as a utility from the cloud is gaining momentum. Cloud computing resources present organizations with opportunities to manage their IT expenditure on an ongoing basis, and provide organizations with access to modern IT resources to innovate and manage their continuity. However, cloud computing resources are no silver bullet. Organizations would need to have appropriate governance structures and policies in place to manage the cloud resources. The subsequent decisions from these governance structures will ensure effective management of cloud resources. This management will facilitate a better fit of cloud resources into organizations' existing processes to achieve business (process-level) and financial (firm-level) objectives. Using a triangulation approach, we suggest four possible governance structures for managing cloud computing resources. These structures are a chief cloud officer, a cloud management committee, a cloud service facilitation centre, and a cloud relationship centre. We also propose that these governance structures relate directly to organizations' cloud-related business objectives and indirectly to cloud-related financial objectives. Perceptive field survey data from actual and prospective cloud service adopters confirmed that the suggested structures would contribute directly to cloud-related business objectives and indirectly to cloud-related financial objectives.
Abstract:
This paper presents a novel and practical procedure for estimating the mean deck height to assist in automatic landing operations of a Rotorcraft Unmanned Aerial Vehicle (RUAV) in harsh sea environments. A modified Prony Analysis (PA) procedure is outlined to deal with real-time observations of deck displacement, which involves developing an appropriate dynamic model to approximate real deck motion, with parameters identified through the Forgetting Factor Recursive Least Square (FFRLS) method. The model order is specified using a proper order-selection criterion based on minimizing the summation of accumulated estimation errors. In addition, a feasible threshold criterion is proposed to separate the dominant components of deck displacement, which results in an accurate instantaneous estimate of the mean deck position. Simulation results demonstrate that the proposed recursive procedure exhibits satisfactory estimation performance when applied to real-time deck displacement measurements, making it well suited for integration into ship-RUAV approach and landing guidance systems.
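As a rough illustration of the identification step, the sketch below implements a generic Forgetting Factor Recursive Least Squares (FFRLS) update and applies it to a hypothetical autoregressive model of deck displacement. It is not the authors' implementation; the model order, forgetting factor and synthetic data are assumptions made here for demonstration.

```python
import numpy as np

def ffrls_update(theta, P, phi, y, lam=0.98):
    """One Forgetting Factor Recursive Least Squares step.

    theta : current parameter estimate, shape (n,)
    P     : parameter covariance, shape (n, n)
    phi   : regressor vector, shape (n,)
    y     : new scalar measurement
    lam   : forgetting factor, 0 < lam <= 1 (value assumed here)
    """
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)            # gain vector
    err = y - (phi.T @ theta.reshape(-1, 1)).item()  # one-step prediction error
    theta = theta + K.ravel() * err                  # parameter update
    P = (P - K @ phi.T @ P) / lam                    # covariance update
    return theta, P

# Hypothetical usage: identify a 3rd-order AR model of deck displacement z[k].
order = 3
theta = np.zeros(order)
P = 1e3 * np.eye(order)
rng = np.random.default_rng(0)
z = np.sin(0.2 * np.arange(300)) + 0.05 * rng.standard_normal(300)  # synthetic heave
for k in range(order, len(z)):
    phi = z[k - order:k][::-1]        # most recent samples first
    theta, P = ffrls_update(theta, P, phi, z[k])
print("identified AR coefficients:", np.round(theta, 3))
```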
Abstract:
Parallel interleaved converters are finding more applications every day; for example, they are frequently used for VRMs on PC motherboards, mainly to obtain better transient response. Parallel interleaved converters can have their inductances uncoupled, directly coupled or inversely coupled, and each option suits different applications with associated advantages and disadvantages. Coupled systems offer more control over converter features such as ripple currents, inductance volume and transient response. To gain an intuitive understanding of which type of parallel interleaved converter, what amount of coupling, what number of levels and how much inductance should be used for a given application, a simple equivalent model is needed. As all phases of an interleaved converter are supposed to be identical, the equivalent model is nothing more than a separate inductance that is common to all phases. Without this simplification the design of a coupled system is quite daunting. Designing a coupled system involves solving for and understanding the RMS currents of the input, the individual phases (or cells) and the output. A procedure using this equivalent model and a small amount of modulo arithmetic is detailed.
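The modulo arithmetic referred to above can be illustrated with the textbook ripple-cancellation result for N identical, uncoupled buck-type phases sharing an equivalent inductance. The sketch below is a generic example under those assumptions, not the procedure developed in the paper; the component values are placeholders.

```python
import math

def output_ripple_pp(v_in, duty, n_phases, L, f_sw):
    """Peak-to-peak output-current ripple of n_phases interleaved buck phases.

    Assumes identical, uncoupled phases of inductance L, switched at f_sw and
    phase-shifted by 1/(n_phases * f_sw). m = floor(n_phases * duty) phases
    conduct during part of each sub-interval and m + 1 during the remainder,
    which is where the modulo (fractional) term enters.
    """
    m = math.floor(n_phases * duty)
    frac = n_phases * duty - m                     # (n_phases * duty) mod 1
    return v_in * frac * (m + 1 - n_phases * duty) / (n_phases * L * f_sw)

# Placeholder values: 48 V input, D = 0.3, 10 uH per phase, 100 kHz switching.
for n in (1, 2, 3, 4):
    print(f"{n} phase(s): {output_ripple_pp(48.0, 0.3, n, 10e-6, 100e3):.2f} A pk-pk")
```

Note how the ripple nearly cancels when the duty cycle approaches a multiple of 1/N (for example, three phases at a duty cycle close to 1/3), which is exactly the behaviour an equivalent-inductance model makes easy to reason about.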
Abstract:
The key to reducing the cost of electric vehicles is integration. All too often, systems such as the motor, motor controller, batteries and vehicle chassis/body are considered as separate problems. The truth is that many trade-offs can be made between these systems, causing an overall improvement in many areas, including total cost. Motor controller and battery cost have a relatively simple relationship: the less energy lost in the motor controller, the less energy has to be carried in the batteries, and hence the lower the battery cost. A motor controller's cost is primarily influenced by the cost of the switches. This paper therefore presents a method of assessing optimal switch selection on the premise that the optimal switch is the one that produces the lowest system cost, where system cost is the cost of batteries plus switches.
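A minimal sketch of that selection rule follows. All figures (switch prices, per-charge losses, battery cost per kWh) are hypothetical placeholders, not values from the paper; the point is simply that higher controller losses translate into extra battery capacity, so the cheapest switch is not necessarily the one giving the lowest system cost.

```python
# Hedged sketch of 'lowest system cost' switch selection:
# system cost = battery cost + switch cost, where switch losses increase the
# energy the battery pack must carry. All numbers below are placeholders.
CANDIDATE_SWITCHES = [
    # (name, cost per switch in $, estimated energy lost per charge in kWh)
    ("switch-A", 8.0, 0.90),
    ("switch-B", 15.0, 0.40),
    ("switch-C", 40.0, 0.20),
]

N_SWITCHES = 6                # e.g. six switches in a three-phase inverter
USEFUL_ENERGY_KWH = 20.0      # energy delivered to the drivetrain per charge (assumed)
BATTERY_COST_PER_KWH = 150.0  # installed pack cost in $/kWh (assumed)

def system_cost(switch_cost, loss_kwh):
    """Batteries-plus-switches cost for one candidate switch."""
    pack_kwh = USEFUL_ENERGY_KWH + loss_kwh        # the losses must be carried too
    return pack_kwh * BATTERY_COST_PER_KWH + N_SWITCHES * switch_cost

for name, cost, loss in CANDIDATE_SWITCHES:
    print(f"{name}: ${system_cost(cost, loss):,.0f}")
best = min(CANDIDATE_SWITCHES, key=lambda c: system_cost(c[1], c[2]))
print("lowest system cost:", best[0])
```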
Abstract:
The construction industry has an obligation to respond to the sustainability expectations of our society. Solutions that integrate innovative, intelligent and sustainable deliverables are vital if we are to meet new and emerging challenges. Industrialised Building Systems (IBS), otherwise known as prefabrication, employ a combination of ready-made components in the construction of buildings. They promote quality of production, simplify construction processes and minimise waste. The unique characteristics of this construction method respond well to sustainability. Despite these promises, however, IBS has yet to be effectively implemented in Malaysia. There are often misconceptions among key stakeholders about IBS applications, and existing rating schemes fail to assess IBS against sustainability measures. To ensure that the full sustainability potential of the buildings developed is captured, the critical factors and action plans agreeable to all participants in the development process need to be identified. Through a questionnaire survey, eighteen critical factors relevant to IBS sustainability were identified and encapsulated into a conceptual framework to coordinate a systematic IBS decision-making approach. The critical factors were separated into five categories: ecological performance; economic value; social equity and culture; technical quality; and implementation and enforcement. This categorisation extends the "Triple Bottom Line" to include social, economic, environmental and institutional dimensions. Semi-structured interviews helped identify strategies of action and solutions to potential problems through a SWOT analysis framework. These tools help decision-makers maximise opportunities by using available strengths, avoid weaknesses, and diagnose possible threats in the examined issues. The recommendations formed an integrated action plan presenting information on what to improve, and how, by tackling each critical factor during IBS development. It can be used as part of the project briefing documents for IBS designers. For validation and finalisation of the research deliverables, three case studies were conducted. The research fills a current gap by responding to IBS project scenarios in developing countries. It also provides a balanced view for designers to better understand sustainability potential and prioritise attention when managing sustainability issues in IBS applications.
Abstract:
Expert searchers engage with information as information brokers, researchers, reference librarians, information architects, faculty who teach advanced search, and in a variety of other information-intensive professions. Their experiences are characterized by a profound understanding of information concepts and skills and they have an agile ability to apply this knowledge to interacting with and having an impact on the information environment. This study explored the learning experiences of searchers to understand the acquisition of search expertise. The research question was: What can be learned about becoming an expert searcher from the learning experiences of proficient novice searchers and highly experienced searchers? The key objectives were: (1) to explore the existence of threshold concepts in search expertise; (2) to improve our understanding of how search expertise is acquired and how novice searchers, intent on becoming experts, can learn to search in more expertlike ways. The participant sample drew from two population groups: (1) highly experienced searchers with a minimum of 20 years of relevant professional experience, including LIS faculty who teach advanced search, information brokers, and search engine developers (11 subjects); and (2) MLIS students who had completed coursework in information retrieval and online searching and demonstrated exceptional ability (9 subjects). Using these two groups allowed a nuanced understanding of the experience of learning to search in expertlike ways, with data from those who search at a very high level as well as those who may be actively developing expertise. The study used semi-structured interviews, search tasks with think-aloud narratives, and talk-after protocols. Searches were screen-captured with simultaneous audio-recording of the think-aloud narrative. Data were coded and analyzed using NVivo9 and manually. Grounded theory allowed categories and themes to emerge from the data. Categories represented conceptual knowledge and attributes of expert searchers. In accord with grounded theory method, once theoretical saturation was achieved, during the final stage of analysis the data were viewed through lenses of existing theoretical frameworks. For this study, threshold concept theory (Meyer & Land, 2003) was used to explore which concepts might be threshold concepts. Threshold concepts have been used to explore transformative learning portals in subjects ranging from economics to mathematics. A threshold concept has five defining characteristics: transformative (causing a shift in perception), irreversible (unlikely to be forgotten), integrative (unifying separate concepts), troublesome (initially counter-intuitive), and may be bounded. Themes that emerged provided evidence of four concepts which had the characteristics of threshold concepts. These were: information environment: the total information environment is perceived and understood; information structures: content, index structures, and retrieval algorithms are understood; information vocabularies: fluency in search behaviors related to language, including natural language, controlled vocabulary, and finesse using proximity, truncation, and other language-based tools. The fourth threshold concept was concept fusion, the integration of the other three threshold concepts and further defined by three properties: visioning (anticipating next moves), being light on one's 'search feet' (dancing property), and profound ontological shift (identity as searcher). 
In addition to the threshold concepts, findings were reported that were not concept-based, including praxes and traits of expert searchers. A model of search expertise is proposed with the four threshold concepts at its core that also integrates the traits and praxes elicited from the study, attributes which are likewise long recognized in LIS research as present in professional searchers. The research provides a deeper understanding of the transformative learning experiences involved in the acquisition of search expertise. It adds to our understanding of search expertise in the context of today's information environment and has implications for teaching advanced search, for research more broadly within library and information science, and for methodologies used to explore threshold concepts.
Abstract:
Introduction Road safety researchers rely heavily on self-report data to explore the aetiology of crash risk. However, researchers consistently acknowledge a range of limitations associated with this methodological approach (e.g., self-report bias), which has been hypothesised to reduce the predictive efficacy of scales. Although well researched in other areas, one important factor often neglected in road safety studies is the fallibility of human memory. Given that accurate recall is a key assumption in many studies, the validity and consistency of self-report data warrant investigation. The aim of the current study was to examine the consistency of self-report data on crash history and details of the most recent reported crash on two separate occasions. Materials & Method A repeated measures design was utilised to examine the self-reported crash involvement history of 214 general motorists over a two-month period. Results A number of interesting discrepancies were noted in relation to the number of lifetime crashes reported by the participants and the descriptions of their most recent crash across the two occasions. Of the 214 participants who reported having been involved in a crash, 35 (22.3%) reported a lower number of lifetime crashes at Time 2 than at Time 1. Of the 88 drivers who reported no change in the number of lifetime crashes, 10 (11.4%) described a different most recent crash. Additionally, of the 34 reporting an increase in the number of lifetime crashes, 29 (85.3%) described the same crash on both occasions. Assessed as a whole, at least 47.1% of participants made a confirmed mistake at Time 1 or Time 2. Conclusions These results raise some doubt regarding the accuracy of memory recall across time. Given that self-reported crash involvement is the predominant dependent variable used in the majority of road safety research, this issue warrants further investigation. Replication of the study with a larger sample size and multiple recall periods would enhance understanding of the significance of this issue for road safety methodology.
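The consistency checks described above can be expressed as a small classification routine over paired responses. The sketch below is hypothetical (the study's actual coding scheme and field names are not given in the abstract); it simply mirrors the three discrepancy patterns reported.

```python
from dataclasses import dataclass

@dataclass
class CrashReport:
    lifetime_crashes: int    # self-reported number of lifetime crashes
    recent_crash_id: str     # label for the most recent crash described

def classify(t1: CrashReport, t2: CrashReport) -> str:
    """Classify a participant's Time 1 vs Time 2 responses using the
    discrepancy patterns described in the abstract (hypothetical field names)."""
    if t2.lifetime_crashes < t1.lifetime_crashes:
        return "fewer lifetime crashes reported at Time 2"
    if t2.lifetime_crashes == t1.lifetime_crashes:
        if t1.recent_crash_id != t2.recent_crash_id:
            return "same crash count but different most recent crash described"
        return "consistent"
    if t1.recent_crash_id == t2.recent_crash_id:
        return "more crashes reported but same most recent crash described"
    return "more crashes reported with a new most recent crash"

print(classify(CrashReport(3, "rear-end, wet road"), CrashReport(2, "rear-end, wet road")))
```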
Abstract:
This practice-led research project aims to use contemporary art processes and concepts of fandom to construct a space for the critical and creative exploration of the relationship between them. Much of the discourse addressing the intersection of these spaces over the last three decades tends to treat art and fan studies as separate areas of critical and theoretical research. There has also been very little consideration of the critical interface that art practice and fandom share in their engagement with one another, or of how the artist as fan might creatively exploit this relationship. Approaching these issues through a practice-led methodology that combines studio-based explorations and traditional modes of research, the project aims to demonstrate how my 'fannish' engagements with popular culture can generate new responses to, and understandings of, the relationship between fandom, affect and visual art. The research acts as a performative and creative investigation of fandom as I document the complicit tendencies that arise out of my affective relationship with pop cultural artefacts. It does this by appropriating and reconfiguring content from film, television and print media to create digital video installations aimed at engendering new experiences and critical interpretations of screen culture. This approach promotes new possibilities for creative engagements with art and popular culture, and these are framed through the lens of what I term the digital-bricoleur. The research is primarily contextualised by examining other artists' practices as well as selected theoretical frameworks that traverse my investigative terrain. The key artists discussed include Douglas Gordon, Candice Breitz, Pierre Huyghe, Paul Pfeiffer, and Jennifer and Kevin McCoy. The theoretical developments of the project draw on a pluralistic range of ideas, including Johanna Drucker's discussion of critical complicity in contemporary art, Matt Hills' discussion of subjectivity in fandom and academia, Nicolas Bourriaud's discussion of Postproduction art practices, and Jacques Rancière's ideas about aesthetics and politics. The methodology and artworks developed over the course of this project also demonstrate how digital-bricolage leads to new understandings of the relationships between contemporary art and entertainment. The research aims to exploit these apparently contradictory positions to generate a productive site for rethinking the relationship between the creative and critical possibilities of art and fandom. The outcomes of the research consist of a body of artworks (75%) that demonstrate new contributions to knowledge, and an exegetical component (25%) that reflects on, analyses and critically contextualises the practice-led findings.
Abstract:
"Living with Illness: Psychosocial Challenges focuses on developing and strengthening understanding of the illness experience. It encourages students to critically appraise conventional approaches to understanding and caring for those who are ill, to empower readers to off true holistic care and to, where appropriate, change nursing practice in light of current research findings. Traditionally nurses have drawn on knowledge from sociology and psychology as two separate but related disciplines to nursing, leaving the beginning level nurse to relate, integrate and translate knowledge gained into nursing practice. Living with Illness combines, in a unique way, sociological and psychological perspectives to creatively represent psychosocial knowledge that is innovative and directly applicable to contemporary nursing practice."-publisher website