918 results for Critical power model
Abstract:
This handbook chapter explores the relationship between critical theory and seminal studies of literacy that investigate inequities in education. It identifies new research questions that explore the connections between literacy and power while going beyond promises of emancipation.
Abstract:
Vietnam has a unique culture which is revealed in the way that people have built and designed their traditional housing. Vietnamese dwellings reflect occupants’ activities in their everyday lives while adapting to tropical climatic conditions shaped by seasonal monsoons. These characteristics of Vietnamese dwellings are said to have remained largely unchanged until the economic reform of 1986, when Vietnam experienced accelerated development based on a market-oriented economy. New housing types, including modern shop-houses, detached houses, and apartments, have since been designed in many places, particularly to satisfy dwellers’ new lifestyles in Vietnamese cities. This contemporary housing, mostly designed by architects, reflects rules of spatial organisation intended to support occupants’ social activities. However, contemporary housing spaces appear unsustainable in relation to socio-cultural values because they have been influenced by a globalism that advocates homogeneous spatial patterns, modern technologies, materials and construction methods. This study investigates the rules of spaces in Vietnamese houses built before and after the reform in order to define the socio-cultural implications for Vietnamese housing design. Firstly, it describes occupants’ views of their current dwellings in terms of indoor comfort conditions and social activities in spaces. Then, it examines the use of spaces in pre-reform Vietnamese housing through occupants’ activities and material applications. Finally, it discusses the organisation of spaces in both pre- and post-reform housing to understand how Vietnamese housing has been designed for occupants to live, act, work, and conduct traditional activities. Understanding spatial organisation is a way to identify characteristics of the occupants’ lived spaces created from the conceived space designed by designers. These characteristics of housing spaces will inform designers how to design future Vietnamese housing in response to cultural contexts. The study applied an abductive approach to the investigation of housing spaces. It used a conceptual framework based on Henri Lefebvre’s (1991) theory to understand space as the main factor constituting the language of design, and the principles of semiotics to examine spatial structure in housing as a language used in everyday life. The study involved a door-knocking survey of 350 households in four regional cities of Vietnam to interpret occupancy conditions and levels of occupants’ comfort; a statistical analysis was applied to interpret the survey data. The study also involved the selection and collection of data on fourteen cases of housing in the three main climatic regions of the country for analysing spatial organisation and housing characteristics. The study found that there has been a shift in the relationship of spaces from pre- to post-reform Vietnamese housing. It also identified that the space for welcoming guests and family activity has been the central space of Vietnamese housing. Based on the relationships of the central space with the others, theoretical models were proposed for three types of contemporary Vietnamese housing. The models are significant for adapting housing design to Vietnamese conditions and achieving socio-environmental characteristics because they were developed from the occupants’ requirements for their social activities.
Another contribution of the study is the use of methodological concepts to understand the language of living spaces. Further work will be needed to test the application of the models in future Vietnamese housing designs.
Abstract:
Aim This paper reports on the development and evaluation of an integrated clinical learning model to inform ongoing education for surgical nurses. The research aim was to evaluate the effectiveness of implementing a Respiratory Skills Update (ReSKU) education program, in the context of organisational utility, in improving surgical nurses' practice in the area of respiratory assessment. Background Continuous development and integration of technological innovations and research in the healthcare environment mandate the need for continuing education for nurses. Despite an increased worldwide emphasis on this, there is scant empirical evidence of program effectiveness. Methods A quasi-experimental pre-test, post-test non-equivalent control group design evaluated the impact of the ReSKU program on surgical nurses' clinical practice. The 2008 study was conducted in a 400-bed regional referral public hospital and was consistent with contemporary educational approaches using multi-modal, interactive teaching strategies. Findings The study demonstrated statistically significant differences between groups regarding reported use of respiratory skills three months after ReSKU program attendance. Between-group data analysis indicated that the intervention group's reported beliefs and attitudes pertaining to subscale descriptors showed statistically significant differences in three of the six subscales. Conclusion The construct of critical thinking in the clinical context, combined with clinical reasoning and purposeful reflection, was a powerful educational strategy to enhance competency and capability in clinicians.
Abstract:
Evaluating the validity of formative variables has presented ongoing challenges for researchers. In this paper we use global criterion measures to compare and critically evaluate two alternative formative measures of System Quality. One model is based on the ISO-9126 software quality standard, and the other is based on a leading information systems research model. We find that despite both models having a strong provenance, many of the items appear to be non-significant in our study. We examine the implications of this by evaluating the quality of the criterion variables we used, and the performance of PLS when evaluating formative models with a large number of items. We find that our respondents had difficulty distinguishing between global criterion variables measuring different aspects of overall System Quality. Also, because formative indicators “compete with one another” in PLS, it may be difficult to develop a set of measures which are all significant for a complex formative construct with a broad scope and a large number of items. Overall, we suggest that there is cautious evidence that both sets of measures are valid and largely equivalent, although questions still remain about the measures, the use of criterion variables, and the use of PLS for this type of model evaluation.
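As background for readers less familiar with formative measurement, a formative construct such as System Quality is specified as a weighted composite of its indicators, in contrast to a reflective specification in which the indicators are caused by the construct. The generic sketch below is illustrative only; the symbols do not correspond to the specific items compared in this study.

```latex
% Generic formative specification: the construct \eta (e.g. System Quality) is
% formed by its indicators x_1,\dots,x_n with weights \gamma_i and disturbance \zeta.
\eta = \sum_{i=1}^{n} \gamma_i x_i + \zeta
% Reflective specification, for contrast: each indicator reflects the construct.
x_i = \lambda_i \eta + \varepsilon_i, \qquad i = 1,\dots,n
```

Because the weights are estimated jointly, overlapping formative indicators can compete for weight, which is one reason individual items may appear non-significant even when the overall composite is meaningful, as the abstract notes for PLS.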
Abstract:
Capacity probability models of generating units are commonly used in many power system reliability studies at hierarchical level one (HLI). Analytical modelling of a generating system with many units, or of generating units with many derated states, can result in an extensive number of states in the capacity model. Limitations on the available memory and computational time of present computer facilities can pose difficulties for the assessment of such systems in many studies. A clustering procedure using the nearest centroid sorting method was previously applied to the IEEE-RTS load model, and proved very effective in producing a highly similar model with substantially fewer states. This paper presents an extended application of the clustering method that includes the capacity probability representation. A series of sensitivity studies is illustrated using the IEEE-RTS generating system and load models. The loss of load expectation (LOLE) and the loss of energy expectation (LOEE) are used as indicators to evaluate the application.
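To make the idea concrete, the sketch below builds a capacity outage probability table for a small set of two-state units, reduces it with a nearest-centroid (k-means style) clustering step, and evaluates LOLE and LOEE against an hourly load curve. The unit data and load curve are invented toy values, not the IEEE-RTS models, and the clustering arrangement is only an assumption about how such a procedure might be set up.

```python
# Illustrative sketch: reducing a capacity outage probability table by
# nearest-centroid clustering, then computing LOLE/LOEE. Toy data only.
import numpy as np

rng = np.random.default_rng(0)

def capacity_outage_table(units):
    """Enumerate outage states for independent two-state units.
    units: list of (capacity_MW, forced_outage_rate) tuples."""
    caps, probs = np.array([0.0]), np.array([1.0])
    for c, q in units:
        caps = np.concatenate([caps, caps + c])          # states with this unit also out
        probs = np.concatenate([probs * (1 - q), probs * q])
    return caps, probs

def cluster_states(caps, probs, n_clusters=8, iters=50):
    """Nearest-centroid clustering of outage states (probability weighted)."""
    centroids = np.linspace(caps.min(), caps.max(), n_clusters)
    for _ in range(iters):
        idx = np.abs(caps[:, None] - centroids[None, :]).argmin(axis=1)
        for k in range(n_clusters):
            w = probs[idx == k]
            if w.sum() > 0:
                centroids[k] = np.average(caps[idx == k], weights=w)
    cluster_probs = np.array([probs[idx == k].sum() for k in range(n_clusters)])
    return centroids, cluster_probs

def lole_loee(installed, out_caps, out_probs, hourly_load):
    """Loss of load expectation (hours) and loss of energy expectation (MWh)."""
    lole, loee = 0.0, 0.0
    for load in hourly_load:
        margin = installed - out_caps - load             # capacity margin in each state
        deficit = np.clip(-margin, 0.0, None)
        lole += out_probs[margin < 0].sum()              # probability of shortfall this hour
        loee += (out_probs * deficit).sum()              # expected unserved energy this hour
    return lole, loee

units = [(200, 0.05)] * 5 + [(100, 0.04)] * 6            # toy generating system (1600 MW)
installed = sum(c for c, _ in units)
caps, probs = capacity_outage_table(units)
c_caps, c_probs = cluster_states(caps, probs)
load = 900 + 300 * rng.random(8736)                      # toy hourly load curve (MW)
print("exact table: LOLE=%.3f h, LOEE=%.1f MWh" % lole_loee(installed, caps, probs, load))
print("8 clusters : LOLE=%.3f h, LOEE=%.1f MWh" % lole_loee(installed, c_caps, c_probs, load))
```

Comparing the two printed results gives a feel for how much accuracy is traded away when the full state space is collapsed into a handful of clustered states.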
Abstract:
The ability to estimate the asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure in future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is the modelling of condition indicators and operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed based on the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully incorporate the three types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) into a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables) whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into the covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three available types of asset health information into the modelling of hazard and reliability predictions, and also captures the relationship between actual asset health and both condition measurements and operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators.
Condition indicators provide information about the health condition of an asset; therefore they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators arise from the environment in which an asset operates and have not been explicitly identified by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators could be nought in EHM, condition indicators always emerge because they are observed and measured for as long as an asset is operational and has survived. EHM has several advantages over the existing covariate-based hazard models. One is that this model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. According to the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, due to sparse failure event data of assets, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the semi-parametric EHM's restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, which is a distribution-free model, has been developed. The development of EHM into two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimated results with those of the other existing covariate-based hazard models. The comparison results demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, regarding the new parameter estimation method in the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
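For orientation, the sketch below contrasts a classical Weibull-baseline Proportional Hazard Model with one plausible reading of a hazard whose baseline depends on both time and a condition indicator, while operating environment covariates act multiplicatively. This is not the authors' EHM or its parameter estimation method; the functional forms and numbers are illustrative assumptions only.

```python
# Illustrative sketch only: (a) classical Weibull-baseline PHM, (b) a hazard whose
# baseline depends on time AND a condition indicator, with environment covariates
# as multiplicative accelerators. Parameter values and forms are invented.
import numpy as np

def phm_hazard(t, z_env, beta=2.0, eta=1000.0, gamma=(0.5,)):
    """Classical PHM: h(t|z) = h0(t) * exp(gamma . z), Weibull baseline."""
    h0 = (beta / eta) * (t / eta) ** (beta - 1)
    return h0 * np.exp(np.dot(gamma, z_env))

def condition_dependent_hazard(t, condition, z_env,
                               beta=2.0, eta=1000.0, alpha=0.8, gamma=(0.5,)):
    """Baseline depends on time and a condition indicator (e.g. vibration level);
    operating environment covariates scale it multiplicatively."""
    h0 = (beta / eta) * (t / eta) ** (beta - 1) * np.exp(alpha * condition)
    return h0 * np.exp(np.dot(gamma, z_env))

def reliability(hazard_fn, t_grid, **kw):
    """R(t) = exp(-cumulative hazard), via trapezoidal integration."""
    h = np.array([hazard_fn(ti, **kw) for ti in t_grid])
    H = np.concatenate([[0.0], np.cumsum((h[1:] + h[:-1]) / 2 * np.diff(t_grid))])
    return np.exp(-H)

t = np.linspace(1, 2000, 200)
R_phm = reliability(phm_hazard, t, z_env=[1.0])
R_cond = reliability(lambda tt, z_env: condition_dependent_hazard(
    tt, condition=0.001 * tt, z_env=z_env), t, z_env=[1.0])  # toy: condition worsens with age
i = int(np.searchsorted(t, 1000))
print(f"R_PHM(~1000 h) = {R_phm[i]:.3f}, R_condition(~1000 h) = {R_cond[i]:.3f}")
```

The point of the contrast is simply that feeding the condition indicator into the baseline changes the predicted reliability trajectory relative to a purely time-based baseline, which is the kind of effect the abstract attributes to condition data.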
Abstract:
Power system restoration after a large area outage involves many factors, and the procedure is usually very complicated. A decision-making support system could therefore be developed to find the optimal black-start strategy. In order to evaluate candidate black-start strategies, some indices, usually both qualitative and quantitative, are employed. However, it may not be possible to directly synthesize these indices, and different extents of interaction may exist among them. In the existing black-start decision-making methods, qualitative and quantitative indices cannot be well synthesized, and the interactions among different indices are not taken into account. The vague set, an extended version of the well-developed fuzzy set, can be employed to deal with decision-making problems with interacting attributes. Given this background, the vague set is first employed in this work to represent the indices so as to facilitate comparisons among them. Then, the concept of a vague-valued fuzzy measure is presented, and on that basis a mathematical model for black-start decision-making is developed. Compared with the existing methods, the proposed method can deal with the interactions among indices and represent the fuzzy information more reasonably. Finally, an actual power system is used to demonstrate the basic features of the developed model and method.
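As background, a vague set assigns each element a truth membership t(x) and a false membership f(x) with t(x) + f(x) ≤ 1, so its membership is the interval [t(x), 1 − f(x)]. The minimal sketch below represents such vague-valued indices and ranks toy candidate strategies with a simple score function; it is not the paper's vague-valued fuzzy measure or its black-start decision-making model.

```python
# Minimal sketch of vague-valued index representation, assuming the standard
# vague-set convention [t, 1-f] with t + f <= 1. The score function and the
# example strategies are illustrative only.
from dataclasses import dataclass

@dataclass
class VagueValue:
    t: float   # degree of truth (support for the index)
    f: float   # degree of falsity (opposition to the index)

    def __post_init__(self):
        assert 0.0 <= self.t and 0.0 <= self.f and self.t + self.f <= 1.0

    @property
    def interval(self):
        """Membership interval [t, 1 - f]; its width 1 - t - f is the hesitancy."""
        return (self.t, 1.0 - self.f)

    def score(self):
        """A common score function for ranking vague values: S = t - f."""
        return self.t - self.f

# Toy black-start candidate strategies scored on a single (illustrative) index.
candidates = {"strategy_A": VagueValue(0.7, 0.2),
              "strategy_B": VagueValue(0.6, 0.1),
              "strategy_C": VagueValue(0.5, 0.4)}
for name, v in sorted(candidates.items(), key=lambda kv: kv[1].score(), reverse=True):
    print(f"{name}: interval={v.interval}, score={v.score():+.2f}")
```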
Abstract:
A critical step in the dissemination of ovarian cancer is the formation of multicellular spheroids from cells shed from the primary tumour. The objectives of this study were to apply bioengineered three-dimensional (3D) microenvironments for culturing ovarian cancer spheroids in vitro and simultaneously to build on a mathematical model describing the growth of multicellular spheroids in these biomimetic matrices. Cancer cells derived from human epithelial ovarian carcinoma were embedded within biomimetic hydrogels of varying stiffness and grown for up to 4 weeks. Immunohistochemistry, imaging and growth analyses were used to quantify the dependence of cell proliferation and apoptosis on matrix stiffness, long-term culture and treatment with the anti-cancer drug paclitaxel. The mathematical model was formulated as a free boundary problem in which each spheroid was treated as an incompressible porous medium. The functional forms used to describe the rates of cell proliferation and apoptosis were motivated by the experimental work and predictions of the mathematical model compared with the experimental output. This work aimed to establish whether it is possible to simulate solid tumour growth on the basis of data on spheroid size, cell proliferation and cell death within these spheroids. The mathematical model predictions were in agreement with the experimental data set and simulated how the growth of cancer spheroids was influenced by mechanical and biochemical stimuli including matrix stiffness, culture duration and administration of a chemotherapeutic drug. Our computational model provides new perspectives on experimental results and has informed the design of new 3D studies of chemoresistance of multicellular cancer spheroids.
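For readers unfamiliar with this class of models, a generic free-boundary formulation of spheroid growth in an incompressible porous medium takes roughly the following form; the specific functional forms used in the study may differ, so this is an illustrative template rather than the paper's equations.

```latex
% Generic incompressible porous-medium spheroid model (illustrative template only):
% proliferation k_p(c) and apoptosis k_a(c) depend on a nutrient/drug field c,
% the cell velocity u follows Darcy's law, and the boundary R(t) moves with u.
\nabla \cdot \mathbf{u} = k_p(c) - k_a(c), \qquad
\mathbf{u} = -\mu \nabla p, \qquad
D \nabla^2 c = \Gamma(c), \qquad
\frac{\mathrm{d}R}{\mathrm{d}t} = \mathbf{u}\big(R(t), t\big) \cdot \hat{\mathbf{r}}.
```

In such a template, the experimentally measured proliferation and apoptosis rates enter through the source term on the right of the mass balance, which is how spheroid-size data can constrain the model.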
Abstract:
Power system operation and planning are facing increasing uncertainties, especially with the deregulation process and increasing demand for power. Probabilistic power system stability assessment and probabilistic power system planning have been identified by EPRI as important trends in power system operations and planning. Probabilistic small signal stability assessment studies the impact of system parameter uncertainties on system small disturbance stability characteristics. Research in this area has covered many uncertainty factors such as controller parameter uncertainties and generation uncertainties. One of the most important factors in power system stability assessment is load dynamics. In this paper, a composite load model is used to study the impact of load parameter uncertainties on system small signal stability characteristics. The results provide useful insight into the significant stability impact brought to the system by load dynamics, and can be used to help system operators in system operation and planning analysis.
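A minimal Monte Carlo sketch of probabilistic small signal stability assessment is given below: an uncertain load parameter is sampled, a linearised state matrix is formed, and the probability of inadequate damping is estimated from the eigenvalues. The 2x2 system and the way the load parameter enters it are toy assumptions, not a composite load model or the study's test system.

```python
# Illustrative sketch only: Monte Carlo small-signal stability under load-parameter
# uncertainty, using a toy second-order linearised system.
import numpy as np

rng = np.random.default_rng(1)

def state_matrix(load_dynamic_fraction):
    """Toy linearised system: damping worsens as the dynamic (motor) load share grows.
    Natural frequency 5 rad/s; damping ratio = damping / 5."""
    damping = 0.7 - 0.7 * load_dynamic_fraction
    return np.array([[0.0, 1.0],
                     [-25.0, -2.0 * damping]])

def min_damping_ratio(A):
    """Smallest damping ratio among oscillatory eigenvalue pairs of A."""
    eig = np.linalg.eigvals(A)
    osc = eig[np.abs(eig.imag) > 1e-9]
    if osc.size == 0:
        return 1.0
    return float(np.min(-osc.real / np.abs(osc)))

samples = rng.uniform(0.2, 0.9, size=5000)          # uncertain dynamic-load fraction
ratios = np.array([min_damping_ratio(state_matrix(s)) for s in samples])
print(f"P(damping ratio < 5%) = {np.mean(ratios < 0.05):.3f}")
print(f"mean damping ratio    = {ratios.mean():.3f}")
```

The estimated probability of violating the damping criterion is the kind of probabilistic stability indicator such studies report, here driven entirely by the uncertainty in the load parameter.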
Abstract:
Introduction Quality control (QC) and external quality assurance (EQA) are integral to good pathology laboratory practice. Medical Laboratory Science students undertake a project exploring internal QC and EQA procedures used in chemical pathology laboratories. Each student represents an individual lab and the class group represents the peer group of labs performing the same assay using the same method. Methods Using a manual BCG assay for serum albumin, normal and abnormal controls are run with a patient sample over 7 weeks. The QC results are assessed each week using calculated z-scores and both 2S and 3S control rules to determine whether a run is ‘in control’. At the end of the 7 weeks a completed LJ chart is assessed using the Westgard multirules. Students investigate causes of error and the implications for both lab practice and patient care if runs are not ‘in control’. Twice in the 7 weeks, two EQA samples (with target values unknown) are assayed alongside the weekly QC and patient samples. Results from each student are collated and form the basis of an EQA program. ALP are provided and students complete a Youden plot, which is used to analyse the performance of each ‘lab’ and of the method to identify bias. Students explore the possible clinical implications of a biased method and address the actions that should be taken if a lab is not in consensus with the peer group. Conclusion This project is a model of ‘real world’ practice in which students demonstrate an understanding of the importance of QC procedures in a pathology laboratory; apply and interpret statistics, QC rules and charts; apply critical thinking and analytical skills to quality performance data to make recommendations for further practice; and improve their technical competence and confidence.
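A minimal sketch of the weekly QC step described above follows: z-scores for the normal and abnormal controls, with simple 2S (warning) and 3S (rejection) checks. The target means, SDs and measured values are invented example numbers, and the full Westgard multirule set is not implemented here.

```python
# Minimal weekly QC sketch: z-scores plus 1-2s (warning) and 1-3s (reject) checks.
# Target values and measurements are invented example numbers.
def z_score(measured, target_mean, target_sd):
    return (measured - target_mean) / target_sd

def assess_run(controls):
    """controls: list of (name, measured, target_mean, target_sd).
    Returns 'in control', 'warning', or 'reject run' for the whole run."""
    severity = 0  # 0 = in control, 1 = warning, 2 = reject
    for name, x, mu, sd in controls:
        z = z_score(x, mu, sd)
        if abs(z) > 3:
            status, severity = "1-3s violation", max(severity, 2)
        elif abs(z) > 2:
            status, severity = "1-2s warning", max(severity, 1)
        else:
            status = "ok"
        print(f"{name}: z = {z:+.2f} -> {status}")
    return ["in control", "warning", "reject run"][severity]

week = [("normal control (albumin, g/L)",   37.8, 38.0, 1.0),
        ("abnormal control (albumin, g/L)", 24.9, 22.0, 1.2)]
print("run verdict:", assess_run(week))
```

Plotting these z-scores week by week is essentially what the students' LJ chart does, with the Westgard multirules adding across-run patterns (e.g. repeated 2S excursions) on top of the single-run checks shown here.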
Abstract:
Organizations from every industry sector seek to enhance their business performance and competitiveness through the deployment of contemporary information systems (IS), such as Enterprise Systems (ERP). Investments in ERP are complex and costly, attracting scrutiny and pressure to justify their cost. Thus, IS researchers highlight the need for systematic evaluation of information system success, or impact, which has resulted in the introduction of varied models for evaluating information systems. One of these systematic measurement approaches is the IS-Impact Model introduced by a team of researchers at Queensland University of Technology (QUT) (Gable, Sedera, & Chan, 2008). The IS-Impact Model is conceptualized as a formative, multidimensional index that consists of four dimensions. Gable et al. (2008) define IS-Impact as "a measure at a point in time, of the stream of net benefits from the IS, to date and anticipated, as perceived by all key-user-groups" (p.381). The IT Evaluation Research Program (ITE-Program) at QUT has grown the IS-Impact Research Track with the central goal of conducting further studies to enhance and extend the IS-Impact Model. The overall goal of the IS-Impact research track at QUT is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable, 2009). To achieve this, the IS-Impact research track advocates programmatic research guided by the principles of tenacity, holism, and generalizability through extension research strategies. This study was conducted within the IS-Impact Research Track to further generalize the IS-Impact Model by extending it to the Saudi Arabian context. According to Hofstede (2012), the national culture of Saudi Arabia is significantly different from the Australian national culture, making the Saudi Arabian culture an interesting context for testing the external validity of the IS-Impact Model. The study re-visits the IS-Impact Model from the ground up. Rather than assume the existing instrument is valid in the new context, or simply assess its validity through quantitative data collection, the study takes a qualitative, inductive approach to re-assessing the necessity and completeness of existing dimensions and measures. This is done in two phases: an Exploratory Phase and a Confirmatory Phase. The Exploratory Phase addresses the first research question of the study, "Is the IS-Impact Model complete and able to capture the impact of information systems in Saudi Arabian organisations?". The content analysis, used to analyze the Identification Survey data, indicated that 2 of the 37 measures of the IS-Impact Model are not applicable to the Saudi Arabian context. Moreover, no new measures or dimensions were identified, evidencing the completeness and content validity of the IS-Impact Model. In addition, the Identification Survey data suggested several concepts related to IS-Impact, the most prominent of which was "Computer Network Quality" (CNQ). The literature supported the existence of a theoretical link between IS-Impact and CNQ (CNQ is viewed as an antecedent of IS-Impact). With the primary goal of validating the IS-Impact Model within its extended nomological network, CNQ was introduced to the research model. The Confirmatory Phase addresses the second research question of the study, "Is the Extended IS-Impact Model Valid as a Hierarchical Multidimensional Formative Measurement Model?".
The objective of the Confirmatory Phase was to test the validity of the IS-Impact Model and the CNQ Model. To achieve that, IS-Impact, CNQ, and IS-Satisfaction were operationalized in a survey instrument, and the research model was then assessed by employing the Partial Least Squares (PLS) approach. The CNQ model was validated as a formative model. Similarly, the IS-Impact Model was validated as a hierarchical multidimensional formative construct. However, the analysis indicated that one of the IS-Impact Model indicators was insignificant and could be removed from the model. Thus, the resulting Extended IS-Impact Model consists of 4 dimensions and 34 measures. Finally, the structural model was also assessed against two aspects: explanatory and predictive power. The analysis revealed that the path coefficient between CNQ and IS-Impact is significant (t-value = 4.826) and relatively strong (β = 0.426), with CNQ explaining 18% of the variance in IS-Impact. These results supported the hypothesis that CNQ is an antecedent of IS-Impact. The study demonstrates that the quality of the Computer Network affects the quality of the Enterprise System (ERP) and consequently the impacts of the system. Therefore, practitioners should pay attention to Computer Network quality. Similarly, the path coefficient between IS-Impact and IS-Satisfaction was significant (t-value = 17.79) and strong (β = 0.744), with IS-Impact alone explaining 55% of the variance in Satisfaction, consistent with the results of the original IS-Impact study (Gable et al., 2008). The research contributions include: (a) supporting the completeness and validity of the IS-Impact Model as a Hierarchical Multidimensional Formative Measurement Model in the Saudi Arabian context, (b) operationalizing Computer Network Quality as conceptualized in the ITU-T Recommendation E.800 (ITU-T, 1993), (c) validating CNQ as a formative measurement model and as an antecedent of IS-Impact, and (d) conceptualizing and validating IS-Satisfaction as a reflective measurement model and as an immediate consequence of IS-Impact. The CNQ model provides a framework to perceptually measure Computer Network Quality from multiple perspectives. The CNQ model features an easy-to-understand, easy-to-use, and economical survey instrument.
Abstract:
This paper investigates the critical role of knowledge sharing (KS) in leveraging manufacturing activities, namely integrated supplier management (ISM) and new product development (NPD), to improve business performance (BP) within the context of Taiwanese electronic manufacturing companies. The research adopted a sequential mixed method research design, which provided both quantitative empirical evidence and qualitative insights into the moderating effect of KS on the relationships between these two core manufacturing activities and BP. First, a questionnaire survey was administered, which resulted in a sample of 170 managerial and technical professionals providing their opinions on KS, NPD and ISM activities and the BP level within their respective companies. On the basis of the collected data, factor analysis was used to verify the measurement model, followed by correlation analysis to explore factor interrelationships, and finally moderated regression analyses to extract the moderating effects of KS on the relationships of NPD and ISM with BP. Following the quantitative study, six semi-structured interviews were conducted to provide qualitative in-depth insights into the value added from KS practices to the targeted manufacturing activities and the extent of its leveraging power. Results from the quantitative statistical analysis indicated that KS, NPD and ISM all have a significant positive impact on BP. Specifically, IT infrastructure and open communication were identified as the two types of KS practices that could facilitate enriched supplier evaluation and selection, empower active employee involvement in the design process, and provide support for product simplification and the modular design process, thereby improving manufacturing performance and strengthening company competitiveness. The interviews authenticated many of the empirical findings, suggesting that in the contemporary manufacturing context KS has become an integral part of many ISM and NPD activities and, when embedded properly, can lead to an improvement in BP. The paper also highlights a number of useful implications for manufacturing companies seeking to leverage their BP through innovative and sustained KS practices.
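A small sketch of the kind of moderated regression described above follows: main effects of NPD and KS on BP are fitted first, then a KS × NPD interaction term is added to test moderation. The variable names, simulated data, and use of statsmodels are illustrative assumptions, not the study's survey items or results.

```python
# Illustrative sketch of moderated regression: does KS moderate the NPD -> BP link?
# Simulated placeholder data, not the study's survey responses.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 170
ks  = rng.normal(size=n)                     # knowledge sharing (standardised score)
npd = rng.normal(size=n)                     # new product development activity
bp  = 0.4 * npd + 0.3 * ks + 0.2 * ks * npd + rng.normal(scale=0.8, size=n)

# Hierarchical / moderated regression: main effects first, then the interaction.
X_main = sm.add_constant(np.column_stack([npd, ks]))
X_mod  = sm.add_constant(np.column_stack([npd, ks, npd * ks]))
m_main = sm.OLS(bp, X_main).fit()
m_mod  = sm.OLS(bp, X_mod).fit()

print(f"R^2, main effects only:          {m_main.rsquared:.3f}")
print(f"R^2, with KS x NPD interaction:  {m_mod.rsquared:.3f}")
print(f"interaction coefficient: {m_mod.params[-1]:+.3f} (p = {m_mod.pvalues[-1]:.3f})")
```

A significant interaction coefficient, together with a meaningful increase in R², is the usual evidence that KS moderates (rather than merely accompanies) the NPD–BP relationship.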
Abstract:
A general electrical model of a piezoelectric transducer for ultrasound applications consists of a capacitor in parallel with RLC legs. A high power voltage source converter can, however, generate significant voltage stress across the transducer, which creates high leakage currents. One solution is to reduce the voltage stress across the piezoelectric transducer by using an LC filter; however, a main drawback is that this changes the piezoelectric resonant frequency and its characteristics, thereby reducing the efficiency of energy conversion through the transducer. This paper proposes that a high frequency current source converter is a suitable topology to drive high power piezoelectric transducers efficiently.
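To illustrate the electrical model referred to above, the sketch below computes the impedance of a shunt capacitance in parallel with a series RLC "motional" leg (the Butterworth-Van Dyke form) over a frequency sweep and locates the series and parallel resonances. The component values are invented examples, not measurements of any particular transducer.

```python
# Illustrative sketch: impedance of a piezoelectric transducer modelled as a
# shunt capacitor C0 in parallel with one or more series RLC legs. Toy values.
import numpy as np

def bvd_impedance(f, C0, legs):
    """Impedance of C0 in parallel with series-RLC legs.
    legs: iterable of (R, L, C) tuples."""
    w = 2 * np.pi * f
    y = 1j * w * C0                                    # admittance of the shunt capacitor
    for R, L, C in legs:
        z_leg = R + 1j * w * L + 1 / (1j * w * C)      # one series RLC (motional) leg
        y = y + 1 / z_leg
    return 1 / y

f = np.linspace(10e3, 60e3, 5000)                      # frequency sweep (Hz)
Z = bvd_impedance(f, C0=10e-9, legs=[(50.0, 50e-3, 0.4e-9)])
f_series = f[np.argmin(np.abs(Z))]                     # series (motional) resonance
f_parallel = f[np.argmax(np.abs(Z))]                   # parallel (anti-)resonance
print(f"series resonance   ~ {f_series / 1e3:.1f} kHz")
print(f"parallel resonance ~ {f_parallel / 1e3:.1f} kHz")
```

The sharp impedance minimum and maximum make visible why adding an LC filter, which interacts with C0 and the motional leg, can shift the resonant behaviour the transducer is designed to exploit.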
Abstract:
Australian universities are currently engaging with new governmental policies and regulations that require them to demonstrate enhanced quality and accountability in teaching and research. The development of national academic standards for learning outcomes in higher education is one such instance of this drive for excellence. These discipline-specific standards articulate the minimum, or Threshold, Learning Outcomes to be addressed by higher education institutions so that graduating students can demonstrate their achievement to their institutions, accreditation agencies, and industry recruiters. This impacts not only on the design of Engineering courses (with particular emphasis on pedagogy and assessment), but also on the preparation of academics to engage with these standards and implement them in their day-to-day teaching practice at a micro level. This imperative for enhanced quality and accountability in teaching is also significant at a meso level, for according to the Australian Bureau of Statistics, about 25 per cent of teachers in Australian universities are aged 55 and above and more than 54 per cent are aged 45 and above (ABS, 2006). A number of institutions have undertaken recruitment drives to regenerate and enrich their academic workforce by appointing capacity-building research professors and increasing the numbers of early- and mid-career academics. This nationally driven agenda for quality and accountability in teaching also permeates the micro level of engineering education, since the demand for enhanced academic standards and learning outcomes requires both a strong advocacy for a shift to an authentic, collaborative, outcomes-focused education and the mechanisms to support academics in transforming their professional thinking and practice. Outcomes-focused education means giving greater attention to the ways in which the curriculum design, pedagogy, assessment approaches and teaching activities can most effectively make a positive, verifiable difference to students' learning. Such education is authentic when it is couched firmly in the realities of learning environments, student and academic staff characteristics, and trustworthy educational research. That education will be richer and more efficient when staff work collaboratively, contributing their knowledge, experience and skills to achieve learning outcomes based on agreed objectives. We know that the school or departmental levels of universities are the most effective loci of change in approaches to teaching and learning practices in higher education (Knight & Trowler, 2000). Heads of Schools are increasingly entrusted with more responsibilities: in addition to setting strategic directions and managing the operational and sometimes financial aspects of their school, they are also expected to lead the development and delivery of teaching, research and other academic activities. Guiding and mentoring individuals and groups of academics is one critical aspect of the Head of School's role. Yet they do not always have the resources or support to help them mentor staff, especially the more junior academics. In summary, the international trend in undergraduate engineering course accreditation towards the demonstration of attainment of graduate attributes poses new challenges in addressing academic staff development needs and the assessment of learning.
This paper will give some insights into the conceptual design, implementation and empirical effectiveness to date of a Fellow-In-Residence Engagement (FIRE) program. The program is proposed as a model for achieving better engagement of academics with contemporary issues and effectively enhancing their teaching and assessment practices. It will also report on the program's collaborative approach to working with Heads of Schools to better support academics, especially early-career ones, by utilizing formal and informal mentoring. Further, the paper will discuss possible factors that may assist the achievement of the intended outcomes of such a model, and will examine its contributions to engendering outcomes-focussed thinking in engineering education.
Abstract:
We introduce the Network Security Simulator (NeSSi2), an open source discrete event-based network simulator. It incorporates a variety of features relevant to network security, distinguishing it from general-purpose network simulators. Compared to its predecessor NeSSi, it has been extended with a three-tier plugin architecture and a generic network model to shift its focus towards a simulation framework for critical infrastructures. We demonstrate the gained adaptability with different use cases.