975 results for “limitations of therapy”


Relevance: 90.00%

Abstract:

There has been an increasing interest by governments worldwide in the potential benefits of open access to public sector information (PSI). However, an important question remains: can a government incur tortious liability for incorrect information released online under an open content licence? This paper argues that the release of PSI online for free under an open content licence, specifically a Creative Commons licence, is within the bounds of an acceptable level of risk to government, especially where users are informed of the limitations of the data and appropriate information management policies and principles are in place to ensure accountability for data quality and accuracy.

Relevance: 90.00%

Abstract:

With the identification of common single locus point mutations as risk factors for thrombophilia, many DNA testing methodologies have been described for detecting these variations. Traditionally, functional or immunological testing methods have been used to investigate quantitative anticoagulant deficiencies. However, with the emergence of the genetic variations factor V Leiden, prothrombin 20210 and, to a lesser extent, the methylenetetrahydrofolate reductase (MTHFR677) and factor V HR2 haplotype, traditional testing methodologies have proved to be less useful and DNA technology is instead more commonly employed in diagnostics. This review considers many of the DNA techniques that have proved useful in the detection of common genetic variants that predispose to thrombophilia. Techniques involving gel analysis are used to detect the presence or absence of restriction sites, electrophoretic mobility shifts, as in single strand conformation polymorphism or denaturing gradient gel electrophoresis, and product formation in allele-specific amplification. Such techniques may be sensitive, but they are unwieldy and often need to be validated objectively. To overcome some of the limitations of gel analysis, especially when dealing with larger sample numbers, many alternative detection formats, such as closed tube systems, microplates and microarrays (minisequencing, real-time polymerase chain reaction, and oligonucleotide ligation assays), have been developed. In addition, many of the emerging technologies take advantage of colourimetric or fluorescence detection (including energy transfer), which allows qualitative and quantitative interpretation of results. With the large variety of DNA technologies available, the choice of methodology will depend on several factors, including cost and the need for speed, simplicity and robustness. © 2000 Lippincott Williams & Wilkins.
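Several of the gel-based techniques mentioned above detect the presence or absence of a restriction enzyme recognition site. As a purely illustrative in-silico sketch of that idea (the amplicon sequences and recognition site below are placeholders, not the actual factor V Leiden or prothrombin 20210 assays), the following Python snippet checks whether a single-base change removes a site:

# Illustrative in-silico check of an RFLP-style assay: does a single-base
# change remove a restriction enzyme recognition site from an amplicon?
# The sequences and the recognition site below are placeholders, not the
# actual factor V Leiden or prothrombin 20210 assay details.

import re

RECOGNITION_SITE = "GAATTC"  # placeholder 6-bp recognition sequence

def has_site(amplicon: str, site: str = RECOGNITION_SITE) -> bool:
    """Return True if the recognition site occurs anywhere in the amplicon."""
    return re.search(site, amplicon.upper()) is not None

wild_type = "ACCTGAATTCGGTTA"  # site present -> fragment is cut
variant = "ACCTGAGTTCGGTTA"    # single-base change destroys the site

for label, seq in [("wild type", wild_type), ("variant", variant)]:
    status = "cut (site present)" if has_site(seq) else "uncut (site absent)"
    print(f"{label}: {status}")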

Relevance: 90.00%

Abstract:

The emergence of Twenty20 cricket at the elite level has been marketed on the excitement of the big hitter, where it seems that winning is a result of the muscular batter hitting boundaries at will. This version of the game has captured the imagination of many young players who all want to score runs with “big hits”. However, in junior cricket, boundary hitting is often more difficult due to the size limitations of children and games played on outfields where the ball does not travel quickly. As a result, winning is often achieved via a less spectacular route – by scoring more singles than your opponents. However, most standard coaching texts only describe how to play boundary scoring shots (e.g. the drives, pulls, cuts and sweeps) and defensive shots to protect the wicket. Learning to bat appears to have been reduced to extremes of force production, i.e. maximal force production to hit boundaries or minimal force production to stop the ball from hitting the wicket. Initially, this is not a problem because the typical innings of a young player (<12 years) would be based on the concept of “block” or “bash” – they “block” the good balls and “bash” the short balls. This approach works because there are many opportunities to hit boundaries off the numerous inaccurate deliveries of novice bowlers. Most runs are scored behind the wicket by using the pace of the bowler’s delivery to re-direct the ball, because the intrinsic dynamics (i.e. lack of strength) of most children mean that they can only create sufficient power by playing shots where the whole body can contribute to force production. This method works well until the novice player comes up against more accurate bowling and finds they have no way of scoring runs. Once batters begin to face “good” bowlers, they have to learn to score runs via singles. In cricket coaching manuals (e.g. ECB, n.d.), running between the wickets is treated as a separate task to batting, and the “basics” of running, such as how to “back up”, carry the bat, call and turn, and slide the bat into the crease, are “drilled” into players. This task decomposition strategy focussing on techniques is a common approach to skill acquisition in many highly traditional sports, typified in cricket by activities where players hit balls off tees and receive “throw-downs” from coaches. However, the relative usefulness of these approaches in the acquisition of sporting skills is increasingly being questioned (Pinder, Renshaw & Davids, 2009). We will discuss why this is the case in the next section.

Relevance: 90.00%

Abstract:

Given the recent emergence of the smart grid and related technologies, their security is a prime concern. Intrusion detection provides a second line of defense. However, conventional intrusion detection systems (IDSs) are unable to adequately address the unique requirements of the smart grid. This paper presents a gap analysis of contemporary IDSs from a smart grid perspective. It highlights the lack of adequate intrusion detection within the smart grid and discusses the limitations of current IDS approaches. The gap analysis identifies current IDSs as being unsuited to smart grid applications without significant changes to address smart grid-specific requirements.

Relevance: 90.00%

Abstract:

Cracking is a significant factor that can lead to rainfall-induced instability in soil slopes. Cracks at the soil surface decrease the shear strength and increase the hydraulic conductivity of a soil slope. Although previous research has shown the effect of surface cracks on soil stability, the influence of deep cracks on stability is still unknown. The limited availability of deep crack data, due to the difficulty of finding effective investigation methods, could be one of the obstacles. Current electrical resistivity technology can be used to detect deep cracks in soil. This paper discusses deep cracks in unsaturated residual soil slopes in Indonesia investigated using the electrical resistivity method. Field investigations, such as borehole and SPT tests, were carried out at multiple locations in the area where the electrical resistivity testing had been conducted. Subsequently, the results from the borehole and SPT tests were used to verify the results of the electrical resistivity tests. This study demonstrates the benefits and limitations of electrical resistivity in detecting deep cracks in residual soil slopes.

Relevance: 90.00%

Abstract:

This article examines the effectiveness of school-based drug prevention programs in preventing illicit drug use. Our article reports the results of a systematic review of the evaluation literature to answer three fundamental questions: (1) do school-based drug prevention programs reduce rates of illicit drug use? (2) what features are characteristic of effective programs? and (3) do these effective program characteristics differ from those identified as effective in reviews of school-based prevention of licit substance use (such as alcohol and tobacco)? Using systematic review and meta-analytic techniques, we identify the characteristics of school-based drug prevention programs that have a significant and beneficial impact on ameliorating illicit substance use (i.e., narcotics) among young people. Successful intervention programs typically involve high levels of interactivity, time-intensity, and universal approaches that are delivered in the middle school years. These program characteristics aligned with many of the effective program elements found in previous reviews exploring the impact of school-based drug prevention on licit drug use. Contrary to these past reviews, however, our analysis suggests that the inclusion of booster sessions and multifaceted drug prevention programs has little impact on preventing illicit drug use among school-aged children. Limitations of the current review and policy implications are discussed.
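Since the review pools program effects with meta-analytic techniques, a minimal random-effects pooling sketch may help illustrate the arithmetic. This is a generic DerSimonian-Laird calculation in Python with invented effect sizes, not the review's actual data or software:

# Minimal random-effects meta-analysis (DerSimonian-Laird) for pooling
# standardised effect sizes. The effect sizes and variances below are
# invented placeholders, not data from the review described above.

import math

effects = [0.25, 0.10, 0.32, 0.05, 0.18]    # study effect sizes (e.g. SMD)
variances = [0.02, 0.03, 0.04, 0.02, 0.05]  # their sampling variances

# Fixed-effect (inverse-variance) estimate, needed for the Q statistic
w = [1.0 / v for v in variances]
fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

# Between-study variance tau^2 via DerSimonian-Laird
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)

# Random-effects pooled estimate and its standard error
w_star = [1.0 / (v + tau2) for v in variances]
pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
se = math.sqrt(1.0 / sum(w_star))

print(f"pooled effect = {pooled:.3f}, 95% CI = "
      f"({pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f}), tau^2 = {tau2:.3f}")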

Relevance: 90.00%

Abstract:

In recent years, the development of Unmanned Aerial Vehicles (UAVs) has become a significant growing segment of the global aviation industry. These vehicles are developed with the intention of operating in regions where the presence of onboard human pilots is either too risky or unnecessary. Their popularity with both the military and civilian sectors has seen UAVs used in a diverse range of applications, from reconnaissance and surveillance tasks for the military to civilian uses such as aid relief and monitoring tasks. Efficient energy utilisation on a UAV is essential to its functioning, often to achieve the operational goals of range, endurance and other specific mission requirements. Due to the limitations of the space available and the mass budget on the UAV, it is often a delicate balance between the onboard energy available (i.e. fuel) and achieving the operational goals. This thesis presents an investigation of methods for increasing the energy efficiency of UAVs. One method is via the development of a Mission Waypoint Optimisation (MWO) procedure for a small fixed-wing UAV, focusing on improving the onboard fuel economy. MWO deals with a pre-specified set of waypoints by modifying the given waypoints within certain limits to achieve its optimisation objectives of minimising/maximising specific parameters. A simulation model of a UAV was developed in the MATLAB Simulink environment, utilising the AeroSim Blockset and the in-built Aerosonde UAV block and its parameters. This simulation model was separately integrated with a multi-objective Evolutionary Algorithm (MOEA) optimiser and a Sequential Quadratic Programming (SQP) solver to perform single-objective and multi-objective optimisation of a set of real-world waypoints in order to minimise the onboard fuel consumption. The results of both procedures show potential for reducing fuel consumption on a UAV in a flight mission. Additionally, a parallel Hybrid-Electric Propulsion System (HEPS) for a small fixed-wing UAV incorporating an Ideal Operating Line (IOL) control strategy was developed. An IOL analysis of an Aerosonde engine was performed, and the most efficient points of operation for this engine (i.e. those providing the greatest torque output at the least fuel consumption) were determined. Simulation models of the components in a HEPS were designed and constructed in the MATLAB Simulink environment. It was demonstrated through simulation that a UAV with the current HEPS configuration was capable of achieving a fuel saving of 6.5% compared with the ICE-only configuration. These components form the basis for the development of a complete simulation model of a Hybrid-Electric UAV (HEUAV).
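To illustrate the waypoint-adjustment idea described above, here is a minimal Python sketch using scipy's SLSQP solver as a stand-in for the MATLAB SQP solver mentioned in the abstract. The waypoints, movement limits and surrogate fuel model (path length) are all invented placeholders rather than the Aerosonde/AeroSim simulation:

# Toy version of the mission waypoint optimisation idea: adjust waypoints
# within a small box around the nominal mission so that a surrogate fuel
# cost is minimised. The fuel model here is a placeholder (fuel grows with
# distance flown), not the Aerosonde/AeroSim simulation used in the thesis.

import numpy as np
from scipy.optimize import minimize

nominal = np.array([[0.0, 0.0], [10.0, 2.0], [20.0, -1.0], [30.0, 0.0]])  # km
max_shift = 1.5  # each waypoint coordinate may move at most this far (km)

def fuel_cost(flat_waypoints: np.ndarray) -> float:
    """Surrogate fuel cost: total length of the flight path through the waypoints."""
    wp = flat_waypoints.reshape(-1, 2)
    legs = np.diff(wp, axis=0)
    return float(np.sum(np.linalg.norm(legs, axis=1)))

bounds = [(c - max_shift, c + max_shift) for c in nominal.ravel()]
result = minimize(fuel_cost, nominal.ravel(), method="SLSQP", bounds=bounds)

optimised = result.x.reshape(-1, 2)
print("nominal cost  :", round(fuel_cost(nominal.ravel()), 3))
print("optimised cost:", round(result.fun, 3))
print("optimised waypoints:\n", optimised.round(2))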

Relevance: 90.00%

Abstract:

Measuring the business value that Internet technologies deliver for organisations has proven to be a difficult and elusive task, given their complexity and increased embeddedness within the value chain. Yet, despite the lack of empirical evidence linking the adoption of Information Technology (IT) with increased financial performance, many organisations continue to adopt new technologies at a rapid rate. This is evident in the widespread adoption of Web 2.0 online Social Networking Services (SNSs) such as Facebook, Twitter and YouTube. These new Internet-based technologies, widely used for social purposes, are being employed by organisations to enhance their business communication processes. However, their use is yet to be correlated with an increase in business performance. Owing to the conflicting empirical evidence linking prior IT applications with increased business performance, IT, Information Systems (IS) and E-Business Model (EBM) research has increasingly looked to broader social and environmental factors as a means of examining and understanding the influences shaping IT, IS and E-Business (EB) adoption behaviour. Findings from these studies suggest that organisations adopt new technologies as a result of strong external pressures, rather than a clear measure of enhanced business value. In order to ascertain whether this is the case with the adoption of SNSs, this study explores how organisations are creating value (and measuring that value) with the use of SNSs for business purposes, and the external pressures influencing their adoption. In doing so, it seeks to address two research questions: 1. What are the external pressures influencing organisations to adopt SNSs for business communication purposes? 2. Are SNSs providing increased business value for organisations, and if so, how is that value being captured and measured? Informed by the background literature in the fields of IT, IS, EBM and Web 2.0, a three-tiered theoretical framework is developed that combines macro-societal, social and technological perspectives as possible causal mechanisms influencing the SNS adoption event. The macro-societal view draws on Castells’ (1996) concept of the network society and the behaviour of crowds, herds and swarms to formulate a new explanatory concept of the network vortex. The social perspective draws on key components of institutional theory (DiMaggio & Powell, 1983, 1991), and the technical view draws on the organising vision concept developed by Swanson and Ramiller (1997). The study takes a critical realist approach, and comprises four stages of data collection and one stage of data coding and analysis. Stage 1 consisted of content analysis of the websites and SNSs of many organisations, to identify the types of business purposes SNSs are being used for. Stage 2 also involved content analysis of organisational websites, in order to identify suitable sample organisations in which to conduct telephone interviews. Stage 3 consisted of 18 in-depth, semi-structured telephone interviews within eight Australian organisations from the Media/Publishing and Galleries, Libraries, Archives and Museums (GLAM) industries. These sample organisations were considered leaders in the use of SNS technologies. Stage 4 involved an SNS activity count of the organisations interviewed in Stage 3, in order to rate them as either Advanced Innovator (AI) organisations or Learning Focussed (LF) organisations.
A fifth stage of data coding and analysis of all four data collection stages was conducted, based on the theoretical framework developed for the study and using QSR NVivo 8 software. The findings from this study reveal that SNSs have been adopted by organisations for the purpose of increasing business value, and as a result of strong social and macro-societal pressures. SNSs offer organisations a wide range of value-enhancing opportunities that have broader benefits for customers and society. However, measuring the increased business value is difficult with traditional Return On Investment (ROI) mechanisms, highlighting the need for new value capture and measurement rationales to support the accountability of SNS adoption practices. The study also identified the presence of technical, social and macro-societal pressures, all of which influenced SNS adoption by organisations. These findings contribute important theoretical insight into the increased complexity of pressures influencing technology adoption rationales by organisations, and have important implications for practice, reflecting the expanded global online networks in which organisations now operate. The limitations of the study include the small number of sample organisations in which interviews were conducted, its limited generalisability, and the small range of SNSs selected for the study. However, these limitations were compensated for in part by the expertise of the interviewees and the global significance of the SNSs that were chosen. Future research could replicate the study with a larger sample from different industries, sectors and countries. It could also explore the life cycle of SNSs in a longitudinal study, and map how the technical, social and macro-societal pressures are emphasised through the stages of the life cycle. The theoretical framework could also be applied to other studies of social fad technology adoption.

Relevance: 90.00%

Abstract:

Background: The objective of this study was to scrutinize number line estimation behaviors displayed by children in mathematics classrooms during the first three years of schooling. We extend existing research by not only mapping potential logarithmic-linear shifts but also providing a new perspective by studying in detail the estimation strategies for individual target digits within a number range familiar to children. Methods: Typically developing children (n = 67) from Years 1–3 completed a number-to-position numerical estimation task (0–20 number line). Estimation behaviors were first analyzed via logarithmic and linear regression modeling. Subsequently, using analysis of variance, we compared the estimation accuracy of each digit, thus identifying target digits that were estimated with the assistance of an arithmetic strategy. Results: Our results further confirm a developmental logarithmic-linear shift when utilizing regression modeling; however, uniquely, we have identified that children employ variable strategies when completing numerical estimation, with the level of strategy advancing with development. Conclusion: In terms of the existing cognitive research, this strategy factor highlights the limitations of any regression modeling approach; alternatively, it could underpin the developmental time course of the logarithmic-linear shift. Future studies need to systematically investigate this relationship and also consider the implications for educational practice.
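As a rough illustration of the logarithmic versus linear modelling step described in the Methods, the sketch below fits both models to an invented set of number-to-position estimates and compares their R² values; it is not the study's analysis code or data:

# Comparing logarithmic and linear fits to number-to-position estimates on a
# 0-20 number line, as used to test the logarithmic-linear shift. The
# responses below are invented placeholders, not data from the study.

import numpy as np

targets = np.array([2, 3, 5, 7, 9, 11, 13, 15, 17, 19], dtype=float)
estimates = np.array([4, 5, 7, 9, 10, 12, 13, 15, 16, 18], dtype=float)

def r_squared(y, y_hat):
    """Coefficient of determination for observed y and fitted y_hat."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Linear model: estimate = a * target + b
lin_coef = np.polyfit(targets, estimates, 1)
r2_linear = r_squared(estimates, np.polyval(lin_coef, targets))

# Logarithmic model: estimate = a * ln(target) + b
log_coef = np.polyfit(np.log(targets), estimates, 1)
r2_log = r_squared(estimates, np.polyval(log_coef, np.log(targets)))

best = "linear" if r2_linear >= r2_log else "logarithmic"
print(f"R^2 linear = {r2_linear:.3f}, R^2 log = {r2_log:.3f} -> {best} fit wins")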

Relevance: 90.00%

Abstract:

There is significant interest in human-computer interaction (HCI) methods that assist in the design of applications for use by children. Many of these approaches draw upon standard HCI methods, such as personas, scenarios, and probes. However, these techniques often require communication and kinds of thinking skills that are designer-centred, which prevents children with Autism Spectrum Disorders (ASD) or other learning and communication disabilities from being able to participate. This study investigates methods that might be used with children with ASD or other learning and communication disabilities to inspire the design of technology-based intervention approaches to support their speech and language development. Similar to Iversen and Brodersen, we argue that children with ASD should not be treated as being in some way “cognitively incomplete”. Rather, they are experts in their everyday lives and we cannot design future IT without involving them. However, how do we involve them? Instead of beginning with HCI methods, we draw upon easy-to-use technologies and methods used in the therapy professions for child engagement, particularly utilizing the approaches of Hanen (2011) and Greenspan (1998). These approaches emphasize following the child’s lead and ensuring that the child always has a legitimate turn at a detailed level of interaction. In a pilot project, we have studied a child’s interactions with their parents about activities over which they have control – photos that they have taken at school on an iPad. The iPad was simple enough for this child with ASD to use, and they enjoyed taking and reviewing photos. We use this small case study as an example of a child-led approach for a child with ASD. We examine interactions from this study in order to assess the possibilities and limitations of the child-led approach for supporting the design of technology-based interventions to support speech and language development.

Relevance: 90.00%

Abstract:

Traffic-generated semi-volatile and non-volatile organic compounds (SVOCs and NVOCs) pose a serious threat to human and ecosystem health when washed off into receiving water bodies by stormwater. Rainfall characteristics influenced by climate change make the estimation of these pollutants in stormwater quite complex. The research study discussed in this paper developed a prediction framework for such pollutants under the dynamic influence of climate change on rainfall characteristics. It was established through principal component analysis (PCA) that the intensity and duration of low to moderate rain events induced by climate change mainly affect the wash-off of SVOCs and NVOCs from urban roads. The study outcomes were able to overcome the limitations of stringent laboratory preparation of calibration matrices by extracting uncorrelated underlying factors in the data matrices through the systematic application of PCA and factor analysis (FA). Based on the initial findings from PCA and FA, the framework incorporated an orthogonal rotatable central composite experimental design to set up calibration matrices and partial least squares regression to identify significant variables in predicting the target SVOCs and NVOCs in four particulate fractions ranging from >300 to 1 μm and one dissolved fraction of <1 μm. For the particulate fraction range of >300 to 1 μm, similar distributions of predicted and observed concentrations of the target compounds from the minimum to the 75th percentile were achieved. The inter-event coefficients of variation for particulate fractions of >300 to 1 μm were 5% to 25%. The limited solubility of the target compounds in stormwater restricted the predictive capacity of the proposed method for the dissolved fraction of <1 μm.
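To make the PCA-plus-PLS modelling chain concrete, here is a minimal Python sketch using scikit-learn on randomly generated placeholder data; the variables and data are illustrative only, not the study's calibration matrices:

# Sketch of the modelling chain described above: principal component analysis
# on rainfall/wash-off variables followed by partial least squares regression
# to predict a pollutant concentration. All data here are random placeholders,
# not the study's calibration matrices.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))  # e.g. rainfall intensity, duration, antecedent dry days, ...
y = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=40)  # pollutant concentration

# PCA: how many underlying, uncorrelated factors drive the predictors?
pca = PCA().fit(X)
print("explained variance ratios:", pca.explained_variance_ratio_.round(2))

# PLS regression with a small number of latent components
pls = PLSRegression(n_components=2).fit(X, y)
print("R^2 of the PLS fit:", round(pls.score(X, y), 3))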

Relevance: 90.00%

Abstract:

The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers, which generally represent the over-60 age group. Thus, despite the fact that half of the seriously injured population comes from the 30-year age group and below, virtually no data exists from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data – CT – involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of the long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validation using long bones and appropriate reference standards is required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are not trained to perform. Therefore, an accurate but relatively simple segmentation method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher field 3T MRI imaging. However, a quantification of the signal-to-noise ratio (SNR) gain at the bone–soft tissue interface should be performed; this is not reported in the literature. As MRI scanning of long bones has very long scanning times, the acquired images are more prone to motion artefacts due to random movements of the subject’s limbs. One of the artefacts observed is the step artefact, which is believed to occur from random movements of the volunteer during a scan. This needs to be corrected before the models can be used for implant design. The first aim of this study was to investigate two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple methods for segmentation of MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for reconstruction of 3D models of long bones. The third aim was to use 3T MRI to improve on the poor contrast in articular regions and the long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was used by delineating the outer and inner contours of 2D images and then combining them to generate the 3D model. Models generated from these methods were compared to the reference standard generated using mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora, segmenting them using the multilevel threshold method.
A surface geometric comparison was conducted between the CT-based, MRI-based and reference models. To quantitatively compare 1.5T images with 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images obtained using identical protocols were compared by means of the SNR and contrast-to-noise ratio (CNR) of muscle, bone marrow and bone. In order to correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner. The step was corrected using an iterative closest point (ICP) algorithm based alignment method. The present study demonstrated that the multilevel threshold approach, in combination with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single threshold method was 0.24 mm. There was a statistically significant difference between the accuracy of the models generated by the two methods. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited an average deviation of 0.23 mm, compared with the 0.18 mm average deviation of the CT-based models; the differences were not statistically significant. 3T MRI improved the contrast at the bone–muscle interfaces of most anatomical regions of the femora and tibiae, potentially reducing the inaccuracies conferred by the poor contrast of the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact that occurred when the volunteer moved their leg was corrected, generating errors of 0.32 ± 0.02 mm when compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is, therefore, a potential alternative to the current gold standard, CT imaging.
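As a rough sketch of the two slice-segmentation approaches compared in the thesis, the snippet below applies intensity thresholding and Canny edge detection to a synthetic image using scikit-image. The synthetic "slice" and the use of Otsu's method as an automatic threshold selector are illustrative assumptions, not the study's visually selected or calculated thresholds:

# Sketch of the two slice-segmentation approaches compared in the study:
# intensity thresholding and Canny edge detection. The synthetic "slice" and
# the automatic Otsu threshold are stand-ins, not calibrated CT/MRI parameters.

import numpy as np
from skimage import feature, filters

# Synthetic 2D "slice": a bright elliptical "bone" on a darker, noisy background
yy, xx = np.mgrid[0:128, 0:128]
slice_img = np.exp(-(((yy - 64) / 30.0) ** 2 + ((xx - 64) / 18.0) ** 2))
slice_img += np.random.default_rng(1).normal(scale=0.05, size=slice_img.shape)

# 1) Intensity thresholding (Otsu as an automatic threshold selector)
threshold = filters.threshold_otsu(slice_img)
bone_mask = slice_img > threshold

# 2) Canny edge detection of the contour
edges = feature.canny(slice_img, sigma=2.0)

print("threshold:", round(float(threshold), 3))
print("thresholded 'bone' pixels:", int(bone_mask.sum()))
print("edge pixels found by Canny:", int(edges.sum()))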

Relevance: 90.00%

Abstract:

Spirituality and religiosity have traditionally had a troubled relationship with psychology. However, a new field of study has emerged that examines the health benefits of spirituality and religion. The current study examined the relationship between spirituality, religiosity and coping among a group of university students facing exams. Participants completed the Spiritual Well-Being Scale, Age Universal Religious Orientation Scale, Spiritual Transcendence Scale, Brief COPE, Test Anxiety Inventory, and State-Trait Anxiety Inventory. Regression analyses found that existential well-being, as measured by the Spiritual Well-Being Scale, was the best predictor of reduced anxiety. Maladaptive coping, however, was found to be inversely related to spirituality and religiosity, but highly predictive of elevated anxiety in this sample. Strengths and limitations of this study are discussed, along with recommendations for further research.
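For readers unfamiliar with the analysis, a bare-bones multiple-regression sketch along these lines (predicting an anxiety score from well-being and coping scores) is shown below; all variables and values are invented placeholders, not the study's data:

# Minimal multiple-regression sketch in the spirit of the analysis described
# above: predicting an anxiety score from existential well-being and
# maladaptive-coping scores. All values are invented placeholders.

import numpy as np

rng = np.random.default_rng(2)
n = 60
existential_wb = rng.normal(50, 10, n)
maladaptive_coping = rng.normal(20, 5, n)
anxiety = 80 - 0.6 * existential_wb + 1.1 * maladaptive_coping + rng.normal(0, 5, n)

# Ordinary least squares fit (design matrix with an intercept column)
X = np.column_stack([np.ones(n), existential_wb, maladaptive_coping])
coef, *_ = np.linalg.lstsq(X, anxiety, rcond=None)

print("intercept, b_wellbeing, b_coping:", np.round(coef, 2))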

Relevance: 90.00%

Abstract:

The National Road Safety Strategy 2011–2020 outlines plans to reduce the burden of road trauma via improvements and interventions relating to safe roads, safe speeds, safe vehicles, and safe people. It also highlights that a key aspect of achieving these goals is the availability of comprehensive data on the issue. Such data are essential both for conducting more in-depth epidemiologic studies of risk and for allowing effective evaluation of road safety interventions and programs. Before utilising data to evaluate the efficacy of prevention programs, it is important that a systematic evaluation of the quality of the underlying data sources be undertaken, to ensure that any trends identified reflect true estimates rather than spurious data effects. However, there has been little scientific work specifically focused on establishing core data quality characteristics pertinent to the road safety field, and limited work undertaken to develop methods for evaluating data sources according to these core characteristics. There are a variety of data sources in which traffic-related incidents and resulting injuries are recorded, each collected for its own defined purpose. These include police reports, transport safety databases, emergency department data, hospital morbidity data and mortality data, to name a few. However, as these data are collected for specific purposes, each of these sources suffers from limitations when seeking to gain a complete picture of the problem. Limitations of current data sources include delays in data becoming available, a lack of accurate and/or specific location information, and an underreporting of crashes involving particular road user groups such as cyclists. This paper proposes core data quality characteristics that could be used to systematically assess road crash data sources, providing a standardised approach for evaluating data quality in the road safety field. The potential for data linkage to qualitatively and quantitatively improve the quality and comprehensiveness of road crash data is also discussed.
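To illustrate the data-linkage idea raised at the end of the abstract, here is a toy deterministic linkage of a police crash table to hospital admissions using pandas; the field names, identifiers and matching rule are hypothetical:

# Toy deterministic record-linkage sketch in the spirit of the data-linkage
# discussion above: joining a police crash table to hospital admissions on a
# shared (hypothetical) person identifier and matching dates. Field names and
# the matching rule are illustrative only.

import pandas as pd

police = pd.DataFrame({
    "person_id": ["P1", "P2", "P3"],
    "crash_date": ["2013-03-01", "2013-03-02", "2013-03-05"],
    "road_user": ["driver", "cyclist", "pedestrian"],
})
hospital = pd.DataFrame({
    "person_id": ["P2", "P3", "P9"],
    "admission_date": ["2013-03-02", "2013-03-05", "2013-03-07"],
    "injury_severity": ["serious", "minor", "serious"],
})

# Link records where the same person appears in both sources on the same date
linked = police.merge(
    hospital,
    left_on=["person_id", "crash_date"],
    right_on=["person_id", "admission_date"],
    how="inner",
)
print(linked[["person_id", "road_user", "injury_severity"]])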

Relevance: 90.00%

Abstract:

The purpose of the study: The purpose of this study is to investigate the influence of cultural diversity, in a multicultural nursing workforce, on the quality and safety of patient care and the work environment at King Abdul-Aziz Medical City, Riyadh region. Study background: Due to global migration and workforce mobility, cultural diversity exists to varying degrees in most health services around the world, particularly where the health care workforce is multicultural or where the domestic population comprises minority groups from different cultures speaking different languages. Further complexities occur when a country has a multicultural workforce that differs from the population for whom it cares, with the workers themselves coming from culturally diverse countries and speaking different languages. In Saudi Arabia the health system is mainly staffed by expatriate nurses, who comprise 67.7% of the total number of nurses. Study design: This research utilised a case study design which incorporated multiple methods, including survey, qualitative interviews and document review. Methods: The participant nurses were selected for the survey via a population sampling strategy; 319 nurses returned completed Safety Climate Survey questionnaires. Descriptive and inferential statistics (Kruskal–Wallis test) were used to analyse the survey data. For the qualitative component of the study, a purposive sampling strategy was used; 24 nurses were interviewed using a semi-structured interview technique. The documentary review included KAMC-R policy documents that met the inclusion criteria, using a predetermined data abstraction instrument. Content analysis was used to analyse the policy document data. Results: The data revealed that the nurses’ perception of the clinical climate in this multicultural environment was that it was unsafe, with a mean score of 3.9 out of 5. No significant difference was detected between the age groups or years of experience of the nurses in their perception of the safety climate in this context; however, the study did reveal a statistically significant difference between cultural background categories in the perception of the safety climate. The qualitative phase indicated that the nurses within this environment were struggling to achieve cultural competence; consequently, they were having difficulties in meeting patients’ cultural and spiritual needs as well as maintaining a high standard of care. The results also indicated that nurses were disempowered in this context. Importantly, there was inadequate support by the organisation to manage the cultural diversity issue and to protect patients from any associated risks, as demonstrated by the policy documents and supported by the nurses’ experiences. The study also illustrated the limitations of the conceptual framework of cultural competence when tested in this multicultural workforce context. Therefore, this study generated amendments to the model to make it suitable for use in the context of a multicultural nursing workforce. Conclusion: The multicultural nature of this nursing work environment is inherently risky due to the conflicts that arise from different cultural norms, beliefs, behaviours and languages. Further, there was uncertainty within the multicultural nursing workforce about the clinical and cultural safety of the patient care environment and about the cultural safety of the nursing workforce.
The findings of the study contribute important new knowledge to the area of patient and nurse safety in a multicultural environment and contribute to theoretical development in the field of cultural competence. Specifically, the findings will inform policy and practice related to patient care in the context of cultural diversity.
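As a small illustration of the Kruskal–Wallis comparison reported in the Results, the sketch below tests whether scores differ across three groups using scipy; the group scores and the three cultural-background categories are invented placeholders, not the survey data:

# Sketch of a Kruskal-Wallis comparison like the one described above: do
# safety-climate scores differ across groups (here, three placeholder
# cultural-background categories)? The scores are invented, not study data.

from scipy import stats

group_a = [3.8, 4.0, 3.9, 4.2, 3.7, 4.1]
group_b = [3.5, 3.6, 3.9, 3.4, 3.8, 3.7]
group_c = [4.2, 4.4, 4.1, 4.3, 4.0, 4.5]

h_stat, p_value = stats.kruskal(group_a, group_b, group_c)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one group's median safety-climate score differs.")
else:
    print("No significant difference detected between the groups.")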