970 results for textual complexity assessment


Relevance: 30.00%

Publisher:

Abstract:

This dissertation establishes a novel data-driven method to identify language network activation patterns in pediatric epilepsy through the use of principal component analysis (PCA) on functional magnetic resonance imaging (fMRI). A total of 122 subjects' data sets from five different hospitals were included in the study through a web-based repository site designed here at FIU. Research was conducted to evaluate different classification and clustering techniques in identifying hidden activation patterns and their associations with meaningful clinical variables. The results were assessed through agreement analysis with the conventional methods of lateralization index (LI) and visual rating. What is unique in this approach is the new mechanism designed for projecting language network patterns into the PCA-based decisional space. Synthetic activation maps were randomly generated from real data sets to establish nonlinear decision functions (NDF), which are then used to classify any new fMRI activation map as typical or atypical. The best nonlinear classifier was obtained in a 4D space with a complexity (nonlinearity) degree of 7. Based on the significant association of language dominance and intensities with the top eigenvectors of the PCA decisional space, a new algorithm was deployed to delineate primary cluster members without intensity normalization. In this case, three distinct activation patterns (groups) were identified (averaged kappa of 0.65 with rating and 0.76 with LI) and were characterized by the regions of: 1) the left inferior frontal gyrus (IFG) and left superior temporal gyrus (STG), considered typical for the language task; 2) the IFG, left mesial frontal lobe, and right cerebellum, representing a variant left-dominant pattern with higher activation; and 3) the right homologues of the first pattern in Broca's and Wernicke's language areas. Interestingly, group 2 was found to reflect a language compensation mechanism different from reorganization. Its high-intensity activation suggests a possible remote effect of the right-hemisphere focus on traditionally left-lateralized functions. Overall, this data-driven method provides new insights into mechanisms for brain compensation/reorganization and neural plasticity in pediatric epilepsy.
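
A minimal sketch of the general idea described above, projecting vectorized activation maps into a low-dimensional PCA decisional space and fitting a degree-7 nonlinear decision boundary there. The array shapes, the placeholder data, and the classifier choice (a polynomial-kernel SVM standing in for the dissertation's nonlinear decision functions) are illustrative assumptions, not the actual pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# maps: (n_subjects, n_voxels) vectorized fMRI activation maps (placeholder data)
# labels: 1 = typical, 0 = atypical language dominance (assumed to come from LI / visual rating)
rng = np.random.default_rng(0)
maps = rng.normal(size=(122, 5000))
labels = rng.integers(0, 2, size=122)

# Project into a 4-D PCA decisional space, then fit a degree-7 polynomial
# decision boundary, mirroring the "4D space, nonlinearity degree 7" result.
model = make_pipeline(PCA(n_components=4),
                      SVC(kernel="poly", degree=7, C=1.0))
model.fit(maps, labels)

# Any new activation map can now be classified as typical or atypical.
new_map = rng.normal(size=(1, 5000))
print(model.predict(new_map))
```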

Relevance: 30.00%

Publisher:

Abstract:

Recognizing neonatal pain is a challenge for nurses working with newborns due to the complexity of the pain phenomenon. Pain is subjective, and because infants lack the ability to communicate, their pain is difficult to recognize. The purpose of this study is to determine the effectiveness of education on NICU nurses' ability to assess neonatal pain. With a better understanding of pain theory and the effects of pain on the newborn, the nurse will be better able to assess newborns in pain. Designed as a quasi-experimental one-group pretest and posttest study, the data were collected on a convenience sample of 49 registered nurses employed in the neonatal and special care nursery units at a children's hospital in the Miami area. The nurses were surveyed on the assessment of neonatal pain using the General Information and Pain Sensitivity Questionnaire. After the initial survey, the nurses received a one-hour in-service education program on neonatal pain assessment. One week after the intervention, the nurses were asked to complete the questionnaire again. Data analysis involved comparison of pre- and post-intervention findings using descriptive methods, t tests, correlation coefficients, and ANOVA, where applicable. Findings revealed a significant (p = .006) increase in nurses' knowledge of neonatal pain assessment after completing the educational in-service when comparing the pretest and posttest results.
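
A minimal sketch of the kind of pre/post comparison described (a paired t test on questionnaire scores from the same nurses); the score values below are illustrative placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Illustrative pre/post knowledge scores for the same nurses (paired design).
pre  = np.array([62, 70, 58, 75, 66, 71, 63, 69, 60, 74], dtype=float)
post = np.array([70, 78, 65, 80, 72, 75, 70, 74, 68, 79], dtype=float)

t, p = stats.ttest_rel(post, pre)   # paired t test, post vs. pre
print(f"mean gain = {np.mean(post - pre):.1f} points, t = {t:.2f}, p = {p:.4f}")
```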

Relevance: 30.00%

Publisher:

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.

Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection – FBP vs. Advanced Modeled Iterative Reconstruction – ADMIRE). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
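
A minimal sketch of a non-prewhitening style mathematical observer of the kind referenced here: a template formed from the mean signal is applied to signal-present and signal-absent ROIs, and a detectability index d' is computed from the resulting score distributions. The synthetic ROIs are illustrative placeholders, not the Mercury Phantom data, and the dissertation's exact observer formulation is not reproduced.

```python
import numpy as np

def npw_dprime(signal_present, signal_absent):
    """Non-prewhitening matched-filter detectability index from two ROI ensembles."""
    sp = signal_present.reshape(len(signal_present), -1)
    sa = signal_absent.reshape(len(signal_absent), -1)
    template = sp.mean(axis=0) - sa.mean(axis=0)      # NPW template = mean signal
    scores_p, scores_a = sp @ template, sa @ template
    pooled_var = 0.5 * (scores_p.var(ddof=1) + scores_a.var(ddof=1))
    return (scores_p.mean() - scores_a.mean()) / np.sqrt(pooled_var)

# Illustrative use with synthetic 32x32 ROIs (a faint disk in Gaussian noise).
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:32, :32]
signal = 8.0 * ((yy - 15.5)**2 + (xx - 15.5)**2 < 36)   # 6-pixel-radius disk
absent  = rng.normal(0, 20, size=(300, 32, 32))
present = rng.normal(0, 20, size=(300, 32, 32)) + signal
print(f"d' = {npw_dprime(present, absent):.2f}")
```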

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (≤6 mm) low-contrast (≤20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
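
A minimal sketch of how a dose reduction potential of this kind can be read off detection-accuracy measurements: fit accuracy versus dose for each algorithm and find the iterative-reconstruction dose that matches the FBP accuracy at the reference dose. The dose levels and accuracies below are illustrative placeholders, not the study's measurements.

```python
import numpy as np

# Illustrative detection accuracies (proportion correct) vs. dose (mGy).
dose       = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
acc_fbp    = np.array([0.62, 0.71, 0.79, 0.85, 0.89])
acc_admire = np.array([0.70, 0.80, 0.87, 0.91, 0.94])

ref_dose = 10.0
target = np.interp(ref_dose, dose, acc_fbp)        # FBP accuracy at the reference dose

# Dose at which ADMIRE reaches the same accuracy (inverse interpolation).
matched_dose = np.interp(target, acc_admire, dose)
dose_reduction = 1.0 - matched_dose / ref_dose
print(f"dose reduction potential ~ {dose_reduction:.0%}")
```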

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
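
Complementing the non-prewhitening sketch above, here is a minimal channelized Hotelling observer of the kind referenced in this comparison, using simple radially symmetric Gaussian channels. The channel set and synthetic ROI ensembles are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def gaussian_channels(size, sigmas):
    """Radially symmetric Gaussian channels (a simple illustrative channel set)."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x**2 + y**2
    chans = np.stack([np.exp(-r2 / (2 * s**2)).ravel() for s in sigmas])
    return chans / np.linalg.norm(chans, axis=1, keepdims=True)

def cho_dprime(signal_present, signal_absent, channels):
    """Channelized Hotelling observer detectability index d'."""
    vp = signal_present.reshape(len(signal_present), -1) @ channels.T
    va = signal_absent.reshape(len(signal_absent), -1) @ channels.T
    ds = vp.mean(0) - va.mean(0)                          # mean channel-space signal
    K = 0.5 * (np.cov(vp, rowvar=False) + np.cov(va, rowvar=False))
    return float(np.sqrt(ds @ np.linalg.solve(K, ds)))    # sqrt(ds' K^-1 ds)

# Illustrative use with synthetic 32x32 ROIs.
rng = np.random.default_rng(1)
signal = 5 * np.exp(-(np.hypot(*(np.mgrid[:32, :32] - 15.5))**2) / 50)
absent  = rng.normal(0, 10, size=(200, 32, 32))
present = rng.normal(0, 10, size=(200, 32, 32)) + signal
print(cho_dprime(present, absent, gaussian_channels(32, sigmas=[1, 2, 4, 8])))
```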

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
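
A minimal sketch of the image-subtraction idea used here to isolate quantum noise: subtracting two registered repeat scans cancels the deterministic phantom structure, and dividing the standard deviation of the difference by √2 recovers the per-image noise. The arrays and noise levels are illustrative placeholders.

```python
import numpy as np

def noise_from_subtraction(scan_a, scan_b, roi):
    """Quantum noise (std in HU) from two repeat scans of the same phantom.

    scan_a, scan_b: 2-D arrays (same slice, same phantom, repeat acquisitions)
    roi: boolean mask selecting the region in which to measure noise
    """
    diff = scan_a - scan_b            # structure cancels, noise adds in quadrature
    return diff[roi].std(ddof=1) / np.sqrt(2.0)

# Illustrative use with synthetic data: a uniform phantom plus Gaussian noise.
rng = np.random.default_rng(2)
phantom = np.full((128, 128), 40.0)   # "true" HU values
scan_a = phantom + rng.normal(0, 12, phantom.shape)
scan_b = phantom + rng.normal(0, 12, phantom.shape)
roi = np.zeros(phantom.shape, bool)
roi[32:96, 32:96] = True
print(f"estimated noise ~ {noise_from_subtraction(scan_a, scan_b, roi):.1f} HU")
```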

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and that texture should be considered when assessing image quality of iterative algorithms.
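
A minimal sketch of the conventional ensemble NPS estimate from repeated scans using square ROIs; the dissertation's contribution extends this to irregularly shaped ROIs, which is not reproduced here. Pixel size and ROI dimensions are illustrative assumptions.

```python
import numpy as np

def nps_2d(rois, pixel_size_mm):
    """Ensemble 2-D noise power spectrum from a stack of noise-only square ROIs.

    rois: array (n_rois, N, N) of noise realizations (e.g., differences of repeat
          scans, or detrended uniform-region ROIs)
    """
    n, N, _ = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)    # remove DC per ROI
    dft2 = np.fft.fftshift(np.fft.fft2(rois), axes=(1, 2))
    nps = (pixel_size_mm**2 / (N * N)) * np.mean(np.abs(dft2)**2, axis=0)
    freqs = np.fft.fftshift(np.fft.fftfreq(N, d=pixel_size_mm))
    return freqs, nps                                       # mm^-1 axis, HU^2*mm^2 units

# Illustrative use: 50 synthetic white-noise ROIs, 0.5 mm pixels.
rng = np.random.default_rng(3)
rois = rng.normal(0, 10, size=(50, 64, 64))
freqs, nps = nps_2d(rois, pixel_size_mm=0.5)
print(nps.sum() * (freqs[1] - freqs[0])**2)                 # ~ noise variance (100 HU^2)
```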

To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in the textured phantoms.
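
A simplified sketch of a clustered-lumpy-background style texture generator (cluster centers placed at random, Gaussian "lumps" scattered around each center). The published CLB model uses a specific blob profile and parameterization that is not reproduced exactly here; all parameter values are illustrative.

```python
import numpy as np

def clustered_lumpy_background(size=256, n_clusters=30, lumps_per_cluster=20,
                               cluster_spread=12.0, lump_sigma=3.0, amplitude=10.0,
                               rng=None):
    """Simplified clustered lumpy background: Gaussian lumps grouped into clusters."""
    rng = np.random.default_rng() if rng is None else rng
    y, x = np.mgrid[:size, :size]
    img = np.zeros((size, size))
    centers = rng.uniform(0, size, size=(n_clusters, 2))
    for cy, cx in centers:
        # Scatter lumps around each cluster center.
        offsets = rng.normal(0, cluster_spread, size=(lumps_per_cluster, 2))
        for dy, dx in offsets:
            img += amplitude * np.exp(-((y - cy - dy)**2 + (x - cx - dx)**2)
                                      / (2 * lump_sigma**2))
    return img

texture = clustered_lumpy_background(rng=np.random.default_rng(4))
print(texture.shape, round(texture.std(), 2))
```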

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
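
A minimal sketch of an analytical lesion model in the spirit described here: a radially symmetric lesion defined by size, contrast, and a sigmoidal edge profile, voxelized so it can be added to an image. The specific profile and parameter values are illustrative assumptions, not the dissertation's exact model.

```python
import numpy as np

def lesion_model(shape, center, radius_mm, contrast_hu, edge_width_mm, voxel_mm):
    """Voxelized radially symmetric lesion with a sigmoidal edge profile."""
    grids = np.meshgrid(*[np.arange(n) * voxel_mm for n in shape], indexing="ij")
    r = np.sqrt(sum((g - c)**2 for g, c in zip(grids, center)))
    # Contrast falls from contrast_hu inside the lesion to 0 outside,
    # over a transition band of roughly edge_width_mm.
    return contrast_hu / (1.0 + np.exp((r - radius_mm) / edge_width_mm))

# Illustrative 3-D lesion (-15 HU, 4 mm radius) on a 0.7 mm voxel grid.
lesion = lesion_model(shape=(64, 64, 64), center=(22.4, 22.4, 22.4),
                      radius_mm=4.0, contrast_hu=-15.0,
                      edge_width_mm=0.8, voxel_mm=0.7)
# hybrid = patient_image + lesion   # image-domain sketch of "hybrid" image creation
print(lesion.min())
```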

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
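
A minimal sketch of the standard signal-detection relation used to summarize a 2AFC experiment, d' = √2 · Φ⁻¹(Pc); the proportions correct below are illustrative placeholders, not the trial's results.

```python
import numpy as np
from scipy.stats import norm

def dprime_from_2afc(proportion_correct):
    """Detectability index from 2AFC proportion correct: d' = sqrt(2) * z(Pc)."""
    return np.sqrt(2.0) * norm.ppf(proportion_correct)

# Illustrative 2AFC results at one dose level.
for label, pc in [("FBP", 0.82), ("SAFIRE", 0.90)]:
    print(f"{label}: Pc = {pc:.2f}, d' = {dprime_from_2afc(pc):.2f}")
```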

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Relevance: 30.00%

Publisher:

Abstract:

Complexity science is the multidisciplinary study of complex systems. Its marked network orientation lends itself well to transport contexts. Key features of complexity science are introduced and defined, with a specific focus on the application to air traffic management. An overview of complex network theory is presented, with examples of its corresponding metrics and multiple scales. Complexity science is starting to make important contributions to performance assessment and system design: selected, applied air traffic management case studies are explored. The important contexts of uncertainty, resilience and emergent behaviour are discussed, with future research priorities summarised.

Relevance: 30.00%

Publisher:

Abstract:

With the development of information technology, the theory and methodology of complex networks have been introduced into language research, transforming the language system into a complex network composed of nodes and edges for quantitative analysis of language structure. The development of dependency grammar provides theoretical support for the construction of a treebank corpus, making a statistical analysis of complex networks possible. This paper introduces the theory and methodology of complex networks and builds dependency syntactic networks based on the treebank of speeches from the EEE-4 oral test. Through analysis of the overall characteristics of the networks, including the number of edges, the number of nodes, the average degree, the average path length, the network centrality and the degree distribution, it aims to find potential differences and similarities in the networks between various grades of speaking performance. Through clustering analysis, this research intends to demonstrate the discriminating power of the network parameters and provide a potential reference for scoring speaking performance.
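
A minimal sketch (assuming the networkx library) of computing the overall network characteristics listed above from dependency edges; the toy head-to-dependent edges stand in for the EEE-4 treebank data.

```python
import networkx as nx

# Toy dependency edges (head -> dependent) standing in for a parsed treebank.
edges = [("like", "I"), ("like", "reading"), ("reading", "books"),
         ("like", "because"), ("because", "relaxing"), ("relaxing", "is"),
         ("relaxing", "it")]

G = nx.Graph(edges)   # undirected view for the global metrics below

metrics = {
    "nodes": G.number_of_nodes(),
    "edges": G.number_of_edges(),
    "average degree": sum(d for _, d in G.degree()) / G.number_of_nodes(),
    "average path length": nx.average_shortest_path_length(G),
    "centrality (max closeness)": max(nx.closeness_centrality(G).values()),
    "degree distribution": nx.degree_histogram(G),
}
for name, value in metrics.items():
    print(f"{name}: {value}")
```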

Relevance: 30.00%

Publisher:

Abstract:

To learn complex skills, like collaboration, learners need to acquire a concrete and consistent mental model of what it means to master this skill. If learners know their current mastery level and their targeted mastery level, they can better determine their subsequent learning activities. Rubrics support learners in judging their skill performance, as they provide textual descriptions of skills’ mastery levels with performance indicators for all constituent subskills. However, text-based rubrics have a limited capacity to support the formation of mental models with contextualized, time-related and observable behavioral aspects of a complex skill. This paper outlines the design of a study that intends to investigate the effect of rubrics with video modelling examples, compared to text-based rubrics, on skills acquisition and feedback provisioning. The hypothesis is that video-enhanced rubrics, compared to text-based rubrics, will improve mental model formation of a complex skill and improve the quality of the feedback a learner receives (from, e.g., teachers or peers) while practicing a skill, hence positively affecting final mastery of the skill.

Relevance: 30.00%

Publisher:

Abstract:

Paper presentation at the TEA2016 conference, Tallinn, Estonia.

Relevance: 30.00%

Publisher:

Abstract:

In this paper, we describe how the pathfinder algorithm converts relatedness ratings of concept pairs to concept maps; we also present how this algorithm has been used to develop the Concept Maps for Learning website (www.conceptmapsforlearning.com) based on the principles of effective formative assessment. Pathfinder networks, one of the network representation tools, are claimed to help students memorize and recall the relations between concepts better than spatial representation tools (such as multi-dimensional scaling). Therefore, pathfinder networks have been used in various studies on knowledge structures, including identifying students’ misconceptions. To accomplish this, each student’s knowledge map and the expert knowledge map are compared via the pathfinder software, and the differences between these maps are highlighted. After misconceptions are identified, however, the pathfinder software fails to provide any feedback on them. To overcome this weakness, we have been developing a mobile-based concept mapping tool providing visual, textual and remedial feedback (e.g., videos, website links and applets) on the concept relations. This information is placed on the expert concept map, but not on the student’s concept map. Additionally, students are asked to note what they understand from the given feedback, and are given the opportunity to revise their knowledge maps after receiving various types of feedback.
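
A minimal sketch of the pathfinder pruning idea for the common PFNET(r = ∞, q = n − 1) variant: an edge between two concepts survives only if no alternative path has a smaller maximum link weight (weights being distances derived from the relatedness ratings). This is a generic illustration, not the code behind the website named above.

```python
import numpy as np

def pathfinder_network(dist, tol=1e-9):
    """PFNET(r=inf, q=n-1): keep edge (i, j) iff no other path between i and j
    has a smaller maximum link distance (minimax path distance)."""
    d = np.asarray(dist, dtype=float)
    minimax = d.copy()
    n = len(d)
    # Floyd-Warshall style relaxation with "max link on the path" as the path cost.
    for k in range(n):
        minimax = np.minimum(minimax, np.maximum(minimax[:, [k]], minimax[[k], :]))
    keep = np.abs(d - minimax) <= tol     # edge survives the pruning
    np.fill_diagonal(keep, False)
    return keep

# Toy distance matrix (e.g., 7 minus a 1-7 relatedness rating) for 4 concepts.
dist = np.array([[0, 1, 4, 6],
                 [1, 0, 2, 5],
                 [4, 2, 0, 1],
                 [6, 5, 1, 0]], float)
print(pathfinder_network(dist).astype(int))
```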

Relevance: 30.00%

Publisher:

Abstract:

Abstract: Audiovisual Storytelling and Ideological Horizons: Audiences, Cultural Contexts and Extra-textual Meaning Making. In a society characterized by mediatization, people are to an increasing degree dependent on mediated narratives as a primary means by which we make sense of our experience through time and our place in society (Hoover 2006, Lynch 2007, Hjarvard 2008, Hjarvard & Lövheim 2012). American media scholar Stewart Hoover points to symbols and scripts available in the media environment, what he calls the “symbolic inventory”, out of which individuals make religious or spiritual meaning (Hoover 2006: 55). Vernacular meaning-making embedded in everyday life among viewers dealing with fiction narratives in films and TV series highlights a need for a more nuanced understanding of complex audiovisual storytelling. Moving images provide individuals with stories by which reality is maintained and by which humans construct ordered micro-universes for themselves, using film as a resource for moral assessment and ideological judgments about life (Plantinga 2009, Johnston 2010, Axelson 2015). Important in this theoretical context are perspectives on viewers’ moral frameworks (Zillman 2005, Andersson & Andersson 2005, Frampton 2006, Avila 2007). This paper presentation will focus on ideologically contested meaning-making where audiences of different cultural backgrounds engage emotionally with filmic narratives, possibly eliciting ideological and spiritual meaning-making related to viewers’ personal world views. Through the example of the Homeland TV series, I want to discuss how spectators’ cultural, religious, political and ideological identities could be understood as playing a role in the interpretative process of decoding content. Is it possible to trace patterns of different receptions of the multilayered and ambiguous story depicted in Homeland by religiously engaged Christians and Muslims as well as non-believers, in America, Europe and the Middle East? How is the fiction narrative dealt with by spectators in the audience in different cultural contexts, and how is it interpreted through the process of extra-textual evaluation and real-world understanding in a global era preoccupied with the war on terror? The presentation will also discuss methodological considerations about how to reach out to audiences anchored in different cultural contexts.

Relevance: 30.00%

Publisher:

Abstract:

This paper reviews the literature on construction risk modelling and assessment. It also reviews the real practice of risk assessment. The review yielded significant findings, summarised as follows. There has been a major shift in risk perception from an estimation variance to a project attribute. Although the Probability–Impact risk model is prevailing, substantial efforts are being put into improving it to reflect the increasing complexity of construction projects. The literature lacks a comprehensive assessment approach capable of capturing risk impact on different project objectives. Obtaining a realistic project risk level demands an effective mechanism for aggregating individual risk assessments. The various assessment tools suffer from low take-up; professionals typically rely on their experience. It is concluded that a simple analytical tool that uses risk cost as a common scale and utilises professional experience could be a viable option to facilitate closing the gap between theory and practice of risk assessment.
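
A minimal illustration (not taken from the paper) of what using risk cost as a common scale under a Probability–Impact style model can look like: each risk's expected cost is its probability times its cost impact, and individual assessments are aggregated by summation. The risk names and figures are invented for the example.

```python
# Illustrative risks: (name, probability of occurrence, cost impact if it occurs).
risks = [
    ("ground conditions worse than expected", 0.30, 250_000),
    ("late delivery of steelwork",            0.20, 120_000),
    ("design change requested by client",     0.50,  80_000),
]

# Probability-Impact model on a common risk-cost scale: expected cost = p * impact.
expected_costs = {name: p * impact for name, p, impact in risks}
project_risk_cost = sum(expected_costs.values())

for name, cost in expected_costs.items():
    print(f"{name}: expected cost = {cost:,.0f}")
print(f"aggregated project risk cost = {project_risk_cost:,.0f}")
```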

Relevance: 30.00%

Publisher:

Abstract:

Uncovering unknown pathological mechanisms and the body's response to applied medication are the driving forces toward personalized medicine. In this post-genomic era, all eyes are turned to the proteomics field, searching for answers and explanations by investigating the final physiological functional units – proteins and their proteoforms. The development of cutting-edge mass spectrometric technologies and powerful bioinformatics tools has allowed the life-science community to mine disease-specific proteins as biomarkers, which are often hidden by the high complexity of the samples and/or their low abundance. Nowadays, there are several proteomics-based approaches to study the proteome. This chapter focuses on gold-standard proteomics strategies and related issues towards candidate biomarker discovery, which may have diagnostic/prognostic as well as mechanistic utility.

Relevance: 30.00%

Publisher:

Abstract:

Shows recommended changes at the Childs Park recreation area within the N.R.A. on the Pa. side of the Delaware River.

Relevance: 30.00%

Publisher:

Abstract:

Abstract: Information and communication technologies (ICTs, henceforth) have become ubiquitous in our society. The plethora of devices competing with the computer, from iPads to the interactive whiteboard, just to name a few, has provided teachers and students alike with the ability to communicate and access information with unprecedented accessibility and speed. It is only logical that schools reflect these changes, given that their purpose is to prepare students for the future. Surprisingly enough, research indicates that ICT integration into teaching activities is still marginal. Many elementary and secondary school teachers are not making effective use of ICTs in their teaching activities or in their assessment practices. The purpose of the current study is a) to describe Quebec ESL teachers’ profiles of ICT use in their daily teaching activities; b) to describe teachers’ ICT integration and assessment practices; and c) to describe teachers’ social representations regarding the utility and relevance of ICT use in their daily teaching activities and assessment practices. In order to attain our objectives, we based our theoretical framework principally on social representations (SR, henceforth) theory and defined the related constructs deemed fundamental to the current thesis. We collected data from 28 ESL elementary and secondary school teachers working in the public and private sectors. The interview guide used to that end included a range of items to elicit teachers’ SR in terms of daily ICT use in teaching activities as well as in assessment practices. In addition, we carried out our data analyses from a textual statistics perspective, a particular mode of content analysis, in order to extract the indicators underlying the teachers’ representations. The findings suggest that although almost all participants use a wide range of ICT tools in their practices, ICT implementation is seemingly not exploited to its fullest potential and, correspondingly, is likely to produce limited effects on students’ learning. Moreover, none of the interviewees claim that they use ICTs in their assessment practices; they still hold to the traditional paper-based assessment (PBA, henceforth) approach to assessing students’ learning. Teachers’ common discourse reveals a gap between the positive standpoint with regard to ICT integration, on the one hand, and the actual uses of instructional technology, on the other. These results are useful for better understanding the way ESL teachers in Quebec currently view their use of ICTs, particularly for evaluation purposes. In fact, they provide a starting place for reconsidering the implementation of ICTs in elementary and secondary schools. They may also open up avenues for the development of a future research program in this regard.

Relevance: 30.00%

Publisher:

Abstract:

The formation of reactive oxygen species (ROS) within cells causes damage to biomolecules, including membrane lipids, DNA, proteins and sugars. An important type of oxidative damage is DNA base hydroxylation, which leads to the formation of 8-oxo-7,8-dihydro-2′-deoxyguanosine (8-oxodG) and 5-hydroxymethyluracil (5-HMUra). Measurement of these biomarkers in urine is challenging due to the low levels of the analytes and the matrix complexity. In order to simultaneously quantify 8-oxodG and 5-HMUra in human urine, a new, reliable and powerful strategy was optimised and validated. It is based on a semi-automatic microextraction by packed sorbent (MEPS) technique, using a new digitally controlled syringe (eVol®), to enhance the extraction efficiency of the target metabolites, followed by fast and sensitive ultra-high pressure liquid chromatography (UHPLC). The optimal methodological conditions involve loading of 250 µL of urine sample (1:10 dilution) through a C8 sorbent in a MEPS syringe placed in the semi-automatic eVol® syringe, followed by elution using 90 µL of 20% methanol in 0.01% formic acid solution. The obtained extract is directly analysed in the UHPLC system using a binary mobile phase composed of aqueous 0.1% formic acid and methanol in the isocratic elution mode (3.5 min total analysis time). The method was validated in terms of selectivity, linearity, limit of detection (LOD), limit of quantification (LOQ), extraction yield, accuracy, precision and matrix effect. Satisfactory results were obtained in terms of linearity (r² ≥ 0.991) within the established concentration range. The LOD varied from 0.00005 to 0.04 µg mL⁻¹ and the LOQ from 0.00023 to 0.13 µg mL⁻¹. The extraction yields were between 80.1 and 82.2%, while inter-day precision (n = 3 days) varied between 4.9 and 7.7% and intra-day precision between 1.0 and 8.3%. This approach presents as its main advantages the ability to easily collect and store urine samples for further processing and the high sensitivity, reproducibility, and robustness of eVol®-MEPS combined with UHPLC analysis, thus providing a fast and reliable assessment of oxidatively damaged DNA.

Relevance: 30.00%

Publisher:

Abstract:

This article presents the results of a study conducted within two contexts. On the one hand, the theoretical context, framed by one of the most relevant discourses in the fields of organizational strategy, managerial and organizational cognition (MOC) and, more generally, organization studies: sensemaking. On the other, the empirical context, in one of the large multinational companies of the automotive sector with a global presence. This corporation faces a permanent tension between what headquarters dictates, regarding the fulfilment of specific goals and standards for the entire world, and the regional and local challenges experienced by the senior managers charged with making the company prosper in those places. The approach implemented was qualitative, in keeping with the nature of the problem addressed and the tradition of the field. The results broaden the current level of understanding of senior managers' sensemaking processes when facing a turbulent strategic environment.