905 results for Negative dimensional integration method (NDIM)


Relevance:

30.00%

Publisher:

Abstract:

The influence of three-dimensional effects on isochromatic birefringence is evaluated for planar flows by means of numerical simulation. Two fluid models are investigated in channel and abrupt contraction geometries. In practice, the flows are confined by viewing windows, which alter the stresses along the optical path. The observed optical properties therefore differ from their counterparts in an ideal two-dimensional flow. To investigate the influence of these effects, the stress-optical rule and the differential propagation Mueller matrix are used. The material parameters are selected so that a retardation of multiple orders is achieved, as is typical for highly birefringent melts. Errors due to three-dimensional effects are mainly found on the symmetry plane, and increase significantly with the flow rate. Increasing the geometric aspect ratio improves the accuracy, provided that the error on the retardation is less than one order. (C) 2004 Elsevier B.V. All rights reserved.
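The two optical quantities the abstract relies on can be sketched in a few lines: the stress-optical rule Δn = C·Δσ, and the retardation accumulated over the optical path, δ = 2πd·Δn/λ. All numeric values below are illustrative stand-ins for a highly birefringent melt, not parameters from the paper.

```python
import math

# Stress-optical rule and retardation, as used in flow birefringence.
# All values are illustrative (not taken from the paper).
C = 4e-9        # stress-optical coefficient, 1/Pa (typical polymer-melt order)
dsigma = 5e4    # principal stress difference, Pa
d = 10e-3       # optical path length through the flow cell, m
lam = 633e-9    # He-Ne laser wavelength, m

dn = C * dsigma               # birefringence Delta_n = C * Delta_sigma
orders = d * dn / lam         # retardation expressed in fringe orders
delta = 2 * math.pi * orders  # retardation in radians
```

With these numbers the retardation is a little over three orders, i.e. the multiple-order regime the abstract describes.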

Relevance:

30.00%

Publisher:

Abstract:

Adsorption of ethylene and ethane on graphitized thermal carbon black and in slit pores whose walls are composed of graphene layers is studied in detail to investigate the packing efficiency, the two-dimensional critical temperature, and the variation of the isosteric heat of adsorption with loading and temperature. Here we used a grand canonical Monte Carlo simulation method. A number of two-center Lennard-Jones (2C-LJ) potential models are investigated to study the impact of the choice of potential model on the description of adsorption behavior. We chose two 2C-LJ potential models for our investigation: (i) the UA-TraPPE-LJ model of Martin and Siepmann (J. Phys. Chem. B 1998, 102, 2569-2577) for ethane and of Wick et al. (J. Phys. Chem. B 2000, 104, 8008-8016) for ethylene, and (ii) the AUA4-LJ model of Ungerer et al. (J. Chem. Phys. 2000, 112, 5499-5510) for ethane and of Bourasseau et al. (J. Chem. Phys. 2003, 118, 3020-3034) for ethylene. These models are used to study the adsorption of ethane and ethylene on graphitized thermal carbon black. It is found that the solid-fluid binary interaction parameter is a function of adsorbate and temperature, and the adsorption isotherms and heat of adsorption are well described by both the UA-TraPPE and AUA models, although the UA-TraPPE model performs slightly better. However, the local distributions predicted by these two models are slightly different. The two models are then used to explore two-dimensional condensation on graphitized thermal carbon black; the estimated two-dimensional critical temperatures are 110 K for ethylene and 120 K for ethane.
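The site-site energy evaluation at the core of such a 2C-LJ simulation can be sketched as follows; the bond length and LJ parameters are illustrative ethane-like values, not the published TraPPE or AUA4 parameter sets.

```python
import numpy as np

# Site-site energy between two rigid two-centre Lennard-Jones molecules.
# Bond length and LJ parameters are illustrative ethane-like values,
# NOT the published TraPPE or AUA4 parameter sets.

def lj(r, eps, sigma):
    """12-6 Lennard-Jones pair energy at separation r."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def two_centre_energy(sites_a, sites_b, eps, sigma):
    """Sum LJ interactions over all site pairs of two molecules."""
    total = 0.0
    for ra in sites_a:
        for rb in sites_b:
            total += lj(np.linalg.norm(ra - rb), eps, sigma)
    return total

# Two parallel ethane-like molecules: sites 1.54 A apart, centres 4 A apart.
a = np.array([[0.0, 0.0, -0.77], [0.0, 0.0, 0.77]])
b = a + np.array([4.0, 0.0, 0.0])
u = two_centre_energy(a, b, eps=0.81, sigma=3.75)  # attractive at this range
```

In a full GCMC run this pair energy would be summed over all molecule pairs and over the solid-fluid interactions with the graphene walls.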

Relevance:

30.00%

Publisher:

Abstract:

Objectives: Left atrial (LA) volume (LAV) is a prognostically important biomarker for diastolic dysfunction, but its reproducibility on repeated testing is not well defined. LA assessment with 3-dimensional (3D) echocardiography (3DE) has been validated against magnetic resonance imaging, and we sought to assess whether this was superior to existing measurements for sequential echocardiographic follow-up. Methods: Patients (n = 100; 81 men; age 56 +/- 14 years) presenting for LA evaluation were studied with M-mode (MM) echocardiography, 2-dimensional (2D) echocardiography, and 3DE. Test-retest variation was assessed by a complete restudy by a separate sonographer within 1 hour, without alteration of hemodynamics or therapy. In all, 20 patients were studied for interobserver and intraobserver variation. LAVs were calculated by using the M-mode diameter and the planimetered atrial area in the apical 4-chamber view to calculate an assumed sphere, as well as by the prolate ellipsoid, Simpson's biplane, and biplane area-length methods. All were compared with 3DE. Results: The average LAV was 72 +/- 27 mL by 3DE. There was significant underestimation of LAV by M-mode (35 +/- 20 mL, r = 0.66, P < .01). The 3DE and various 2D echocardiographic techniques were well correlated: LA planimetry (85 +/- 38 mL, r = 0.77, P < .01), prolate ellipsoid (73 +/- 36 mL, r = 0.73, P = .04), area-length (64 +/- 30 mL, r = 0.74, P < .01), and Simpson's biplane (69 +/- 31 mL, r = 0.78, P = .06). Test-retest variation was most favorable for 3DE (r = 0.98, P < .01), with the prolate ellipsoid method showing the most variation. Interobserver agreement between measurements was best for 3DE (r = 0.99, P < .01) and worst for M-mode (r = 0.89, P < .01). Intraobserver results were similar, with the best correlation for 3DE (r = 0.99, P < .01) and the worst for LA planimetry (r = 0.91, P < .01). Conclusions: The 2D measurements correlate closely with 3DE. Follow-up assessment in daily practice appears feasible and reliable with both 2D and 3D approaches.
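The 2D geometric estimates compared above follow standard formulas; a minimal sketch, with illustrative input values rather than patient data:

```python
import math

# Two of the 2D LAV estimates compared in the study (standard formulas).
# Input values are illustrative, not patient data.

def prolate_ellipsoid(d1, d2, length):
    """LAV = pi/6 * D1 * D2 * L, from three orthogonal diameters (cm -> mL)."""
    return math.pi / 6.0 * d1 * d2 * length

def area_length(a4c, a2c, length):
    """Biplane area-length: LAV = 8 * A4C * A2C / (3 * pi * L)."""
    return 8.0 * a4c * a2c / (3.0 * math.pi * length)

v_al = area_length(20.0, 18.0, 5.0)      # areas in cm^2, long axis in cm
v_pe = prolate_ellipsoid(4.0, 4.0, 5.0)  # diameters in cm
```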

Relevance:

30.00%

Publisher:

Abstract:

In magnetic resonance imaging (MRI), the MR signal intensity can vary spatially, and this spatial variation is usually referred to as MR intensity nonuniformity. Although the main source of intensity nonuniformity is B-1 inhomogeneity of the coil acting as a receiver and/or transmitter, geometric distortion also alters the MR signal intensity. It is useful on some occasions to measure and analyze these two different sources separately. In this paper, we present a practical method for a detailed measurement of the MR intensity nonuniformity. This method is based on the same three-dimensional geometric phantom that was recently developed for a complete measurement of the geometric distortion in MR systems. With this method, the contribution to the intensity nonuniformity from the geometric distortion can be estimated, thus providing a mechanism for estimating the intensity nonuniformity that reflects solely the spatial characteristics arising from B-1. Additionally, a comprehensive scheme for characterization of the intensity nonuniformity based on the new measurement method is proposed. To demonstrate the method, the intensity nonuniformity in a 1.5 T Sonata MR system was measured and is used to illustrate the main features of the method. (c) 2005 American Association of Physicists in Medicine.

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a biventricular model that couples the electrical and mechanical properties of the heart, and computer simulations of ventricular wall motion and deformation by means of this model. In the constructed electromechanical model, the mechanical analysis was based on composite material theory and the finite-element method; the propagation of electrical excitation was simulated using an electrical heart model, and the resulting active forces were used to calculate ventricular wall motion. Regional deformation and Lagrangian strain tensors were calculated during the systolic phase. Displacements, minimum principal strains and torsion angle were used to describe the motion of the two ventricles. The simulations showed that during systole, (1) the right ventricular free wall moves towards the septum, and at the same time, the base and middle of the free wall move towards the apex, which reduces the volume of the right ventricle; the minimum principal strain (E3) is largest at the apex, then at the middle of the free wall, and its direction approximately follows the epicardial muscle fibres; (2) the base and middle of the left ventricular free wall move towards the apex while the apex remains almost static; the torsion angle is largest at the apex; the minimum principal strain E3 is largest at the apex, and its direction on the surface of the middle wall of the left ventricle roughly follows the fibre orientation. These results are in good accordance with results obtained from MR tagging images reported in the literature. This study suggests that such an electromechanical biventricular model has the potential to be used to assess the mechanical function of the two ventricles, and could also improve the accuracy of ECG simulation when used in heart-torso model-based body surface potential simulation studies.
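The strain quantities reported above can be sketched directly: the Lagrangian (Green) strain tensor is E = ½(FᵀF − I), and the minimum principal strain E3 is its smallest eigenvalue. The deformation gradient below is an invented, idealised systolic deformation, not output of the paper's model.

```python
import numpy as np

# Green (Lagrangian) strain E = 1/2 (F^T F - I) and the minimum principal
# strain E3 (smallest eigenvalue) with its direction.  F is an invented,
# idealised systolic deformation, not model output.

def green_strain(F):
    return 0.5 * (F.T @ F - np.eye(3))

def min_principal_strain(E):
    w, v = np.linalg.eigh(E)   # eigenvalues in ascending order
    return w[0], v[:, 0]       # E3 and its principal direction

F = np.diag([1.1, 1.1, 0.85])  # thickening in x/y, shortening along z
E = green_strain(F)
e3, direction = min_principal_strain(E)  # e3 < 0: compressive strain along z
```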

Relevance:

30.00%

Publisher:

Abstract:

This research extends the consumer-based brand equity measurement approach to the measurement of the equity associated with retailers. This paper also addresses some of the limitations associated with current retailer equity measurement, such as a lack of clarity regarding its nature and dimensionality. We conceptualise retailer equity as a four-dimensional construct comprising retailer awareness, retailer associations, perceived retailer quality, and retailer loyalty. The paper reports the results of an empirical study of a convenience sample of 601 shopping mall consumers in an Australian state capital city. Following a confirmatory factor analysis using structural equation modelling to examine the dimensionality of the retailer equity construct, the proposed model is tested for two retailer categories: department stores and speciality stores. Results confirm the hypothesised four-dimensional structure.

Relevance:

30.00%

Publisher:

Abstract:

The critical process parameter for mineral separation is the degree of mineral liberation achieved by comminution. The degree of liberation provides an upper limit of efficiency for any physical separation process. The standard approach to measuring mineral liberation uses mineralogical analysis based on two-dimensional sections of particles, which may be acquired using a scanning electron microscope and back-scatter electron analysis, or from an analysis of an image acquired using an optical microscope. Over the last 100 years, mathematical techniques have been developed to use this two-dimensional information to infer three-dimensional information about the particles. For mineral processing, a particle that contains more than one mineral (a composite particle) may appear to be liberated (contain only one mineral) when analysed using only its revealed particle section. The mathematical techniques used to interpret three-dimensional information belong to a branch of mathematics called stereology. However, methods to obtain the full mineral liberation distribution of particles from particle sections are relatively new. To verify these adjustment methods, we require an experimental method that can accurately measure both sectional and three-dimensional properties. Micro cone beam tomography provides such a method for suitable particles and hence provides a way to validate methods used to convert two-dimensional measurements into three-dimensional estimates. For this study, ore particles from a well-characterised sample were subjected to conventional mineralogical analysis (using particle sections) to estimate three-dimensional properties of the particles. A subset of these particles was analysed using a micro cone beam tomograph. This paper presents a comparison of the three-dimensional properties predicted from measured two-dimensional sections with the measured three-dimensional properties.
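The sectioning bias described above, a composite particle appearing liberated on a 2D section, can be illustrated with a deliberately simple toy geometry (this is not the stereological adjustment used in the paper):

```python
import random

# Toy illustration of apparent liberation on sections.  The "particle" is a
# unit cube whose minority mineral occupies the slab z > 0.7.  A random
# axis-aligned section perpendicular to x or y always crosses the interface
# (composite section); one perpendicular to z never does, so it looks
# liberated even though the particle is composite.  Purely illustrative.

def apparent_liberation_fraction(n=30000, seed=1):
    rng = random.Random(seed)
    liberated = 0
    for _ in range(n):
        axis = rng.choice("xyz")   # random section orientation
        if axis == "z":            # section lies entirely in one mineral
            liberated += 1
    return liberated / n

frac = apparent_liberation_fraction()  # near 1/3, yet true liberation is 0
```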

Relevance:

30.00%

Publisher:

Abstract:

Most magnetic resonance imaging (MRI) spatial encoding techniques employ low-frequency pulsed magnetic field gradients that undesirably induce multiexponentially decaying eddy currents in nearby conducting structures of the MRI system. The eddy currents degrade the switching performance of the gradient system, distort the MRI image, and introduce thermal loads in the cryostat vessel and superconducting MRI components. Heating of superconducting magnets due to induced eddy currents is particularly problematic as it offsets the superconducting operating point, which can cause a system quench. A numerical characterization of transient eddy current effects is vital for their compensation/control and further advancement of the MRI technology as a whole. However, transient eddy current calculations are particularly computationally intensive. In large-scale problems, such as gradient switching in MRI, conventional finite-element method (FEM)-based routines impose very large computational loads during generation/solving of the system equations. Therefore, other computational alternatives need to be explored. This paper outlines a three-dimensional finite-difference time-domain (FDTD) method in cylindrical coordinates for the modeling of low-frequency transient eddy currents in MRI, as an extension to the recently proposed time-harmonic scheme. The weakly coupled Maxwell's equations are adapted to the low-frequency regime by downscaling the speed of light constant, which permits the use of larger FDTD time steps while maintaining the validity of the Courant-Friedrichs-Lewy stability condition. The principal hypothesis of this work is that the modified FDTD routine can be employed to analyze pulsed-gradient-induced, transient eddy currents in superconducting MRI system models.
The hypothesis is supported through a verification of the numerical scheme on a canonical problem and by analyzing undesired temporal eddy current effects such as the B-0-shift caused by actively shielded symmetric/asymmetric transverse x-gradient head and unshielded z-gradient whole-body coils operating in proximity to a superconducting MRI magnet.
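The time-step benefit of downscaling the speed of light can be sketched from the 3D Cartesian CFL condition, dt ≤ 1/(c·√(1/dx² + 1/dy² + 1/dz²)); the grid spacing and scaling factor below are illustrative:

```python
import math

# CFL-limited time step for a 3D Cartesian FDTD grid:
#   dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2))
# Downscaling c by a factor k enlarges the admissible dt by the same factor.
# Grid spacing and scaling factor are illustrative.

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def cfl_dt(dx, dy, dz, c=C0):
    return 1.0 / (c * math.sqrt(1.0/dx**2 + 1.0/dy**2 + 1.0/dz**2))

dx = dy = dz = 5e-3                      # 5 mm grid
dt_full = cfl_dt(dx, dy, dz)             # ~9.6 ps: hopeless for ms gradients
dt_slow = cfl_dt(dx, dy, dz, c=C0/1e6)   # downscaled c: a million times larger
```

Shrinking c by a factor k relaxes the stability bound by the same factor, which is what makes millisecond-scale gradient waveforms tractable.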

Relevance:

30.00%

Publisher:

Abstract:

Computer modelling promises to be an important tool for analysing and predicting interactions between trees within mixed species forest plantations. This study explored the use of an individual-based mechanistic model as a predictive tool for designing mixed species plantations of Australian tropical trees. The 'spatially explicit individual-based forest simulator' (SeXI-FS) modelling system was used to describe the spatial interaction of individual tree crowns within a binary mixed-species experiment. The three-dimensional model was developed and verified with field data from three forest tree species grown in tropical Australia. The model predicted the interactions within monocultures and binary mixtures of Flindersia brayleyana, Eucalyptus pellita and Elaeocarpus grandis, accounting for an average of 42% of the growth variation exhibited by species in different treatments. The model requires only structural dimensions and shade tolerance as species parameters. By modelling interactions in existing tree mixtures, the model predicted both increases and reductions in the growth of mixtures (up to +/- 50% of stem volume at 7 years) compared to monocultures. This modelling approach may be useful for designing mixed tree plantations. (c) 2006 Published by Elsevier B.V.

Relevance:

30.00%

Publisher:

Abstract:

Objective: To examine adjustment in children of a parent with multiple sclerosis within a stress and coping framework and compare them with those who have 'healthy' parents. Subjects: A total of 193 participants between 10 and 25 years completed questionnaires; 48 youngsters who had a parent with multiple sclerosis and 145 youngsters who reported that they did not have a parent with an illness or disability. Method: A questionnaire survey methodology was used. Variable sets included caregiving context (e.g. additional parental illness, family responsibilities, parental functional impairment, choice in helping), social support (network size, satisfaction), stress appraisal, coping (problem solving, seeking support, acceptance, wishful thinking, denial), and positive (life satisfaction, positive affect, benefits) and negative (distress, health) adjustment outcomes. Results: Caregiving context variables that significantly correlated with poorer adjustment in children of a parent with multiple sclerosis included additional parental illness, higher family responsibilities, parental functional impairment and unpredictability of the parent's multiple sclerosis, and less choice in helping. As predicted, better adjustment in children of a parent with multiple sclerosis was related to higher levels of social support, lower stress appraisals, greater reliance on approach coping strategies (problem solving, seeking support and acceptance) and less reliance on avoidant coping (wishful thinking and denial). Compared with children of 'healthy' parents, children of a parent with multiple sclerosis reported greater family responsibilities, less reliance on problem solving and seeking social support coping, higher somatization and lower life satisfaction and positive affect. Conclusions: Findings delineate the key impacts of young caregiving and support a stress and coping model of adjustment in children of a parent with multiple sclerosis.

Relevance:

30.00%

Publisher:

Abstract:

Sea-water intrusion is actively contaminating fresh groundwater reserves in the coastal aquifers of the Pioneer Valley, north-eastern Australia. A three-dimensional sea-water intrusion model has been developed using the MODHMS code to explore regional-scale processes and to aid assessment of management strategies for the system. A sea-water intrusion potential map, produced through analyses of the hydrochemistry, hydrology and hydrogeology, offsets model limitations by providing an alternative appraisal of susceptibility. Sea-water intrusion in the Pioneer Valley is not in equilibrium, and a potential exists for further landward shifts in the extent of saline groundwater. The model required consideration of tidal over-height (the additional hydraulic head at the coast produced by the action of tides), with over-height values in the range 0.5-0.9 m giving improved water-table predictions. The effect of the initial water-table condition dominated the sensitivity of the model to changes in the coastal hydraulic boundary condition. Several salination processes are probably occurring in the Pioneer Valley, rather than just simple landward sea-water advancement from modern sources of marine salts. The method of vertical discretisation (i.e. model-layer subdivision) was shown to introduce some errors in the prediction of water-table behaviour.

Relevance:

30.00%

Publisher:

Abstract:

A numerical method is introduced to determine the nuclear magnetic resonance frequency of a donor (P-31) doped inside a silicon substrate under the influence of an applied electric field. This phosphorus donor has been suggested for operation as a qubit for the realization of a solid-state scalable quantum computer. The operation of the qubit is achieved by a combination of the rotation of the phosphorus nuclear spin through a globally applied magnetic field and the selection of the phosphorus nucleus through a locally applied electric field. To realize the selection function, it is required to know the relationship between the applied electric field and the change of the nuclear magnetic resonance frequency of phosphorus. In this study, based on the wave functions obtained by effective-mass theory, we introduce an empirical correction factor to the wave functions at the donor nucleus. Using the corrected wave functions, we formulate a first-order perturbation theory for the perturbed system under the influence of an electric field. In order to calculate the potential distributions inside the silicon and silicon dioxide layers due to the applied electric field, we use the multilayered Green's functions and solve an integral equation by the moment method. This enables us to consider more realistic, arbitrarily shaped, three-dimensional qubit structures. With the calculation of the potential distributions, we have investigated the effects of the thicknesses of the silicon and silicon dioxide layers, the relative position of the donor, and the applied electric field on the nuclear magnetic resonance frequency of the donor.
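The first-order perturbation step described above can be sketched numerically: compute ΔE⁽¹⁾ = ⟨ψ₀|V|ψ₀⟩ and compare it with exact rediagonalization. A 1D particle-in-a-box stands in for the donor problem here; the potential and units are illustrative, not the Si:P system.

```python
import numpy as np

# First-order perturbation check on a discretized 1D "particle in a box"
# (a stand-in for the donor problem; potential and units are illustrative).
# E1 = <psi0|V|psi0> is compared against exact rediagonalization of H0 + V.

n = 400
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

# Kinetic energy by central finite differences (hbar^2 / 2m = 1).
main = np.full(n, 1.0 / dx**2)
off = np.full(n - 1, -0.5 / dx**2)
H0 = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

V = 5.0 * (x - 0.5) ** 2      # weak symmetric perturbing potential

w0, v0 = np.linalg.eigh(H0)
psi0 = v0[:, 0] / np.sqrt(dx)  # ground state, normalized so dx*sum(psi^2)=1
e1 = dx * psi0 @ (V * psi0)    # first-order shift <psi0|V|psi0>

w, _ = np.linalg.eigh(H0 + np.diag(V))
exact_shift = w[0] - w0[0]     # agrees with e1 up to second-order terms
```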

Relevance:

30.00%

Publisher:

Abstract:

Information and content integration are believed to be a possible solution to the problem of information overload on the Internet. The article is an overview of a simple solution for integration of information and content on the Web. Previous approaches to content extraction and integration are discussed, followed by the introduction of a novel technology to deal with these problems, based on XML processing. The article includes lessons learned from solving issues of changing webpage layout, incompatibility with HTML standards and multiplicity of the results returned. The method, which adopts relative XPath queries over the DOM tree, proves to be more robust than previous approaches to Web information integration. Furthermore, the prototype implementation demonstrates a simplicity that enables non-professional users to easily adopt this approach in their day-to-day information management routines.
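The core idea — anchoring extraction with relative XPath queries over the DOM tree — can be sketched with Python's standard library; the page snippet and queries are invented for illustration, and `xml.etree.ElementTree`'s XPath subset stands in for a full XPath engine over real HTML.

```python
from xml.etree import ElementTree as ET

# Relative XPath extraction anchored at a stable content node.  The page
# snippet and queries are invented for illustration.

page = """
<html><body>
  <div id="header">Site name</div>
  <div id="content">
    <div class="item"><span class="title">First result</span></div>
    <div class="item"><span class="title">Second result</span></div>
  </div>
</body></html>
"""

root = ET.fromstring(page)
content = root.find(".//div[@id='content']")   # stable anchor node
titles = [s.text for s in content.findall(".//span[@class='title']")]
```

Because the query is relative to the `content` anchor, layout changes elsewhere in the page (a renamed header, an added banner) do not break the extraction.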

Relevance:

30.00%

Publisher:

Abstract:

Freshwater is extremely precious, and clean freshwater even more so. Although two-thirds of our planet is covered in water, over the last century industrial activity has contaminated the globe with chemicals on an unprecedented scale, causing harm to humans and wildlife. We must adopt a new scientific mindset to face this problem and protect this important resource. The Water Framework Directive (European Parliament and the Council, 2000) is a milestone legislative document that transformed the way water quality monitoring is undertaken across all Member States by introducing the Ecological and Chemical Status. A "good or higher" Ecological Status was expected to be achieved for all waterbodies in Europe by 2015. Yet for the many European waterbodies determined to be at risk, or of moderate to bad quality, further information is required so that adequate remediation strategies can be implemented. To date, water quality evaluation is based on five biological components (phytoplankton, macrophytes and benthic algae, macroinvertebrates and fishes) and various hydromorphological and physicochemical elements. The evaluation of the chemical status is principally based on 33 priority substances and on 12 xenobiotics considered dangerous for the environment. This approach takes into account only a fraction of the numerous xenobiotics that can be present in surface waters and cannot reveal all the possible causes of ecotoxicological stress acting in a given water section. Mixtures of toxic chemicals may constitute an ecological risk that is not predictable from the concentrations of the individual components. To improve water quality, the sources of contamination and the causes of ecological alterations need to be identified.
On the other hand, the analysis of community structure, which is the result of multiple processes, including hydrological constraints and physico-chemical stress, gives back only a "snapshot" of the actual status of a site without revealing the causes and sources of the perturbation. A multidisciplinary approach, able to integrate the information obtained by different methods, such as community structure analysis and eco-genotoxicological studies, could help overcome some of the difficulties in properly identifying the different causes of stress in risk assessment. In short, the river ecological status is the result of a combination of multiple pressures that, for management purposes and quality improvement, have to be disentangled from each other. To reduce the present uncertainty in risk assessment, methods that establish quantitative links between levels of contamination and community alterations are needed. The analysis of macrobenthic invertebrate community structure has been widely used to identify sites subjected to perturbation. Trait-based descriptors of community structure constitute a useful method in ecological risk assessment. The diagnostic capacity of freshwater biomonitoring could be improved by chronic sublethal toxicity testing of water and sediment samples. Because they require an exposure time that covers most of the species' life cycle, chronic toxicity tests are able to reveal negative effects on life-history traits at contaminant concentrations well below the acute toxicity level. Furthermore, the responses of high-level endpoints (growth, fecundity, mortality) can be integrated in order to evaluate the impact on population dynamics, a highly relevant endpoint from an ecological point of view. To gain more accurate information about the potential causes and consequences of environmental contamination, the evaluation of adverse effects at the physiological, biochemical and genetic levels is also needed.
The use of different biomarkers and toxicity tests can give information about the sub-lethal and toxic load of environmental compartments. Biomarkers give essential information about exposure to toxicants, such as endocrine disruptor compounds and genotoxic substances, whose negative effects cannot be detected using only high-level toxicological endpoints. The increasing presence of genotoxic pollutants in the environment has caused concern regarding the potential harmful effects of xenobiotics on human health, and interest in the development of new and more sensitive methods for the assessment of mutagenic and carcinogenic risk. Within the WFD, biomarkers and bioassays are regarded as important tools for gaining lines of evidence on cause-effect relationships in ecological quality assessment. Although the scientific community clearly recognises the advantages and necessity of an ecotoxicological approach within ecological quality assessment, a recent review reports that, more than a decade after the publication of the WFD, only a few studies have attempted to integrate ecological water status assessment and biological methods (namely biomarkers or bioassays). None of the fifteen reviewed studies included both biomarkers and bioassays. The integrated approach developed in this PhD Thesis comprises a set of laboratory bioassays (Daphnia magna acute and chronic toxicity tests, Comet Assay and FPG-Comet) that were newly developed, modified taking a cue from existing standardized protocols, or applied to freshwater quality testing (ecotoxicological, genotoxicological and toxicogenomic assays), coupled with field investigations of macrobenthic community structure (SPEAR and EBI indices). Together with the development of new bioassays with Daphnia magna, the feasibility of eco-genotoxicological testing of freshwater and sediment quality with Heterocypris incongruens was evaluated (Comet Assay and a protocol for chronic toxicity).
However, the Comet Assay, although standardized, was not applied to freshwater samples because of the lack of sensitivity of this species observed after 24 h of exposure to relatively high (and not environmentally relevant) concentrations of reference genotoxicants. Furthermore, this species also proved unsuitable for chronic toxicity testing, because fecundity was difficult to evaluate as a sub-lethal endpoint of exposure and because of complications arising from its biology and behaviour. The study was applied to a pilot hydrographic sub-basin, selecting sections subjected to different levels of anthropogenic pressure: this allowed us to establish the reference conditions, to select the most significant endpoints, and to evaluate the coherence of the responses of the different lines of evidence (alteration of community structure, eco-genotoxicological responses, alteration of gene expression profiles) and, finally, the diagnostic capacity of the monitoring strategy. Significant correlations were found between the genotoxicological parameter Tail Intensity % (TI%) and the macrobenthic community descriptors SPEAR (p<0.001) and EBI (p<0.05), between the genotoxicological parameter describing DNA oxidative stress (ΔTI%) and mean levels of nitrates (p<0.01), and between reproductive impairment (Failed Development % from D. magna chronic bioassays) and TI% (p<0.001) as well as EBI (p<0.001). While the correlation among parameters demonstrates a general coherence in the response to increasing impacts, the concomitant ability of each single endpoint to respond to specific sources of stress underlies the diagnostic capacity of the integrated approach, as demonstrated by stations presenting a mismatch among the different lines of evidence.
The chosen set of bioassays, as well as the selected endpoints, do not provide redundant indications of water quality status; on the contrary, they contribute complementary pieces of information about the several stressors acting simultaneously on a waterbody section, giving this monitoring strategy a solid diagnostic capacity. Our approach should provide opportunities for the integration of biological effects into monitoring programmes for surface water, especially in investigative monitoring. Moreover, it should provide a more realistic assessment of the exposure of aquatic organisms to contaminants and of its impact. Finally, this approach should help evaluate the drivers of change in biodiversity and their consequences for the provision of ecosystem functions and services, that is, the direct and indirect contributions to human well-being.
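The correlation step that links the lines of evidence can be sketched as follows; the station values are synthetic, invented purely to show the computation, not data from the thesis.

```python
import numpy as np

# Pearson correlation between a genotoxicity endpoint (Tail Intensity %)
# and a community descriptor (SPEAR) across monitoring stations.
# Station values are synthetic, invented purely to show the computation.

ti_pct = np.array([4.1, 6.3, 8.0, 11.2, 14.5, 18.9])    # Comet tail intensity %
spear = np.array([62.0, 55.5, 48.3, 40.1, 33.0, 21.7])  # SPEAR index

r = np.corrcoef(ti_pct, spear)[0, 1]  # strongly negative by construction
```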

Relevance:

30.00%

Publisher:

Abstract:

The main objective of this study was to analyse the relationship between Transformational Leadership, Knowledge Conversion and Organisational Effectiveness. Consolidated concepts on these themes were taken as theoretical premises, together with recent research carried out in other countries and organisational contexts. On this basis, the potential for a study of a model relating these three concepts was identified. It is assumed that organisations that seek Competitive Advantage and adopt the Knowledge-Based View can achieve differentiation from their competitors. In this context, knowledge gains greater prominence and plays a leading role in these organisations. Creating knowledge through their employees thus becomes one of the challenges for these organisations, as it suggests improvement of their Economic, Social, Systemic and Political indicators, which is what defines Organisational Effectiveness. The modes of knowledge conversion in organisations are therefore relevant, since knowledge is created and converted through interaction between the existing knowledge of employees. This knowledge conversion, or SECI model, has four modes: Socialisation, Externalisation, Combination and Internalisation. From this perspective, leadership in organisations presents itself as an element capable of influencing employees, lending greater dynamism to the SECI model of knowledge conversion. Transformational Leadership is identified as having characteristics capable of influencing employees, and it is understood that the relationship between Transformational Leadership and Knowledge Conversion may have a positive influence on the indicators of Organisational Effectiveness. This research therefore sought to analyse a model exploring the relationship between Transformational Leadership, Knowledge Conversion (SECI) and Organisational Effectiveness.
The research was quantitative, with data collected through the survey method, obtaining a total of 230 valid respondents from different organisations. The data-collection instrument consisted of statements relating to the relational model under study, with a total of 44 items. The respondent profile was concentrated between 30 and 39 years of age, with a predominance of private organisations and of IT/Telecom, Teaching and Human Resources departments, respectively. The data were treated with Exploratory Factor Analysis and Structural Equation Modelling via Partial Least Squares Path Modeling (PLS-PM). As a result of the analysis, the hypotheses could be confirmed, concluding that Transformational Leadership has a positive influence on the modes of Knowledge Conversion and that Knowledge Conversion positively influences Organisational Effectiveness. It was further concluded that perceptions of the research model did not differ between respondents who held a leadership role and those who did not.