913 results for just in time
Abstract:
This paper studies tests of joint hypotheses in time series regression with a unit root in which weakly dependent and heterogeneously distributed innovations are allowed. We consider two types of regression: one with a constant and lagged dependent variable, and the other with a trend added. The statistics studied are the regression "F-tests" originally analysed by Dickey and Fuller (1981) in a less general framework. The limiting distributions are found using functional central limit theory. New test statistics are proposed which require only already tabulated critical values but which are valid in a quite general framework (including finite-order ARMA models generated by Gaussian errors). This study extends the results on single coefficients derived in Phillips (1986a) and Phillips and Perron (1986).
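For readers who want to see the joint test in action, here is a minimal sketch (not code from the paper) of the Dickey-Fuller-type "F-test" in the constant-plus-lag case, on simulated data with statsmodels; the statistic must be compared against the non-standard critical values tabulated by Dickey and Fuller (1981), not the usual F distribution.

```python
# Illustrative sketch (assumptions: simulated data, statsmodels OLS).
# Regression:  Delta y_t = mu + (rho - 1) * y_{t-1} + e_t
# Joint null:  mu = 0 and rho = 1  (a driftless unit root).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(500))   # simulated driftless random walk

dy = np.diff(y)                           # Delta y_t
X = sm.add_constant(y[:-1])               # regressors: [1, y_{t-1}]
res = sm.OLS(dy, X).fit()

# R = I_2 imposes both restrictions at once: mu = 0 and rho - 1 = 0.
phi1 = res.f_test(np.eye(2)).fvalue
print(phi1)   # compare with tabulated Dickey-Fuller Phi_1 critical values
```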
Abstract:
Spinal cord injuries have a significant impact on quality of life because they can induce motor (paralysis) and sensory deficits. These deficits evolve over time as the central nervous system reorganizes, through physiological and neurochemical mechanisms that are still poorly understood. The extent of these deficits, as well as the rehabilitation process, depends strongly on which anatomical pathways of the spinal cord have been damaged. It is therefore crucial to be able to assess the integrity of the white matter after a spinal injury and to quantitatively evaluate the functional state of spinal neurons. A great advantage of magnetic resonance imaging (MRI) is that it allows non-invasive imaging of the functional and anatomical properties of the central nervous system. The first objective of this thesis project was to develop diffusion MRI to assess the integrity of white-matter axons after spinal cord injury. The second objective was to evaluate the extent to which functional MRI can measure the activity of spinal cord neurons. Although widely applied to the brain, diffusion MRI and functional MRI of the spinal cord are more problematic. The difficulties associated with MRI of the spinal cord stem from its small geometry (about 1 cm in diameter in humans), the presence of physiological motion (cardiac and respiratory), and the presence of magnetic susceptibility artifacts induced by field inhomogeneities, notably at the intervertebral discs and the lungs. The main objective of this thesis was therefore to develop methods to overcome these difficulties. This development relied in particular on optimizing the acquisition parameters for anatomical images, diffusion-weighted images and functional data in the cat and in humans on a 3 Tesla MRI scanner. In addition, various strategies were studied to correct the image distortions induced by magnetic susceptibility artifacts, and a study was conducted on the sensitivity and specificity of functional MRI of the spinal cord. The results of these studies demonstrate the feasibility of acquiring high-quality diffusion-weighted images and of assessing the integrity of specific spinal pathways after complete and partial lesions. Moreover, the activity of spinal neurons could be detected by functional MRI in anesthetized cats. Although encouraging, these results highlight the need to develop these new techniques further. A reliable and robust neuroimaging tool, able to corroborate clinical parameters, would improve diagnosis and prognosis in patients with spinal cord injuries. One of the major challenges would be to monitor and validate the effect of various therapeutic strategies. Such tools represent immense hope for the many people suffering from trauma and neurodegenerative diseases such as spinal cord injury, spinal tumors, multiple sclerosis and amyotrophic lateral sclerosis.
Abstract:
Much of the literature on ethical issues in child and youth participation has drawn on the episodic experiences of participatory research efforts in which young people’s input has been sought, transcribed and represented. This literature focuses in particular on the power dynamics and ethical dilemmas embedded in time-bound adult/child and outsider/insider relationships. While we agree that these issues are crucial and in need of further examination, it is equally important to examine the ethical issues embedded within the “everyday” practices of the organizations in and through which young people’s participation in community research and development often occurs (e.g., community-based organizations, schools and municipal agencies). Drawing on experience from three summers of work in promoting youth participation in adult-led organizations of varying purpose, scale and structure, a framework is postulated that presents participation as a spatial practice shaped by five overlapping dimensions. The framework is offered as a point of discussion and a potential tool for analysis in examining young people’s participation in relation to organizational practice.
Abstract:
This dissertation examines gendered fictional dialogue in popular works by D.H. Lawrence, Ernest Hemingway and E.M. Forster, including Howards End (1910), The Sun Also Rises (1926) and Lady Chatterley’s Lover (1928). I apply Judith Halberstam’s notion of female masculinity to direct speech, to explore how speech traits inform modernist literary aesthetics. My introduction frames this discussion in sociolinguistics, Judith Butler’s theory of performativity, M.M. Bakhtin’s discourse theory, and gender studies. It provides an opportunity to establish experimental dialogue techniques, and the manipulation of gendered talk, in transgressive texts including James Joyce’s Ulysses (1922), Virginia Woolf’s Orlando (1928) and Radclyffe Hall’s The Well of Loneliness (1928). The first chapter discusses taboos and dialect in D.H. Lawrence’s fictional dialogue. The second chapter establishes gender subversion as a crucial element in Ernest Hemingway’s dialogue style. The third chapter contrasts Forster’s latently gendered speech with his techniques of dialect emphasis and dialect suppression. Finally, my conclusion discusses gender identity in the poetry of Dorothy Parker and Baroness Elsa von Freytag-Loringhoven, and the temporality of gender in “Time Passes” from Virginia Woolf’s To the Lighthouse (1927). New Woman characters like Lady Brett Ashley typified a crucial moment in women’s liberation. They not only subverted stereotypes of womanhood through their dress or sexual freedom; they also adopted and adapted masculine idiom to shock, to rebel against and to challenge male dominance. Their speech acts variously incited fashionable slang, became symbols of political protest, or inspired psychoanalytic theory. The intriguing functions of women’s masculine speech in early twentieth-century fiction establish the need to examine further connections between gender and talk in literary studies.
Abstract:
Laser ablation of graphite has been carried out using 1.06 µm radiation from a Q-switched Nd:YAG laser, and the time-of-flight distribution of molecular C2 present in the resultant plasma is investigated as a function of distance from the target as well as laser fluence, employing a time-resolved spectroscopic technique. At low laser fluences the intensities of the emission lines from C2 exhibit only a single-peak structure, while beyond a threshold laser fluence the emission from C2 shows a twin-peak distribution in time. The occurrence of the faster velocity component at higher laser fluences is attributed to species generated by recombination processes, while the delayed peak is attributed to dissociation of higher carbon clusters resulting in the generation of the C2 molecule. Analysis of the measured data provides a fairly complete picture of the evolution and dynamics of C2 species in the laser-induced plasma from graphite.
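As a purely illustrative sketch of how a twin-peak time-of-flight profile can be decomposed into fast and slow components, the snippet below fits a two-peak model with scipy; the Gaussian peak shape and all numbers are assumptions for illustration, not the paper's actual analysis.

```python
# Hypothetical sketch: separating a twin-peak TOF profile into a fast
# (recombination) and a slow (cluster-dissociation) component, modelled
# here, for illustration only, as two Gaussian peaks in arrival time.
import numpy as np
from scipy.optimize import curve_fit

def two_peaks(t, a1, t1, w1, a2, t2, w2):
    """Sum of two Gaussian peaks centred at times t1 and t2."""
    return (a1 * np.exp(-((t - t1) / w1) ** 2)
            + a2 * np.exp(-((t - t2) / w2) ** 2))

t = np.linspace(0.0, 10.0, 400)            # arrival time axis (synthetic)
rng = np.random.default_rng(1)
signal = two_peaks(t, 1.0, 2.0, 0.5, 0.6, 5.5, 1.0) \
         + 0.02 * rng.standard_normal(t.size)

p0 = [1.0, 2.0, 0.5, 0.5, 5.0, 1.0]        # rough initial guesses
popt, _ = curve_fit(two_peaks, t, signal, p0=p0)
print("fast peak at", popt[1], "slow peak at", popt[4])
```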
Abstract:
Sonar signal processing comprises a large number of signal processing algorithms for implementing functions such as target detection, localisation, classification, tracking and parameter estimation. Current implementations of these functions rely on conventional techniques largely based on Fourier methods, primarily meant for stationary signals. The signals received by sonar sensors, however, are often non-stationary, and hence processing methods capable of handling this non-stationarity will fare better than Fourier-transform-based methods. Time-frequency methods (TFMs) are among the best DSP tools for non-stationary signal processing, allowing signals to be analysed in the time and frequency domains simultaneously. But, other than the STFT, TFMs have been largely limited to academic research because of the complexity of the algorithms and the limitations of computing power. With the availability of fast processors, many applications of TFMs have been reported in the fields of speech and image processing and in biomedical applications, but not many in sonar processing. A structured effort to fill this lacuna by exploring the potential of TFMs in sonar applications is the net outcome of this thesis. To this end, four TFMs have been explored in detail, viz. the Wavelet Transform, the Fractional Fourier Transform, the Wigner-Ville Distribution and the Ambiguity Function, and their potential in implementing five major sonar functions has been demonstrated with very promising results. What has been conclusively brought out in this thesis is that there is no "one best TFM" for all applications, but there is "one best TFM" for each application. Accordingly, the TFM has to be adapted and tailored in many ways in order to develop specific algorithms for each of the applications.
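A minimal sketch of the simplest TFM named above, the STFT, computed as a spectrogram of a synthetic non-stationary (chirp) signal with scipy; this is a generic illustration, not the thesis's sonar processing chain, and the sample rate and sweep are assumptions.

```python
# Generic STFT/spectrogram illustration on a synthetic chirp.
import numpy as np
from scipy.signal import chirp, spectrogram

fs = 8000                                   # sample rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
x = chirp(t, f0=100, t1=2.0, f1=1500)       # frequency sweeps with time

f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256)
# Sxx[i, j] is the signal power near frequency f[i] at time tt[j]:
# a joint time-frequency picture a plain Fourier transform cannot give.
print(Sxx.shape)
```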
Abstract:
The automotive industry is responding to increasing product complexity with modularization strategies, driven by growing customer demands for individualization and by model policies with frequent new vehicle launches. Manufacturers shift the complexity of material supply through outsourcing to the next supplier tier, the first-tier suppliers, who since the early 1990s have increasingly been integrated into supplier parks in the immediate vicinity of the plant. Typical features of a classic supplier park are: provision of a hall infrastructure with infrastructure services, delivery of parts in the JIS procedure (just-in-sequence, i.e. sequence-exact synchronization), local value creation (pre-assembly, sequencing) by the supplier, contractual commitment of the first-tier suppliers for the duration of a product life cycle, and involvement of a logistics service provider. In some cases, public-sector funding projects are initiated for financing. Until now, the topic of "supplier parks" had not been treated scientifically. This thesis examines the supplier parks that have emerged in Europe in order to document the advantages and disadvantages of this logistics concept and to identify development trends. Based on these findings, optimization approaches are presented and concrete development paths are described for improving the opportunity-risk position of the main actors: automotive manufacturers, suppliers and logistics service providers. The thesis is structured in four main parts: a differentiated description of the initial situation and development trends in the automotive industry, the procedural model, the documentation of the analysis results, and the evaluation of supplier-park models. Within the documentation of the analysis results, four supplier-park models are vividly presented in detailed case studies. To produce the analysis results, a survey of the main actors was conducted using structured questionnaires. Experts were additionally interviewed to capture industry trends and to comparatively evaluate the park models. Network analysis was used to segment the supplier-park landscape, and the relative evaluation of the benefit position is based on utility value analysis. The results of the thesis are: · a comprehensive analysis of the supplier-park landscape in Europe, · a segmentation of the parks into supplier-park models, · optimization approaches for improving the win-win situation of the main actors involved, · a relative benefit evaluation of the supplier-park models, · development paths for classic supplier parks.
Abstract:
The interaction of short intense laser pulses with atoms and molecules produces a multitude of highly nonlinear processes requiring a non-perturbative treatment. Detailed study of these highly nonlinear processes by numerically solving the time-dependent Schrödinger equation becomes a daunting task when the number of degrees of freedom is large. The coupling between the electronic and nuclear degrees of freedom further aggravates the computational problems. In the present work we show that the time-dependent Hartree (TDH) approximation, which neglects correlation effects, gives an unreliable description of the system dynamics both in the absence and in the presence of an external field. A theoretical framework is therefore required that treats the electrons and nuclei on an equal footing and fully quantum mechanically. To address this issue we discuss two approaches, namely multicomponent density functional theory (MCDFT) and the multiconfiguration time-dependent Hartree (MCTDH) method, which go beyond the TDH approximation and describe the correlated electron-nuclear dynamics accurately. In the MCDFT framework, where the time-dependent electronic and nuclear densities are the basic variables, we discuss an algorithm to calculate the exact Kohn-Sham (KS) potentials for small model systems. By simulating the photodissociation process in a model hydrogen molecular ion, we show that the exact KS potentials contain all the many-body effects and give an insight into the system dynamics. In the MCTDH approach, the wave function is expanded as a sum of products of single-particle functions (SPFs). The MCTDH method is able to describe electron-nuclear correlation effects, as the SPFs and the expansion coefficients evolve in time and give an accurate description of the system dynamics. We show that the MCTDH method is suitable for studying a variety of processes such as the fragmentation of molecules, high-order harmonic generation, the two-center interference effect, and the lochfrass effect. We discuss these phenomena in a model hydrogen molecular ion and a model hydrogen molecule. The inclusion of absorbing boundaries in the mean-field approximation and its consequences are discussed using the model hydrogen molecular ion. To this end, two types of calculations are considered: (i) a variational approach with a complex absorbing potential included in the full many-particle Hamiltonian and (ii) an approach in the spirit of time-dependent density functional theory (TDDFT), including complex absorbing potentials in the single-particle equations. It is elucidated that for small grids the TDDFT approach is superior to the variational approach.
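To make "numerically solving the time-dependent Schrödinger equation" concrete, here is a textbook split-operator FFT sketch for a one-dimensional model wave packet; it is a generic grid solver under assumed parameters (atomic units, harmonic model potential), not the MCDFT/MCTDH machinery developed in the thesis.

```python
# Generic split-operator (Strang splitting) TDSE propagation in 1-D.
# Atomic units; the potential and all parameters are illustrative.
import numpy as np

n, L, dt, steps = 512, 40.0, 0.01, 200
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

V = 0.5 * x**2                                        # model potential
psi = np.exp(-(x + 5.0) ** 2) * np.exp(1j * x)        # displaced wave packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n))    # normalize

expV = np.exp(-0.5j * dt * V)        # half-step in position space
expT = np.exp(-0.5j * dt * k**2)     # full kinetic step, T = k^2 / 2

for _ in range(steps):               # V/2 -> T -> V/2 per time step
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi

print(np.sum(np.abs(psi) ** 2) * (L / n))   # norm stays ~1 (unitary)
```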
Abstract:
Time series regression models are especially suitable in epidemiology for evaluating short-term effects of time-varying exposures on health. The problem is that the potential for confounding in time series regression is very high; thus, it is important that trend and seasonality are properly accounted for. Our paper reviews the statistical models commonly used in time-series regression methods, especially those allowing for serial correlation, which make them potentially useful for selected epidemiological purposes. In particular, we discuss the use of time-series regression for counts using a wide range of Generalised Linear Models as well as Generalised Additive Models. In addition, recently raised critical points in the use of statistical software for GAMs are stressed, and reanalyses of time series data on air pollution and health were performed in order to update already published results. Applications are offered through an example on the relationship between asthma emergency admissions and photochemical air pollutants.
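A minimal sketch of the count-regression approach described above, on synthetic data: a Poisson GLM relating daily admission counts to a pollutant, with simple trend and seasonality controls. The variable names and data are illustrative assumptions, not from the paper.

```python
# Hedged sketch: Poisson GLM for daily counts with trend/seasonality controls.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 730                                            # two years of daily data
day = np.arange(n)
pollutant = 30 + 10 * rng.standard_normal(n)
season = np.sin(2 * np.pi * day / 365.25)
mu = np.exp(1.5 + 0.01 * pollutant + 0.3 * season)  # true rate (synthetic)
admissions = rng.poisson(mu)

X = sm.add_constant(pd.DataFrame({
    "pollutant": pollutant,
    "trend": day,                                   # long-term trend control
    "sin365": season,                               # crude seasonality controls
    "cos365": np.cos(2 * np.pi * day / 365.25),
}))
fit = sm.GLM(admissions, X, family=sm.families.Poisson()).fit()
print(fit.params["pollutant"])                      # log relative rate per unit
```

A GAM would replace the parametric trend and seasonality columns with smooth functions of time; the confounding-control logic is the same.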
Abstract:
A compositional time series is obtained when a compositional data vector is observed at different points in time. Inherently, then, a compositional time series is a multivariate time series with important constraints on the variables observed at any instance in time. Although this type of data frequently occurs in situations of real practical interest, a trawl through the statistical literature reveals that research in the field is very much in its infancy and that many theoretical and empirical issues still remain to be addressed. Any appropriate statistical methodology for the analysis of compositional time series must take into account these constraints, which are not allowed for by the usual statistical techniques available for analysing multivariate time series. One general approach to analysing compositional time series consists in applying an initial transform to break the positive and unit-sum constraints, followed by analysis of the transformed time series using multivariate ARIMA models. In this paper we discuss the use of the additive log-ratio, centred log-ratio and isometric log-ratio transforms. We also present results from an empirical study designed to explore how the selection of the initial transform affects subsequent multivariate ARIMA modelling as well as the quality of the forecasts.
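As a hedged illustration of the transform-then-ARIMA strategy, the snippet below implements two of the transforms named above (additive and centred log-ratio) for a toy compositional series; once transformed, the unconstrained series can be handled by standard multivariate ARIMA software.

```python
# Toy illustration of log-ratio transforms for compositional time series.
import numpy as np

def alr(x):
    """Additive log-ratio: log of the first D-1 parts over the last part."""
    x = np.asarray(x, dtype=float)
    return np.log(x[..., :-1] / x[..., -1:])

def clr(x):
    """Centred log-ratio: log of each part over the geometric mean."""
    x = np.asarray(x, dtype=float)
    g = np.exp(np.mean(np.log(x), axis=-1, keepdims=True))
    return np.log(x / g)

comp = np.array([[0.5, 0.3, 0.2],     # one composition per time point;
                 [0.4, 0.4, 0.2],     # rows are positive and sum to 1
                 [0.3, 0.5, 0.2]])
z = alr(comp)        # unconstrained (T, D-1) series, ready for ARIMA fitting
print(z)
```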
Abstract:
Evolution of compositions in time, space, temperature or other covariates is frequent in practice. For instance, the radioactive decomposition of a sample changes its composition with time. Some of the involved isotopes decompose into other isotopes of the sample, thus producing a transfer of mass from some components to others, while preserving the total mass present in the system. This evolution is traditionally modelled as a system of ordinary differential equations for the mass of each component. However, this kind of evolution can be decomposed into a compositional change, expressed in terms of simplicial derivatives, and a mass evolution (constant in this example). A first result is that the simplicial system of differential equations is non-linear, even though some subcompositions behave linearly. The goal is to study the characteristics of such simplicial systems of differential equations, such as linearity and stability. This is performed by extracting the compositional differential equations from the mass equations. Then, simplicial derivatives are expressed in coordinates of the simplex, thus reducing the problem to the standard theory of systems of differential equations, including stability. The characterisation of stability of these non-linear systems relies on the linearisation of the system of differential equations at the stationary point, if any. The eigenvalues of the linearised matrix and the associated behaviour of the orbits are the main tools. For a three-component system, these orbits can be plotted either in coordinates of the simplex or in a ternary diagram. A characterisation of processes with transfer of mass in closed systems in terms of stability is thus obtained. Two examples are presented for illustration; one of them is a radioactive decay.
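A small illustrative example with made-up decay constants: in a closed two-step decay chain the masses follow a linear ODE whose matrix has zero column sums (total mass is conserved) and whose eigenvalues feed the stability analysis sketched above, while the induced evolution of the composition itself is non-linear.

```python
# Hedged illustration: decay chain A -> B -> C in a closed system.
# Masses follow m' = K m; the composition m / sum(m) evolves non-linearly.
import numpy as np
from scipy.integrate import solve_ivp

K = np.array([[-1.0,  0.0, 0.0],    # A decays (rate 1.0, assumed)
              [ 1.0, -0.3, 0.0],    # B gains from A, decays (rate 0.3)
              [ 0.0,  0.3, 0.0]])   # C accumulates; columns sum to 0
print(np.linalg.eigvals(K))          # eigenvalues used in the stability study

sol = solve_ivp(lambda t, m: K @ m, (0.0, 10.0), [1.0, 0.0, 0.0])
m = sol.y[:, -1]
print(m / m.sum())                   # composition: a point in the simplex
```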
Abstract:
The work presented in this paper belongs to the power quality knowledge area and deals with voltage sags in power transmission and distribution systems. Propagating throughout the power network, voltage sags can cause plenty of problems for domestic and industrial loads, at considerable financial cost. To impose penalties on the responsible party and to improve monitoring and mitigation strategies, sags must be located in the power network. With this worthwhile objective, the paper proposes a new method for associating a sag waveform with its origin in transmission and distribution networks. It solves this problem by developing hybrid methods that employ multiway principal component analysis (MPCA) as a dimension-reduction tool. MPCA re-expresses sag waveforms in a new subspace using just a few scores. We train some well-known classifiers with these scores and exploit them for the classification of future sags. The capabilities of the proposed method for dimension reduction and classification are examined using real data gathered from three substations in Catalonia, Spain. The classification rates obtained confirm the effectiveness of the developed hybrid methods as new tools for sag classification.
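A hedged sketch of the hybrid MPCA-plus-classifier idea on synthetic stand-in data: unfold the three-way array of sag records into a matrix, keep a few principal-component scores, and train an off-the-shelf classifier on them. The shapes, labels and classifier choice are assumptions, not the paper's exact pipeline.

```python
# Hedged sketch: MPCA (via matrix unfolding) + classifier on sag scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
waveforms = rng.standard_normal((120, 256, 3))  # events x samples x phases
labels = rng.integers(0, 2, size=120)           # e.g. transmission vs distribution

X = waveforms.reshape(120, -1)                  # unfold the 3-way array
scores = PCA(n_components=5).fit_transform(X)   # a few scores per sag

clf = KNeighborsClassifier(n_neighbors=3).fit(scores[:100], labels[:100])
print(clf.score(scores[100:], labels[100:]))    # held-out classification rate
```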
Abstract:
Introduction: Autism is a developmental disorder characterized by impairment in social interaction and language skills, together with rituals and stereotypies. With no curative treatments, alternative therapies are currently being sought. A growing scientific literature on animal-assisted therapies has emerged, showing improvement in autistic patients with equine therapy. Objective: To conduct a systematic review of the literature to evaluate the effectiveness of equine therapy on social and language skills in autistic children. Methodology: Systematic review of the literature, covering articles obtained from databases and meta-search engines that provided evidence on equine therapy in autistic children. Types of articles consulted: systematic reviews, meta-analyses and clinical trials, published up to 2013, in English and Spanish. MeSH and EMTREE terms were used. Results: Four articles met the inclusion and exclusion criteria. The articles were analysed individually; a meta-analysis could not be performed because of methodological differences between the studies. In total, 85 subjects were evaluated across these studies. Equine therapy in autistic children showed improvement in social skills and in pre-verbal language skills. Discussion: Equine therapy is promising in the management of autistic children; the articles consistently show improvements in social and language skills. The type of patient, the equine-therapy regimen and the sustainability of the improvements must be considered. Conclusions: New studies with greater methodological rigor are needed to strengthen the evidence on equine therapy in children with autism and thus allow recommendations with an adequate level of evidence.
Abstract:
In order to understand the influence of different levels of leverage on the growth of Colombian firms, the following question must be answered: what determines a firm's choice of capital structure? Quantile regression makes it possible to examine the whole distribution of firms, not just a measure of the central tendency of the capital-structure distribution. In this way the relative importance of the different explanatory variables can be evaluated at different points of the distribution of firms' leverage, which is why this approach is used here. A panel-data (also called longitudinal-data) regression with random effects is also used to compare results, taking into account that the data vary not only across observations but also over time. Applying quantile regression thus allows a deeper look at the choice of leverage level, since it makes it possible to discriminate the effect of the variables between highly leveraged and lowly leveraged firms.
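A minimal sketch of the quantile-regression idea on synthetic firm data (variable names are assumptions): estimating the same leverage equation at several quantiles shows how an explanatory variable's effect can differ between lowly and highly leveraged firms, which is exactly what a single mean regression would hide.

```python
# Hedged sketch: quantile regression of leverage on a firm characteristic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 500
size = rng.normal(10, 2, n)                       # e.g. log assets (assumed)
leverage = 0.2 + 0.03 * size + rng.gumbel(0, 0.1, n)  # skewed noise
firms = pd.DataFrame({"leverage": leverage, "size": size})

for q in (0.25, 0.50, 0.75):                      # low / median / high leverage
    fit = smf.quantreg("leverage ~ size", firms).fit(q=q)
    print(q, fit.params["size"])                  # effect varies across quantiles
```

The random-effects panel model mentioned in the text would require firm-year data and a panel estimator; it is omitted from this sketch.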
Abstract:
ABSTRACT: Knowledge has always existed, even in a latent state, conditioned somewhere and merely awaiting a means (an opportunity) to manifest itself. Knowledge is doubly a phenomenon of consciousness: it proceeds from consciousness at a given moment of its life and history, and it ends only in consciousness, perfecting and enriching it. Knowledge is thus in constant change. Only relatively recently did people begin to speak of Knowledge Management, and at the time it was strongly associated with Information Technology, as a means of collecting, processing and storing ever larger amounts of information. For some years now, Information Technology has played an extremely important role in organizations; it was initially adopted to automate the operational processes that support organizations' daily activities, and in recent times Information Technology within organizations has evolved rapidly. All knowledge, even the least relevant to a particular business area, is fundamental to supporting decision making. To achieve better performance and to surpass the goals they initially set themselves, organizations tend to equip themselves with more and better Information Systems, and to use the various methodologies and technologies available today. Consequently, in recent years many organizations have shown a crucial need to integrate all of their information, which is dispersed across their various departments. For top managers (but also other employees) to have pertinent, truthful and reliable information about the business of the organization they represent available in useful time, they need access to good Information Technology systems, so that they can act more effectively and efficiently in decision making, extract the maximum possible value from the information, and thereby achieve better levels of organizational success. Business Intelligence systems and their associated Information Technologies likewise use the data existing in organizations to provide relevant information for decision making. But to reach such satisfactory levels, organizations need human resources, for how can they be competitive without skilled workers? Hence the need for organizations to recruit what are nowadays called "Knowledge Workers": individuals able to interpret information within a specific domain. They detect problems and identify alternatives, and with their knowledge and discernment they work to solve those problems, considerably helping the organizations they represent. And, using Knowledge Engineering methodologies and technologies such as modelling, they create and manage a history of knowledge, including tacit knowledge, about the organization's various business areas, which can be made explicit in abstract models that can be easily understood and interpreted by other workers with equivalent levels of competence.