919 results for Model knowledge conversion of Nonaka


Relevance:

100.00%

Abstract:

In this paper I present an endogenous growth model where the engine of growth is in-house R&D performed by high-tech firms. I model knowledge (patent) licensing among high-tech firms. I show that with knowledge licensing, high-tech firms innovate more and economic growth is higher than under knowledge spillovers or no exchange of knowledge among high-tech firms. However, with knowledge licensing the number of high-tech firms is lower than under knowledge spillovers or no exchange of knowledge.

Relevance:

100.00%

Abstract:

The article aims to analyze the process of knowledge creation in Brazilian technology-based companies, against the background of the driving and restrictive factors found in this process. The discussion rests on the four modes of knowledge conversion of the Japanese model: socialization, externalization, combination and internalization. A comparative case study using qualitative research was carried out in nine technology-based enterprises that were incubated or had recently passed through the incubation stage (so-called graduated companies) in the Technology Park of Sao Carlos, state of Sao Paulo, Brazil. Among the main results, the combination of knowledge was found to be more conscious and structured in graduated companies than in incubated companies. In contrast, incubated companies were found to offer an environment with greater opportunities for socialization, internalization and externalization of knowledge.

Relevance:

100.00%

Abstract:

Knowledge management (KM) is an emerging discipline (Ives, Torrey & Gordon, 1997) characterised by four processes: generation, codification, transfer, and application (Alavi & Leidner, 2001). Completing the loop, knowledge transfer is regarded as a precursor to knowledge creation (Nonaka & Takeuchi, 1995) and thus forms an essential part of the knowledge management process. Understanding how knowledge is transferred is very important for explaining the evolution of and change in institutions, organisations, technology, and the economy. However, knowledge transfer is often found to be laborious, time consuming, complicated, and difficult to understand (Huber, 2001; Szulanski, 2000). It has received negligible systematic attention (Huber, 2001; Szulanski, 2000), and thus we know little about it (Huber, 2001). Some literature, such as Davenport and Prusak (1998) and Shariq (1999), has attempted to address knowledge transfer within an organisation, but studies on inter-organisational knowledge transfer remain much neglected. An emergent view is that organisations may benefit from research that helps them understand and thereby improve their inter-organisational knowledge transfer processes. Therefore, this article provides an overview of inter-organisational knowledge transfer and its related literature and presents a proposed inter-organisational knowledge transfer process model based on theoretical and empirical studies.

Relevance:

100.00%

Abstract:

The aim of this research is to investigate how risk management in a healthcare organisation can be supported by knowledge management. The subject of the research is the development and management of existing logs called "risk registers", through specific risk management processes employed in an N.H.S. (Foundation) Trust in England, in the U.K. Existing literature on organisational risk management stresses the importance of knowledge for the effective implementation of risk management programmes, claiming that the knowledge used to perceive risk is biased by the beliefs of the individuals and groups involved in risk management and is therefore considered incomplete. Further, the literature on organisational knowledge management presents several definitions and categorisations of knowledge and approaches for knowledge manipulation in the organisational context as a whole. However, there is no specific approach regarding "how to deal" with knowledge in the course of organisational risk management. The research is based on a single case study of an N.H.S. (Foundation) Trust and is influenced by principles of interpretivism and the frame of mind of Soft Systems Methodology (S.S.M.) to investigate the management of risk registers from the viewpoint of the people involved in the situation. Data revealed that knowledge about risks and about the existing risk management policy and procedures is situated in several locations in the Trust and is neither consolidated nor present where and when required. This study proposes a framework that identifies the knowledge required for each of the risk management processes and outlines methods for the conversion of this knowledge, based on the SECI knowledge conversion model, together with activities to facilitate knowledge conversion, so that knowledge is effectively used for the development of risk registers and the monitoring of risks throughout the whole Trust under study. This study has theoretical impact in the management science literature as it addresses the issue of incomplete knowledge raised in the risk management literature using concepts from the knowledge management literature, such as the knowledge conversion model. In essence, the combination of the required risk and risk management related knowledge with the required type of communication for risk management yields the proposed methods for the support of each risk management process for the risk registers. Further, the indication of the importance of knowledge in risk management and the presentation of a framework that consolidates the knowledge required for the risk management processes and proposes ways for the communication of this knowledge within a healthcare organisation have practical impact for the management of healthcare organisations.
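
As a sketch of how such a pairing of risk management processes with required knowledge and SECI conversion modes could be represented, the snippet below uses a small data structure; the process names, knowledge items and facilitating activities are illustrative placeholders, not the Trust's actual framework.

```python
# Minimal sketch (hypothetical content): map each risk management process to
# the knowledge it needs, the SECI conversion modes assumed to apply, and the
# activities that facilitate the conversion into the risk register.
from dataclasses import dataclass, field

@dataclass
class ProcessKnowledge:
    process: str                      # e.g. "risk identification"
    required_knowledge: list[str]     # what must be known to run the process
    seci_modes: list[str]             # conversion modes assumed to apply
    activities: list[str] = field(default_factory=list)  # facilitating activities

framework = [
    ProcessKnowledge(
        process="risk identification",
        required_knowledge=["local incident experience", "risk management policy"],
        seci_modes=["socialisation", "externalisation"],
        activities=["ward-level discussion", "structured incident reporting"],
    ),
    ProcessKnowledge(
        process="risk register monitoring",
        required_knowledge=["current register entries", "escalation criteria"],
        seci_modes=["combination", "internalisation"],
        activities=["register consolidation", "feedback of trends to staff"],
    ),
]

for item in framework:
    print(f"{item.process}: {', '.join(item.seci_modes)}")
```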

Relevance:

100.00%

Abstract:

This paper presents a model for measuring personal knowledge development in online learning environments. It is based on Nonaka's SECI model of organisational knowledge creation. It is argued that Socialisation is not a relevant mode in the context of online learning and was therefore not covered in the measurement instrument. The remaining three SECI knowledge conversion modes, namely Externalisation, Combination, and Internalisation, were used, and a measurement instrument was created which also examines the interrelationships between the three modes. Data were collected using an online survey in which online learners report on their experiences of personal knowledge development in online learning environments. In other words, the instrument measures the magnitude of online learners' Externalisation and Combination activities as well as their level of Internalisation, which is the outcome of their personal knowledge development in online learning.
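
The kind of scoring such an instrument implies can be sketched as follows: Likert-style items are grouped into the three conversion-mode scales, and the interrelationships are examined as pairwise correlations. The item counts, scale composition and simulated responses below are assumptions for illustration, not the published instrument.

```python
# Minimal sketch: three conversion-mode scales scored from 5-point Likert items,
# with inter-mode correlations as a stand-in for the interrelationship analysis.
import numpy as np

rng = np.random.default_rng(0)
n_respondents = 200
# simulated responses: 4 items per mode, values 1..5 (illustrative only)
responses = {
    mode: rng.integers(1, 6, size=(n_respondents, 4))
    for mode in ("externalisation", "combination", "internalisation")
}

# scale score per respondent = mean of that mode's items
scores = {mode: items.mean(axis=1) for mode, items in responses.items()}

# interrelationships between the three modes as pairwise correlations
modes = list(scores)
for i, a in enumerate(modes):
    for b in modes[i + 1:]:
        r = np.corrcoef(scores[a], scores[b])[0, 1]
        print(f"corr({a}, {b}) = {r:.2f}")
```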

Relevance:

100.00%

Abstract:

Motivation: This paper introduces the software EMMIX-GENE, which has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic, used in conjunction with a threshold on the size of a cluster, allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so mixtures of factor analyzers are used to effectively reduce the dimension of the feature space of genes. Results: The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes can be selected that reveal interesting clusterings of the tissues, consistent either with the external classification of the tissues or with background biological knowledge of these sets.
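
A minimal sketch of the gene-screening step is given below. For tractability it uses Gaussian mixtures from scikit-learn as a stand-in for the mixtures of t distributions fitted by EMMIX-GENE, and the threshold and simulated data are illustrative.

```python
# Illustrative gene screening in the spirit of EMMIX-GENE: for each gene, fit
# one- and two-component mixtures over the tissue samples and keep genes whose
# likelihood ratio statistic exceeds a threshold.
import numpy as np
from sklearn.mixture import GaussianMixture

def lrt_statistic(expression: np.ndarray) -> float:
    """-2 log likelihood ratio for one vs two components on a single gene."""
    x = expression.reshape(-1, 1)
    ll = []
    for k in (1, 2):
        gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(x)
        ll.append(gm.score(x) * len(x))       # total log-likelihood
    return 2.0 * (ll[1] - ll[0])

rng = np.random.default_rng(1)
data = rng.normal(size=(50, 1000))            # 50 tissues x 1000 genes (simulated)
threshold = 8.0                               # illustrative cut-off
stats = np.array([lrt_statistic(data[:, j]) for j in range(data.shape[1])])
selected = np.where(stats > threshold)[0]
print(f"{len(selected)} genes retained for clustering")
```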

Relevance:

100.00%

Abstract:

This thesis introduces a novel conceptual framework to support the creation of knowledge representations based on enriched Semantic Vectors, using the classical vector space model approach extended with ontological support. One of the primary research challenges addressed here relates to the process of formalising and representing document contents, where most existing approaches are limited and only take into account the explicit, word-based information in the document. This research explores how traditional knowledge representations can be enriched through the incorporation of implicit information derived from the complex relationships (semantic associations) modelled by domain ontologies, in addition to the information presented in the documents themselves. The relevant achievements pursued by this thesis are the following: (i) conceptualization of a model that enables the semantic enrichment of knowledge sources supported by domain experts; (ii) development of a method for extending the traditional vector space using domain ontologies; (iii) development of a method to support ontology learning, based on the discovery of new ontological relations expressed in non-structured information sources; (iv) development of a process to evaluate the semantic enrichment; (v) implementation of a proof of concept, named SENSE (Semantic Enrichment kNowledge SourcEs), which validates the ideas established within the scope of this thesis; (vi) publication of several scientific articles and support for four master's dissertations carried out in the Department of Electrical and Computer Engineering at FCT/UNL. It is worth mentioning that the work developed under the semantic referential covered by this thesis has reused relevant achievements from European research projects, in order to adopt approaches that are scientifically sound and coherent and to avoid "reinventing the wheel".
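
A minimal sketch of the enrichment idea, assuming a toy ontology expressed as a term-to-concept dictionary (not the thesis's domain ontologies or the SENSE implementation):

```python
# Enrich a classical term-frequency vector with ontology-derived concepts:
# terms found in the document pull in semantically related concepts with a
# damped weight. The tiny "ontology" below is a hypothetical dictionary.
from collections import Counter

ontology = {                      # term -> related concepts (illustrative)
    "turbine": ["rotating_machine", "power_generation"],
    "blade": ["turbine", "component"],
}

def enriched_vector(text: str, alpha: float = 0.5) -> Counter:
    """Term-frequency vector plus down-weighted ontological associations."""
    tokens = text.lower().split()
    vector = Counter(tokens)                       # explicit, word-based part
    for term, count in list(vector.items()):
        for concept in ontology.get(term, []):     # implicit, ontology-based part
            vector[concept] += alpha * count
    return vector

print(enriched_vector("turbine blade inspection of the turbine"))
```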

Relevance:

100.00%

Abstract:

Genome-scale metabolic models are valuable tools in the metabolic engineering process, based on the ability of these models to integrate diverse sources of data to produce global predictions of organism behavior. At the most basic level, these models require only a genome sequence to construct, and once built, they may be used to predict essential genes, culture conditions, pathway utilization, and the modifications required to enhance a desired organism behavior. In this chapter, we address two key challenges associated with the reconstruction of metabolic models: (a) leveraging existing knowledge of microbiology, biochemistry, and available omics data to produce the best possible model; and (b) applying available tools and data to automate the reconstruction process. We consider these challenges as we progress through the model reconstruction process, beginning with genome assembly, and culminating in the integration of constraints to capture the impact of transcriptional regulation. We divide the reconstruction process into ten distinct steps: (1) genome assembly from sequenced reads; (2) automated structural and functional annotation; (3) phylogenetic tree-based curation of genome annotations; (4) assembly and standardization of biochemistry database; (5) genome-scale metabolic reconstruction; (6) generation of core metabolic model; (7) generation of biomass composition reaction; (8) completion of draft metabolic model; (9) curation of metabolic model; and (10) integration of regulatory constraints. Each of these ten steps is documented in detail.
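
Once reconstructed, such models are typically interrogated with constraint-based methods such as flux balance analysis. A toy example of that downstream use, with a hypothetical two-metabolite network solved as a linear program, is sketched below; it is not part of the ten-step pipeline itself.

```python
# Toy flux balance analysis (FBA): maximise biomass flux subject to the
# steady-state mass balance S @ v = 0 and flux bounds. Real reconstructions
# have thousands of reactions and are usually handled with dedicated tools.
import numpy as np
from scipy.optimize import linprog

# columns: A uptake, A -> B conversion, biomass drain on B (illustrative)
S = np.array([
    [1.0, -1.0,  0.0],   # metabolite A
    [0.0,  1.0, -1.0],   # metabolite B
])
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake capped at 10 units
c = np.array([0.0, 0.0, -1.0])             # maximise biomass = minimise -v_bio

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
print("optimal fluxes:", res.x)            # expected: [10, 10, 10]
print("predicted growth flux:", -res.fun)  # expected: 10
```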

Relevance:

100.00%

Abstract:

Among the factors that contribute to predicting academic achievement are those that reflect cognitive abilities (e.g., intelligence) and those individual differences considered non-cognitive (e.g., personality traits). In recent years, General Knowledge (GK) has also come to be considered a criterion for academic success (see Ackerman, 1997), since prior knowledge has been shown to aid the acquisition of new knowledge (Hambrick & Engle, 2001). One of the aims of educational psychology is to identify the main variables that explain academic achievement, as well as to propose theoretical models that explain the relationships among these variables. The PPIK model (Intelligence-as-Process, Personality, Interests, and Intelligence-as-Knowledge) proposed by Ackerman (1996) holds that the knowledge and skills acquired in a particular domain are the result of the cognitive resources a person devotes to that domain over a prolonged period of time. The model proposes that personality traits, individual/vocational interests and motivational aspects are integrated as trait complexes that determine the direction and intensity of the cognitive resources a person devotes to learning (Ackerman, 2003). In our setting (Córdoba, Argentina), a group of researchers has developed a series of technical resources needed to assess some of the constructs proposed by this model. However, we do not yet have a measure of General Knowledge. Therefore, the present project proposes the construction of an instrument to measure General Knowledge (GK), essential both for establishing parameters on the knowledge level of the university population and for testing the postulates of the PPIK theory (Ackerman, 1996) in future work.

Relevance:

100.00%

Abstract:

Uncertainty quantification of petroleum reservoir models is one of the present challenges, which is usually approached with a wide range of geostatistical tools linked with statistical optimisation and/or inference algorithms. Recent advances in machine learning offer a novel approach to modelling the spatial distribution of petrophysical properties in complex reservoirs, as an alternative to geostatistics. The approach is based on semi-supervised learning, which handles both "labelled" observed data and "unlabelled" data, which have no measured value but describe prior knowledge and other relevant information in the form of manifolds in the input space where the modelled property is continuous. The proposed semi-supervised Support Vector Regression (SVR) model has demonstrated its capability to represent realistic geological features and to describe the stochastic variability and non-uniqueness of spatial properties. At the same time, it is able to capture and preserve key spatial dependencies such as the connectivity of high-permeability geo-bodies, which is often difficult in contemporary petroleum reservoir studies. Semi-supervised SVR, as a data-driven algorithm, is designed to integrate various kinds of conditioning information and to learn dependencies from it. The semi-supervised SVR model is able to balance signal/noise levels and to control the prior belief in the available data. In this work, the stochastic semi-supervised SVR geomodel is integrated into a Bayesian framework to quantify the uncertainty of reservoir production with multiple models fitted to past dynamic observations (production history). Multiple history-matched models are obtained using stochastic sampling and/or MCMC-based inference algorithms, which evaluate the posterior probability distribution. Uncertainty of the model is described by the posterior probability of the model parameters that represent key geological properties: spatial correlation size, continuity strength, and smoothness/variability of the spatial property distribution. The developed approach is illustrated with a fluvial reservoir case. The resulting probabilistic production forecasts are described by uncertainty envelopes. The paper compares the performance of models with different combinations of unknown parameters and discusses sensitivity issues.
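
As a rough illustration of the semi-supervised idea (not the paper's formulation), the sketch below trains a standard SVR on labelled well data and folds pseudo-labelled unlabelled locations back in with reduced weight; the locations, property values and hyperparameters are simulated assumptions.

```python
# Much-simplified stand-in for semi-supervised regression: labelled well
# observations train an SVR, unlabelled locations receive pseudo-values that
# are folded back into the training set with a low sample weight.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_lab = rng.uniform(0, 1, size=(20, 2))                        # measured well locations
y_lab = np.sin(4 * X_lab[:, 0]) + 0.1 * rng.normal(size=20)    # e.g. porosity proxy
X_unlab = rng.uniform(0, 1, size=(200, 2))                     # locations with prior info only

model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X_lab, y_lab)
for _ in range(3):                                  # simple self-training rounds
    pseudo = model.predict(X_unlab)
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, pseudo])
    w = np.concatenate([np.ones(len(y_lab)), 0.2 * np.ones(len(pseudo))])
    model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X_all, y_all, sample_weight=w)

print("prediction at an unmeasured location:", model.predict([[0.5, 0.5]]))
```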

Relevance:

100.00%

Abstract:

The transcription factor and proto-oncogene c-myc plays an important role in integrating many mitogenic signals within the cell. The consequences are both broad and varied and include the regulation of apoptosis, cellular differentiation, cellular growth and cell cycle progression. It is found to be mis-regulated in over 70% of all cancers; however, our knowledge about c-Myc remains limited and very little is known about its physiological role in mammalian development and in adulthood. We have addressed the physiological role of c-Myc in both the bone marrow and the liver of mice by generating adult c-myc flox/flox mice that lacked c-myc in either the bone marrow or the liver after conversion of the c-myc flox alleles into null alleles by the inducible Mx-Cre transgene with polyI-polyC. In investigating the role of c-Myc in the haematopoietic system, we concentrated on the aspects of cellular proliferation, cellular differentiation and apoptosis. Mice lacking c-Myc develop anaemia between 3 and 8 weeks, and all more differentiated cell types are severely depleted, leading to death. In addition to its role in driving proliferation in transient amplifying cells, we unexpectedly discovered a new role for c-Myc in controlling haematopoietic stem cell (HSC) differentiation. c-Myc-deficient HSCs are able to proliferate normally in vivo, but their differentiation into more committed progenitors is blocked. These cells express increased levels of adhesion molecules, which possibly prevent HSCs from being released from the specialised stem-cell-supporting stromal niche cells with which they closely associate. Secondly, we used the liver as a model system to address the role of c-Myc in cellular growth, meaning the increase in cell size, and also in cellular proliferation. Our results revealed that c-Myc plays no role in metabolic cellular growth following a period of fasting. Following treatment with the xenobiotic TCPOBOP, c-Myc-deficient hepatocytes increased in cell size as control hepatocytes did and, surprisingly, could proliferate, albeit at a reduced rate, demonstrating that a c-Myc-independent proliferation pathway exists in parenchymal cells. However, following partial hepatectomy, in which two-thirds of the liver was removed, mutant livers were severely restricted in their regeneration capacity compared to control livers, demonstrating that c-Myc is essential for liver regeneration.

Relevance:

100.00%

Abstract:

PURPOSE: In the radiopharmaceutical therapy approach to the fight against cancer, in particular when it comes to translating laboratory results to the clinical setting, modeling has served as an invaluable tool for guidance and for understanding the processes operating at the cellular level and how these relate to macroscopic observables. Tumor control probability (TCP) is the dosimetric end point quantity of choice which relates to experimental and clinical data: it requires knowledge of individual cellular absorbed doses since it depends on the assessment of the treatment's ability to kill each and every cell. Macroscopic tumors, seen in both clinical and experimental studies, contain too many cells to be modeled individually in Monte Carlo simulation; yet, in particular for low ratios of decays to cells, a cell-based model that does not smooth away statistical considerations associated with low activity is a necessity. The authors present here an adaptation of the simple sphere-based model from which cellular level dosimetry for macroscopic tumors and their end point quantities, such as TCP, may be extrapolated more reliably. METHODS: Ten homogeneous spheres representing tumors of different sizes were constructed in GEANT4. The radionuclide 131I was randomly allowed to decay for each model size and for seven different ratios of number of decays to number of cells, Nr: 1000, 500, 200, 100, 50, 20, and 10 decays per cell. The deposited energy was collected in radial bins and divided by the bin mass to obtain the average bin absorbed dose. To simulate a cellular model, the number of cells present in each bin was calculated and each cell was attributed an absorbed dose equal to the bin average absorbed dose with a randomly determined adjustment based on a Gaussian probability distribution with a width equal to the statistical uncertainty consistent with the ratio of decays to cells, i.e., equal to Nr^(-1/2). From dose volume histograms, the surviving fraction of cells, equivalent uniform dose (EUD), and TCP for the different scenarios were calculated. Comparably sized spherical models containing individual spherical cells (15 µm diameter) in hexagonal lattices were constructed, and Monte Carlo simulations were executed for all the same scenarios. The dosimetric quantities were calculated and compared to the adjusted simple sphere model results. The model was then applied to the Bortezomib-induced enzyme-targeted radiotherapy (BETR) strategy of targeting Epstein-Barr virus (EBV)-expressing cancers. RESULTS: The TCP values were comparable to within 2% between the adjusted simple sphere and full cellular models. Additionally, models were generated for a nonuniform distribution of activity, and results were compared between the adjusted spherical and cellular models with similar comparability. The TCP values for the macroscopic tumor models were consistent with the experimental observations for BETR-treated 1 g EBV-expressing lymphoma tumors in mice. CONCLUSIONS: The adjusted spherical model presented here provides more accurate TCP values than simple spheres, on par with full cellular Monte Carlo simulations, while maintaining the simplicity of the simple sphere model. This model provides a basis for complementing and understanding laboratory and clinical results pertaining to radiopharmaceutical therapy.
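
The per-cell dose adjustment and the resulting TCP can be illustrated with a short calculation: each cell receives its bin's mean dose perturbed by a Gaussian term of relative width Nr^(-1/2), and a survival model then gives TCP as the probability that every cell is killed. The bin doses, cell counts, radiosensitivity and the simple exponential survival model below are assumptions for illustration, not the paper's GEANT4 results.

```python
# Illustrative per-cell dose assignment and TCP calculation.
import numpy as np

rng = np.random.default_rng(42)
Nr = 50                                     # decays per cell
alpha = 0.35                                # Gy^-1, illustrative radiosensitivity

bin_mean_dose = np.array([45.0, 42.0, 40.0, 38.0, 35.0])   # Gy, outer bins lower
cells_per_bin = np.array([200, 500, 900, 1400, 2000])

# each cell: bin mean dose with Gaussian adjustment of relative width Nr**-0.5
cell_doses = np.concatenate([
    mean * (1.0 + rng.normal(0.0, Nr ** -0.5, size=n))
    for mean, n in zip(bin_mean_dose, cells_per_bin)
])
cell_doses = np.clip(cell_doses, 0.0, None)

survival = np.exp(-alpha * cell_doses)       # probability each cell survives
tcp = np.prod(1.0 - survival)                # all cells must be killed
print(f"mean surviving fraction: {survival.mean():.3e}   TCP: {tcp:.3f}")
```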

Relevance:

100.00%

Abstract:

Many dynamic revenue management models divide the sale period into a finite number of periods T and assume, invoking a fine-enough grid of time, that each period sees at most one booking request. These Poisson-type assumptions restrict the variability of the demand in the model, but researchers and practitioners have been willing to overlook this for the benefit of tractability of the models. In this paper, we criticize this model from another angle. Estimating the discrete finite-period model poses problems of indeterminacy and non-robustness: arbitrarily fixing T leads to arbitrary control values, while estimating T from data adds an additional layer of indeterminacy. To counter this, we first propose an alternate finite-population model that avoids the problem of fixing T and allows a wider range of demand distributions, while retaining the useful marginal-value properties of the finite-period model. The finite-population model still requires jointly estimating the market size and the parameters of the customer purchase model without observing no-purchases. Estimation of market size when no-purchases are unobservable has rarely been attempted in the marketing or revenue management literature. Indeed, we point out that it is akin to the classical statistical problem of estimating the parameters of a binomial distribution with unknown population size and success probability, and hence likely to be challenging. However, when the purchase probabilities are given by a functional form such as a multinomial-logit model, we propose an estimation heuristic that exploits the specification of the functional form, the variety of the offer sets in a typical RM setting, and qualitative knowledge of arrival rates. Finally, we perform simulations to show that the estimator is very promising in obtaining unbiased estimates of the population size and the model parameters.
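
The core estimation difficulty can be seen in miniature by stripping the problem down to a binomial with unknown population size and success probability; the profile-likelihood scan below (on simulated sales) shows how weakly the data pin down the market size, which is why the proposed heuristic leans on the multinomial-logit structure and offer-set variety.

```python
# Classical core of the problem: sales y_t ~ Binomial(N, p) with both the
# market size N and the purchase probability p unknown. Profile the
# likelihood over N with p set to its conditional MLE.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(7)
true_N, true_p = 80, 0.15
y = rng.binomial(true_N, true_p, size=30)          # sales over 30 periods (simulated)

candidates = np.arange(y.max(), 301)
profile = []
for N in candidates:
    p_hat = y.mean() / N                           # conditional MLE of p given N
    profile.append(binom.logpmf(y, N, p_hat).sum())
profile = np.array(profile)

best = candidates[profile.argmax()]
print("profile-likelihood estimate of market size:", best)
print("log-likelihood gap vs. N twice as large:",
      profile.max() - profile[candidates >= 2 * true_N].max())
```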

Relevance:

100.00%

Abstract:

Intensification of agricultural production without sound management and regulation can lead to severe environmental problems, as in Western Santa Catarina State, Brazil, where intensive swine production has caused large accumulations of manure and, consequently, water pollution. Natural resource scientists are asked by decision-makers for advice on management and regulatory decisions. Distributed environmental models are useful tools, since they can be used to explore the consequences of various management practices. However, in many areas of the world, quantitative data for model calibration and validation are lacking. The data-intensive distributed environmental model AgNPS was applied in a data-poor environment, the upper catchment (2,520 ha) of the Ariranhazinho River, near the city of Seara, in Santa Catarina State. Steps included data preparation, cell size selection, sensitivity analysis, model calibration and application to different management scenarios. The model was calibrated based on a best guess for the model parameters and on a pragmatic sensitivity analysis. The parameters were adjusted to match the model outputs (runoff volume, peak runoff rate and sediment concentration) closely with the sparse observed data. A modelling grid cell resolution of 150 m gave appropriate results at acceptable computational cost. The rainfall-runoff response of the AgNPS model was calibrated using three separate rainfall ranges (< 25, 25-60, > 60 mm). Predicted sediment concentrations were consistently six to ten times higher than observed, probably due to sediment trapping along vegetated channel banks. Predicted N and P concentrations in stream water ranged from just below to well above regulatory norms. Expert knowledge of the area, in addition to experience reported in the literature, was able to compensate in part for the limited calibration data. Several scenarios (actual, recommended and excessive manure applications, and point-source pollution from swine operations) could be compared with the model, using a relative ranking rather than quantitative predictions.
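
The range-wise calibration strategy can be sketched generically: a runoff coefficient is fitted separately within each rainfall range so that predicted runoff volumes match the few available observations. The numbers and the simple linear runoff relation below are illustrative and are not taken from the AgNPS application.

```python
# Generic sketch of calibrating a runoff coefficient per rainfall range
# against sparse observed events (illustrative data, not the AgNPS model).
import numpy as np

events = np.array([12.0, 18.0, 30.0, 45.0, 70.0, 95.0])        # rainfall, mm
observed_runoff = np.array([1.1, 2.0, 6.5, 11.0, 28.0, 41.0])  # runoff, mm

ranges = {
    "<25 mm": events < 25,
    "25-60 mm": (events >= 25) & (events <= 60),
    ">60 mm": events > 60,
}
coefficients = {}
for name, mask in ranges.items():
    # least-squares runoff coefficient for this rainfall range
    coefficients[name] = float(np.sum(observed_runoff[mask] * events[mask])
                               / np.sum(events[mask] ** 2))
print(coefficients)
```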
