942 results for Random matrix
Abstract:
Bidirectional exchange of information between cancer cells and their environment is essential for cancer to evolve. Cancer cells lose the ability to regulate their growth, gain the ability to detach from neighboring cells, and finally some of the cells disseminate from the primary tumor and invade the adjacent tissue. During cancer progression, cells acquire features that promote motility and proliferation, one of them being an increased number of filopodia. Filopodia are dynamic actin-rich structures extending from the leading edge of migrating cells, and their main function is to serve as environmental sensors. It is now widely appreciated that not only the cancer cells but also the surroundings of the tumor, the tumor microenvironment, contribute to cancer cell dissemination and tumor growth. Activated stromal fibroblasts, also known as cancer-associated fibroblasts (CAFs), actively participate in tumor progression. CAFs are the most abundant cell type surrounding the cancer cells and the main cell type producing the extracellular matrix (ECM) within the tumor stroma. CAFs secrete growth factors to promote tumor growth, direct cancer cell invasion, and modify the stromal ECM architecture. The aim of this thesis was to investigate the function of filopodia, particularly the role of the filopodia-inducing protein Myosin-X (Myo10), in breast cancer cell invasion and metastasis. We found that Myo10 is an important regulator of basal-type breast cancer spreading downstream of mutant p53. In addition, I investigated the role of CAFs and their secreted matrix in tumor growth. According to the results, CAF-derived matrix has altered organization and stiffness, which induces carcinoma cell proliferation via epigenetic mechanisms. I identified the histone demethylase enzyme JMJD1a as being regulated by stiffness and participating in stiffness-induced growth control.
Abstract:
A company's successful performance in the market is related to the quality of human capital management, which aims to improve the company's internal performance and the external implementation of its core business strategy. Companies with a matrix structure that focus on realizing and developing innovations and technologies for an uncertain market need to select their approach to the HR management system carefully. Human resource management has a significant impact on the organization and uses a variety of instruments, such as corporate information systems, to fulfill its functions and objectives. There are three approaches to strategic control management, depending on the degree of interference in employee decision-making, the development of employees' skills, and their integration into the business strategy. Mainstream research has focused only on the framework of strategic HR planning and the general productivity of the firm, not on the features of the organizational structure and corporate software capabilities for human capital. This study tackles the aforementioned challenges, typical for a matrix organization, by using HR control management tools and a corporate information system. The detailed analysis in this master's thesis of the industry producing and selling electromotor and heating equipment provides an opportunity to improve the system for HR control and demonstrates its application in ERP software. The results emphasize the sustainable role of matrix HR input control in creating independent project teams for the matrix structure that are able to respond to various market uncertainties and use their skills to improve performance. Corporate information systems can be integrated into the input control system by means of output monitoring to regulate and evaluate team processes, using key performance indicators and reporting systems.
Abstract:
My research permitted me to reexamine my recent evaluations of the Leaf Project given to the Foundation Year students during the fall semester of 1997. My personal description of the drawing curriculum formed part of the matrix of the Foundation Core Studies at the Ontario College of Art and Design. The research was based on the random selection of 18 students distributed over six of my teaching groups. The entire process included a representation of all grade levels. The intent of the research was to provide a pattern of alternative insights that could offer a more meaningful method of evaluation for visual learners in an art education setting. Visual methods of learning are indeed complex and involve the interplay of many sensory modalities of input. Using a qualitative method of research analysis, a series of queries was entered into a structured matrix grid to seek out possible and emerging patterns of learning. The grid provided for interrelated visual and linguistic analysis, with emphasis on reflection and interconnectedness. Sensory-based modes of learning are currently being studied and discussed among educators as alternative approaches to learning. As patterns emerged from the research, it became apparent that a paradigm for evaluation would have to be a progressive profile of learning, one that takes into account many of the different and evolving learning processes of the individual. A broader review of the student's entire development within the Foundation Year Program would have to be a shared evaluation across a cross-section of representative faculty in the program. The results from the research were never intended to be conclusive. We realized from the start that sensory-based learning is a difficult process to evaluate by the traditional standards used in education. The potential of such a process of inquiry is that it permits the researcher to pose a set of queries that might provide a deeper form of evaluation unique to the students and their related learning environment. Only in this context can qualitative methods be used to profile their learning experiences in an expressive and meaningful manner.
Abstract:
The exchange energy of the He-He system is calculated using the one-density matrix, which has been modified according to the supermolecular density formula quoted by Kolos. The exchange energy integrals are computed both analytically and by the Monte Carlo method. The results obtained in both ways compare favourably with the results obtained from the SCF program HONDO.
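As a hedged sketch of the generic Monte Carlo recipe behind such integral evaluations (an illustration only, not the thesis code and not the actual exchange-energy integrals), the example below estimates a three-dimensional integral by importance sampling from a Gaussian density; the integrand and the sampling density are illustrative placeholders.

```python
import numpy as np

# Minimal sketch (an assumption, not the thesis code): estimate a 3-D
# integral  I = ∫ f(r) d^3r  by importance sampling from a Gaussian
# density rho -- the generic recipe behind Monte Carlo evaluation of
# integrals such as exchange-energy integrals.
rng = np.random.default_rng(0)
sigma = 1.0
n = 100_000

def f(r):
    # illustrative placeholder integrand; exact value of I is pi**1.5
    return np.exp(-np.sum(r**2, axis=1))

samples = rng.normal(scale=sigma, size=(n, 3))              # draw r ~ rho
rho = np.exp(-np.sum(samples**2, axis=1) / (2 * sigma**2)) \
      / (2 * np.pi * sigma**2) ** 1.5                       # Gaussian pdf

weights = f(samples) / rho                                  # f(r) / rho(r)
estimate = weights.mean()                                   # ≈ I
std_err = weights.std(ddof=1) / np.sqrt(n)                  # statistical error bar
print(f"Monte Carlo estimate: {estimate:.4f} ± {std_err:.4f} "
      f"(exact pi^1.5 ≈ {np.pi**1.5:.4f})")
```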
Abstract:
Self-dual doubly even linear binary error-correcting codes, often referred to as Type II codes, are closely related to many combinatorial structures such as 5-designs. Extremal codes are codes that have the largest possible minimum distance for a given length and dimension. The existence of an extremal (72,36,16) Type II code is still open. Previous results show that the automorphism group of a putative code C with the aforementioned properties has order 5 or an order dividing 24. In this work, we present a method and the results of an exhaustive search showing that such a code C cannot admit an automorphism group Z6. In addition, we present a so far unpublished construction of the extended Golay code due to P. Becker. We generalize the notion and provide an example of another Type II code that can be obtained in this fashion. Consequently, we relate Becker's construction to the construction of binary Type II codes from codes over GF(2^r) via the Gray map.
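As a small, hedged illustration of the defining properties of a Type II code (self-dual and doubly even), the sketch below checks both properties by brute force for the [8,4,4] code equivalent to the extended Hamming code. The toy example and the use of numpy are assumptions for illustration, not part of the thesis, whose object is the far larger putative (72,36,16) code.

```python
import itertools
import numpy as np

# Sketch (illustrative, not from the thesis): verify that a small binary
# code is Type II, i.e. self-dual and doubly even. Generator matrix of the
# [8,4,4] code equivalent to the extended Hamming code.
G = np.array([
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
], dtype=int)

# Self-duality: G @ G^T vanishes over GF(2) (self-orthogonality) and the
# dimension equals n/2.
self_dual = not np.any((G @ G.T) % 2) and 2 * G.shape[0] == G.shape[1]

# Doubly even: every codeword weight is divisible by 4 (brute force over
# all 2^k codewords, feasible here since k = 4).
codewords = [(np.array(m) @ G) % 2
             for m in itertools.product([0, 1], repeat=G.shape[0])]
doubly_even = all(int(c.sum()) % 4 == 0 for c in codewords)

print("self-dual:", self_dual, "doubly even:", doubly_even)  # both True
```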
Abstract:
Thesis (Master of Arts with a Specialization in Education), U.A.N.L.
Abstract:
In this paper, we consider testing marginal normal distributional assumptions. More precisely, we propose tests based on moment conditions implied by normality. These moment conditions are known as the Stein (1972) equations. They coincide with the first class of moment conditions derived by Hansen and Scheinkman (1995) when the random variable of interest is a scalar diffusion. Among other examples, the Stein equation implies that the mean of Hermite polynomials is zero. The GMM approach we adopt is well suited for two reasons. First, it allows us to study in detail the parameter uncertainty problem, i.e., when the tests depend on unknown parameters that have to be estimated. In particular, we characterize the moment conditions that are robust against parameter uncertainty and show that Hermite polynomials are special examples. This is the main contribution of the paper. The second reason for using GMM is that our tests are also valid for time series. In this case, we adopt a heteroskedasticity-and-autocorrelation-consistent (HAC) approach to estimate the weighting matrix when the dependence of the data is unspecified. We also make a theoretical comparison of our tests with those of Jarque and Bera (1980) and the OPG regression tests of Davidson and MacKinnon (1993). Finite sample properties of our tests are derived through a comprehensive Monte Carlo study. Finally, three applications to GARCH and realized volatility models are presented.
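As a hedged numerical illustration of the Hermite moment conditions (not the authors' GMM test statistic), the sketch below computes sample means of the probabilists' Hermite polynomials He_3 and He_4 on standardized data; under normality these means are approximately zero, and these two polynomials are essentially the skewness and excess-kurtosis ingredients behind the Jarque and Bera (1980) comparison mentioned above. The plug-in standardization sidesteps the parameter-uncertainty issue the paper treats formally.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Illustration only: under normality the probabilists' Hermite polynomials
# He_k have zero mean, so their sample means are natural moment conditions.
rng = np.random.default_rng(42)
x = rng.standard_normal(5_000)            # replace with the (scalar) data of interest

z = (x - x.mean()) / x.std(ddof=1)        # plug-in standardization (ignores the
                                          # parameter uncertainty treated in the paper)

def he_k(z, k):
    c = np.zeros(k + 1)
    c[k] = 1.0
    return hermeval(z, c)                 # He_k evaluated at each point

# He_3(z) = z^3 - 3z and He_4(z) = z^4 - 6z^2 + 3 relate to skewness and
# excess kurtosis respectively.
m3, m4 = he_k(z, 3).mean(), he_k(z, 4).mean()
print(f"mean He_3 = {m3:+.4f}, mean He_4 = {m4:+.4f}   (≈ 0 under normality)")
```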
Abstract:
This paper presents a new theory of random consumer demand. The primitive is a collection of probability distributions, rather than a binary preference. Various assumptions constrain these distributions, including analogues of common assumptions about preferences such as transitivity, monotonicity and convexity. Two results establish a complete representation of theoretically consistent random demand. The purpose of this theory of random consumer demand is application to empirical consumer demand problems. To this end, the theory has several desirable properties. It is intrinsically stochastic, so the econometrician can apply it directly without adding extrinsic randomness in the form of residuals. Random demand is parsimoniously represented by a single function on the consumption set. Finally, we have a practical method for statistical inference based on the theory, described in McCausland (2004), a companion paper.
Abstract:
McCausland (2004a) describes a new theory of random consumer demand. Theoretically consistent random demand can be represented by a "regular" "L-utility" function on the consumption set X. The present paper is about Bayesian inference for regular L-utility functions. We express prior and posterior uncertainty in terms of distributions over the infinite-dimensional parameter set of a flexible functional form. We propose a class of proper priors on the parameter set. The priors are flexible, in the sense that they put positive probability in the neighborhood of any L-utility function that is regular on a large subset bar(X) of X; and regular, in the sense that they assign zero probability to the set of L-utility functions that are irregular on bar(X). We propose methods of Bayesian inference for an environment with indivisible goods, leaving the more difficult case of infinitely divisible goods for another paper. We analyse individual choice data from a consumer experiment described in Harbaugh et al. (2001).
Abstract:
Affiliation: Département de Médecine, Faculté de médecine, Université de Montréal & Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CHUM), Hôpital Notre-Dame du CHUM
Abstract:
We study the application of matrix decomposition algorithms, such as Non-negative Matrix Factorization (NMF), to time-frequency representations of musical audio signals. These algorithms, driven by a reconstruction error function, learn a set of basis functions and a corresponding set of coefficients that approximate the input signal. We compare the use of three reconstruction error functions when NMF is applied to monophonic and harmonized scales: least squares, the Kullback-Leibler divergence, and a recently introduced phase-dependent divergence measure. New methods for interpreting the resulting decompositions are presented and compared to previously used methods that require knowledge of the acoustic domain. Finally, we analyze the generalization ability of the learned basis functions with respect to three musical parameters: amplitude, duration, and instrument type. To do so, we introduce two algorithms for labeling the basis functions that outperform the previous approach in the majority of our tests, the instrument task on monophonic audio being the only notable exception.
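A hedged sketch of the kind of decomposition compared above: scikit-learn's NMF applied to a magnitude spectrogram under the least-squares (Frobenius) and Kullback-Leibler reconstruction errors. The libraries, the example recording, and the choice of 8 components are assumptions for illustration; the phase-dependent divergence has no off-the-shelf implementation here and is omitted.

```python
import numpy as np
import librosa
from sklearn.decomposition import NMF

# Sketch (assumed tools, not those of the thesis): factor a magnitude
# spectrogram V (frequency x time) as V ≈ W H under two reconstruction errors.
y, sr = librosa.load(librosa.ex("trumpet"))          # any monophonic example
V = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))

for loss in ("frobenius", "kullback-leibler"):
    model = NMF(n_components=8, beta_loss=loss, solver="mu",
                init="random", max_iter=400, random_state=0)
    W = model.fit_transform(V)       # basis functions (spectral templates)
    H = model.components_            # time-varying activation coefficients
    print(f"{loss:18s} reconstruction error = {model.reconstruction_err_:.2f}")
```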
Abstract:
In computational neuroscience, it has been hypothesized that the visual system, from the retina to at least the primary visual cortex, continually fits a probabilistic model with latent variables to its stream of perceptions. Neither the exact model nor the exact fitting method is known, but existing algorithms for fitting such models require conditional estimation of the latent variables. This may help us understand why the visual system would fit such a model: if the model is appropriate, these conditional estimates can also form an excellent representation for analyzing the semantic content of perceived images. The work presented here uses image classification performance (discrimination between common object types) as a basis for comparing models of the visual system and algorithms for fitting these models (viewed as probability densities) to images. This thesis (a) shows that models based on the complex cells of visual area V1 generalize better from labeled training examples than conventional neural networks, whose hidden units more closely resemble V1 simple cells; (b) presents a new interpretation of complex-cell-based models of the visual system as probability distributions, together with new algorithms for fitting them to data; and (c) shows that these models form representations that are better for image classification after being trained as probability models. Two additional technical innovations that made this work possible are also described: a random search algorithm for selecting hyperparameters, and a compiler for matrix-valued mathematical expressions that can optimize these expressions for both central (CPU) and graphics (GPU) processors.
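A hedged sketch of random search for hyperparameter selection, the first of the two technical innovations mentioned above: configurations are sampled independently from simple priors and the one with the best validation score is kept. The search space, the priors, and the stand-in validation score are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

# Illustrative random hyperparameter search (not the thesis implementation).
rng = np.random.default_rng(0)

def sample_config():
    # Independent draws from simple priors over an assumed search space.
    return {
        "learning_rate": 10 ** rng.uniform(-5, -1),   # log-uniform
        "n_hidden": int(rng.integers(64, 1025)),
        "weight_decay": 10 ** rng.uniform(-6, -2),
    }

def validation_score(cfg):
    # Stand-in for training a model and evaluating it on held-out data.
    return (-(np.log10(cfg["learning_rate"]) + 3) ** 2
            - 1e-4 * abs(cfg["n_hidden"] - 512))

best_cfg, best_score = None, -np.inf
for _ in range(50):                       # 50 independent random trials
    cfg = sample_config()
    score = validation_score(cfg)
    if score > best_score:
        best_cfg, best_score = cfg, score

print(f"best score {best_score:.4f} with config {best_cfg}")
```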
Abstract:
The main objective of the research work presented here was the synthesis of random copolymers based on ethylene and acrylic acid (AA). To this end, the ester groups of a precursor random copolymer, poly(ethylene-co-(tert-butyl)acrylate), were deprotected by hydrolysis using trimethylsilyl iodide. The synthesis of this precursor is carried out by catalytic polymerization in the presence of a palladium (Pd)-based system. The second objective was to study and characterize the synthesized polymers in the solid state and in colloidal suspension. Several precursor copolymers containing different molar percentages of tert-butyl acrylate (4 to 12 mol%) were successfully synthesized and then deprotected by hydrolysis to obtain poly(ethylene-co-acrylic acid) (pE-co-AA) with different compositions. Only the copolymers containing 10 mol% or more of AA are soluble in tetrahydrofuran (THF), and only in this solvent. Such solutions can be dialyzed against water, which leads to a slow exchange between the water and the THF, and the self-assembly of the copolymer in water can then be studied. In this way, nanoparticles that are stable over time were observed, whose behavior is sensitive to pH and temperature. The synthesized polymers were characterized by Nuclear Magnetic Resonance (NMR) and Infrared (IR) spectroscopy, before and after deprotection. The molar percentages of AA were determined by combining the NMR results with conductometric titrations. In the solid state, the samples were analyzed by Differential Scanning Calorimetry (DSC) and X-ray diffraction. The colloidal solutions of the pE-co-AA polymers were characterized by dynamic light scattering and by high-sensitivity DSC. Transmission electron microscopy (TEM) was used to visualize the shape and size of the nanoparticles.