870 results for Agent-Based Models
Abstract:
Due to the different dynamics required for organizations to serve the emerging market of billions of people at the bottom of the pyramid (BOP), coupled with the increasing desire of organizations to grow and become more multinational, organizations need to innovate continually. However, the tendency of large, established companies to ignore the BOP market and focus instead on existing markets indicates a vulnerability: potentially disruptive innovations from the BOP may not be recognized in good time for a countermeasure. This can be deduced from the fact that good management practice advocates that managers should listen to and learn from their customers. The majority of large incumbents therefore continually focus on their main customers and markets with sustaining innovations, which leaves aspiring new entrants an underserved BOP market to experiment with. With the aid of research interviews and an agent-based model (ABM) simulation, this thesis examines the attributes of BOP innovations that can qualify them as disruptive, the possibility of tangible disruptive innovations arising from the bottom of the pyramid, and their underlying drivers. The thesis furthermore examines the associated impact of such innovations on the future sustainability of established large companies operating in the developed world, particularly those whose primary focus is the market at the top of the pyramid (TOP). Additionally, using a scenario planning model, the research evaluates the possible evolution and potential sustainability impacts that could emerge from the interplay of innovations at the two pyramidal market levels and the market focus (TOP or BOP) chosen by organizations. Using four scenario quadrants, the thesis demonstrates the resulting possibilities from the interaction between the rate of innovation and the segment focused on by organizations, with a disruptive era characterizing the paradigm-shift quadrant. Furthermore, a mathematical model and two theoretical propositions are developed for further research. Finally, the thesis puts forward ambidextrous organization theory, business model innovation and portfolio diversification as plausible recommendations for limiting a catastrophic impact resulting from disruptive innovations.
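The agent-based simulation logic described above can be illustrated with a deliberately minimal sketch that is not the thesis model itself: an incumbent keeps making sustaining improvements for the TOP market while an entrant experimenting in the underserved BOP market improves faster, and potential disruption is flagged when the BOP innovation reaches the performance the TOP market demands. All agent attributes, rates and thresholds below are invented assumptions for illustration.

```python
import random

# Minimal sketch (not the thesis model): an entrant improves a BOP innovation
# each step while an incumbent keeps investing in sustaining TOP innovations.
# All parameter names and values are illustrative assumptions.
random.seed(42)

TOP_DEMAND = 100.0   # performance level the TOP market expects (hypothetical)
STEPS = 50

class Firm:
    def __init__(self, focus, performance, improvement):
        self.focus = focus              # "TOP" or "BOP"
        self.performance = performance
        self.improvement = improvement  # mean performance gain per step

    def step(self):
        self.performance += random.gauss(self.improvement, 0.5)

incumbent = Firm("TOP", performance=100.0, improvement=1.0)
entrant = Firm("BOP", performance=40.0, improvement=3.0)   # steeper trajectory

for t in range(STEPS):
    incumbent.step()
    entrant.step()
    if entrant.performance >= TOP_DEMAND:
        print(f"step {t}: BOP innovation reaches TOP-market performance -> potential disruption")
        break
```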
Abstract:
Operating in business-to-business markets requires an in-depth understanding of business networks. Actions and reactions made to compete in markets are fundamentally based on managers' subjective perceptions of the network. However, amalgamating these individual perceptions, termed network pictures, into a shared company-level understanding of that network, known as network insight, is found to be a substantial challenge for companies. A company's capability to enhance common network insight is even argued to lead to competitive advantage. Companies whose value-creating logic requires broad comprehension of, and collaboration in, networks, such as solution business, especially need to develop advanced network insight. According to the extant literature, dispersed pieces of atomized network pictures can be unified into a common network insight through a process of amalgamation that comprises barriers/drivers of multilateral exchange, manifold rationality, and recursive time. However, the extant body of literature appears to lack an understanding of the role of internal communication in the development of network insight. Nonetheless, the extant understanding of the amalgamation process indicates that internal communication plays a substantial role in the development of company-level network insight. The purpose of the present thesis is to enhance understanding of internal communication in the amalgamation of network pictures into network insight in the solution business setting, which was chosen to represent a business-to-business value-creating logic that emphasizes the capability to understand and utilize networks. Thus, in solution business the importance of succeeding in the amalgamation process is expected to be emphasized. The study combines qualitative and quantitative research by means of various analytical methods, including multiple case analysis, simulation, and social network analysis. Approaching this nascent research topic from differing perspectives and with differing means provides a broader insight into the phenomenon. The study provides empirical evidence from Finnish business-to-business companies which operate globally. The empirical data comprise interviews (n=28) with managers of three case companies. In addition, the data include a questionnaire (n=23) collected mainly for the purpose of social network analysis. The thesis also includes a simulation study carried out by means of agent-based modeling. The findings of the thesis shed light on the role of internal communication in the amalgamation process, contributing to the emergent discussion of network insight and thus to industrial marketing research. In addition, the thesis increases understanding of internal communication in the change process towards solution business, a supplier's internal communication in its matrix organization structure during a project sales process, key barriers and drivers that influence internal communication in project sales networks, perceived power within industrial project sales, and the revisioning of network pictures. According to the findings, internal communication plays a substantial role in the amalgamation process. First, it is suggested that internal communication is a base of multilateral exchange. Second, it is suggested that internal communication intensifies and maintains manifold rationality. Third, internal communication is needed to explicate the usually differing time perspectives of others, and thus it is suggested that internal communication has a role as the explicator of recursive time. Furthermore, the role of an efficient amalgamation process is found to be emphasized in solution business, as it requires a more advanced network insight for cross-functional collaboration. Finally, the thesis offers several managerial implications for industrial suppliers to enhance the amalgamation process when operating in solution business.
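As a rough illustration of the social network analysis component mentioned above, the following sketch computes standard centrality measures on a toy internal-communication network; the node names and ties are invented and do not come from the case companies' questionnaire data (n=23).

```python
import networkx as nx

# Toy internal-communication network of a supplier's matrix organization.
# Nodes and edges are invented for illustration only.
G = nx.Graph()
G.add_edges_from([
    ("sales_mgr", "project_mgr"), ("project_mgr", "engineer_a"),
    ("project_mgr", "engineer_b"), ("sales_mgr", "key_account_mgr"),
    ("key_account_mgr", "service_mgr"), ("engineer_a", "engineer_b"),
])

# Centrality measures of the kind used in social network analysis to spot
# actors who broker the exchange of network pictures.
print(nx.degree_centrality(G))
print(nx.betweenness_centrality(G))
```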
Abstract:
This study examines the structure of the Russian Reflexive Marker (-ся/-сь) and offers a usage-based model building on Construction Grammar and a probabilistic view of linguistic structure. Traditionally, reflexive verbs are accounted for relative to non-reflexive verbs. These accounts assume that linguistic structures emerge as pairs. Furthermore, these accounts assume a directionality whereby the semantics and structure of a reflexive verb can be derived from the non-reflexive verb. However, this directionality does not necessarily hold diachronically. Additionally, the semantics and the patterns associated with a particular reflexive verb are not always shared with the non-reflexive verb. Thus, a model is proposed that can accommodate the traditional pairs as well as the possible deviations without postulating different systems. A random sample of 2000 instances marked with the Reflexive Marker was extracted from the Russian National Corpus, and the sample used in this study contains 819 unique reflexive verbs. This study moves away from the traditional pair account and introduces the concept of the Neighbor Verb. A neighbor verb exists for a reflexive verb if the two share the same phonological form excluding the Reflexive Marker. It is claimed here that the Reflexive Marker constitutes a system in Russian and that the relation between reflexive and neighbor verbs constitutes a cross-paradigmatic relation. Furthermore, the relation between the reflexive and the neighbor verb is argued to be one of symbolic connectivity rather than directionality. Effectively, the relation holding between particular instantiations can vary. The theoretical basis of the present study builds on this assumption. Several new variables are examined in order to systematically model the variability of this symbolic connectivity, specifically the degree and strength of connectivity between items. In usage-based models, the lexicon does not constitute an unstructured list of items. Instead, items are assumed to be interconnected in a network. This interconnectedness is defined as Neighborhood in this study. Additionally, each verb carves its own niche within the Neighborhood, and this interconnectedness is modeled through rhyme verbs, which constitute the degree of connectivity of a particular verb in the lexicon. The second component of the degree of connectivity concerns the status of a particular verb relative to its rhyme verbs. The connectivity within the neighborhood of a particular verb varies, and this variability is quantified using the Levenshtein distance. The second property of the lexical network is the strength of connectivity between items. Frequency of use has been one of the primary variables used in functional linguistics to probe this. In addition, a new variable called Constructional Entropy is introduced in this study, building on information theory. It is a quantification of the amount of information carried by a particular reflexive verb in one or more argument constructions. The results concerning lexical connectivity indicate that the reflexive verbs have statistically greater neighborhood distances than the neighbor verbs. This distributional property can be used to motivate the traditional observation that reflexive verbs tend to have idiosyncratic properties. A set of argument constructions, generalizations over usage patterns, is proposed for the reflexive verbs in this study. In addition to the variables associated with lexical connectivity, a number of variables proposed in the literature are explored and used as predictors in the model. The second part of this study introduces the use of a machine learning algorithm called Random Forests. The performance of the model indicates that it is capable, up to a degree, of disambiguating the proposed argument construction types of the Russian Reflexive Marker. Additionally, a global ranking of the predictors used in the model is offered. Finally, most construction grammars assume that argument constructions form a network structure. A new method is proposed that establishes a generalization over the argument constructions, referred to as the Linking Construction. In sum, this study explores the structural properties of the Russian Reflexive Marker and sets forth a new model that can accommodate both the traditional pairs and potential deviations from them in a principled manner.
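As a rough illustration of Constructional Entropy as defined above, the amount of information carried by a reflexive verb across its argument constructions can be computed as the Shannon entropy of the verb's distribution over those constructions. The construction labels and counts in this sketch are invented for illustration; they are not drawn from the 819-verb sample.

```python
import math
from collections import Counter

def constructional_entropy(construction_counts):
    """Shannon entropy (bits) of a verb's distribution over argument constructions."""
    total = sum(construction_counts.values())
    probs = [c / total for c in construction_counts.values()]
    return -sum(p * math.log2(p) for p in probs if p > 0)

# e.g. a verb observed 60 times in construction A, 30 in B, 10 in C (toy counts)
counts = Counter({"construction_A": 60, "construction_B": 30, "construction_C": 10})
print(f"constructional entropy: {constructional_entropy(counts):.3f} bits")
```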
Abstract:
In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, the Network-on-Chip (NoC) has proved to be an efficient communication architecture which can further improve system performance and scalability while reducing design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with special focus on the following aspects. As the architectural trend for future computing platforms, 3D systems have many benefits, including higher integration density, smaller footprint, heterogeneous integration, etc. Moreover, 3D technology can significantly improve network communication and effectively avoid long wires, and therefore provide higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach in which agents are on-chip components that monitor and control system parameters such as supply voltage, operating frequency, etc. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes. A Honeycomb NoC architecture is proposed in this thesis with turn-model-based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance and energy efficiency.
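A deliberately simplified sketch of the agent-based control idea described above follows: a per-node agent monitors utilization and selects a voltage/frequency level (DVFS) or power-gates the node to cut leakage. The levels, thresholds and granularity are illustrative assumptions, not the implementation analysed in the thesis.

```python
# Minimal sketch of agent-based DVFS/power-gating control for a NoC node.
# Voltage/frequency levels and thresholds are hypothetical.
VF_LEVELS = [(0.8, 200), (1.0, 500), (1.2, 800)]   # (volts, MHz)

class NodeAgent:
    def __init__(self):
        self.level = 1          # index into VF_LEVELS
        self.gated = False

    def control(self, utilization):
        if utilization < 0.05:              # idle -> power gate to cut leakage energy
            self.gated = True
        elif utilization < 0.4:             # light load -> lowest V/f level
            self.gated, self.level = False, 0
        elif utilization < 0.8:             # medium load -> middle level
            self.gated, self.level = False, 1
        else:                               # heavy load -> highest level
            self.gated, self.level = False, 2
        return "gated" if self.gated else VF_LEVELS[self.level]

agent = NodeAgent()
for u in (0.02, 0.3, 0.9, 0.6):
    print(f"utilization {u}: {agent.control(u)}")
```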
Abstract:
The primary goal of the work was to answer the question of whether the cash flows of project business can be forecast over a 3-15 month horizon and, if so, how and with what accuracy. The study was carried out as a theoretical study of the subject area, and on its basis a model was created for forecasting the target company's cash flows over a 3-15 month horizon. Five years of data on the target company's cash flows, budget and realized business figures were available for building the model. The theoretical part of the work examined project business, budgeting, cash flows and cash flow forecasting on the basis of the literature. A model for forecasting cash flows based on historical data was then built for the target company on the basis of the theory. When building the model, the most significant cash flow components were first identified, after which forecasting methods were devised for them. At the same time, the accuracy with which the cash flows of project-based business can be forecast was assessed. The result of the study was a forecasting model for the target company based on historical data. Tests performed with the model showed that the cash flows of project-based business can be forecast with fairly good accuracy, although forecasting is not as reliable as it would be for a business whose cash flows develop more steadily. When using a model based on historical data, it must also be remembered that nothing guarantees that history will repeat itself in the future.
Abstract:
This thesis studies the development of a service offering model that creates added value for customers in the field of logistics services. The study focuses on offering classification and the structure of the model. The purpose of the model is to provide value-added solutions for customers and enable a superior service experience. The aim of the thesis is to define what customers expect from a logistics solution provider and what value customers appreciate so greatly that they would invest in value-added services. Value propositions, the cost structures of offerings and appropriate pricing methods are studied. First, a literature review on creating a solution business model and on customer value is conducted. Customer value is investigated through customer interviews, and qualitative empirical data are used. To exploit expert knowledge of logistics, an innovation workshop tool is utilized. Customers and experts are involved in the design process of the model. As a result of the thesis, a three-level value-added service offering model is created on the basis of empirical and theoretical data. Offerings with value propositions are proposed, and the level of the model reflects the depth of the customer-provider relationship and the amount of added value. Performance efficiency improvements and cost savings create the most added value for customers. Value-based pricing methods, such as performance-based models, are suggested for application. The results indicate interest in benefiting from networks and partnerships in the field of logistics services. Network development is proposed as a topic for further investigation.
Abstract:
In recent decades, business intelligence (BI) has gained momentum in real-world practice. At the same time, business intelligence has evolved into an important research subject of Information Systems (IS) within the decision support domain. Today's growing competitive pressure in business has led to increased needs for real-time analytics, i.e., so-called real-time BI or operational BI. This is especially true of the electricity production, transmission, distribution and retail business, since the laws of physics dictate that electricity as a commodity is nearly impossible to store economically, and therefore demand and supply need to be constantly in balance. The current power sector is subject to complex changes, innovation opportunities, and technical and regulatory constraints. These range from the low-carbon transition, the development of renewable energy sources (RES) and market design to new technologies (e.g., smart metering, smart grids, electric vehicles, etc.) and new independent power producers (e.g., commercial buildings or households with rooftop solar panel installations, a.k.a. Distributed Generation). Among them, the ongoing deployment of Advanced Metering Infrastructure (AMI) has profound impacts on the electricity retail market. From the viewpoint of BI research, the AMI is enabling real-time or near real-time analytics in the electricity retail business. Following the Design Science Research (DSR) paradigm in the IS field, this research presents four aspects of BI for efficient pricing in a competitive electricity retail market: (i) visual data-mining-based descriptive analytics, namely electricity consumption profiling, for pricing decision-making support; (ii) a real-time BI enterprise architecture for enhancing management's capacity for real-time decision-making; (iii) prescriptive analytics through agent-based modeling for price-responsive demand simulation; and (iv) a visual data-mining application for electricity distribution benchmarking. Even though this study takes the perspective of the European electricity industry, with a particular focus on Finland and Estonia, the BI approaches investigated can: (i) provide managerial implications to support a utility's pricing decision-making; (ii) add empirical knowledge to the landscape of BI research; and (iii) be transferred to a wide body of practice in the power sector and the BI research community.
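Point (iii) above, prescriptive analytics through agent-based modeling of price-responsive demand, can be illustrated with a minimal sketch in which household agents shed part of a flexible load share when the retail price rises above a reference level. The elasticities, prices and population size are invented assumptions, not the models used in the research.

```python
import random

# Minimal sketch of price-responsive demand simulation with household agents.
# All parameters (loads, flexible shares, prices) are hypothetical.
random.seed(1)

class Household:
    def __init__(self):
        self.base_load = random.uniform(0.5, 2.0)   # kWh per hour
        self.flex_share = random.uniform(0.1, 0.4)  # share of load that is flexible

    def demand(self, price, reference_price=50.0):
        # shed the flexible share proportionally to the relative price increase
        reduction = self.flex_share * max(0.0, (price - reference_price) / reference_price)
        return self.base_load * (1.0 - min(reduction, self.flex_share))

households = [Household() for _ in range(1000)]
for price in (40, 50, 80, 120):   # hypothetical retail prices, EUR/MWh
    total = sum(h.demand(price) for h in households)
    print(f"price {price}: aggregate demand {total:.1f} kWh")
```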
Abstract:
Human activity recognition in everyday environments is a critical but challenging task in Ambient Intelligence applications for achieving proper Ambient Assisted Living, and key challenges still remain to be dealt with in order to realize robust methods. One of the major limitations of today's Ambient Intelligence systems is the lack of semantic models of the activities in the environment, which the system needs in order to recognize the specific activity being performed by the user(s) and act accordingly. In this context, this thesis addresses the general problem of knowledge representation in Smart Spaces. The main objective is to develop knowledge-based models, equipped with semantics, to learn, infer and monitor human behaviours in Smart Spaces. Moreover, some aspects of this problem involve a high degree of uncertainty, and therefore the developed models must be equipped with mechanisms to manage this type of information. A fuzzy ontology and a semantic hybrid system are presented to allow the modelling and recognition of a set of complex real-life scenarios where vagueness and uncertainty are inherent to the human nature of the users who perform them. The handling of uncertain, incomplete and vague data (i.e., missing sensor readings and activity execution variations, since human behaviour is non-deterministic) is approached for the first time through a fuzzy ontology validated in real-time settings within a hybrid data-driven and knowledge-based architecture. The semantics of activities, sub-activities and real-time object interaction are taken into consideration. The proposed framework consists of two main modules: the low-level sub-activity recognizer and the high-level activity recognizer. The first module detects sub-activities (i.e., actions or basic activities), taking input data directly from a depth sensor (Kinect). The main contribution of this thesis tackles the second component of the hybrid system, which lies on top of the first at a higher level of abstraction, acquires its input from the first module's output, and executes ontological inference to provide users, activities and their influence in the environment with semantics. This component is thus knowledge-based, and a fuzzy ontology was designed to model the high-level activities. Since activity recognition requires context-awareness and the ability to discriminate among activities in different environments, the semantic framework allows common-sense knowledge to be modelled in the form of a rule-based system that supports expressions close to natural language in the form of fuzzy linguistic labels. The advantages of the framework have been evaluated on a challenging new public dataset, CAD-120, achieving an accuracy of 90.1% and 91.1% for low- and high-level activities respectively. This entails an improvement over both entirely data-driven approaches and merely ontology-based approaches. As an added value, so that the system is sufficiently simple and flexible to be managed by non-expert users, and thus facilitates the transfer of research to industry, a development framework composed of a programming toolbox, a hybrid crisp and fuzzy architecture, and graphical models to represent and configure human behaviour in Smart Spaces was developed in order to give the framework more usability in the final application. As a result, human behaviour recognition can help assist people with special needs, for example in healthcare, independent elderly living, remote rehabilitation monitoring, industrial process guideline control, and many other cases. This thesis shows use cases in these areas.
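As a rough illustration of the fuzzy linguistic labels used by the rule-based component described above, the following sketch fuzzifies a sensed quantity (here, the duration of an object interaction) against triangular membership functions. The labels and breakpoints are assumptions for illustration and are not taken from the thesis's fuzzy ontology.

```python
# Minimal sketch of fuzzy linguistic labels over a sensed quantity (seconds).
# Labels and breakpoints are illustrative assumptions.

def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

LABELS = {
    "short":  (0, 2, 10),
    "medium": (5, 15, 25),
    "long":   (20, 40, 60),
}

def fuzzify(duration):
    return {label: round(triangular(duration, *abc), 2) for label, abc in LABELS.items()}

print(fuzzify(3))    # clearly "short"
print(fuzzify(22))   # partly "medium", partly "long"
```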
Abstract:
The shift towards a knowledge-based economy has inevitably prompted the evolution of patent exploitation. Nowadays, a patent is more than just a tool for a company to block its competitors from developing rival technologies; it lies at the very heart of the company's strategy for value creation and is therefore strategically exploited for economic profit and competitive advantage. Along with the evolution of patent exploitation, the demand for reliable and systematic patent valuation has also reached an unprecedented level. However, most of the quantitative approaches in use for assessing patents arguably fall into four categories and are based solely on conventional discounted cash flow analysis, whose usability and reliability in the context of patent valuation are greatly limited by five practical issues: market illiquidity, poor data availability, discriminatory cash-flow estimations, and its incapability to account for changing risk and for managerial flexibility. This dissertation attempts to overcome these impeding barriers by rationalizing the use of two techniques, namely fuzzy set theory (aimed at the first three issues) and real option analysis (aimed at the last two). It commences with an investigation into the nature of the uncertainties inherent in patent cash flow estimation and claims that two levels of uncertainty must be properly accounted for. Further investigation reveals that both levels of uncertainty fall under the categorization of subjective uncertainty, which differs from objective uncertainty originating from inherent randomness in that uncertainties labelled as subjective are highly related to the behavioural aspects of decision making and are usually witnessed whenever human judgement, evaluation or reasoning is crucial to the system under consideration and there is a lack of complete knowledge of its variables. Having clarified their nature, the application of fuzzy set theory to modelling patent-related uncertain quantities is effortlessly justified. The application of real option analysis to patent valuation is prompted by the fact that both the patent application process and the subsequent patent exploitation (or commercialization) are subject to a wide range of decisions at multiple successive stages. In other words, both patent applicants and patentees are faced with a large variety of courses of action as to how their patent applications and granted patents can be managed. Since they have the right to run their projects actively, this flexibility has value and thus must be properly accounted for. Accordingly, this dissertation provides an explicit identification of the types of managerial flexibility inherent in patent-related decision-making problems and in patent valuation, and a discussion of how they can be interpreted in terms of real options. Additionally, the use of the proposed techniques in practical applications is demonstrated by three models based on fuzzy real option analysis. In particular, the pay-off method and the extended fuzzy Black-Scholes model are employed to investigate the profitability of a patent application project for a new process for the preparation of a gypsum-fibre composite and to justify the subsequent patent commercialization decision, respectively; a fuzzy binomial model is designed to reveal the economic potential of a patent licensing opportunity.
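A numerical sketch of the fuzzy pay-off method mentioned above can make the idea concrete: the project NPV is represented as a triangular fuzzy number built from pessimistic, best-guess and optimistic scenarios, and the real option value weights the membership-weighted mean of the positive side of that distribution by the share of the distribution that is positive. The scenario figures are invented, and the numerical integration below is an approximation rather than the closed-form treatment used in the dissertation.

```python
import numpy as np

# Numerical sketch of a fuzzy pay-off style real option value. Figures are
# invented; the integration is a grid approximation, not the thesis formulas.

def triangular_membership(x, a, b, c):
    return np.where(x <= b,
                    np.clip((x - a) / (b - a), 0, 1),
                    np.clip((c - x) / (c - b), 0, 1))

def payoff_method_rov(a, b, c, n=100_000):
    x = np.linspace(a, c, n)
    mu = triangular_membership(x, a, b, c)
    total_area = np.trapz(mu, x)
    pos = x > 0
    positive_area = np.trapz(mu[pos], x[pos])
    if positive_area == 0:
        return 0.0
    # membership-weighted mean of the positive side, weighted by the area ratio
    mean_positive = np.trapz(mu[pos] * x[pos], x[pos]) / positive_area
    return (positive_area / total_area) * mean_positive

# pessimistic, best-guess and optimistic NPVs (kEUR, hypothetical) of a patent project
print(f"real option value ~ {payoff_method_rov(-200, 150, 600):.0f} kEUR")
```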
Abstract:
The last two decades have provided a vast opportunity to live in and explore compelling imaginary, or virtual, worlds through massively multiplayer online role-playing games (MMORPGs). MMORPGs give their users a wide range of opportunities to participate with other players on the same platform, to communicate and to perform real-time actions. These games contain a virtual economy which is largely player-driven. In-game currency allows users to build up their avatars and to buy or sell the goods necessary to play and survive in the games. Focusing on the virtual economy generated through EVE Online, this thesis mainly examines how the prices of minerals in EVE Online behave by applying the Jabłonska-Capasso-Morale (JCM) mathematical simulation model, in order to verify to what degree the model can reproduce the behaviour of the virtual economy. The model is applied to the buy and sell prices of two minerals, namely isogen and morphite. The simulation results demonstrate that the JCM model fits the mineral prices reasonably well, which lets us conclude that virtual economies behave similarly to real ones.
Abstract:
The aim of this research is to develop a tool that allows coopetitive relationships between organizations to be organized on the basis of a two-sided Internet platform. The main result of this master's thesis is a detailed description of the concept of lead-generating, Internet-platform-based coopetition. With the tools of agent-based modelling and simulation, results were obtained which suggest that the developed concept can have a positive effect on certain industries (e.g., the web-design studio market) and can potentially bring benefits and extra profitability to most companies operating in such an industry. On the basis of the results it can also be assumed that the developed instrument is able to increase the degree of transparency of the market to which it is applied.
Abstract:
Basic relationships between certain regions of space are formulated in natural language in everyday situations. For example, a customer specifies the outline of his future home to the architect by indicating which rooms should be close to each other. Qualitative spatial reasoning, as an area of artificial intelligence, tries to develop a theory of space based on similar notions. In formal ontology and in ontological computer science, mereotopology is a first-order theory, embodying mereological and topological concepts, of the relations among wholes, parts, parts of parts, and the boundaries between parts. We introduce abstract relation algebras and present their structural properties as well as their connection to algebras of binary relations. This is followed by details of the expressiveness of algebras of relations for region-based models. Mereotopology has been the main basis for most region-based theories of space. Since its earliest inception, many mereotopological theories have been proposed in artificial intelligence, among which the Region Connection Calculus is the most prominent. The expressiveness of the Region Connection Calculus in relational logic is far greater than its original eight base relations might suggest. In the thesis we formulate ways to automatically generate representable relation algebras from spatial data based on the Region Connection Calculus. The generation of new algebras is a two-pronged approach involving the splitting of existing relations to form new algebras and the refinement of such newly generated algebras. We present an implementation of a system for automating the aforementioned steps and provide an effective and convenient interface for defining new spatial relations and generating representable relation algebras.
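The Region Connection Calculus mentioned above distinguishes eight base relations between regions (DC, EC, PO, TPP, NTPP, TPPi, NTPPi, EQ). As a small illustration that is not taken from the thesis, the sketch below classifies which base relation holds between two axis-aligned rectangles.

```python
# Classify the RCC-8 base relation between two axis-aligned rectangles
# given as (xmin, ymin, xmax, ymax). Illustrative example, not thesis code.

def rcc8(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ox1, oy1 = max(ax1, bx1), max(ay1, by1)   # overlap of the closed rectangles
    ox2, oy2 = min(ax2, bx2), min(ay2, by2)
    if ox1 > ox2 or oy1 > oy2:
        return "DC"                           # disconnected
    if ox1 == ox2 or oy1 == oy2:
        return "EC"                           # externally connected (boundaries touch)
    if a == b:
        return "EQ"
    a_in_b = bx1 <= ax1 and ay1 >= by1 and ax2 <= bx2 and ay2 <= by2
    b_in_a = ax1 <= bx1 and by1 >= ay1 and bx2 <= ax2 and by2 <= ay2
    tangential = ax1 == bx1 or ay1 == by1 or ax2 == bx2 or ay2 == by2
    if a_in_b:
        return "TPP" if tangential else "NTPP"    # a is a (tangential) proper part of b
    if b_in_a:
        return "TPPi" if tangential else "NTPPi"  # inverse proper-part relations
    return "PO"                                   # partial overlap

print(rcc8((0, 0, 2, 2), (3, 0, 5, 2)))   # DC
print(rcc8((0, 0, 2, 2), (2, 0, 4, 2)))   # EC
print(rcc8((0, 0, 4, 4), (1, 1, 2, 2)))   # NTPPi
print(rcc8((0, 0, 2, 2), (0, 0, 4, 4)))   # TPP
```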
Abstract:
This lexical decision study with eye tracking of Japanese two-kanji-character words investigated the order in which a whole two-character word and its morphographic constituents are activated in the course of lexical access, the relative contributions of the left and the right characters in lexical decision, the depth to which semantic radicals are processed, and how nonlinguistic factors affect lexical processes. Mixed-effects regression analyses of response times and subgaze durations (i.e., first-pass fixation time spent on each of the two characters) revealed joint contributions of morphographic units at all levels of the linguistic structure with the magnitude and the direction of the lexical effects modulated by readers’ locus of attention in a left-to-right preferred processing path. During the early time frame, character effects were larger in magnitude and more robust than radical and whole-word effects, regardless of the font size and the type of nonwords. Extending previous radical-based and character-based models, we propose a task/decision-sensitive character-driven processing model with a level-skipping assumption: Connections from the feature level bypass the lower radical level and link up directly to the higher character level.
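As a rough illustration of the mixed-effects regression analyses described above, the sketch below fits response times on (log) whole-word and character frequency predictors with participants as a random grouping factor, using statsmodels. The data are simulated for illustration only and are unrelated to the study's eye-tracking measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data standing in for lexical decision response times; the
# predictor names and effect sizes are invented for illustration.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "subject": rng.integers(0, 20, n).astype(str),
    "log_word_freq": rng.normal(3, 1, n),
    "log_char_freq": rng.normal(5, 1, n),
})
df["rt"] = 700 - 20 * df.log_word_freq - 35 * df.log_char_freq + rng.normal(0, 40, n)

# Mixed-effects model: fixed effects for the frequency predictors,
# random intercepts grouped by participant.
model = smf.mixedlm("rt ~ log_word_freq + log_char_freq", data=df,
                    groups=df["subject"]).fit()
print(model.summary())
```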
Abstract:
Statistical machine translation systems are tasked with translating from a source language into a target language. In most reference translation systems, the basic unit considered in the textual analysis is the word form as observed in a text. Such a conception yields good performance when translating between two morphologically poor languages. However, this no longer holds when translating into a morphologically rich (or complex) language. The goal of our work is to develop a statistical machine translation system as a solution to the challenges raised by morphological complexity. In this thesis we first examine a number of methods considered as extensions to traditional translation systems and evaluate their performance. This evaluation is carried out against state-of-the-art (baseline) systems on English-Inuktitut and English-Finnish translation tasks. We then develop a new segmentation algorithm that takes into account information coming from the language pair being translated. This segmentation algorithm is then integrated into the phrase-based translation model ("Phrase-Based Models") to form our segment-sequence-based translation system. Finally, we combine the resulting system with post-processing algorithms to obtain a complete translation system. The results of the experiments carried out in this thesis show that the proposed segment-sequence-based translation system yields significant improvements in translation quality in terms of the BLEU evaluation metric (Papineni et al., 2002). More specifically, our segmentation approach slightly improves translation quality relative to the reference system, and a significant improvement in translation quality is observed relative to the basic preprocessing techniques (baseline).
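Since translation quality above is reported in terms of BLEU (Papineni et al., 2002), the following small sketch shows a sentence-level BLEU computation with NLTK on invented segmented token sequences; the thesis itself reports scores for its own systems and test sets.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Invented Finnish-like segmented token sequences, for illustration only.
reference = [["talo", "##ssa", "on", "kolme", "huone", "##tta"]]
hypothesis = ["talo", "##ssa", "on", "kaksi", "huone", "##tta"]

# Smoothing avoids zero scores for short sentences with missing n-gram orders.
smooth = SmoothingFunction().method1
print(f"sentence BLEU: {sentence_bleu(reference, hypothesis, smoothing_function=smooth):.3f}")
```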
Abstract:
The digital era we have entered brings a significant number of new challenges in a multitude of domains. The automatic processing of the abundant information at our disposal is one of these challenges, and we focus here on methods and techniques suited to filtering and recommending to the user items matching their tastes, in the particular and largely unprecedented context of online multiplayer video games. Our objective is to predict players' appreciation of game levels. Using modern machine learning algorithms such as deep neural networks with unsupervised pre-training, which we describe after an introduction to the concepts necessary for their proper understanding, we propose two architectures with different characteristics, although both are based on this same concept of deep learning. The first is a multi-layer neural network for which we attempt to explain the varying performance we report over experiments carried out for different choices of depth, training heuristics, and plain, denoising and contractive unsupervised pre-training methods. For the second architecture, we take inspiration from energy-based models and likewise offer an explanation of the results obtained, which also vary. Finally, we describe a first successful attempt to improve this second architecture by means of supervised fine-tuning following the pre-training, and then a second attempt in which this fine-tuning is performed with a multi-task semi-supervised training criterion. Our experiments show promising performance, notably with the architecture inspired by energy-based models, justifying at least the use of deep learning algorithms to solve the recommendation problem.
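A condensed sketch of the unsupervised pre-training followed by supervised fine-tuning pipeline discussed above: a denoising autoencoder is first trained to reconstruct corrupted input features, and its encoder is then reused and fine-tuned to predict level appreciation. The dimensions, corruption level and synthetic data are assumptions for illustration; this is not the architecture evaluated in the thesis.

```python
import torch
import torch.nn as nn

# Sketch: denoising-autoencoder pre-training, then supervised fine-tuning.
# Dimensions and synthetic data are illustrative assumptions.
torch.manual_seed(0)
x = torch.rand(512, 30)                       # synthetic level/player features
y = torch.rand(512, 1)                        # synthetic appreciation scores

encoder = nn.Sequential(nn.Linear(30, 16), nn.ReLU())
decoder = nn.Sequential(nn.Linear(16, 30), nn.Sigmoid())

# 1) unsupervised pre-training: reconstruct x from a noise-corrupted copy
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(200):
    noisy = x + 0.1 * torch.randn_like(x)     # denoising corruption
    loss = nn.functional.mse_loss(decoder(encoder(noisy)), x)
    opt.zero_grad(); loss.backward(); opt.step()

# 2) supervised fine-tuning: reuse the encoder, add a regression head
head = nn.Linear(16, 1)
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
for _ in range(200):
    loss = nn.functional.mse_loss(head(encoder(x)), y)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"fine-tuned training MSE: {loss.item():.4f}")
```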