Abstract:
The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects are Networks on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs, which is installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, etc. SoC manufacturers such as ST Microelectronics, Samsung and Philips, as well as universities such as Bologna University, M.I.T., Berkeley and others, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers switch design methodology and speed up the development of new NoC-based systems on chip. In this Thesis we first give an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed simulation-based analysis of the Spidergon NoC, an ST Microelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world. We analyze this NoC topology and its routing algorithms in detail. Furthermore, we propose aEqualized, a new routing algorithm designed to optimize the use of the network's resources while also increasing its performance;
• a methodology flow based on modified publicly available tools that, combined, can be used to design, model and analyze any kind of System on Chip;
• a detailed analysis of an ST Microelectronics proprietary transport-level protocol that the author of this Thesis helped to develop;
• a comprehensive simulation-based comparison of different network interface designs proposed by the author and the researchers at the AST lab, aimed at integrating shared-memory and message-passing based components on a single System on Chip;
• a powerful and flexible solution to address the timing closure exception issue in the design of synchronous Networks on Chip. Our solution is based on relay-station repeaters and reduces the power and area demands of NoC interconnects while also reducing their buffer needs;
• a solution that simplifies the design of NoCs while also increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual channel-based routers with multiple, flexible, small Multi Plane ones. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when resources are scarce.
This Thesis has been written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
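The Spidergon topology at the heart of this analysis connects each of its N nodes (N even) to a clockwise neighbour, a counterclockwise neighbour and the diametrically opposite node. As a purely illustrative sketch of shortest-path routing on this structure (the thesis' algorithms, including aEqualized, may well differ), a first-hop rule can be written as:

    def spidergon_route(src: int, dst: int, n: int) -> str:
        """First hop of a shortest path on an n-node Spidergon (n even):
        'right' (clockwise), 'left' (counterclockwise) or 'across'
        (diametral link). Illustrative rule only, not the thesis' own."""
        delta = (dst - src) % n          # clockwise distance to destination
        if delta == 0:
            return "deliver"             # packet already at destination
        if delta <= n // 4:
            return "right"               # short clockwise walk
        if delta >= n - n // 4:
            return "left"                # short counterclockwise walk
        return "across"                  # cross the chip first, then walk

    # On a 16-node Spidergon, node 0 reaches node 7 by crossing to
    # node 8 and walking one hop back: 2 hops instead of 7.
    print(spidergon_route(0, 7, 16))     # -> 'across'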
Abstract:
In this thesis we study three combinatorial optimization problems belonging to the classes of Network Design and Vehicle Routing problems that are strongly linked in the context of the design and management of transportation networks: the Non-Bifurcated Capacitated Network Design Problem (NBP), the Period Vehicle Routing Problem (PVRP) and the Pickup and Delivery Problem with Time Windows (PDPTW). These problems are NP-hard and contain as special cases some well-known difficult problems, such as the Traveling Salesman Problem and the Steiner Tree Problem. Moreover, they model the core structure of many practical problems arising in logistics and telecommunications. The NBP is the problem of designing the optimal network to satisfy a given set of traffic demands. Given a set of nodes, a set of potential links and a set of point-to-point demands called commodities, the objective is to select the links to install and to dimension their capacities so that all the demands can be routed between their respective endpoints and the sum of link fixed costs and commodity routing costs is minimized. The problem is called non-bifurcated because the solution network must route each demand along a single path, i.e., the flow of each demand cannot be split. Although this is the case in many real applications, the NBP has received significantly less attention in the literature than other capacitated network design problems that allow bifurcation. We describe an exact algorithm for the NBP, based on solving with an integer programming solver a formulation of the problem strengthened by simple valid inequalities, together with four new heuristic algorithms. One of these heuristics is an adaptive memory metaheuristic, based on partial enumeration, that could be applied to a wider class of structured combinatorial optimization problems. In the PVRP a fleet of vehicles of identical capacity must be used to service a set of customers over a planning period of several days. Each customer specifies a service frequency, a set of allowable day combinations and a quantity of product that must be delivered at every visit. For example, a customer may require to be visited twice during a 5-day period, imposing that these visits take place on Monday-Thursday, Monday-Friday or Tuesday-Friday. The problem consists in simultaneously assigning a day combination to each customer and designing the vehicle routes for each day so that each customer is visited the required number of times, the number of routes on each day does not exceed the number of vehicles available, and the total cost of the routes over the period is minimized. We also consider a tactical variant of this problem, called the Tactical Planning Vehicle Routing Problem, where customers require to be visited on a specific day of the period but a penalty cost, called service cost, can be paid to postpone the visit to a later day. To the best of our knowledge, all the algorithms proposed in the literature for the PVRP are heuristics. In this thesis we present the first exact algorithm for the PVRP, based on different relaxations of a set partitioning-like formulation. The effectiveness of the proposed algorithm is tested on a set of instances from the literature and on a new set of instances. Finally, the PDPTW consists in servicing a set of transportation requests using a fleet of identical vehicles of limited capacity located at a central depot. Each request specifies a pickup location and a delivery location and requires that a given quantity of load be transported from the pickup location to the delivery location. Moreover, each location can be visited only within an associated time window. Each vehicle can perform at most one route, and the problem is to satisfy all the requests using the available vehicles so that each request is serviced by a single vehicle, the load on each vehicle never exceeds its capacity, and all locations are visited within their time windows. We formulate the PDPTW as a set partitioning-like problem with additional cuts and propose an exact algorithm based on different relaxations of the mathematical formulation and a branch-and-cut-and-price algorithm. The new algorithm is tested on two classes of problems from the literature and compared with a recent branch-and-cut-and-price algorithm from the literature.
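To make the non-bifurcated requirement concrete, one textbook arc formulation of the NBP can be sketched as follows (our notation, not necessarily the formulation strengthened in the thesis). With binary design variables y_ij (install arc (i,j), fixed cost f_ij, capacity u_ij) and binary routing variables x_ij^k (route the whole demand d^k of commodity k, from s^k to t^k, through arc (i,j) at cost c_ij^k):

    \min \sum_{(i,j)} f_{ij}\, y_{ij} + \sum_{k} \sum_{(i,j)} c_{ij}^{k}\, x_{ij}^{k}

    \text{s.t.} \quad \sum_{j} x_{ij}^{k} - \sum_{j} x_{ji}^{k} =
        \begin{cases} 1 & i = s^{k} \\ -1 & i = t^{k} \\ 0 & \text{otherwise} \end{cases}
        \quad \forall i,\, k,

    \sum_{k} d^{k}\, x_{ij}^{k} \le u_{ij}\, y_{ij} \quad \forall (i,j),
    \qquad x_{ij}^{k},\, y_{ij} \in \{0, 1\}.

Keeping the x variables binary is exactly what forbids bifurcation: each commodity selects a single path, whereas the splittable variants relax x to [0, 1].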
Abstract:
The distortion in the perceived distance between two punctate stimuli applied to the skin surface of different body regions is known as Weber's Illusion. This illusion has been observed and verified in many experiments in which subjects were asked to judge the distance between two stimuli applied to the skin surface of different body parts. These experiments show that the same distance between stimuli is judged differently across body regions. While it is widely accepted that distance on the skin is often perceived in a distorted way, the neural mechanisms driving this illusion are still largely unknown. In particular, it is not yet clear how the distance between two simultaneous punctate stimuli is interpreted, nor which brain areas are involved in this processing. Weber's Illusion can be partly explained by the difference in mechanoreceptor density across body regions and by the distorted image of our body residing in the primary somatosensory cortex (the homunculus). However, these mechanisms appear insufficient to explain the observed phenomenon: according to the results of 100 years of experiments, the actual distortions in distance judgments are much smaller than the distortions that the primary cortex would suggest. In other words, the illusion observed in tactile experiments is much smaller than the effect that would be produced by the different receptor densities of the various body parts, or by their cortical extension. This has led to the hypothesis that tactile distance perception requires an additional brain area, and additional mechanisms, that operate to rescale, at least partially, the information coming from the primary cortex, so as to maintain a certain constancy in the perception of tactile distance across the body surface. The presence of a sort of rescaling mechanism, called the "Rescaling Process", has thus been proposed, which operates to reduce this illusion towards a more veridical perception. The occurrence of this process is supported by many researchers in neuroscience; in particular by Dr. Matthew Longo, neuroscientist at the Department of Psychological Sciences (Birkbeck, University of London), whose research on tactile distance perception and body representation seems to confirm this hypothesis. However, the neural mechanisms and circuits underlying this putative "Rescaling Process" are still largely unknown. The aim of this thesis was to clarify the possible network organization and the neural mechanisms that give rise to Weber's Illusion and the "Rescaling Process", using a neural network model. Most of the work was carried out at the Department of Psychological Sciences of Birkbeck, University of London, under the supervision of Dr. M. Longo, who contributed mainly to the interpretation of the model results, giving suggestions on how to process them so as to obtain clearer information; he also provided useful guidance for the validation of the results through statistical tests.
To replicate Weber's Illusion and the "Rescaling Process", the neural network was organized with two main layers of neurons corresponding to two different cortical functional areas:
• A first layer of neurons (performing a first processing of the external stimuli): this layer can be thought of as part of the primary somatosensory cortex affected by cortical magnification (the homunculus).
• A second layer of neurons (further processing of the information coming from the first layer): this layer may represent a higher cortical area involved in the implementation of the "Rescaling Process".
The neural networks were built including synaptic connections within each layer (lateral synapses) and synaptic connections between the two layers (feed-forward synapses), further assuming that the activity of each neuron depends on its input through a static sigmoidal relationship as well as through first-order dynamics. In particular, using the structure just described, two different neural networks were implemented for two different body regions (for example, hand and arm), characterized by different tactile resolution and different cortical magnification, so as to replicate Weber's Illusion and the "Rescaling Process". These models can help understand the mechanism of Weber's Illusion and thus offer a possible explanation of the "Rescaling Process". Moreover, the implemented neural networks provide a valuable contribution to understanding the strategy adopted by the brain in interpreting distance on the skin surface. Beyond this explanatory purpose, such models could also be used to formulate predictions that could later be verified in vivo, on real subjects, through tactile perception experiments. It is important to stress that the implemented models should be regarded purely as functional models and are not intended to replicate physiological and anatomical details. The main results obtained with these models are the reproduction of Weber's Illusion for two different body regions, hand and arm, as reported in the many articles on tactile illusions (for example, "The perception of distance and location for dual tactile pressures" by Barry G. Green). Weber's Illusion was recorded through the output of the neural networks and then plotted, in an attempt to explain the reasons for these results.
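For concreteness, a minimal sketch of this class of model (two layers, excitatory feed-forward synapses, Mexican-hat lateral synapses, a static sigmoid plus first-order dynamics) might look as follows in Python; all sizes and parameter values are illustrative assumptions, not those of the thesis.

    import numpy as np

    N = 100                      # neurons per layer (illustrative)
    dt, tau = 0.1, 5.0           # integration step and time constant

    def sigmoid(u, gain=1.0, theta=5.0):
        """Static sigmoidal input-output relationship."""
        return 1.0 / (1.0 + np.exp(-gain * (u - theta)))

    def mexican_hat(n, a_ex=2.0, s_ex=2.0, a_in=1.0, s_in=6.0):
        """Lateral synapses: near excitation minus broader inhibition."""
        d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
        return (a_ex * np.exp(-d**2 / (2 * s_ex**2))
                - a_in * np.exp(-d**2 / (2 * s_in**2)))

    L1 = mexican_hat(N)          # lateral synapses within layer 1
    L2 = mexican_hat(N)          # lateral synapses within layer 2
    W = np.eye(N) * 3.0          # feed-forward synapses, layer 1 -> layer 2

    def run(stimulus, steps=300):
        """Integrate first-order dynamics tau*dz/dt = -z + sigmoid(input)."""
        z1 = np.zeros(N)
        z2 = np.zeros(N)
        for _ in range(steps):
            u1 = stimulus + L1 @ z1
            u2 = W @ z1 + L2 @ z2
            z1 += dt / tau * (-z1 + sigmoid(u1))
            z2 += dt / tau * (-z2 + sigmoid(u2))
        return z1, z2

    # Two punctate stimuli a few units apart on the 'skin'; perceived
    # distance can then be read out from the separation of the activity
    # peaks in each layer.
    stim = np.zeros(N)
    stim[[45, 55]] = 10.0
    a1, a2 = run(stim)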
Abstract:
Our interaction with the surrounding environment depends both on the different types of external stimuli we perceive (tactile, visual, acoustic, etc.) and on their processing by our nervous system. Sometimes, however, the integration and processing of these inputs can produce illusory effects. This happens, for example, in tactile perception: the perception of tactile distances varies with the body region considered. The fact that distances on the skin are frequently misperceived was discovered about a century ago by Weber. In particular, a given physical distance is perceived as larger on body parts with a higher density of mechanoreceptors than on body parts with a lower density. Besides this illusion, an important phenomenon observed in vivo is that the perception of tactile distance depends on the orientation of the stimuli applied to the skin: the distance perceived on a skin region varies with the orientation of the applied stimuli. Recently, Longo and Haggard (Longo & Haggard, J. Exp. Psychol. Hum. Percept. Perform. 37: 720-726, 2011), in order to investigate how our body is represented in our brain, compared tactile distances at different orientations on the hand, concluding that the distance between two punctate stimuli is perceived as larger when applied across the hand than along it. This illusion is known as the orientation-dependent tactile illusion, and several results reported in the literature show that it depends on the distance between the two punctate stimuli on the skin. Indeed, Green reports (Green, Percept. Psychophys. 31, 315-323, 1982) that the larger the applied distance, the larger the resulting illusory effect. Weber's Illusion and the orientation-dependent tactile illusion are explained in the literature by differences in receptor density, by cortical magnification effects in the primary somatosensory cortex (regions of somatosensory cortex of different sizes are devoted to different body regions), and by differences in the size and shape of receptive fields. However, these illusory effects turn out to be much less pronounced than one would expect simply from the physiological mechanisms, listed above, that cause them. This suggests that the tactile information processed in the primary somatosensory cortex undergoes further processing steps in higher-level cortical areas, which act to reduce the gap between the distance perceived across and the distance perceived along the body part, making them more similar to each other. This process is known as the "Rescaling Process". The neural mechanisms operating in the brain to implement the Rescaling Process remain largely unknown. The aim of my thesis project was therefore to build a neural network model that simulates the relevant aspects of tactile perception, the orientation-dependent illusion and the rescaling process, advancing possible hypotheses about the neural mechanisms underlying them. The computational model consists of two different neural layers that process tactile information.
One of them represents a lower-level cortical area (called Area1) in which a first, distorted tactile representation is produced. This layer could thus represent an area of the primary somatosensory cortex, where the representation of tactile distance is significantly distorted because of receptive field anisotropy and cortical magnification. The second layer (called Area2) represents a higher-level area that receives tactile information from the first and reduces its distortion through the Rescaling Process. This layer could represent higher cortical areas (for example parietal or temporal cortex) that are also devoted to the perception of tactile distances and are involved in the Rescaling Process. In the model, neurons in Area1 receive information from the external stimuli (applied to the skin) and send information to neurons in Area2 via excitatory feed-forward synapses. Neurons belonging to the same layer communicate with each other through lateral synapses with a Mexican-hat shape. It is important to note that the implemented neural network is mainly a conceptual model that does not aim to provide an accurate reproduction of physiological and anatomical structures. It should therefore be read at an abstract level of implementation, without an exact correspondence between layers in the model and anatomical regions in the brain. Nevertheless, the mechanisms included in the model are biologically plausible, and the neural network can be useful for a better understanding of the multiple mechanisms at work in our brain when processing different tactile inputs. Indeed, the model is able to reproduce several results reported in the articles by Green and by Longo & Haggard.
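The Mexican-hat lateral synapses mentioned above are commonly written as a difference of Gaussians; a generic form (our notation, with separate widths along the two skin axes as one simple way to encode anisotropic receptive fields) is:

    L(d_x, d_y) = A_{ex} \exp\left(-\frac{d_x^2}{2\sigma_{ex,x}^2} - \frac{d_y^2}{2\sigma_{ex,y}^2}\right)
                - A_{in} \exp\left(-\frac{d_x^2}{2\sigma_{in,x}^2} - \frac{d_y^2}{2\sigma_{in,y}^2}\right)

with A_{ex} > A_{in} and \sigma_{in} > \sigma_{ex}, so that nearby neurons excite each other while more distant ones inhibit each other.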
Abstract:
Motivated by the need to understand the underlying forces that trigger network evolution, we develop a multilevel, theoretically grounded and empirically testable model to examine the relationship between changes in the external environment and network change. We define network change as the dissolution or replacement of an interorganizational tie, including also the formation of new ties with new or preexisting partners. Previous research has paid scant attention to the organizational consequences of quantum change enveloping entire industries, favoring instead an emphasis on continuous change. To capture radical change we introduce the concept of environmental jolt. The September 11 terrorist attacks provide us with a natural experiment to test our hypotheses on the antecedents and consequences of network change. Since network change can be explained at multiple levels, we incorporate firm-level variables as moderators. The empirical setting is the global airline industry, which can be regarded as a constantly changing network of alliances. The study reveals that firms react to environmental jolts by forming homophilous ties and transitive triads, in contrast to non-jolt periods. Moreover, we find that, all else being equal, firms that adopt a brokerage posture obtain positive returns. However, in the face of an environmental jolt, brokerage relates negatively to firm performance, and this negative relationship is stronger for larger firms. Our findings suggest that jolts are an important predictor of network change, that they significantly affect operational returns, and that they should therefore be incorporated in studies of network dynamics.
Abstract:
Summary of the PhD thesis of Jan Pollmann: This thesis focuses on global-scale measurements of light, reactive non-methane hydrocarbons (NMHCs) in the volatility range from ethane to toluene, with a special focus on ethane, propane, isobutane, butane, isopentane and pentane. Even though they occur only at the ppt level (pmol mol-1) in the remote troposphere, these species can yield insight into key atmospheric processes. An analytical method was developed and subsequently evaluated to analyze NMHCs from the NOAA ESRL cooperative air sampling network. Potential analytical interferences from other atmospheric trace gases (water vapor and ozone) were carefully examined, and the accuracy and precision of the method were analyzed in detail. More than 90% of the data points were shown to meet the Global Atmosphere Watch (GAW) data quality objective. Trace gas measurements from 28 stations were used to derive the global atmospheric distribution profile of four NMHCs (ethane, propane, isobutane, butane). A close comparison of the derived ethane data with previously published reports showed that the northern hemispheric ethane background mixing ratio has declined by approximately 30% since 1990; no such change was observed for southern hemispheric ethane. The NMHC data and trace gas data supplied by NOAA ESRL were used to estimate local, diurnally averaged hydroxyl radical (OH) mixing ratios by variability analysis. The variability-derived OH was found to be in good agreement with directly measured and modeled OH mixing ratios outside the tropics; tropical OH was on average two times higher than predicted by the model. Variability analysis was also used to assess the effect of chlorine radicals on atmospheric oxidation chemistry; it was found that Cl is probably not of significant relevance on a global scale.
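Variability analysis of this kind typically rests on the empirical variability-lifetime relationship; a sketch in our notation (the thesis' exact formulation may differ) is:

    \sigma_{\ln X} = A\, \tau_X^{-b}, \qquad \tau_X = \frac{1}{k_{\mathrm{OH}+X}\,[\mathrm{OH}]}

where \sigma_{\ln X} is the standard deviation of the logarithm of the mixing ratios of species X at a site, \tau_X its local lifetime, and k_{\mathrm{OH}+X} the rate constant of its reaction with OH. Fitting A and b over several NMHCs of known reactivity turns the observed variabilities into an estimate of [OH]; adding a chlorine loss term k_{\mathrm{Cl}+X}[\mathrm{Cl}] to the lifetime is the analogous way to probe the influence of Cl radicals.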
Abstract:
In this work the numerical coupling of thermal and electric network models with model equations for optoelectronic semiconductor devices is presented. Modified nodal analysis (MNA) is applied to model the electric networks, and thermal effects are modeled by an accompanying thermal network. The semiconductor devices are modeled by the energy-transport model, which accounts for thermal effects and is extended here to a model for optoelectronic semiconductor devices. The temperature of the crystal lattice of the semiconductor devices is modeled by the heat flow equation, whose heat source term is derived from thermodynamical and phenomenological considerations of the energy fluxes. The energy-transport model is coupled directly into the network equations, and the heat flow equation for the lattice temperature is coupled directly into the accompanying thermal network. The coupled thermal-electric network-device model results in a system of partial differential-algebraic equations (PDAE). Numerical examples are presented for the coupling of network equations and one-dimensional semiconductor equations. Hybridized mixed finite elements are applied for the space discretization of the semiconductor equations, and backward difference formulas are applied for the time discretization. Thus, positivity of the charge carrier densities and continuity of the current density are guaranteed even for the coupled model.
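For reference, the charge-oriented MNA equations take the following generic textbook form (our notation; the work above couples the semiconductor currents and energy fluxes into these equations):

    A_C \frac{d}{dt} q_C(A_C^{\top} e) + A_R\, g(A_R^{\top} e) + A_L\, j_L + A_V\, j_V + A_I\, i_s(t) = 0
    \frac{d}{dt} \phi_L(j_L) - A_L^{\top} e = 0
    A_V^{\top} e - v_s(t) = 0

where e collects the node potentials, j_L and j_V the currents through inductors and voltage sources, A_* the element incidence matrices, i_s and v_s the source terms, and q_C, g, \phi_L the constitutive relations of capacitors, resistors and inductors. This is already a differential-algebraic system; replacing lumped device models by the distributed energy-transport equations is what turns it into the PDAE mentioned above.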
Abstract:
The market's challenges bring firms to collaborate with other organizations in order to create joint ventures, alliances and consortia, which are defined as "interorganizational networks" (IONs) (Provan, Fish and Sydow, 2007). Some of these IONs are managed through shared participant governance (Provan and Kenis, 2008): a team composed of entrepreneurs and/or directors of each firm of the ION. This research focuses on this kind of management team and is based on an input-process-output model: some input variables (work group diversity, intra-team friendship network density) have a direct influence on the process (team identification, shared leadership, interorganizational trust, team trust and intra-team communication network density), which in turn influences the team outputs, namely individual innovation behaviors and team effectiveness (team performance, work group satisfaction and ION affective commitment). Data was collected on a sample of 101 entrepreneurs grouped in 28 ION governance teams, and the research hypotheses were tested through path analysis and multilevel models. As expected, trust in the team and shared leadership are positively and directly related to team effectiveness, while team identification and interorganizational trust are indirectly related to the team outputs. The friendship network density among team members has positive effects on trust in the team and on the communication network density and, through the communication network density, it improves the teammates' ION affective commitment. Shared leadership and its effects on team effectiveness are fostered by higher levels of team identification and weakened by higher levels of work group diversity, specifically gender diversity. Finally, communication network density and shared leadership at the individual level are related to the frequency of individual innovative behaviors. The dissertation's results give a broader and more precise indication of how to manage interfirm networks through "shared" forms of governance.
Abstract:
During my PhD I developed an innovative technique to reproduce in vitro the 3D thymic microenvironment, to be used for the growth and differentiation of thymocytes and, possibly, for transplantation replacement in conditions of depressed thymic immune regulation. The work was carried out in the Tissue Engineering laboratory at the University Hospital in Basel, Switzerland, under the tutorship of Prof. Ivan Martin. Since a number of studies have suggested that the 3D structure of the thymic microenvironment might play a key role in regulating the survival and functional competence of thymocytes, I focused my effort on the isolation and purification of the extracellular matrix of the mouse thymus. Specifically, based on the assumption that thymic epithelial cells (TEC) can favour the differentiation of pre-T lymphocytes, I developed a specific decellularization protocol to obtain the intact, DNA-free extracellular matrix of the adult mouse thymus. Two different protocols satisfied the main characteristics of a decellularized matrix, according to qualitative and quantitative assays: the quantity of DNA was less than 10% in absolute value, no positive staining for cells was found, and the 3D structure and composition of the ECM were maintained. In addition, I was able to prove that the decellularized matrices were not cytotoxic for the cells themselves, that they increased the expression of MHC II antigens compared to control cells grown in standard conditions, and that TECs grow and proliferate for up to ten days on top of the decellularized matrix. After a complete characterization of the culture system, these innovative natural scaffolds could be used to improve the standard culture conditions of TEC, to study in vitro the action of different factors on their differentiation genes, and to test the ability of TECs to induce the in vitro maturation of seeded T lymphocytes.
Abstract:
Decomposition-based approaches are recalled from both the primal and the dual point of view. The possibility of building partially disaggregated reduced master problems is investigated; this extends the idea of aggregated versus disaggregated formulations to a gradual choice among alternative levels of aggregation. Partial aggregation is applied to the linear multicommodity minimum cost flow problem. The possibility of having only partially aggregated bundles opens a wide range of alternatives with different trade-offs between the number of iterations and the computation required per iteration. This trade-off is explored on several sets of instances, and the results are compared with those obtained by directly solving the natural node-arc formulation. An iterative solution process for the route assignment problem is then proposed, based on the well-known Frank-Wolfe algorithm; to provide a first feasible solution to the Frank-Wolfe algorithm, a linear multicommodity min-cost flow problem is solved to optimality using the decomposition techniques mentioned above. Solutions of this problem are useful for network orientation and design, especially in relation to public transportation systems such as Personal Rapid Transit. A single-commodity robust network design problem is then addressed, in which an undirected graph with edge costs is given together with a discrete set of balance matrices representing different supply/demand scenarios; the goal is to determine the minimum cost installation of capacities on the edges such that the flow exchange is feasible in every scenario. A set of new instances that are computationally hard for the natural flow formulation is solved by means of a new heuristic algorithm. Finally, an efficient decomposition-based heuristic approach for a large-scale stochastic unit commitment problem is presented; the addressed real-world stochastic problem employs at its core a deterministic unit commitment planning model developed by the California Independent System Operator (ISO).
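As a reminder of how the Frank-Wolfe scheme operates in this setting, here is a minimal generic sketch (ours, not the thesis' implementation): each iteration linearizes the convex objective at the current flows, lets a linear oracle (in route assignment, an all-or-nothing shortest-path loading) return an extreme point, and moves toward it with a diminishing step.

    import numpy as np

    def frank_wolfe(grad, linear_oracle, x0, iters=100):
        """Minimize a smooth convex function over a polytope.

        grad(x)          -> gradient of the objective at x
        linear_oracle(c) -> polytope vertex minimizing c . x (in route
                            assignment: all-or-nothing loading on shortest
                            paths computed with link costs c)
        x0               -> feasible starting flows, e.g. the multicommodity
                            min-cost flow solution mentioned above
        """
        x = x0.astype(float)
        for k in range(1, iters + 1):
            s = linear_oracle(grad(x))   # extreme point of the feasible set
            gamma = 2.0 / (k + 2.0)      # classical diminishing step size
            x += gamma * (s - x)         # convex step toward that vertex
        return x

    # Toy usage: minimize ||x - t||^2 over the probability simplex in R^3;
    # the simplex vertices are the unit vectors.
    t = np.array([0.2, 0.5, 0.3])
    oracle = lambda c: np.eye(3)[np.argmin(c)]
    x_star = frank_wolfe(lambda x: 2 * (x - t), oracle, np.ones(3) / 3)
    print(np.round(x_star, 3))           # -> approximately [0.2 0.5 0.3]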
Abstract:
This study deals with the internationalization behavior of a new and specific type of e-business company, namely the network-managing e-business company (NM-EBC). The business model of such e-business companies is based on providing a platform and applications for users to connect and interact, on gathering and channeling the inputs provided by the users, and on organizing and managing the cross-relationships of the various participants. Examples are online communities, matching platforms and portals. Since NM-EBCs internationalize by replicating their business model in a foreign market and by building up and managing a network of users, who provide input themselves and interact with each other, they have to convince users in foreign markets to join the network and hence to adopt their platform. We draw upon Rogers' Diffusion of Innovations Theory and Network Theory to explain the internationalization behavior of NM-EBCs. These two theories originate from neighboring disciplines and have not yet been used to explain the internationalization of firms. We combine both theories and formulate hypotheses about which strategies NM-EBCs may choose to expand abroad. To test the applicability of our theory and to gain rich data about the internationalization behavior of these firms, we carried out multiple case studies of internationally active Germany-based NM-EBCs.
Abstract:
The instability of river banks can result in considerable human and land losses. The Po is the most important river in Italy, characterized by main embankments of significant and constantly increasing height. This study presents multilayer perceptron artificial neural network (ANN) models for the stability analysis of the river banks along the Po River under various river and groundwater boundary conditions. To this aim, a number of networks of threshold logic units are tested using different combinations of the input parameters. The factor of safety (FS), as an index of slope stability, is formulated in terms of several influential geometrical and geotechnical parameters. In order to obtain a comprehensive geotechnical database, several cone penetration tests from the study site have been interpreted. The proposed models are developed upon stability analyses performed with a finite element code over different representative sections of the river embankments. To verify their validity, the ANN models are employed to predict the FS values of a part of the database beyond the calibration data domain. The results indicate that the proposed ANN models are effective tools for evaluating slope stability, and they notably outperform the derived multiple linear regression models.
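A minimal sketch of this kind of surrogate model is shown below, using scikit-learn's MLPRegressor on synthetic data; the input features and their relationship to FS are placeholders, not the study's actual geometry and CPT-derived parameters.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)

    # Hypothetical inputs: bank height, slope angle, river level,
    # groundwater level, undrained shear strength (units arbitrary).
    X = rng.uniform([5, 15, 0, 0, 20], [15, 45, 8, 8, 120], size=(500, 5))

    # Placeholder stand-in for FS values that would come from
    # finite element stability analyses of embankment sections.
    fs = 0.8 + 0.01 * X[:, 4] - 0.015 * X[:, 1] + 0.03 * (X[:, 2] - X[:, 3])
    fs += rng.normal(0, 0.05, size=fs.shape)

    model = make_pipeline(
        StandardScaler(),                          # scale inputs for the MLP
        MLPRegressor(hidden_layer_sizes=(10, 10),  # two small hidden layers
                     max_iter=5000, random_state=0),
    )
    model.fit(X[:400], fs[:400])                   # calibrate on part of the data
    print("R^2 on held-out sections:", round(model.score(X[400:], fs[400:]), 3))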
Abstract:
This thesis presents a new artificial neural network (ANN) able to predict at once the main parameters representative of wave-structure interaction processes, i.e. the wave overtopping discharge, the wave transmission coefficient and the wave reflection coefficient. The new ANN has been specifically developed to provide managers and scientists with a tool that can be efficiently used for design purposes. The development of this ANN started with the preparation of a new, extended and homogeneous database that collects all the available tests reporting at least one of the three parameters, for a total of 16,165 data points. The variety of structure types and wave attack conditions in the database includes smooth, rock and armour unit slopes, berm breakwaters, vertical walls, low-crested structures and oblique wave attacks. Some of the existing ANNs were compared and improved, leading to the selection of a final ANN whose architecture was optimized through an in-depth sensitivity analysis with respect to the training parameters. Each of the 15 selected input parameters represents a physical aspect of the wave-structure interaction process, describing the wave attack (wave steepness and obliquity, breaking and shoaling factors), the structure geometry (submergence, straight or non-straight slope, with or without berm or toe, presence or absence of a crown wall), or the structure type (smooth or covered by an armour layer, with permeable or impermeable core). The advanced ANN proposed here provides accurate predictions for all three parameters and proves able to overcome the limits imposed by the traditional formulae and by the approach adopted so far by some of the existing ANNs. The possibility of adopting just one model to obtain a handy and accurate evaluation of the overall performance of a coastal or harbor structure represents the most important and exportable result of this work.
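The distinguishing feature, a single network with three simultaneous outputs, can be sketched as follows (again with scikit-learn and entirely synthetic data; the 15 real input parameters are replaced by placeholders):

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)

    X = rng.normal(size=(1000, 15))   # stand-ins for the 15 physical inputs

    # Placeholder targets standing in for the three measured parameters:
    # overtopping discharge q, transmission coefficient Kt, reflection Kr.
    A = rng.normal(size=(15, 3))
    Y = np.tanh(X @ A) * [1.0, 0.5, 0.5] + [0.0, 0.5, 0.5]

    # One network, three outputs: MLPRegressor accepts multi-output targets.
    ann = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(20,),
                                     max_iter=5000, random_state=0))
    ann.fit(X[:800], Y[:800])
    q, kt, kr = ann.predict(X[800:801])[0]   # joint prediction for one test
    print(q, kt, kr)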
Abstract:
The human airway epithelium serves as a structural and functional barrier against inhaled particulate antigen. Previously, we demonstrated in an in vitro epithelial barrier model that monocyte-derived dendritic cells (MDDC) and monocyte-derived macrophages (MDM) take up particulate antigen by building a trans-epithelial interacting network. Although the epithelial tight junction (TJ) belt was penetrated by processes of MDDC and MDM, the integrity of the epithelium was not affected. These results raised two main questions: (1) Do MDM and MDDC exchange particles? (2) Do those cells express TJ proteins, which are believed to interact with the TJ belt of the epithelium to preserve the epithelial integrity? The expression of TJ and adherens junction (AJ) mRNA and proteins in MDM and MDDC monocultures was determined by RT-PCR and immunofluorescence, respectively. Particle uptake and exchange were quantified by flow cytometry and laser scanning microscopy in co-cultures of MDM and MDDC exposed to polystyrene particles (1 μm in diameter). MDM and MDDC constantly expressed TJ and AJ mRNA and proteins. Flow cytometry analysis of MDM and MDDC co-cultures showed increased particle uptake in MDDC, while MDM lost particles over time. Quantitative analysis revealed significantly higher particle uptake by MDDC in co-cultures of epithelial cells with both MDM and MDDC present, compared to co-cultures containing only epithelial cells and MDDC. We conclude from these findings that MDM and MDDC express TJ and AJ proteins, which could help to preserve the epithelial integrity during particle uptake and exchange across the lung epithelium.
Abstract:
OBJECTIVE: To determine the effect of glucosamine, chondroitin, or the two in combination on joint pain and on radiological progression of disease in osteoarthritis of the hip or knee.
DESIGN: Network meta-analysis. Direct comparisons within trials were combined with indirect evidence from other trials by using a Bayesian model that allowed the synthesis of multiple time points.
MAIN OUTCOME MEASURE: Pain intensity. Secondary outcome was change in minimal width of joint space. The minimal clinically important difference between preparations and placebo was prespecified at -0.9 cm on a 10 cm visual analogue scale.
DATA SOURCES: Electronic databases and conference proceedings from inception to June 2009, expert contact, relevant websites.
ELIGIBILITY CRITERIA FOR SELECTING STUDIES: Large scale randomised controlled trials in more than 200 patients with osteoarthritis of the knee or hip that compared glucosamine, chondroitin, or their combination with placebo or head to head.
RESULTS: 10 trials in 3803 patients were included. On a 10 cm visual analogue scale the overall difference in pain intensity compared with placebo was -0.4 cm (95% credible interval -0.7 to -0.1 cm) for glucosamine, -0.3 cm (-0.7 to 0.0 cm) for chondroitin, and -0.5 cm (-0.9 to 0.0 cm) for the combination. For none of the estimates did the 95% credible intervals cross the boundary of the minimal clinically important difference. Industry independent trials showed smaller effects than commercially funded trials (P=0.02 for interaction). The differences in changes in minimal width of joint space were all minute, with 95% credible intervals overlapping zero.
CONCLUSIONS: Compared with placebo, glucosamine, chondroitin, and their combination do not reduce joint pain or have an impact on narrowing of joint space. Health authorities and health insurers should not cover the costs of these preparations, and new prescriptions to patients who have not received treatment should be discouraged.
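At the core of a network meta-analysis is the combination of direct and indirect evidence. In the simplest (Bucher-style) form, sketched here in our notation, an indirect estimate of treatment A versus treatment C through a common comparator B (here, placebo) is:

    d_{AC}^{ind} = d_{AB} - d_{CB}, \qquad \mathrm{Var}(d_{AC}^{ind}) = \mathrm{Var}(d_{AB}) + \mathrm{Var}(d_{CB})

The Bayesian model cited above generalizes this idea, pooling all direct and indirect contrasts (and multiple time points) into a single coherent posterior for each preparation against placebo.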