902 results for non-cooperative network formation


Relevance: 30.00%

Abstract:

[EN] Here we present monthly, basin-wide maps of the partial pressure of carbon dioxide (pCO2) for the North Atlantic on a latitude by longitude grid for the years 2004 through 2006 inclusive. The maps have been computed using a neural network technique which reconstructs the non-linear relationships between three biogeochemical parameters and marine pCO2. A self-organizing map (SOM) neural network has been trained using 389 000 triplets of the SeaWiFS/MODIS chlorophyll-a concentration, the NCEP/NCAR reanalysis sea surface temperature, and the FOAM mixed layer depth. The trained SOM was labelled with 137 000 underway pCO2 measurements collected in situ during 2004, 2005 and 2006 in the North Atlantic, spanning the range of 208 to 437 µatm. The root mean square error (RMSE) of the neural network fit to the data is 11.6 µatm, which equals just over 3 per cent of the average pCO2 value in the in situ dataset. The seasonal pCO2 cycle, as well as estimates of the interannual variability in the major biogeochemical provinces, are presented and discussed. High resolution combined with basin-wide coverage makes the maps a useful tool for several applications, such as the monitoring of basin-wide air-sea CO2 fluxes or the improvement of seasonal and interannual marine CO2 cycles in future model predictions. The method itself is a valuable alternative to traditional statistical modelling techniques used in geosciences.
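A minimal NumPy sketch of the train-then-label procedure described above: fit a small SOM on (chlorophyll, SST, mixed-layer-depth) triplets, attach a mean pCO2 label to each neuron, and evaluate the RMSE of the fit. Grid size, learning schedule, and the synthetic data are our illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the real (chl-a, SST, MLD) triplets and pCO2 labels.
X = rng.normal(size=(5000, 3))
pco2 = 330 + 40 * X[:, 1] - 25 * X[:, 0] + rng.normal(scale=5, size=5000)

# --- Train a small rectangular SOM ----------------------------------------
rows, cols, dim = 10, 10, 3
W = rng.normal(size=(rows * cols, dim))            # codebook vectors
grid = np.array([(i, j) for i in range(rows) for j in range(cols)])

for t in range(20000):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))    # best-matching unit
    frac = t / 20000                               # decaying schedule
    sigma, lr = 3.0 * (1 - frac) + 0.5, 0.5 * (1 - frac) + 0.01
    d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)     # grid distance to the BMU
    h = np.exp(-d2 / (2 * sigma ** 2))             # neighbourhood function
    W += lr * h[:, None] * (x - W)

# --- Label each neuron with the mean pCO2 of the samples it wins ----------
bmus = np.argmin(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
labels = np.array([pco2[bmus == k].mean() if (bmus == k).any() else np.nan
                   for k in range(rows * cols)])

rmse = np.sqrt(np.nanmean((labels[bmus] - pco2) ** 2))
print(f"SOM fit RMSE: {rmse:.1f} (same units as the labels)")
```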

Relevance: 30.00%

Abstract:

Motion control is a sub-field of automation in which the position and/or velocity of machines are controlled using some type of device. In motion control, the position, velocity, force, pressure, etc. profiles are designed in such a way that the different mechanical parts work as a harmonious whole, in which perfect synchronization must be achieved. The real-time exchange of information in the distributed system that an industrial plant is today plays an important role in achieving ever better performance, effectiveness and safety. The network connecting field devices such as sensors and actuators, field controllers such as PLCs, regulators and drive controllers, and man-machine interfaces is commonly called a fieldbus. Since transmitting the motion is now a task of the communication system, and no longer of kinematic chains as in the past, the communication protocol must ensure that the desired profiles, and their properties, are correctly transmitted to the axes and then reproduced; otherwise the synchronization among the different parts is lost, with all the resulting consequences. This thesis addresses the problem of trajectory reconstruction in the case of an event-triggered communication system. The most important feature that a real-time communication system must have is the preservation of the following temporal and spatial properties: absolute temporal consistency, relative temporal consistency, and spatial consistency. Starting from the basic system composed of one master and one slave, and passing through systems made up of many slaves and one master, or many masters and one slave, the problems in profile reconstruction and in the preservation of temporal properties, and subsequently in the synchronization of different profiles, are shown for networks adopting an event-triggered communication system. These networks are characterized by the fact that a common knowledge of the global time is not available; they are therefore non-deterministic networks. Each topology is analyzed, and the solution based on phase-locked loops proposed for the basic master-slave case is improved to handle the other configurations.
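To make the phase-locked-loop idea concrete, here is a minimal discrete-time sketch, not the thesis's actual controller: a slave keeps a local estimate of the master's trajectory clock and corrects it with a proportional-integral loop filter whenever an event-triggered timestamp message arrives. Gains and structure are our illustrative assumptions.

```python
class SlavePLL:
    """Slave-side software phase-locked loop (illustrative sketch):
    reconstruct the master's time base from sporadic, event-triggered
    timestamp messages."""

    KP, KI = 0.5, 0.05            # PI loop-filter gains (not tuned)

    def __init__(self):
        self.t_est = 0.0          # reconstructed master time
        self.rate = 1.0           # estimated master/slave clock-rate ratio
        self.integ = 0.0          # integral state of the loop filter
        self.last_local = None

    def now(self, t_local):
        """Master time reconstructed at an arbitrary local instant."""
        return self.t_est + self.rate * (t_local - self.last_local)

    def update(self, t_master, t_local):
        """Call on each message carrying the master timestamp t_master,
        received when the slave's local clock reads t_local."""
        if self.last_local is None:
            self.t_est = t_master
        else:
            self.t_est += self.rate * (t_local - self.last_local)
            err = t_master - self.t_est      # phase (time) error
            self.integ += self.KI * err      # integral branch: rate offset
            self.rate = 1.0 + self.integ
            self.t_est += self.KP * err      # proportional phase correction
        self.last_local = t_local
```

Between messages the slave advances its estimate at the learned rate, so the axes can keep reproducing the desired profile even though no common global time exists.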

Relevance: 30.00%

Abstract:

The Peer-to-Peer network paradigm is drawing the attention of both end users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization, where there is no central control and all the nodes should be able not only to request services, but to provide them to other peers as well. While on the one hand such a high level of decentralization might lead to interesting properties like scalability and fault tolerance, on the other hand it implies many new problems to deal with. A key feature of many P2P systems is openness, meaning that everybody is potentially able to join a network with no need for subscription or payment systems. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase their own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given the nature of P2P systems, based on simple local interactions of many peers having partial knowledge of the whole system, an interesting way to achieve desired properties on a system scale might consist in obtaining them as emergent properties of the many interactions occurring at the local node level. Two methods are typically used to face the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying Game Theory techniques, especially to find Nash equilibria in the game and to reach them, making the system stable against possible deviant behaviors. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both the methods mentioned above. Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, consisting in having each peer periodically try to copy another peer which is performing better. The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originating in computational sociology; the main idea behind the algorithms consists in having low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both from the topology and from the nodes' strategy point of view. Initial tests with a simple Prisoner's Dilemma application show how SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm: in some cases, selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the point of view of cooperation formation. The final step is to apply our results to more realistic scenarios. We put our efforts into studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but also because it has many points in common with the SLAC and SLACER algorithms, ranging from the game-theoretical inspiration (tit-for-tat-like mechanism) to the swarm topology.
We discovered fairness, defined as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew on the knowledge of cooperation formation and maintenance mechanisms derived from the development and analysis of SLAC and SLACER to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent, called BitFair, which has been evaluated through simulation and has shown the ability to enforce fairness and to tackle free-riding and cheating nodes.
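The copy-and-mutate step at the heart of SLAC can be sketched as follows. This is a schematic reading of the published idea (compare utility with a random peer; copy its strategy and links if it does better; then mutate), with the data structures and rates chosen here purely for illustration.

```python
import random

def slac_step(i, nodes, mutation_rate=0.01):
    """One local SLAC-style update for node i (illustrative sketch).
    Each node dict holds 'utility', 'strategy' and 'links' (set of indices)."""
    j = random.randrange(len(nodes))
    if j != i and nodes[j]['utility'] > nodes[i]['utility']:
        # Copy the better-performing peer: adopt its strategy and move into
        # its neighbourhood (drop own links; link to j and j's neighbours).
        nodes[i]['strategy'] = nodes[j]['strategy']
        nodes[i]['links'] = set(nodes[j]['links']) | {j}
    if random.random() < mutation_rate:       # strategy mutation
        nodes[i]['strategy'] = random.choice(['cooperate', 'defect'])
    if random.random() < mutation_rate:       # link mutation: random rewiring
        nodes[i]['links'] = {random.randrange(len(nodes))}
```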

Relevance: 30.00%

Abstract:

Recent progress in microelectronics and wireless communications has enabled the development of low-cost, low-power, multifunctional sensors, which has allowed the birth of a new type of network named the wireless sensor network (WSN). The main features of such networks are: the nodes can be positioned randomly over a given field with a high density; each node operates both as a sensor (for the collection of environmental data) and as a transceiver (for the transmission of information to the data retrieval point); and the nodes have limited energy resources. The use of wireless communications and the small size of the nodes make this type of network suitable for a large number of applications. For example, sensor nodes can be used to monitor a high-risk region, such as the area near a volcano; in a hospital they could be used to monitor the physical condition of patients. For each of these possible application scenarios, it is necessary to guarantee a trade-off between energy consumption and communication reliability. The thesis investigates the use of WSNs in two possible scenarios and, for each of them, suggests a solution to the related problems in light of this trade-off. The first scenario considers a network with a high number of nodes, deployed in a given geographical area without detailed planning, that have to transmit data toward a coordinator node, named the sink, which we assume to be located onboard an unmanned aerial vehicle (UAV). This is a practical example of reachback communication, characterized by a high density of nodes that have to transmit data reliably and efficiently towards a far receiver. Each node transmits a common shared message directly to the receiver onboard the UAV whenever it receives a broadcast message (triggered, for example, by the vehicle). We assume that the communication channels between the local nodes and the receiver are subject to fading and noise. The receiver onboard the UAV must be able to fuse the weak and noisy signals in a coherent way to receive the data reliably. A cooperative diversity concept is proposed as an effective solution to the reachback problem. In particular, a spread spectrum (SS) transmission scheme is considered in conjunction with a fusion center that can exploit cooperative diversity without requiring stringent synchronization between nodes. The idea consists of simultaneous transmission of the common message among the nodes and Rake reception at the fusion center. The proposed solution is mainly motivated by two goals: the necessity of having simple nodes (to this aim we move the computational complexity to the receiver onboard the UAV), and the importance of guaranteeing high energy efficiency of the network, thus increasing the network lifetime. The proposed scheme is analyzed in order to better understand the effectiveness of the approach. The performance metrics considered are both the theoretical limit on the maximum amount of data that can be collected by the receiver and the error probability with a given modulation scheme. Since we deal with a WSN, both performance metrics are evaluated taking the energy efficiency of the network into consideration. The second scenario considers the use of a chain network for the detection of fires, using nodes that have the double function of sensors and routers. The first function concerns the monitoring of a temperature parameter, which allows a local binary decision on target (fire) absent/present to be taken.
The second function is that each node receives the decision made by the previous node of the chain, compares it with the decision derived from its own observation of the phenomenon, and transmits the final result to the next node. The chain ends at the sink node, which transmits the received decision to the user. In this network the goals are to limit the throughput on each sensor-to-sensor link and to minimize the probability of error at the last stage of the chain. This is a typical scenario of distributed detection. To obtain good performance it is necessary to define, for each node, fusion rules that summarize the local observations and the decisions of the previous nodes into a final decision that is transmitted to the next node. WSNs have also been studied from a practical point of view, describing both the main characteristics of the IEEE 802.15.4 standard and two commercial WSN platforms. Using a commercial WSN platform, an agricultural application was realized and tested in a six-month on-field experimentation.
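A minimal sketch of the kind of per-node fusion rule such a chain might use (the actual rules derived in the thesis are not reproduced here): each node combines its local log-likelihood ratio with the one-bit decision received from the previous node, weighted by an assumed reliability, and forwards a new one-bit decision.

```python
import math

def chain_fuse(local_llr, prev_bit, p_prev_correct=0.9, threshold=0.0):
    """Fuse a local log-likelihood ratio with the previous node's 1-bit
    decision (illustrative rule; weight and threshold are assumptions)."""
    # Treat the incoming bit as the output of a binary symmetric channel
    # with known crossover probability: its LLR contribution is +/- w.
    w = math.log(p_prev_correct / (1.0 - p_prev_correct))
    fused = local_llr + (w if prev_bit == 1 else -w)
    return 1 if fused > threshold else 0

# Example: weak local evidence for "fire" plus a positive upstream decision.
bit = chain_fuse(local_llr=0.4, prev_bit=1)
```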

Relevance: 30.00%

Abstract:

In recent years there has been renewed interest in Mixed Integer Non-Linear Programming (MINLP) problems. This can be explained by several reasons: (i) the performance of solvers handling non-linear constraints has largely improved; (ii) the awareness that most real-world applications can be modeled as MINLP problems; (iii) the challenging nature of this very general class of problems. It is well known that MINLP problems are NP-hard, because they are a generalization of MILP problems, which are NP-hard themselves. However, MINLPs are, in general, also hard to solve in practice. We address non-convex MINLPs, i.e. those having non-convex continuous relaxations: the presence of non-convexities in the model usually makes these problems even harder to solve. The aim of this Ph.D. thesis is to give a flavor of the different possible approaches that one can study to attack MINLP problems with non-convexities, with special attention to real-world problems. In Part 1 of the thesis we introduce the problem and present three special cases of general MINLPs and the most common methods used to solve them; these techniques play a fundamental role in the resolution of general MINLP problems. We then describe algorithms addressing general MINLPs. Parts 2 and 3 contain the main contributions of the Ph.D. thesis. In particular, in Part 2 four different methods aimed at solving different classes of MINLP problems are presented. Part 3 of the thesis is devoted to real-world applications: two different problems and approaches to MINLPs are presented, namely Scheduling and Unit Commitment for Hydro-Plants and Water Network Design problems. The results show that each of these methods has advantages and disadvantages; thus, the method adopted to solve a real-world problem should typically be tailored to the characteristics, structure and size of the problem. Part 4 of the thesis consists of a brief review of the tools commonly used for general MINLP problems, which constituted an integral part of the development of this Ph.D. thesis (especially the use and development of open-source software). We present the main characteristics of solvers for each special case of MINLP.
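For readers unfamiliar with the problem class, here is a toy non-convex MINLP written by us with the open-source Pyomo modeling library (this example is not from the thesis): the bilinear term x*y makes the continuous relaxation non-convex, and the integrality of y makes the problem mixed-integer.

```python
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, Integers, minimize,
                           SolverFactory)

m = ConcreteModel()
m.x = Var(domain=NonNegativeReals, bounds=(0, 4))
m.y = Var(domain=Integers, bounds=(0, 3))

# Bilinear term x*y: the continuous relaxation is non-convex.
m.obj = Objective(expr=m.x * m.y - 2 * m.x + m.y, sense=minimize)
m.c1 = Constraint(expr=m.x ** 2 + m.y >= 2)

# A global non-convex MINLP solver such as the open-source Couenne is
# needed to certify optimality; a local solver may stop at a local optimum.
# SolverFactory('couenne').solve(m)
```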

Relevance: 30.00%

Abstract:

The industrial district is a dynamic relational place where firms give rise to different economic behaviors of various kinds, cooperating in a certain sense for the development and growth of the district itself. In a first phase of district formation, path-dependent behaviors emerged, driven by the economic advantages of the distribution of firms over the territory; over time, however, different expansionist behaviors began to appear, both from inside and from outside the district, directly affecting the district's structure. It is therefore reasonable to think that the actors look at the "local/global" relationship with a sort of "strabismus": on the one hand reading the district (from the inside as from the outside) as a privileged place for the formation of proximity economies; on the other, aiming to arrange their production chains in the global space, in search of the advantages deriving from a lower cost of labor or from the immediate proximity of final markets. The district is thus traversed by dynamics that globalize it but, at the same time, preserve (at least for now) its specificity. It is no longer possible to read its economic form only through the logic of embeddedness, and it would certainly not be correct to do so only in terms of openness. The question is therefore one of integration/complementarity, rather than opposition, between openness and embeddedness. This thesis describes a method for giving a value to the phenomena of openness and embeddedness present in the district, starting from a dataset of relational data extracted from two economic databases, Amadeus and Aida. Since data on the supply networks of individual companies are not publicly available, we started from the relational data of five "seed" companies and, through a recursive search of shareholding/participation relations, obtained an analysis sample that allows us to highlight, through cluster analysis, the main types of firm networks present in the district and extended into the global space.
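The recursive expansion from the seed firms could be organized as in the sketch below; `get_participations` stands in for a lookup against the Amadeus/Aida exports and is a hypothetical helper, as are the depth limit and the graph representation.

```python
from collections import deque
import networkx as nx

def get_participations(firm):
    """Hypothetical helper: return the firms that `firm` holds shares in,
    as read from the Amadeus/Aida exports."""
    raise NotImplementedError

def build_ownership_network(seeds, max_depth=3):
    """Breadth-first recursive expansion of shareholding relations."""
    g = nx.DiGraph()
    queue = deque((s, 0) for s in seeds)
    seen = set(seeds)
    while queue:
        firm, depth = queue.popleft()
        if depth == max_depth:
            continue
        for owned in get_participations(firm):
            g.add_edge(firm, owned)          # edge: shareholder -> owned firm
            if owned not in seen:
                seen.add(owned)
                queue.append((owned, depth + 1))
    return g
```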

Relevance: 30.00%

Abstract:

In this thesis we study three combinatorial optimization problems belonging to the classes of Network Design and Vehicle Routing problems, which are strongly linked in the context of the design and management of transportation networks: the Non-Bifurcated Capacitated Network Design Problem (NBP), the Period Vehicle Routing Problem (PVRP) and the Pickup and Delivery Problem with Time Windows (PDPTW). These problems are NP-hard and contain as special cases some well-known difficult problems, such as the Traveling Salesman Problem and the Steiner Tree Problem. Moreover, they model the core structure of many practical problems arising in logistics and telecommunications. The NBP is the problem of designing the optimum network to satisfy a given set of traffic demands. Given a set of nodes, a set of potential links and a set of point-to-point demands called commodities, the objective is to select the links to install and to dimension their capacities so that all the demands can be routed between their respective endpoints, while the sum of link fixed costs and commodity routing costs is minimized. The problem is called non-bifurcated because the solution network must allow each demand to follow a single path, i.e., the flow of each demand cannot be split. Although this is the case in many real applications, the NBP has received significantly less attention in the literature than other capacitated network design problems that allow bifurcation. We describe an exact algorithm for the NBP, based on solving with an integer programming solver a formulation of the problem strengthened by simple valid inequalities, together with four new heuristic algorithms. One of these heuristics is an adaptive memory metaheuristic, based on partial enumeration, that could be applied to a wider class of structured combinatorial optimization problems. In the PVRP, a fleet of vehicles of identical capacity must be used to service a set of customers over a planning period of several days. Each customer specifies a service frequency, a set of allowable day-combinations and a quantity of product that must be delivered at every visit. For example, a customer may require to be visited twice during a 5-day period, imposing that these visits take place on Monday-Thursday, Monday-Friday or Tuesday-Friday. The problem consists in simultaneously assigning a day-combination to each customer and designing the vehicle routes for each day so that each customer is visited the required number of times, the number of routes on each day does not exceed the number of vehicles available, and the total cost of the routes over the period is minimized. We also consider a tactical variant of this problem, called the Tactical Planning Vehicle Routing Problem, where customers require to be visited on a specific day of the period but a penalty cost, called the service cost, can be paid to postpone the visit to a later day than the one required. To our knowledge, all the algorithms proposed in the literature for the PVRP are heuristics. In this thesis we present, for the first time, an exact algorithm for the PVRP based on different relaxations of a set-partitioning-like formulation. The effectiveness of the proposed algorithm is tested on a set of instances from the literature and on a new set of instances. Finally, the PDPTW consists in servicing a set of transportation requests using a fleet of identical vehicles of limited capacity located at a central depot.
Each request specifies a pickup location and a delivery location, and requires that a given quantity of load be transported from the pickup location to the delivery location. Moreover, each location can be visited only within an associated time window. Each vehicle can perform at most one route, and the problem is to satisfy all the requests using the available vehicles so that each request is serviced by a single vehicle, the load on each vehicle does not exceed its capacity, and all locations are visited according to their time windows. We formulate the PDPTW as a set-partitioning-like problem with additional cuts, and we propose an exact algorithm based on different relaxations of the mathematical formulation together with a branch-and-cut-and-price algorithm. The new algorithm is tested on two classes of problems from the literature and compared with a recent branch-and-cut-and-price algorithm from the literature.
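As an illustration of the single-path (non-bifurcated) requirement, here is a toy arc-based NBP model in Pyomo, sketched by us under simplifying assumptions (one capacity level per link, routing cost taken proportional to routed demand): making the routing variables binary, rather than continuous, is exactly what forbids splitting a commodity's flow.

```python
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           Binary, minimize, SolverFactory)

# Toy instance (ours, purely illustrative): 4 nodes, candidate directed
# links, and 2 commodities given as (origin, destination, demand).
nodes = [1, 2, 3, 4]
links = [(1, 2), (2, 3), (3, 4), (1, 3), (2, 4)]
fixed_cost = {a: 10 for a in links}
capacity = {a: 15 for a in links}
commodities = {'k1': (1, 4, 8), 'k2': (2, 4, 5)}

m = ConcreteModel()
m.y = Var(links, domain=Binary)                        # install link a?
m.x = Var(list(commodities), links, domain=Binary)     # k routed on a?

m.obj = Objective(
    expr=sum(fixed_cost[a] * m.y[a] for a in links)
         + sum(d * m.x[k, a]                           # routing cost ~ demand
               for k, (_, _, d) in commodities.items() for a in links),
    sense=minimize)

def balance(m, k, v):
    # Binary flow conservation: exactly one path per commodity.
    o, t, _ = commodities[k]
    rhs = 1 if v == o else (-1 if v == t else 0)
    return (sum(m.x[k, a] for a in links if a[0] == v)
            - sum(m.x[k, a] for a in links if a[1] == v)) == rhs
m.balance = Constraint(list(commodities), nodes, rule=balance)

def cap(m, i, j):
    # Installed capacity must cover the demands routed on the link.
    return (sum(d * m.x[k, (i, j)] for k, (_, _, d) in commodities.items())
            <= capacity[(i, j)] * m.y[(i, j)])
m.cap = Constraint(links, rule=cap)

# SolverFactory('cbc').solve(m)   # any MILP solver handles this toy model
```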

Relevance: 30.00%

Abstract:

This thesis addresses the problem of localization, and analyzes its crucial aspects, within the context of cooperative WSNs. The three main issues discussed in the following are: network synchronization, position estimation and tracking. Time synchronization is a fundamental requirement for every network. In this context, a new approach based on estimation theory is proposed to evaluate the ultimate performance limit in network time synchronization. In particular, the lower bound on the variance of the average synchronization error in a fully connected network is derived by taking into account the statistical characterization of the Message Delivering Time (MDT). Sensor network localization algorithms estimate the locations of sensors with initially unknown positions by using knowledge of the absolute positions of a few sensors and inter-sensor measurements such as distance and bearing measurements. Concerning this issue, i.e. the position estimation problem, two main contributions are given. The first is a new Semidefinite Programming (SDP) framework to analyze and solve the problem of flip ambiguity, which afflicts range-based network localization algorithms with incomplete ranging information. The occurrence of flip-ambiguous nodes and of errors due to flip ambiguity is studied, and with this information a new SDP formulation of the localization problem is built. Finally, a flip-ambiguity-robust network localization algorithm is derived, and its performance is studied by Monte Carlo simulations. The second contribution in the field of position estimation concerns multihop networks. A multihop network is a network with a low degree of connectivity, in which any given pair of nodes may have to rely on one or more intermediate nodes (hops) in order to communicate. Two new distance-based source localization algorithms, highly robust to the distance overestimates typically present in multihop networks, are presented and studied. The last part of this thesis discusses a new low-complexity tracking algorithm, inspired by Fano's sequential decoding algorithm, for the position tracking of a user in a WLAN-based indoor localization system.
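As background for the range-based setting, the following sketch (ours, not one of the thesis algorithms) estimates an unknown position from noisy distance measurements to a few anchors via nonlinear least squares; flip ambiguity arises exactly when the anchor geometry leaves such a fit with near-mirror-image solutions.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical anchors (known positions) and noisy ranges to an unknown node.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([4.0, 7.0])
rng = np.random.default_rng(1)
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.1, 3)

def residuals(p):
    # Difference between modelled and measured anchor distances.
    return np.linalg.norm(anchors - p, axis=1) - ranges

est = least_squares(residuals, x0=np.array([5.0, 5.0])).x
print("estimated position:", est)
```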

Relevance: 30.00%

Abstract:

The distortion in the perceived distance between two point stimuli applied to the skin surface of different body regions is known as Weber's illusion. This illusion has been observed, and verified, in many experiments in which subjects were asked to judge the distance between two stimuli applied to the skin surface of different body parts. From such experiments it has been deduced that the same distance between the stimuli is judged differently for different body regions. The notion that distance on the skin is often perceived in a distorted way is widely shared, but the neural mechanisms driving this illusion are still largely unknown. In particular, it is not yet clear how the distance between two simultaneous point stimuli is interpreted, nor which brain areas are involved in this processing. Weber's illusion can be explained, in part, by considering the difference in mechanoreceptor density between body regions, and the distorted image of our body residing in the primary somatosensory cortex (the homunculus). However, these mechanisms seem insufficient to explain the observed phenomenon: according to the results of 100 years of experiments, the actual distortions in distance judgments are much smaller than the distortions that the primary cortex would suggest. In other words, the illusion observed in tactile experiments is much smaller than the effect that would be produced by the different receptor densities of the various body parts, or by their cortical extent. This has led to the hypothesis that the perception of tactile distance requires a further brain area, and further mechanisms, operating to rescale, at least partially, the information coming from the primary cortex, so as to maintain a certain constancy in the perception of tactile distance across the body surface. The presence of a rescaling operation, called the "Rescaling Process", has thus been proposed, which works to reduce this illusion towards a more veridical perception. The occurrence of this process is supported by many researchers in neuroscience, in particular by Dr. Matthew Longo, a neuroscientist at the Department of Psychological Sciences (Birkbeck, University of London), whose research on tactile distance perception and body representation seems to confirm this hypothesis. However, the neural mechanisms and circuits underlying this putative "Rescaling Process" are still largely unknown. The aim of this thesis was to clarify the possible network organization, and the neural mechanisms, that give rise to Weber's illusion and the "Rescaling Process", using a neural network model. Most of the work was carried out in the Department of Psychological Sciences at Birkbeck, University of London, under the supervision of Dr. M. Longo, who contributed mainly to the interpretation of the model's results, giving suggestions on how to process the results so as to obtain clearer information; he also provided useful guidance for the validation of the results during the implementation of statistical tests.
To replicate Weber's illusion and the "Rescaling Process", the neural network was organized into two main layers of neurons corresponding to two different cortical functional areas: a first layer of neurons (performing the initial processing of the external stimuli), which can be thought of as part of the primary somatosensory cortex affected by cortical magnification (the homunculus); and a second layer of neurons (performing further processing of the information coming from the first layer), which may represent a higher cortical area involved in the implementation of the "Rescaling Process". The networks were built including synaptic connections within each layer (lateral synapses) and synaptic connections between the two neural layers (feed-forward synapses), further assuming that the activity of each neuron depends on its input through a static sigmoidal relationship, as well as on first-order dynamics. In particular, using the structure just described, two different neural networks were implemented for two different body regions (for example, hand and arm), characterized by different tactile resolution and different cortical magnification, so as to replicate Weber's illusion and the "Rescaling Process". These models can help to understand the mechanism of Weber's illusion and thus give a possible explanation of the "Rescaling Process". Moreover, the implemented neural networks provide a valuable contribution to understanding the strategy adopted by the brain in interpreting distance on the skin surface. Besides this explanatory purpose, such models could also be employed to formulate predictions that could later be verified in vivo, on real subjects, through tactile perception experiments. It is important to underline that the implemented models are to be considered purely functional models and do not aim to replicate physiological and anatomical details. The main result obtained with these models is the reproduction of Weber's illusion for two different body regions, hand and arm, as reported in the many articles on tactile illusions (for example "The perception of distance and location for dual tactile pressures" by Barry G. Green). Weber's illusion was recorded through the output of the neural networks and then represented graphically, seeking to explain the reasons for these results.
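A minimal sketch of the kind of two-layer firing-rate dynamics described above, as a schematic reading rather than the thesis's model: each unit has a first-order internal state and a static sigmoidal output, with excitatory feed-forward synapses from layer 1 to layer 2. All sizes, gains and time constants are our assumptions, and lateral synapses are omitted here for brevity.

```python
import numpy as np

N, tau, dt = 60, 10.0, 0.1        # units per layer, time constant, time step

def sigmoid(u, gain=1.0, thr=4.0):
    """Static sigmoidal input-output relationship of a neuron."""
    return 1.0 / (1.0 + np.exp(-gain * (u - thr)))

rng = np.random.default_rng(0)
W_ff = np.abs(rng.normal(0.2, 0.05, (N, N)))   # excitatory feed-forward 1 -> 2
u1, u2 = np.zeros(N), np.zeros(N)              # internal states of the layers

stim = np.zeros(N)
stim[[20, 40]] = 8.0                           # two point stimuli on the skin

for _ in range(500):                           # first-order (Euler) dynamics
    z1 = sigmoid(u1)                           # layer-1 activity
    u1 += dt / tau * (-u1 + stim)
    u2 += dt / tau * (-u2 + W_ff @ z1)
z2 = sigmoid(u2)                               # layer-2 activity
```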

Relevance: 30.00%

Abstract:

Our interaction with the surrounding environment depends both on the different types of external stimuli we perceive (tactile, visual, acoustic, etc.) and on their processing by our nervous system. Sometimes, however, the integration and processing of these inputs can produce illusory effects. This happens, for example, in tactile perception: the perception of tactile distances varies with the body region considered. The fact that distances on the skin are frequently misperceived was discovered about a century ago by Weber. In particular, a given physical distance is perceived as larger on body parts with a higher density of mechanoreceptors than on body parts with a lower density. Besides this illusion, an important phenomenon observed in vivo is that the perception of tactile distance depends on the orientation of the stimuli applied to the skin: the distance perceived on a skin region varies as the orientation of the applied stimuli varies. Recently, Longo and Haggard (Longo & Haggard, J. Exp. Psychol. Hum. Percept. Perform. 37: 720-726, 2011), in order to investigate how our body is represented within our brain, compared tactile distances at different orientations on the hand, deducing that the distance between two point stimuli is perceived as larger when applied transversally on the hand rather than longitudinally. This illusion is known as the orientation-dependent tactile illusion, and several results reported in the literature show that it depends on the distance between the two point stimuli on the skin. Indeed, Green reports (Green, Percept. Psychophys. 31, 315-323, 1982) that the larger the applied distance, the larger the resulting illusory effect. Weber's illusion and the orientation-dependent tactile illusion are explained in the literature by considering differences in receptor density, cortical magnification effects in the primary somatosensory cortex (regions of somatosensory cortex of different sizes are devoted to different body regions) and differences in the size and shape of receptive fields. However, these illusory effects are much less pronounced than would be expected simply from the physiological mechanisms listed above. This suggests that the tactile information processed in the primary somatosensory cortex receives further processing steps in higher-level cortical areas, which act to reduce the gap between the transversally and the longitudinally perceived distances, making them more similar to each other. This process is known as the "Rescaling Process". The neural mechanisms operating in the brain to implement the Rescaling Process remain largely unknown. The aim of my thesis project was therefore to build a neural network model simulating the aspects concerning tactile perception, the orientation-dependent illusion and the rescaling process, advancing possible hypotheses about the neural mechanisms contributing to them. The computational model consists of two neural layers that process tactile information.
One of them represents a lower-level cortical area (called Area1) in which a first, distorted tactile representation is produced. This layer could therefore represent an area of the primary somatosensory cortex, where the representation of tactile distance is significantly distorted by the anisotropy of the receptive fields and by cortical magnification. The second layer (called Area2) represents a higher-level area that receives tactile information from the first and reduces its distortion through the Rescaling Process. This layer could represent higher cortical areas (for example parietal or temporal cortex) also involved in the perception of tactile distances and implicated in the Rescaling Process. In the model, neurons in Area1 receive information from the external stimuli (applied to the skin) and send information to the neurons in Area2 through excitatory feed-forward synapses. Neurons belonging to the same layer communicate with each other through lateral synapses with a Mexican-hat shape. It is important to state that the implemented neural network is mainly a conceptual model that does not claim to provide an accurate reproduction of physiological and anatomical structures. It should therefore be considered at an abstract level of implementation, without specifying an exact correspondence between the layers in the model and anatomical regions of the brain. Nevertheless, the mechanisms included in the model are biologically plausible, so the neural network can be useful for a better understanding of the multiple mechanisms at work in our brain when processing different tactile inputs. Indeed, the model is able to reproduce several results reported in the articles by Green and by Longo & Haggard.
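The Mexican-hat lateral connectivity mentioned above is commonly modelled as a difference of Gaussians. The sketch below (our illustration, with arbitrary widths and strengths) builds such a within-layer weight matrix: excitatory for nearby neurons, inhibitory at larger distances.

```python
import numpy as np

def mexican_hat_weights(n, a_ex=2.0, s_ex=1.5, a_in=1.0, s_in=4.0):
    """Within-layer lateral weights as a difference of Gaussians:
    short-range excitation minus longer-range inhibition."""
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :])      # distance between neurons
    W = (a_ex * np.exp(-d ** 2 / (2 * s_ex ** 2))
         - a_in * np.exp(-d ** 2 / (2 * s_in ** 2)))
    np.fill_diagonal(W, 0.0)                     # no self-connection
    return W

W_lat = mexican_hat_weights(60)
```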

Relevance: 30.00%

Abstract:

The hepatitis C virus (HCV) is an enveloped RNA virus of the family Flaviviridae. Its genome encodes a polyprotein of about 3000 amino acids, which is cleaved co- and post-translationally into its functional units. One of these viral proteins is NS5A. It is a heavily phosphorylated protein carrying an amphipathic α-helix at its amino terminus, which is responsible for the membrane association of NS5A. What role phosphorylation plays in the function of the protein, and indeed what function NS5A performs at all, is currently still unclear. Observations suggest a role for NS5A in the resistance of infected cells to interferon-alpha. It is further suspected that NS5A, as a component of the membrane-bound HCV replicase complex, is involved in RNA replication. The aim of this doctoral thesis was to investigate the function of NS5A in RNA replication. To this end, a series of phosphorylation-site mutants was generated and examined for replication competence and phosphorylation status. We found that certain serine substitutions in the center of NS5A led to increased RNA replication, accompanied by reduced NS5A hyperphosphorylation. We further studied the influence of mutations in the amino-terminal amphipathic α-helix of NS5A on RNA replication, as well as on the phosphorylation and subcellular localization of the protein. We found that minor structural changes in the amphipathic helix led to an altered subcellular localization of NS5A, which was accompanied by reduced or completely inhibited RNA replication. In addition, the structural changes interfered with the hyperphosphorylation of the protein, suggesting that the amphipathic helix is an important structural component of the protein, essential for its correct folding and phosphorylation. As further aspects, the trans-complementation capability of the various viral components of the HCV replicase complex was investigated, and cellular interaction partners of NS5A were identified. In summary, the results of this doctoral thesis show that NS5A plays an important role in RNA replication; this function is probably regulated via the phosphorylation state of the protein.

Relevance: 30.00%

Abstract:

Bacterial small regulatory RNAs (sRNAs) are posttranscriptional regulators involved in stress responses. These short non-coding transcripts are synthesised in response to a signal and control gene expression of their regulons by modulating the translation or stability of the target mRNAs, often in concert with the RNA chaperone Hfq. Characterization of an Hfq knockout mutant in Neisseria meningitidis revealed a pleiotropic phenotype, suggesting a major role for Hfq in adaptation to stresses and virulence, and the presence of Hfq-dependent sRNA activity. Global gene expression analysis of regulated transcripts in the Hfq mutant revealed a regulated sRNA, incorrectly annotated as an open reading frame, which we renamed AniS. The synthesis of this novel sRNA is anaerobically induced through activation of its promoter by the FNR global regulator, and through global gene expression analyses we identified at least two predicted mRNA targets of AniS. We also performed a detailed molecular analysis of the action of the sRNA NrrF. We demonstrated that NrrF regulates succinate dehydrogenase by forming a duplex with a region of complementarity within the sdhDA region of the succinate dehydrogenase transcript; Hfq enhances the binding of this sRNA to the identified target in the sdhCDAB mRNA, which is likely to result in rapid turnover of the transcript in vivo. In addition, in order to investigate globally other possible sRNAs of N. meningitidis, we deep-sequenced the transcriptome of this bacterium under both standard in vitro and iron-depleted conditions. This analysis revealed the genes actively transcribed under the two conditions. We focused our attention on the transcribed non-coding regions of the genome and, along with 5' and 3' untranslated regions, identified 19 novel candidate sRNAs. Further studies will focus on the identification of the regulatory networks of these sRNAs and their targets.

Relevance: 30.00%

Abstract:

This work was carried out by the author during his PhD course in Electronics, Computer Science and Telecommunications at the University of Bologna, Faculty of Engineering, Italy. The subject of this thesis is important channel estimation aspects in wideband wireless communication systems, such as echo cancellation in digital video broadcasting systems and pilot-aided channel estimation through an innovative pilot design in Multi-Cell Multi-User MIMO-OFDM networks. The documentation reported here summarizes years of work under the supervision of Prof. Oreste Andrisano, coordinator of the Wireless Communication Laboratory (WiLab) in Bologna. All the instrumentation used for the characterization of the telecommunication systems belongs to CNR (National Research Council), CNIT (Italian Inter-University Center), and DEIS (Dept. of Electronics, Computer Science, and Systems). From November 2009 to May 2010 the author worked abroad, in collaboration with DOCOMO - Communications Laboratories Europe GmbH (DOCOMO Euro-Labs) in Munich, Germany, in the Wireless Technologies Research Group. Several scientific papers by the author have been submitted to and/or published in IEEE journals and conferences.

Relevance: 30.00%

Abstract:

One of the main research areas in artificial intelligence concerns the realization of agents (in particular, robots) able to help or replace humans in the execution of certain activities. To this end, two different design methods can be followed: manual design and automatic design. The latter may be preferred to the former in contexts where requirements such as flexibility and adaptation must be taken into account, as these are often essential for carrying out non-trivial tasks in real-world settings. Automatic design takes a model with which the agent's behavior is represented and a search (or learning) technique that iteratively modifies the model in order to make it as suited as possible to the task at hand. In this work, the model used for representing the robot's behavior is a Boolean network (also known as a Kauffman network). This model was chosen because its simple structure makes the complex dynamics that arise within it amenable to study. Moreover, the recent literature shows that network models, such as artificial neural networks, have proven effective in robot programming. The methodology for evolving this model relies on metaheuristic search techniques able to find good solutions in limited time despite large search spaces. Previous works have already demonstrated the applicability of the methodology and investigated it on a single robot. The aim of this work is to provide a proof of principle for a group of robots, opening new avenues for design in swarm robotics. In this scenario, simple autonomous agents, interacting with one another, give rise to the emergence of coordinated behavior, accomplishing tasks impossible for a single unit. This work also provides useful and interesting opportunities for the study of interactions between Boolean networks: each robot is controlled by a Boolean network that determines its output as a function of its own internal configuration, but also of the inputs received from neighboring robots. We define a task in which the swarm must discriminate between two different patterns on the floor of the arena using only locally exchanged information. After a first series of preliminary experiments that allowed us to identify the parameters and the best search algorithm, we simplified the problem instance to better investigate the factors that can affect performance. A particular combination of information was thus identified that, when exchanged locally between robots, improves performance. This hypothesis was confirmed by subsequently applying the result to a harder instance of the problem. The work concludes by suggesting new tools for the study of emergent phenomena in contexts where Boolean networks interact with one another.
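A random Boolean (Kauffman) network of the kind used here as a robot controller can be stepped as in the sketch below. This is our illustration: the node count, connectivity K, and the way sensor inputs are clamped onto node states are all assumptions, not the thesis's configuration.

```python
import random

N, K = 16, 2                                   # nodes and inputs per node
rng = random.Random(0)
inputs = [[rng.randrange(N) for _ in range(K)] for _ in range(N)]
tables = [[rng.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]
state = [rng.randint(0, 1) for _ in range(N)]

def step(state, sensor_bits):
    """One synchronous update. Sensor bits (e.g. values received from
    neighbouring robots) are clamped onto the first nodes; the last nodes
    can be read out as actuator commands."""
    s = list(state)
    for i, bit in enumerate(sensor_bits):      # clamp external inputs
        s[i] = bit
    nxt = []
    for i in range(N):
        addr = sum(s[j] << k for k, j in enumerate(inputs[i]))
        nxt.append(tables[i][addr])            # each node's Boolean function
    return nxt

state = step(state, sensor_bits=[1, 0])
actuators = state[-2:]                         # illustrative output nodes
```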

Relevance: 30.00%

Abstract:

The goal of this thesis is to analyze the possibility of using early-type galaxies (ETGs) to place evolutionary and cosmological constraints, both by disentangling whether the main driver of ETG evolution is mass or environment, and by developing a technique to constrain H(z) and the cosmological parameters through the study of the ETG age-redshift relation. The (U-V) rest-frame color distribution is studied as a function of mass and environment for two samples of ETGs up to z=1, extracted from the zCOSMOS survey with a new selection criterion. The color distributions and the slopes of the color-mass and color-environment relations are studied, finding a strong dependence on mass and a minor dependence on environment. The spectral analysis performed on the D4000 and Hδ features gives results that validate the previous analysis. The main driver of galaxy evolution is found to be galaxy mass, with the environment playing a subdominant but non-negligible role. The age distribution of ETGs is also analyzed as a function of mass, providing strong evidence supporting a downsizing scenario. The possibility of setting cosmological constraints by studying the age-redshift relation is examined, discussing the relevant degeneracies and model dependencies. A new approach is developed, aiming to minimize the impact of systematics on the "cosmic chronometer" method. Analyzing theoretical models, it is demonstrated that the D4000 is a feature correlated almost linearly with age at fixed metallicity, depending only weakly on the assumed models or on the chosen star formation history. The analysis of an SDSS sample of ETGs shows that it is possible to use the differential D4000 evolution of the galaxies to set constraints on cosmological parameters in an almost model-independent way. Values of the Hubble constant and of the dark energy equation-of-state parameter are found which are not only fully compatible with the latest results, but also have a comparable error budget.
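The differential idea behind the cosmic chronometer method can be illustrated numerically: H(z) = -1/(1+z) * dz/dt, so mean ages of passively evolving galaxies in neighbouring redshift bins suffice. The numbers below are purely illustrative, not the thesis's measurements.

```python
import numpy as np

# Illustrative mean ages (Gyr) of passively evolving ETGs in redshift bins.
z = np.array([0.10, 0.15, 0.20, 0.25])
t = np.array([11.8, 11.2, 10.7, 10.1])        # ages decrease with redshift

z_mid = 0.5 * (z[1:] + z[:-1])
dz_dt = np.diff(z) / np.diff(t)               # negative: z grows as t shrinks
H = -dz_dt / (1.0 + z_mid)                    # H(z) in Gyr^-1
print(H * 977.8)                              # 1 Gyr^-1 ~ 977.8 km s^-1 Mpc^-1
```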