862 results for large-scale network
Abstract:
The correlations between the evolution of supermassive black holes (SMBHs) and their host galaxies suggest that SMBH accretion on sub-pc scales (active galactic nuclei, AGN) is linked to the building of the galaxy over kpc scales through so-called AGN feedback. Most galaxy assembly occurs in overdense large-scale structures (LSSs). AGN hosted by powerful sources in LSSs, such as proto-brightest cluster galaxies (BCGs), can affect the evolution of the surrounding intra-cluster medium (ICM) and of nearby galaxies. Among distant AGN, high-redshift radio galaxies (HzRGs) are found to be excellent BCG progenitor candidates. In this thesis we analyze novel interferometric observations of the so-called "J1030" field, centered on the z = 6.3 SDSS quasar J1030+0524, carried out with the Atacama Large Millimeter/submillimeter Array (ALMA) and the Karl G. Jansky Very Large Array (JVLA). This field hosts a LSS assembling around a powerful HzRG at z = 1.7 that shows evidence of positive AGN feedback, heating the surrounding ICM and promoting star formation in multiple galaxies at distances of hundreds of kpc. We report the detection of gas-rich members of the LSS, including the HzRG. We show that the LSS will evolve into a local massive cluster and that the HzRG is the proto-BCG. We unveil signatures of the proto-BCG's interaction with the surrounding ICM, strengthening the positive AGN feedback scenario. From the JVLA observations of the "J1030" field we extracted one of the deepest extragalactic radio surveys to date (~12.5 uJy at 5 sigma). Exploiting the synergy with the deep X-ray survey (~500 ks), we investigated the relation between the X-ray and radio emission of an X-ray-selected sample, finding that the radio emission is powered by different processes (star formation and AGN) and that the AGN-driven subsample is mostly composed of radio-quiet objects that display a significant X-ray/radio correlation.
Abstract:
In this thesis we present the development and current status of the IFrameNet project, aimed at the construction of a large-scale lexical semantic resource for the Italian language based on Frame Semantics theories. We begin by situating our work in the wider context of Frame Semantics and of the FrameNet project, which since 1997 has applied these theories to lexicography. We then analyse and discuss the applicability of the structure of the American resource to Italian, focusing specifically on the domain of fear, worry, and anxiety. Finally, we propose some modifications aimed at improving this domain of the resource with respect to its coherence and its ability to accurately represent linguistic reality, in particular so that it can be applied to Italian.
Abstract:
Brain fluctuations at rest are not random but are structured in spatial patterns of correlated activity across different brain areas. The question of how resting-state functional connectivity (FC) emerges from the brain's anatomical connections has motivated several experimental and computational studies of structure-function relationships. However, the mechanistic origin of the resting state is obscured by the complexity of large-scale models, and a close structure-function relation remains an open problem. Thus, a realistic but sufficiently simple description of relevant brain dynamics is needed. Here, we derived a dynamic mean field model that consistently summarizes the realistic dynamics of a detailed spiking and conductance-based synaptic large-scale network, in which connectivity is constrained by diffusion imaging data from human subjects. The dynamic mean field approximates the ensemble dynamics, whose temporal evolution is dominated by the longest time scale of the system. With this reduction, we demonstrated that FC emerges as structured linear fluctuations around a stable low-firing-activity state close to destabilization. Moreover, the model can be further and crucially simplified into a set of motion equations for statistical moments, providing a direct analytical link between anatomical structure, neural network dynamics, and FC. Our study suggests that FC arises from noise propagation and dynamical slowing down of fluctuations in an anatomically constrained dynamical system. Altogether, the reduction from spiking models to statistical moments presented here provides a new framework for explicitly understanding the build-up of FC through neuronal dynamics underpinned by anatomical connections, and for driving hypotheses in task-evoked studies and clinical applications.
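The linear-fluctuations picture described above can be illustrated with a minimal sketch (not the thesis's model): two linearly coupled Ornstein-Uhlenbeck nodes simulated with Euler-Maruyama, whose correlation (the "FC") is set by the coupling strength. The function name `simulate_fc` and all parameter values are illustrative choices.

```python
import math
import random

def simulate_fc(c=0.4, steps=200_000, dt=0.01, sigma=1.0, seed=1):
    """Euler-Maruyama simulation of two linearly coupled
    Ornstein-Uhlenbeck nodes:
        dx = (-x + c*y) dt + sigma dW1
        dy = (-y + c*x) dt + sigma dW2
    Returns the Pearson correlation (the 'FC') between x and y.
    For this symmetric system the stationary correlation equals c."""
    rng = random.Random(seed)
    x = y = 0.0
    xs, ys = [], []
    sq = sigma * math.sqrt(dt)
    for _ in range(steps):
        nx = x + (-x + c * y) * dt + sq * rng.gauss(0, 1)
        ny = y + (-y + c * x) * dt + sq * rng.gauss(0, 1)
        x, y = nx, ny
        xs.append(x)
        ys.append(y)
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / math.sqrt(vx * vy)

fc = simulate_fc()  # should lie near the coupling c = 0.4
```

For stable (sub-critical) coupling |c| < 1 the fluctuations stay bounded and the FC tracks the anatomical coupling, which is the essence of the linearized regime described above.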
Abstract:
Children who sustain a prenatal or perinatal brain injury in the form of a stroke develop remarkably normal cognitive functions in certain areas, with a particular strength in language skills. A dominant explanation for this is that brain regions from the contralesional hemisphere "take over" their functions, whereas the damaged areas and other ipsilesional regions play much less of a role. However, it is difficult to tease apart whether changes in neural activity after early brain injury are due to damage caused by the lesion or by processes related to postinjury reorganization. We sought to differentiate between these two causes by investigating the functional connectivity (FC) of brain areas during the resting state in human children with early brain injury using a computational model. We simulated a large-scale network consisting of realistic models of local brain areas coupled through anatomical connectivity information of healthy and injured participants. We then compared the resulting simulated FC values of healthy and injured participants with the empirical ones. We found that the empirical connectivity values, especially of the damaged areas, correlated better with simulated values of a healthy brain than those of an injured brain. This result indicates that the structural damage caused by an early brain injury is unlikely to have an adverse and sustained impact on the functional connections, albeit during the resting state, of damaged areas. Therefore, these areas could continue to play a role in the development of near-normal function in certain domains such as language in these children.
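The comparison described above, correlating empirical FC with FC simulated from healthy versus injured connectomes, reduces to a similarity score between vectorized FC matrices. A minimal sketch with invented toy values (not the study's data):

```python
import math

def pearson_r(a, b):
    """Pearson correlation between two equal-length sequences,
    e.g. vectorized (upper-triangular) FC matrices."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

# Hypothetical FC values between six area pairs:
empirical   = [0.62, 0.35, 0.48, 0.10, 0.55, 0.28]
sim_healthy = [0.60, 0.30, 0.50, 0.15, 0.58, 0.25]
sim_injured = [0.20, 0.55, 0.10, 0.45, 0.22, 0.50]

r_healthy = pearson_r(empirical, sim_healthy)
r_injured = pearson_r(empirical, sim_injured)
```

The study's finding corresponds to `r_healthy > r_injured`: the empirical connectivity of damaged areas is better explained by the healthy structural model.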
Abstract:
How a stimulus or a task alters the spontaneous dynamics of the brain remains a fundamental open question in neuroscience. One of the most robust hallmarks of task/stimulus-driven brain dynamics is the decrease of variability with respect to the spontaneous level, an effect seen across multiple experimental conditions and in brain signals observed at different spatiotemporal scales. Recently, it was observed that the trial-to-trial variability and temporal variance of functional magnetic resonance imaging (fMRI) signals decrease in the task-driven activity. Here we examined the dynamics of a large-scale model of the human cortex to provide a mechanistic understanding of these observations. The model allows computing the statistics of synaptic activity in the spontaneous condition and in putative tasks determined by external inputs to a given subset of brain regions. We demonstrated that external inputs decrease the variance, increase the covariances, and decrease the autocovariance of synaptic activity as a consequence of single node and large-scale network dynamics. Altogether, these changes in network statistics imply a reduction of entropy, meaning that the spontaneous synaptic activity outlines a larger multidimensional activity space than does the task-driven activity. We tested this model's prediction on fMRI signals from healthy humans acquired during rest and task conditions and found a significant decrease of entropy in the stimulus-driven activity. Altogether, our study proposes a mechanism for increasing the information capacity of brain networks by enlarging the volume of possible activity configurations at rest and reliably settling into a confined stimulus-driven state to allow better transmission of stimulus-related information.
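The entropy argument can be made concrete for Gaussian fluctuations, where differential entropy grows with the log-determinant of the covariance; lowering variances shrinks the occupied activity volume. A toy sketch with hypothetical covariances (not the model's fitted values):

```python
import math

def gaussian_entropy(cov):
    """Differential entropy (nats) of a 2-D Gaussian with covariance
    [[v1, c], [c, v2]]:  0.5 * ln((2*pi*e)^2 * det(cov))."""
    (v1, c), (_, v2) = cov
    det = v1 * v2 - c * c
    return 0.5 * math.log((2 * math.pi * math.e) ** 2 * det)

# Hypothetical synaptic-activity covariances (arbitrary units):
rest = [[1.00, 0.10], [0.10, 1.00]]  # high variance, weak covariance
task = [[0.60, 0.25], [0.25, 0.60]]  # input lowers variance, raises covariance

h_rest = gaussian_entropy(rest)
h_task = gaussian_entropy(task)
```

Because the task covariance has a smaller determinant (lower variances despite higher covariances), `h_task < h_rest`, mirroring the reported entropy decrease in stimulus-driven activity.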
Abstract:
Slow waves (SWs) on the electroencephalogram characterize so-called slow-wave sleep. Their production depends on the synchronization of neuronal activity within a large neocortical network. SWs show substantial changes with aging, beginning in middle adulthood. The aim of this thesis is to evaluate the contribution of cortical thinning to changes in SW characteristics during adulthood. Our study shows that SW density (nb/min) and amplitude (µV) are related to the thickness of several cortical regions in both young and older subjects. However, SW slope (µV/s) did not appear to be related to neuroanatomy. Mediation analyses show that the decrease in SW density in older individuals is explained by the thinning of frontal and temporal gyri, whereas the effects of age on SW amplitude are explained by the thinning of a larger set of cortical regions.
Abstract:
This thesis deals with complex networks, presenting the main complex network models: the Random model, the Small-World model, and the Scale-free model. It introduces some metrics used to describe complex networks, such as degree centrality, closeness centrality, and betweenness centrality; it discusses the issues to consider when defining and implementing graph algorithms, the computational models on which to design algorithms for graph problems, and a performance analysis of the algorithms proposed for computing betweenness centrality on medium-to-large graphs. Part of this thesis consisted of the development of LANA, the LArge-scale Network Analyzer, a software tool for computing and analyzing various centrality metrics on graphs.
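As a minimal illustration of the betweenness computation benchmarked above (LANA itself is not shown here), Brandes' algorithm for unweighted, undirected graphs can be sketched in a few dozen lines:

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for betweenness centrality on an unweighted,
    undirected graph given as {node: [neighbours]}."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and predecessors.
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order = []
        q = deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    for v in bc:  # undirected graph: each pair was counted twice
        bc[v] /= 2.0
    return bc

# Path graph 0-1-2-3-4: the middle node lies on the most shortest paths.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
bc = betweenness(path)
```

On the path graph, `bc[2]` is highest (4 shortest paths cross the middle node) and the endpoints score zero, which matches the intuition behind the metric.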
Abstract:
The European Mediterranean region is governed by a characteristic climate of summer drought that is likely to increase in duration and intensity under predicted climate change. However, large-scale network analyses investigating spatial aspects of pre-instrumental drought variability for this biogeographic zone are still scarce. In this study we introduce 54 mid- to high-elevation tree-ring width (TRW) chronologies comprising 2186 individual series from pine trees (Pinus spp.). This compilation spans a 4000-km east–west transect from Spain to Turkey, and was subjected to quality control and standardization prior to the development of site chronologies. A principal component analysis (PCA) was applied to identify spatial growth patterns during the network's common period 1862–1976, and new composite TRW chronologies were developed and investigated. The PCA reveals a common variance of 19.7% over the 54 Mediterranean pine chronologies. More interestingly, a dipole pattern in growth variability is found between the western (15% explained variance) and eastern (9.6%) sites, persisting back to 1330 AD. Pine growth on the Iberian Peninsula and in Italy is favoured by warm early growing seasons, whereas summer drought is most critical for ring-width formation in the eastern Mediterranean region. Synoptic climate dynamics that have been in operation for the last seven centuries are identified as the driving mechanism of a distinct east–west dipole in the growth variability of Mediterranean pines.
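The key PCA quantity reported above, the share of common variance captured by a leading component, can be computed in closed form for the two-chronology case. The series below are invented standardized ring-width indices, not the actual TRW data:

```python
import math

def first_pc_share(xs, ys):
    """Fraction of total variance captured by the first principal
    component of 2-D data, from the 2x2 covariance eigenvalues."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] in closed form:
    # lambda = tr/2 +/- sqrt((tr/2)^2 - det)
    tr = sxx + syy
    det = sxx * syy - sxy * sxy
    lam1 = tr / 2 + math.sqrt(tr * tr / 4 - det)
    return lam1 / tr

# Hypothetical standardized ring-width indices of two site chronologies
# sharing a common growth signal:
west = [0.9, -1.1, 0.3, 1.2, -0.8, 0.1, -0.6]
east = [0.7, -0.9, 0.5, 1.0, -1.0, 0.2, -0.5]

share = first_pc_share(west, east)
```

Strongly co-varying chronologies push `share` close to 1; the 19.7% common variance reported over 54 sites reflects the same quantity across a much larger, noisier network.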
Abstract:
We read with great interest the large-scale network meta-analysis by Kowalewski et al. comparing clinical outcomes of patients undergoing coronary artery bypass grafting (CABG) operated on using minimal invasive extracorporeal circulation (MiECC) or off-pump (OPCAB) with those undergoing surgery on conventional cardiopulmonary bypass (CPB) [1]. The authors integrated two recently published meta-analyses, comparing MiECC and OPCAB with conventional CPB, respectively [2, 3], into a single study. According to the results of this study, MiECC and OPCAB are both strongly associated with improved perioperative outcomes following CABG when compared with CABG performed on conventional CPB. The authors conclude that MiECC may represent an attractive compromise between OPCAB and conventional CPB. After carefully reading the whole manuscript, however, it becomes evident that the role of MiECC is undervalued. Detailed statistical analysis using the surface under the cumulative ranking probabilities indicated that MiECC was the safer and more effective intervention regarding all-cause mortality and protection from myocardial infarction, cerebral stroke, postoperative atrial fibrillation and renal dysfunction when compared with OPCAB. Even though no statistically significant differences were demonstrated between MiECC and OPCAB, the superiority of MiECC is evident in the hierarchy of treatments in the probability analysis, which ranked MiECC first, followed by OPCAB and conventional CPB. Thus, MiECC does not represent a compromise between OPCAB and conventional CPB, but an attractive dominant technique in CABG surgery. These results are consistent with the largest published meta-analysis, by Anastasiadis et al., comparing MiECC with conventional CPB in a total of 2770 patients.
A significant decrease in mortality was observed when MiECC was used, which was also associated with a reduced risk of postoperative myocardial infarction and neurological events [4]. Similarly, another recent meta-analysis, by Benedetto et al., compared MiECC with OPCAB and found comparable outcomes between the two surgical techniques [5]. As stated in the text, the superiority of MiECC over OPCAB observed in the current network meta-analysis could be attributed to the fact that MiECC offers the potential for complete revascularization, whereas OPCAB poses a challenge for inexperienced surgeons, especially when distal marginal branches on the lateral and/or posterior wall of the heart need revascularization. This is reflected in the significantly lower number of distal anastomoses performed with OPCAB than with conventional CPB. Therefore, taking into consideration the literature published to date, including the results of the current article, we advocate that MiECC be integrated into clinical practice guidelines as a state-of-the-art technique and become standard practice for perfusion in coronary revascularization surgery.
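For readers unfamiliar with the ranking metric invoked above: SUCRA condenses a treatment's rank probabilities into a single score between 0 (certainly worst) and 1 (certainly best). A sketch with hypothetical rank probabilities, not the values from [1]:

```python
def sucra(rank_probs):
    """SUCRA (surface under the cumulative ranking curve) for one
    treatment, given its probabilities of taking rank 1..a
    (rank 1 = best): the sum of the cumulative probabilities over
    ranks 1..a-1, divided by a-1."""
    a = len(rank_probs)
    cum, total = 0.0, 0.0
    for p in rank_probs[:-1]:
        cum += p       # P(rank <= k)
        total += cum
    return total / (a - 1)

# Hypothetical rank probabilities for three interventions
# (probability of being ranked 1st, 2nd, 3rd):
scores = {name: sucra(p) for name, p in [
    ("MiECC", [0.70, 0.25, 0.05]),
    ("OPCAB", [0.25, 0.60, 0.15]),
    ("CPB",   [0.05, 0.15, 0.80]),
]}
```

With these illustrative inputs the hierarchy is MiECC > OPCAB > CPB, the same ordering the letter argues for.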
Abstract:
A number of researchers have investigated the application of neural networks to visual recognition, with much of the emphasis placed on exploiting the network's ability to generalise. However, despite the benefits of such an approach, it is not at all obvious how networks can be developed that are capable of recognising objects subject to changes in rotation, translation and viewpoint. In this study, we suggest that a possible solution to this problem can be found by studying aspects of visual psychology and, in particular, perceptual organisation. For example, it appears that grouping together lines based upon perceptually significant features can facilitate viewpoint-independent recognition. The work presented here identifies simple grouping measures based on parallelism and connectivity, and shows how it is possible to train multi-layer perceptrons (MLPs) to detect and determine the perceptual significance of any group presented. In this way, it is shown how MLPs trained via backpropagation to perform individual grouping tasks can be brought together into a novel, large-scale network capable of determining the perceptual significance of the whole input pattern. Finally, the applicability of such significance values to recognition is investigated, and results indicate that both the MLP and the Kohonen Feature Map can be trained to recognise simple shapes described in terms of perceptual significances. This study has also provided an opportunity to investigate aspects of the backpropagation algorithm, particularly its ability to generalise; we report the results of various generalisation tests. In applying the backpropagation algorithm to certain problems, we found a deficiency in performance with the standard learning algorithm. An improvement in performance could, however, be obtained when suitable modifications were made to the algorithm. The modifications and consequent results are reported here.
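The grouping idea, an MLP trained by backpropagation to score the perceptual significance of a feature pair, can be sketched on a toy task. The features, labels, architecture and learning rate below are illustrative stand-ins, not the thesis's networks:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy "perceptual grouping" data: each line pair is described by two
# hand-made features in [0, 1] (parallelism, connectivity); the pair is
# labelled significant only when both cues are present.
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 0.0),
        ([1.0, 0.0], 0.0), ([1.0, 1.0], 1.0)]

rng = random.Random(0)
# 2-2-1 multi-layer perceptron; each row holds [w_x0, w_x1, bias].
w1 = [[rng.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]
w2 = [rng.uniform(-0.5, 0.5) for _ in range(3)]  # [w_h0, w_h1, bias]

lr = 0.5
for _ in range(8000):
    for x, t in data:
        # Forward pass.
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
        y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + w2[2])
        # Backward pass (squared-error loss, sigmoid derivatives).
        dy = (y - t) * y * (1 - y)
        for j in range(2):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # use w2 before update
            w2[j] -= lr * dy * h[j]
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
            w1[j][2] -= lr * dh
        w2[2] -= lr * dy

preds = [sigmoid(w2[0] * sigmoid(w1[0][0]*x[0] + w1[0][1]*x[1] + w1[0][2])
               + w2[1] * sigmoid(w1[1][0]*x[0] + w1[1][1]*x[1] + w1[1][2])
               + w2[2]) for x, _ in data]
```

After training, only the pair exhibiting both parallelism and connectivity receives a high significance score, the behaviour the abstract describes for individual grouping networks.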
Abstract:
The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios, 2) flexibility, for testing new protocols or applications in diverse settings, and 3) inter-operability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three different dimensions. First, we present SVEET, a system that enables inter-operability between real and simulated hosts. In order to increase the scalability of networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real-time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, through which a great variety of network conditions can be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale high-capacity networks, we present a novel symbiotic simulation approach. 
We present SymbioSim, a testbed for large-scale network experimentation where a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system benefits from incorporating the traffic metadata from real applications in the emulation system to reproduce the realistic traffic conditions. On the other hand, the emulation system benefits from receiving the continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system to accurately represent the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
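The time-dilation idea behind SVEET can be reduced to a bookkeeping rule: real elapsed time is divided by a dilation factor before being reported to the simulator, so real hosts appear proportionally faster (more bandwidth and CPU per virtual second). A minimal sketch; the function names are illustrative, not SVEET's API:

```python
def to_virtual(real_elapsed, tdf):
    """Map real elapsed time to virtual (experiment) time under a
    time-dilation factor tdf: with tdf = 10, ten real seconds
    correspond to one virtual second."""
    return real_elapsed / tdf

def to_real(virtual_elapsed, tdf):
    """Inverse mapping: how long the experiment must run in real
    time to cover the given span of virtual time."""
    return virtual_elapsed * tdf

v = to_virtual(10.0, 4.0)   # 10 real seconds -> 2.5 virtual seconds
r = to_real(v, 4.0)         # round-trips back to 10 real seconds
```

The synchronization problem is then keeping the discrete-event simulator's clock and `to_virtual(wall_clock, tdf)` in lockstep at event boundaries.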
Abstract:
This paper proposes a computationally efficient methodology for the optimal location and sizing of static and switched shunt capacitors in large distribution systems. The problem is formulated as the maximization of the savings produced by the reduction in energy losses and by the avoided costs of investment deferral in the expansion of the network. The proposed method selects the nodes to be compensated, as well as the optimal capacitor ratings and their operational characteristics, i.e. fixed or switched. After an appropriate linearization, the optimization problem is formulated as a large-scale mixed-integer linear problem, suitable for solution by a widespread commercial package. Results of the proposed optimization method are compared with another recent methodology reported in the literature on two test cases: a 15-bus and a 33-bus distribution network. For both test cases, the proposed methodology delivers better solutions, indicated by higher loss savings achieved with lower amounts of capacitive compensation. The proposed method has also been applied to compensate an actual large distribution network served by AES-Venezuela in the metropolitan area of Caracas. A convergence time of about 4 seconds after 22298 iterations demonstrates the ability of the proposed methodology to efficiently handle large-scale compensation problems.
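The savings-maximization formulation can be illustrated on a toy radial feeder, with brute-force enumeration standing in for the paper's mixed-integer linear program; all feeder data and cost coefficients below are invented:

```python
from itertools import product

# Toy 3-bus radial feeder (hypothetical data): branch i feeds bus i+1.
R = [0.05, 0.08, 0.10]   # branch resistances (ohm)
P = [1.0, 0.8, 0.6]      # bus active loads (MW)
Q = [0.6, 0.5, 0.4]      # bus reactive loads (Mvar)
V = 1.0                  # flat voltage profile assumed (p.u.)
sizes = [0.0, 0.3, 0.6]  # candidate capacitor ratings per bus (Mvar)
k_loss = 100.0           # value of 1 MW of loss reduction ($k/yr)
k_cap = 5.0              # annualized capacitor cost per Mvar ($k/yr)

def losses(qc):
    """Series losses with capacitor ratings qc[i] at each bus: each
    branch carries the net downstream P and Q."""
    total = 0.0
    for i in range(len(R)):
        p = sum(P[i:])
        q = sum(Q[i:]) - sum(qc[i:])
        total += R[i] * (p * p + q * q) / (V * V)
    return total

base = losses((0.0, 0.0, 0.0))
# Enumerate every placement/sizing combination, maximizing net savings.
best = max(product(sizes, repeat=3),
           key=lambda qc: k_loss * (base - losses(qc)) - k_cap * sum(qc))
savings = k_loss * (base - losses(best)) - k_cap * sum(best)
```

The MILP in the paper solves the same trade-off (loss savings versus capacitor cost) over thousands of nodes, where enumeration is no longer feasible; the linearization is what makes a commercial solver applicable.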
Abstract:
Power systems (PS) have been affected by substantial penetration of distributed generation (DG) and by operation in competitive environments. Future PS will have to deal with large-scale integration of DG and other distributed energy resources (DER), such as storage, and provide market agents with the means to ensure flexible and secure operation. Virtual power players (VPP) can aggregate a diversity of players, namely generators and consumers, and a diversity of energy resources, including electricity generation based on several technologies, storage and demand response. This paper proposes an artificial neural network (ANN) based methodology to support VPP resource scheduling. The trained network is able to achieve good schedule results while requiring modest computational means. A real-data test case is presented.
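As a simple stand-in for the scheduling problem the ANN addresses (this is not the paper's method), a merit-order dispatch over an aggregated portfolio shows what a VPP resource schedule looks like; the portfolio data are hypothetical:

```python
def schedule(resources, demand):
    """Greedy merit-order dispatch: sort the aggregated resources by
    marginal cost and commit capacity until demand is met.
    resources: list of (name, capacity_MW, cost_per_MWh)."""
    plan, remaining = {}, demand
    for name, cap, cost in sorted(resources, key=lambda r: r[2]):
        take = min(cap, remaining)
        if take > 0:
            plan[name] = take
            remaining -= take
    return plan, remaining

# Hypothetical VPP portfolio: generation, storage and demand response,
# each with a capacity (MW) and a marginal cost (currency/MWh).
portfolio = [("wind", 30.0, 5.0), ("chp", 40.0, 45.0),
             ("battery", 10.0, 60.0), ("dr", 15.0, 80.0)]
plan, unmet = schedule(portfolio, demand=70.0)
```

A trained ANN replaces this explicit optimization with a fast learned mapping from forecast conditions to a near-optimal schedule, which is the computational advantage the abstract claims.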
Abstract:
Master's degree in Electrical and Computer Engineering