900 results for Local computer network


Relevance:

30.00%

Publisher:

Abstract:

The aim of Tissue Engineering is to develop biological substitutes that restore lost morphological and functional features of diseased or damaged portions of organs. Recently, computer-aided technology has received considerable attention in the area of tissue engineering, and the advance of additive manufacturing (AM) techniques has significantly improved control over the pore-network architecture of tissue engineering scaffolds. To regenerate tissues more efficiently, an ideal scaffold should have appropriate porosity and pore structure. More sophisticated porous configurations, with more elaborate pore-network architectures and scaffolding structures that mimic the intricate architecture and complexity of native organs and tissues, are therefore required. This study adopts a macro-structural shape design approach to the production of open porous materials (titanium foams), which uses spatial periodicity as a simple way to generate the models. Among the various pore architectures that have been studied, this work modelled the pore structure with triply periodic minimal surfaces (TPMS) for the construction of tissue engineering scaffolds. TPMS are shown to be a versatile source of biomorphic scaffold designs. A set of tissue scaffolds was designed using TPMS-based unit-cell libraries. The TPMS-based titanium foams were intended to be 3D printed with the predicted geometry and microstructure and, consequently, the predicted mechanical properties. Through finite element analysis (FEA), the mechanical properties of the designed scaffolds were determined in compression and analyzed in terms of their porosity and unit-cell assemblies. The purpose of this work was to investigate the mechanical performance of the TPMS models in order to identify the best compromise between the mechanical and geometrical requirements of the scaffolds. The intention was to predict the structural modulus of open porous materials via the structural design of interconnected three-dimensional lattices, hence optimising their geometrical properties. With the aid of the FEA results, the effective mechanical properties of the TPMS-based scaffold units are expected to support the design of optimized scaffolds for tissue engineering applications. Regardless of the influence of the fabrication method, it is desirable to calculate scaffold properties so that their effect on tissue regeneration may be better understood.
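
A minimal sketch (not the thesis code) of how a gyroid-type TPMS can define a porous scaffold geometry on a voxel grid; the unit-cell resolution, level-set threshold, and porosity calculation below are illustrative assumptions.

    import numpy as np

    # Illustrative gyroid-type TPMS level-set on a voxel grid (assumed parameters).
    n = 64                       # voxels per unit cell (assumption)
    t = 0.0                      # level-set threshold; shifts the solid/void balance
    x, y, z = np.meshgrid(*(np.linspace(0, 2 * np.pi, n),) * 3, indexing="ij")

    # Gyroid implicit surface: sin(x)cos(y) + sin(y)cos(z) + sin(z)cos(x) = t
    gyroid = np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)

    solid = gyroid > t           # voxels belonging to the scaffold material
    porosity = 1.0 - solid.mean()
    print(f"porosity at threshold {t}: {porosity:.2f}")

Varying the threshold t shifts the solid fraction, which is the handle such TPMS-based designs use to tune porosity before meshing the geometry for FEA.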

Relevance:

30.00%

Publisher:

Abstract:

Long-term potentiation in the neonatal rat barrel cortex in vivo

Long-term potentiation (LTP) is important for the activity-dependent formation of early cortical circuits. In the neonatal rodent barrel cortex, LTP has so far only been studied in vitro. I combined voltage-sensitive dye imaging with extracellular multi-electrode recordings to study whisker stimulation-induced LTP of both the slope of the field potential and the amount of multi-unit activity in the whisker-to-barrel cortex pathway of the neonatal rat barrel cortex in vivo. Single-whisker stimulation at 2 Hz for 10 min induced an age-dependent expression of LTP in postnatal day (P) 0 to P14 rats, with the strongest expression of LTP at P3-P5. The magnitude of LTP was largest in the stimulated barrel-related column and smaller in the surrounding septal region, and no LTP could be observed in the neighboring barrel. Current source density analyses revealed an LTP-associated increase of synaptic current sinks in layer IV / lower layer II/III at P3-P5 and in the cortical plate / upper layer V at P0-P1. This study demonstrates for the first time an age-dependent and spatially confined LTP in the barrel cortex of the newborn rat in vivo. These activity-dependent modifications during the critical period may play an important role in the development and refinement of the topographic map in the barrel cortex. (An et al., 2012)

Early motor activity triggered by gamma and spindle bursts in neonatal rat motor cortex

Self-generated neuronal activity arising in subcortical regions drives early spontaneous motor activity, which is a hallmark of the developing sensorimotor system. However, the neuronal activity patterns of the neonatal primary motor cortex (M1) and its function in these early movements are still unknown. I combined voltage-sensitive dye imaging with simultaneous extracellular multi-electrode recordings in the neonatal rat S1 and M1 in vivo. At P3-P5, gamma and spindle bursts observed in M1 could trigger early paw movements. Furthermore, paw movements could also be elicited by focal electrical stimulation of M1 in layer V. Local inactivation of M1 significantly attenuated paw movements, suggesting that the neonatal M1 operates in a motor mode. In contrast, the neonatal M1 can also operate in a sensory mode. Early spontaneous movements and sensory stimulation of the paw trigger gamma and spindle bursts in M1. Blockade of peripheral sensory input from the paw completely abolished sensory-evoked gamma and spindle bursts. Moreover, both sensory-evoked and spontaneously occurring gamma and spindle bursts mediated interactions between S1 and M1. Accordingly, local inactivation of S1 profoundly reduced paw stimulation-induced and spontaneously occurring gamma and spindle bursts in M1, indicating that S1 plays a critical role in the generation of these activity patterns in M1. This study proposes that both self-generated and sensory-evoked gamma and spindle bursts in M1 may contribute to the refinement and maturation of corticospinal and sensorimotor networks required for sensorimotor coordination.
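
As a hedged illustration of the current source density (CSD) analysis mentioned above (a toy sketch, not the study's actual analysis pipeline), the standard second-spatial-derivative estimate over a linear electrode array can be written as follows; the electrode spacing and conductivity are placeholder values.

    import numpy as np

    def csd_second_difference(lfp, spacing_um=100.0, sigma=0.3):
        """Estimate CSD as -sigma * d2(phi)/dz2 along electrode depth.

        lfp: array of shape (n_channels, n_samples), field potentials per depth.
        spacing_um: inter-electrode spacing in micrometres (placeholder value).
        sigma: extracellular conductivity in S/m (placeholder value).
        """
        h = spacing_um * 1e-6                       # spacing in metres
        d2 = lfp[2:, :] - 2.0 * lfp[1:-1, :] + lfp[:-2, :]
        return -sigma * d2 / h**2                   # shape (n_channels - 2, n_samples)

    # Toy usage with random data standing in for recorded field potentials.
    lfp = np.random.randn(16, 1000)
    csd = csd_second_difference(lfp)

Negative CSD values in this convention correspond to current sinks, which is the quantity the abstract reports increasing after LTP induction.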

Relevance:

30.00%

Publisher:

Abstract:

This thesis work falls within the field of high-dimensional data classification and develops an algorithm based on Discriminant Analysis. The algorithm classifies samples using variables taken in pairs, building a network from those pairs whose classification performance is sufficiently high. It then exploits topological properties of the network (in particular the search for subnetworks and centrality measures of individual nodes) to obtain various signatures (subsets of the initial variables) with optimal classification performance and low dimensionality (of the order of 10^1, at least a factor of 10^3 smaller than the number of starting variables in the problems considered). To do so, the algorithm comprises a network-definition stage and a signature selection and reduction stage, recomputing the classification performance at each step through cross-validation tests (k-fold or leave-one-out). Given the large number of variables involved in the problems considered (of the order of 10^4), the algorithm necessarily had to be implemented on a High-Performance Computer, with the most expensive parts of the C++ code parallelized, namely the actual computation of the discriminant and the final sorting of the results. The application studied here concerns high-throughput genetic data on gene expression at the cellular level, a field in which databases frequently consist of a very large number of variables (10^4 to 10^5) against a small number of samples (10^1 to 10^2). In the medical and clinical field, determining low-dimensional signatures for discriminating and classifying samples (e.g. healthy/diseased, responder/non-responder, etc.) is a problem of fundamental importance, for instance for devising personalized therapeutic strategies for specific subgroups of patients through diagnostic kits for expression-profile analysis applicable on a large scale. The analysis carried out in this thesis on several types of real data shows that the proposed method, also in comparison with other existing methods based or not based on the network approach, provides excellent performance, producing signatures with high classification performance while keeping the number of variables used for this purpose very small.
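
A minimal, hedged sketch of the pairwise-discriminant/network idea described above (not the thesis' parallel C++ implementation): variable pairs are scored by cross-validated LDA accuracy, pairs above a threshold become network edges, and variables are ranked by degree centrality to propose a small signature. The data, threshold, and centrality choice are illustrative assumptions.

    import itertools
    import networkx as nx
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 50))            # toy data: 60 samples, 50 variables
    y = rng.integers(0, 2, size=60)          # binary class labels (e.g. healthy/diseased)

    threshold = 0.60                         # illustrative performance cutoff
    G = nx.Graph()
    for i, j in itertools.combinations(range(X.shape[1]), 2):
        acc = cross_val_score(LinearDiscriminantAnalysis(), X[:, [i, j]], y, cv=5).mean()
        if acc >= threshold:
            G.add_edge(i, j, weight=acc)     # a well-performing pair becomes an edge

    # Rank variables by degree centrality and keep the top ones as a candidate signature.
    centrality = nx.degree_centrality(G)
    signature = sorted(centrality, key=centrality.get, reverse=True)[:10]
    print("candidate signature (variable indices):", signature)

In a real gene-expression setting the pair loop is the part that scales as the square of the number of variables, which is why the thesis parallelizes the discriminant computation on an HPC system.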

Relevance:

30.00%

Publisher:

Abstract:

In this thesis I present a new coarse-grained model suitable for investigating the phase behavior of rod-coil block copolymers on mesoscopic length scales. In this model the rods are represented by hard spherocylinders, whereas the coil block consists of interconnected beads. The interactions between the constituents are based on local densities, which facilitates an efficient Monte-Carlo sampling of the phase space. I verify the applicability of the model and the simulation approach by means of several examples. I treat pure rod systems and mixtures of rod and coil polymers. Then I append coils to the rods and investigate the role of the different model parameters. Furthermore, I compare different implementations of the model. I show that the rod-coil block copolymers in our model exhibit typical micro-phase-separated configurations as well as extraordinary phases, such as the wavy lamellar state, percolating structures and clusters. Additionally, I demonstrate the metastability of the observed zigzag phase in our model. A central point of this thesis is the examination of the phase behavior of the rod-coil block copolymers in dependence on different chain lengths and interaction strengths between rods and coils. The observations of these studies are summarized in a phase diagram for rod-coil block copolymers. Furthermore, I confirm a stabilization of the smectic phase with increasing coil fraction.

In the second part of this work I present a side project in which I derive a model permitting the simulation of tetrapods with and without grafted semiconducting block copolymers. The effect of these polymers is added in an implicit manner through effective interactions between the tetrapods. While the depletion interaction is described approximately within the Asakura-Oosawa model, the free-energy penalty for the brush compression is calculated within the Alexander-de Gennes model. Recent experiments with CdSe tetrapods show that grafted tetrapods are much better dispersed in the polymer matrix than bare tetrapods. My simulations confirm that bare tetrapods tend to aggregate in the matrix of excess polymers, while clustering is significantly reduced after grafting polymer chains to the tetrapods. Finally, I propose a possible extension enabling the simulation of a system with fluctuating volume and demonstrate its basic functionality. This study originated in a cooperation with an experimental group whose goal is to analyze the morphology of these systems in order to find the ideal morphology for hybrid solar cells.
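
A hedged, minimal sketch of the kind of local-density-based Monte-Carlo move discussed above (not the actual coarse-grained model of the thesis): the total energy is taken as a coupling constant times the number of bead pairs within a cutoff, so the energy change of a single-bead move is just the change in that bead's neighbour count, and trial displacements are accepted with the Metropolis criterion. Box size, cutoff, coupling, and step size are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    L, n, rc, eps, kT = 10.0, 100, 1.0, 0.5, 1.0   # box, beads, cutoff, coupling, temperature
    pos = rng.uniform(0, L, size=(n, 3))

    def pair_energy(i, positions):
        """Energy contribution of bead i: eps times its neighbours within the cutoff.

        With total energy E = eps * (number of pairs within rc), moving bead i
        changes E by exactly the change in this quantity.
        """
        d = positions - positions[i]
        d -= L * np.round(d / L)                    # minimum-image periodic boundaries
        r2 = np.einsum("ij,ij->i", d, d)
        return eps * (np.count_nonzero(r2 < rc**2) - 1)   # exclude the bead itself

    accepted = 0
    for step in range(5000):
        i = rng.integers(n)
        old = pos[i].copy()
        e_old = pair_energy(i, pos)
        pos[i] = (old + rng.normal(scale=0.2, size=3)) % L   # trial displacement
        delta = pair_energy(i, pos) - e_old
        if delta > 0 and rng.random() >= np.exp(-delta / kT):
            pos[i] = old                            # reject move, restore position
        else:
            accepted += 1
    print("acceptance ratio:", accepted / 5000)

Because the energy depends only on local neighbour counts, each trial move needs only a single-bead recomputation, which is the efficiency argument made in the abstract.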

Relevance:

30.00%

Publisher:

Abstract:

In recent years, deep learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches and whether they only work in the context of High Performance Computing with enormous amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". This dissertation tries to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison of two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view within the broad umbrella of deep learning and are the best choices for understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations such as Google and Facebook to solve face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging, mainly unsupervised paradigm that is more biologically inspired. It tries to draw insights from the computational neuroscience community in order to incorporate concepts such as time, context and attention during the learning process, which are typical of the human brain. Finally, the thesis aims to show that in certain cases, with a smaller quantity of data, HTM can outperform the CNN.
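
For concreteness, a convolutional network of the kind compared in the thesis might look like the following PyTorch sketch; this is an illustrative architecture, not the one actually benchmarked, and the input size, channel counts, and class count are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SmallCNN(nn.Module):
        """Toy CNN for 32x32 RGB object recognition (illustrative architecture)."""

        def __init__(self, num_classes=10):
            super().__init__()
            self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
            self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
            self.pool = nn.MaxPool2d(2)              # halves spatial resolution
            self.fc = nn.Linear(32 * 8 * 8, num_classes)

        def forward(self, x):
            x = self.pool(F.relu(self.conv1(x)))     # 32x32 -> 16x16
            x = self.pool(F.relu(self.conv2(x)))     # 16x16 -> 8x8
            return self.fc(x.flatten(start_dim=1))   # class scores (logits)

    logits = SmallCNN()(torch.randn(4, 3, 32, 32))   # forward pass on a dummy batch
    print(logits.shape)                              # torch.Size([4, 10])

The supervised, backpropagation-trained structure shown here is precisely what the thesis contrasts with HTM's mainly unsupervised, time- and context-aware learning.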

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis is to study some statistical properties of a random walk on a network. After defining the concepts of a network and of a random walk on a network, the characteristics of the stationary state of this system were studied, together with their dependence on the network topology and the approach of the system to equilibrium, with particular attention to the distribution of the fluctuations of the populations on the different nodes once the stationary state has been reached. Next, the behaviour of the network subjected to a constant forcing, represented by sources and sinks applied at different nodes, and hence its susceptibility to external perturbations, was examined. Computer simulations show that an external forcing modifies the state of the network in different ways depending on its topology. From the results, the nodes that, once perturbed, are able to change the overall state of the system substantially were identified, as well as those that influence it only marginally.
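
A minimal sketch, assuming an unweighted undirected graph, of the kind of random walk studied above: walkers hop to a uniformly chosen neighbour at each step, and the empirical occupation is compared with the stationary distribution of this walk, which is proportional to node degree. The graph choice and walker counts are illustrative.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(2)
    G = nx.barabasi_albert_graph(50, 2, seed=2)     # illustrative scale-free topology
    nodes = list(G.nodes)

    # Simulate independent walkers and accumulate occupation over the last 50 steps.
    n_walkers, n_steps = 1000, 200
    visits = np.zeros(len(nodes))
    for _ in range(n_walkers):
        v = rng.choice(nodes)
        for step in range(n_steps):
            v = rng.choice(list(G.neighbors(v)))    # hop to a uniformly chosen neighbour
            if step >= n_steps - 50:
                visits[v] += 1

    empirical = visits / visits.sum()
    theoretical = np.array([G.degree(v) for v in nodes]) / (2 * G.number_of_edges())
    print("max deviation from degree-proportional stationary state:",
          np.abs(empirical - theoretical).max())

A constant forcing of the kind studied in the thesis would correspond to injecting walkers at source nodes and removing them at sink nodes each step, which shifts the occupation away from this degree-proportional profile.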

Relevance:

30.00%

Publisher:

Abstract:

Topographically organized neurons represent multiple stimuli within complex visual scenes and compete for subsequent processing in higher visual centers. The underlying neural mechanisms of this process have long been elusive. We investigate an experimentally constrained model of a midbrain structure: the optic tectum and the reciprocally connected nucleus isthmi. We show that a recurrent antitopographic inhibition mediates the competitive stimulus selection between distant sensory inputs in this visual pathway. This recurrent antitopographic inhibition is fundamentally different from surround inhibition in that it projects on all locations of its input layer, except to the locus from which it receives input. At a larger scale, the model shows how a focal top-down input from a forebrain region, the arcopallial gaze field, biases the competitive stimulus selection via the combined activation of a local excitation and the recurrent antitopographic inhibition. Our findings reveal circuit mechanisms of competitive stimulus selection and should motivate a search for anatomical implementations of these mechanisms in a range of vertebrate attentional systems.
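
As a hedged illustration of the recurrent antitopographic inhibition described above (a toy sketch, not the published model), the inhibitory projection can be written as a weight matrix that targets every location of the input layer except the locus it receives input from, in contrast to a distance-limited surround kernel; the layer size and weights are assumptions.

    import numpy as np

    n = 20                                   # number of topographic locations (toy size)
    w_inh = 1.0                              # illustrative inhibitory weight

    # Antitopographic inhibition: project to all locations except the input locus.
    W_anti = -w_inh * (np.ones((n, n)) - np.eye(n))

    # Surround inhibition for comparison: inhibit only nearby locations (radius 2 here).
    idx = np.arange(n)
    dist = np.abs(idx[:, None] - idx[None, :])
    W_surround = -w_inh * ((dist > 0) & (dist <= 2)).astype(float)

    # A stimulus at one locus suppresses every other locus under W_anti,
    # but only its immediate neighbourhood under W_surround.
    stimulus = np.zeros(n)
    stimulus[5] = 1.0
    print(W_anti @ stimulus)      # inhibition everywhere except location 5
    print(W_surround @ stimulus)  # inhibition only at locations 3, 4, 6 and 7

The global reach of W_anti is what lets distant stimuli compete with one another, which surround inhibition alone cannot provide.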

Relevance:

30.00%

Publisher:

Abstract:

We investigate the long time dynamics of a strong glass former, SiO2, below the glass transition temperature by averaging single-particle trajectories over time windows which comprise roughly 100 particle oscillations. The structure on this coarse-grained time scale is very well defined in terms of coordination numbers, allowing us to identify ill-coordinated atoms, which are called defects in the following. The most numerous defects are O-O neighbors, whose lifetimes are comparable to the equilibration time at low temperature. On the other hand, SiO and OSi defects are very rare and short lived. The lifetime of defects is found to be strongly temperature dependent, consistent with activated processes. Single-particle jumps give rise to local structural rearrangements. We show that in SiO2 these structural rearrangements are coupled to the creation or annihilation of defects, giving rise to very strong correlations of jumping atoms and defects.
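
A minimal sketch, with made-up data and parameters, of the two analysis steps described above: averaging single-particle trajectories over a time window and then counting coordination numbers with a distance cutoff to flag ill-coordinated atoms. The cutoff, window length, and defect criterion are illustrative, not the values used for SiO2.

    import numpy as np

    rng = np.random.default_rng(3)
    n_frames, n_atoms = 200, 64
    traj = rng.uniform(0, 10.0, size=(1, n_atoms, 3)) \
        + 0.1 * rng.normal(size=(n_frames, n_atoms, 3))   # toy oscillating trajectories

    # Step 1: coarse-grain positions by averaging over a time window (here: all frames).
    mean_pos = traj.mean(axis=0)

    # Step 2: count neighbours within a cutoff to obtain coordination numbers.
    cutoff = 2.0                                             # illustrative cutoff distance
    d = np.linalg.norm(mean_pos[:, None, :] - mean_pos[None, :, :], axis=-1)
    coordination = (d < cutoff).sum(axis=1) - 1              # exclude self-distance

    # Flag atoms whose coordination deviates from the typical value as "defects".
    typical = int(np.median(coordination))
    defects = np.flatnonzero(coordination != typical)
    print("typical coordination:", typical, "| defect candidates:", defects[:10])

In the actual analysis the averaging window spans roughly 100 particle oscillations and the typical coordination is species-dependent (Si versus O), but the logic of window-averaging followed by cutoff-based coordination counting is the same.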

Relevance:

30.00%

Publisher:

Abstract:

Region-specific empirically based ground-truth (EBGT) criteria used to estimate the epicentral-location accuracy of seismic events have been developed for the Main Ethiopian Rift and the Tibetan plateau. Explosions recorded during the Ethiopia-Afar Geoscientific Lithospheric Experiment (EAGLE) and the International Deep Profiling of Tibet and the Himalaya (INDEPTH III) experiment provided the necessary GT0 reference events. In each case, the local crustal structure is well known and handpicked arrival times were available, facilitating the establishment of the location accuracy criteria through the stochastic forward modeling of arrival times for epicentral locations. In the vicinity of the Main Ethiopian Rift, a seismic event is required to be recorded on at least 8 stations within the local Pg/Pn crossover distance and to yield a network-quality metric of less than 0.43 in order to be classified as EBGT5(95%) (GT5 with 95% confidence). These criteria were subsequently used to identify 10 new GT5 events with magnitudes greater than 2.1 recorded on the Ethiopian Broadband Seismic Experiment (EBSE) network and 24 events with magnitudes greater than 2.4 recorded on the EAGLE broadband network. The criteria for the Tibetan plateau are similar to the Ethiopia criteria, yet slightly less restrictive, as the network-quality metric needs to be less than 0.45. Twenty-seven seismic events with magnitudes greater than 2.5 recorded on the INDEPTH III network were identified as GT5 based on the derived criteria. When considered in conjunction with criteria developed previously for the Kaapvaal craton in southern Africa, it is apparent that increasing restrictions on the network-quality metric mirror increases in the complexity of geologic structure from craton to plateau to rift.
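
As a hedged illustration of how the derived criteria classify an event (a toy check, not the actual catalog-processing code): the network-quality metric is taken as a precomputed input, and the same 8-station requirement is assumed to apply to both regions, which the abstract only describes as "similar".

    def is_ebgt5(n_local_stations, network_quality_metric, region="main_ethiopian_rift"):
        """Apply the region-specific EBGT5(95%) criteria summarized in the abstract.

        n_local_stations: stations recording the event within the local Pg/Pn
            crossover distance.
        network_quality_metric: precomputed metric (its definition is not
            reproduced here).
        """
        thresholds = {"main_ethiopian_rift": 0.43, "tibetan_plateau": 0.45}
        return n_local_stations >= 8 and network_quality_metric < thresholds[region]

    print(is_ebgt5(9, 0.40))                              # True: meets both criteria
    print(is_ebgt5(9, 0.44, region="tibetan_plateau"))    # True under the Tibet threshold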

Relevance:

30.00%

Publisher:

Abstract:

The Simulation Automation Framework for Experiments (SAFE) is a project created to raise the level of abstraction in network simulation tools and thereby address issues that undermine credibility. SAFE incorporates best practices in network simulation to automate the experimental process and to guide users in the development of sound scientific studies using the popular ns-3 network simulator. My contributions to the SAFE project are the design of two XML-based languages called NEDL (ns-3 Experiment Description Language) and NSTL (ns-3 Script Templating Language), which facilitate the description of experiments and of network simulation models, respectively. The languages provide a foundation for the construction of better interfaces between the user and the ns-3 simulator. They also provide input to a mechanism that automates the execution of network simulation experiments. Additionally, this thesis demonstrates that tools can be developed to generate ns-3 scripts in Python or C++ automatically from NSTL model descriptions.
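
As a hedged illustration of the templating idea only (the element names and placeholders below are hypothetical and do not reproduce the real NEDL/NSTL syntax): an XML model description is parsed and its parameters are substituted into a script template to emit ns-3-style C++ statements.

    import xml.etree.ElementTree as ET
    from string import Template

    # Hypothetical model description; element names are illustrative, not NSTL.
    model_xml = """<model name="p2p-demo">
      <parameter name="dataRate" value="5Mbps"/>
      <parameter name="delay" value="2ms"/>
    </model>"""

    # Hypothetical script template with $-placeholders for the parameters above.
    script_template = Template(
        'pointToPoint.SetDeviceAttribute("DataRate", StringValue("$dataRate"));\n'
        'pointToPoint.SetChannelAttribute("Delay", StringValue("$delay"));\n'
    )

    params = {p.get("name"): p.get("value")
              for p in ET.fromstring(model_xml).iter("parameter")}
    print(script_template.substitute(params))

Separating the experiment description from the script template in this way is what allows the same model to be re-instantiated automatically across many experiment configurations.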

Relevance:

30.00%

Publisher:

Abstract:

The Simulation Automation Framework for Experiments (SAFE) streamlines the design and execution of experiments with the ns-3 network simulator. SAFE ensures that best practices are followed throughout the workflow of a network simulation study, guaranteeing that results are both credible and reproducible by third parties. Data analysis is a crucial part of this workflow, and one where mistakes are often made. Even when appearing in highly regarded venues, scientific graphics in numerous network simulation publications fail to include graphic titles, units, legends, and confidence intervals. After studying the literature on network simulation methodology and information graphics visualization, I developed a visualization component for SAFE to help users avoid these errors in their scientific workflow. The functionality of this new component includes support for interactive visualization through a web-based interface and for the generation of high-quality static plots that can be included in publications. The overarching goal of my contribution is to help users create graphics that follow best practices in visualization and thereby succeed in conveying the right information about simulation results.
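
A minimal sketch, using synthetic data, of the kind of plot the component is meant to encourage: it carries a title, labelled axes with units, a legend, and a confidence band. It is illustrative only and not SAFE's actual plotting code; the metric names and values are made up.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(4)
    offered_load = np.linspace(0.1, 1.0, 10)                    # toy independent variable
    runs = 1.0 - np.exp(-3 * offered_load) + 0.03 * rng.normal(size=(30, 10))

    mean = runs.mean(axis=0)
    ci = 1.96 * runs.std(axis=0, ddof=1) / np.sqrt(runs.shape[0])   # 95% confidence interval

    fig, ax = plt.subplots()
    ax.plot(offered_load, mean, label="mean of 30 runs")
    ax.fill_between(offered_load, mean - ci, mean + ci, alpha=0.3, label="95% CI")
    ax.set_title("Throughput vs. offered load (synthetic data)")
    ax.set_xlabel("Offered load (normalized)")
    ax.set_ylabel("Throughput (Mbit/s)")
    ax.legend()
    fig.savefig("throughput.png", dpi=200)

Reporting the mean of repeated runs together with a confidence interval, rather than a single run, is exactly the practice the abstract says is often missing from published simulation graphics.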

Relevance:

30.00%

Publisher:

Abstract:

This study summarises all the accessible data on old German chemical weapons dumped in the Baltic Sea. Mr. Goncharov formulated a concept for evaluating the ecological impact of chemical warfare agents (CWA) on the marine environment and structured a simulation model adapted to the specific hydrological conditions and hydrobiological subjects of the Bornholm Deep. The mathematical model he created describes the spreading of contaminants by currents and turbulence in the near-bottom boundary layer. Parameters of CWA discharge through corrosion of canisters were given for various kinds of bottom sediments, with allowance for current velocity. He created a method for integral estimation and a computer simulation model, and completed a forecast for the CWA "Mustard", which showed that under normal hydrometeorological conditions there are local toxic plumes drifting along the bottom for distances of up to several kilometres. With storm winds, the toxic plumes from separate canisters merge and lengthen and can reach fishery areas near Bornholm Island. When salt water from the North Sea flows in, the length of the toxic zones can increase to 100 kilometres and more, and toxic water masses can spread into the northern Baltic. On this basis, Mr. Goncharov drew up recommendations to reduce the dangers to human ecology and proposed the creation of a special system for forecasting and remote sensing of the environmental conditions at CWA burial sites.
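
A hedged, minimal sketch of a contaminant-spreading calculation of the general kind described above (a one-dimensional advection-diffusion toy, not Mr. Goncharov's model); the current velocity, eddy diffusivity, release rate, and grid are placeholder values.

    import numpy as np

    # 1D advection-diffusion of a near-bottom contaminant plume (explicit upwind scheme).
    nx, dx, dt = 400, 25.0, 10.0          # grid points, spacing [m], time step [s]
    u, D = 0.05, 1.0                      # current velocity [m/s], eddy diffusivity [m^2/s] (placeholders)
    source_rate = 1e-3                    # release at the canister cell per step (placeholder units)

    c = np.zeros(nx)
    for step in range(20000):
        c[10] += source_rate              # continuous leakage from a corroding canister
        grad = (c[1:-1] - c[:-2]) / dx    # upwind advection term (u > 0)
        lap = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        c[1:-1] += dt * (-u * grad + D * lap)

    print("plume length above 1e-4 of peak concentration:",
          dx * np.count_nonzero(c > 1e-4 * c.max()), "m")

Stronger currents (larger u) or enhanced turbulence (larger D) stretch the simulated plume, which mirrors the abstract's point that storm winds and North Sea inflows lengthen the toxic zones.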