900 results for Local computer network


Relevance:

30.00%

Publisher:

Abstract:

Over the last decade, Brazil has pioneered an innovative model of branchless banking, known as correspondent banking, involving distribution partnerships between banks, several kinds of retailers and a variety of other participants, which has allowed unprecedented growth in bank outreach and has become a reference worldwide. However, despite the extensive number of studies recently developed on Brazilian branchless banking, there is a clear research gap in the literature. It is still necessary to identify the different business configurations involving network integration through which the branchless banking channel can be structured, as well as the way they relate to the range of bank services delivered. Given this gap, our objective is to investigate the relationship between network integration models and the services delivered through the branchless banking channel. Based on twenty interviews with managers involved in the correspondent banking business and data collected on almost 300 correspondent locations, our research is developed in two steps. First, we created a qualitative taxonomy through which we identified three classes of network integration models. Second, we performed a cluster analysis to explain the groups of financial services that fit each model. By contextualizing correspondents' network integration processes through the lens of transaction cost economics, our results suggest that the more suited the channel is to deliver social-oriented, "pro-poor" services, the more it is controlled by banks. This research offers contributions to managers and policy makers interested in better understanding how different correspondent banking configurations are related to specific portfolios of services. Researchers interested in the subject of branchless banking can also benefit from the taxonomy presented and from the transaction cost analysis of this kind of banking channel, which has now been adopted in a number of developing countries all over the world. (C) 2011 Elsevier B.V. All rights reserved.
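As an illustration of the second step described above, the following sketch groups correspondent locations by the set of services they offer using k-means; the binary service indicators, the rows and the choice of three clusters are illustrative assumptions, not the study's data or method details.

# Grouping correspondent locations by offered services (illustrative data only).
import numpy as np
from sklearn.cluster import KMeans

# rows = correspondent locations, columns = services
# (e.g. bill payment, withdrawals, deposits, account opening, credit)
services = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(services)
for location, label in enumerate(kmeans.labels_):
    print(f"location {location}: cluster {label}")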

Relevance:

30.00%

Publisher:

Abstract:

Abstract Background This study compares the immediate effects of local and adjacent acupuncture on the tibialis anterior muscle and on the amount of force generated, or strength, in kilogram-force (KGF), evaluated by surface electromyography. Methods The study consisted of a single-blinded trial of 30 subjects assigned to two groups: local acupoint (ST36) and adjacent acupoint (SP9). Bipolar surface electrodes were placed on the tibialis anterior muscle, while a force transducer was attached to the foot of the subject and to the floor. An electromyograph (EMG) connected to a computer registered the KGF and root mean square (RMS) before and after acupuncture at maximum isometric contraction. The RMS and KGF values were analyzed with Student's t-test. Results Thirty subjects were selected from a total of 56 volunteers according to specific inclusion and exclusion criteria and were assigned to one of the two acupuncture groups. A significant decrease in the RMS values was observed in both the ST36 (t = -3.80, P = 0.001) and SP9 (t = 6.24, P = 0.001) groups after acupuncture. There was a decrease in force in the ST36 group after acupuncture (t = -2.98, P = 0.006). The RMS values did not show a significant difference between groups (t = 0.36, P = 0.71); however, there was a significant decrease in strength after acupuncture in the ST36 group compared to the SP9 group (t = 2.51, P = 0.01). No adverse events were found. Conclusion Acupuncture at the local acupoint ST36 or the adjacent acupoint SP9 reduced tibialis anterior electromyographic muscle activity. However, acupuncture at SP9 did not decrease muscle strength, while acupuncture at ST36 did.
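A minimal sketch of the two quantities at the core of the analysis above, the RMS of an EMG epoch and a Student's t-test on before/after values; the synthetic signal, the per-subject values and the use of a paired test are assumptions for illustration, not the study's data or exact procedure.

# RMS of a surface EMG epoch and a t-test on before/after values (synthetic data).
import numpy as np
from scipy import stats

def rms(signal):
    # Root mean square of an EMG epoch.
    signal = np.asarray(signal, dtype=float)
    return np.sqrt(np.mean(signal ** 2))

rng = np.random.default_rng(0)
emg_before = rng.normal(0.0, 1.0, size=2000)   # simulated EMG, arbitrary units
emg_after = rng.normal(0.0, 0.8, size=2000)
print("RMS before:", rms(emg_before), "RMS after:", rms(emg_after))

# Paired t-test on per-subject RMS values (one value per subject, before vs after).
rms_before = rng.normal(0.50, 0.05, size=15)
rms_after = rng.normal(0.45, 0.05, size=15)
t, p = stats.ttest_rel(rms_before, rms_after)
print(f"t = {t:.2f}, P = {p:.3f}")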

Relevance:

30.00%

Publisher:

Abstract:

The Brazilian network for genotyping is composed of 21 laboratories that perform and analyze genotyping tests for all HIV-infected patients within the public system, performing approximately 25,000 tests per year. We assessed the interlaboratory and intralaboratory reproducibility of genotyping systems by creating and implementing a local external quality control evaluation. Plasma samples from HIV-1-infected individuals (with low and intermediate viral loads) or RNA viral constructs with specific mutations were used. This evaluation included analyses of sensitivity and specificity of the tests based on qualitative and quantitative criteria, which scored laboratory performance on a 100-point system. Five evaluations were performed from 2003 to 2008, with 64% of laboratories scoring over 80 points in 2003, 81% doing so in 2005, 56% in 2006, 91% in 2007, and 90% in 2008 (Kruskal-Wallis, p = 0.003). Increased performance was aided by retraining laboratories that had specific deficiencies. The results emphasize the importance of investing in laboratory training and interpretation of DNA sequencing results, especially in developing countries where public (or scarce) resources are used to manage the AIDS epidemic.
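A minimal sketch of the comparison reported above, laboratory scores from the five evaluation rounds compared with a Kruskal-Wallis test; the score lists are illustrative placeholders, not the programme's data.

# Kruskal-Wallis test across yearly laboratory scores (placeholder values).
from scipy import stats

scores_by_year = {
    2003: [85, 78, 92, 60, 81, 88],
    2005: [90, 84, 79, 95, 82, 87],
    2006: [70, 65, 88, 55, 91, 74],
    2007: [93, 89, 96, 84, 90, 86],
    2008: [92, 88, 94, 83, 91, 85],
}

h, p = stats.kruskal(*scores_by_year.values())
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.3f}")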

Relevance:

30.00%

Publisher:

Abstract:

Too Big to Ignore (TBTI; www.toobigtoignore.net) is a research network and knowledge mobilization partnership established to elevate the profile of small-scale fisheries (SSF), to argue against their marginalization in national and international policies, and to develop research and governance capacity to address global fisheries challenges. Network participants and partners are conducting global and comparative analyses, as well as in-depth studies of SSF in the context of local complexity and dynamics, along with a thorough examination of governance challenges, to encourage careful consideration of this sector in local, regional and global policy arenas. Comprising 15 partners and 62 researchers from 27 countries, TBTI conducts activities in five regions of the world. In the Latin America and the Caribbean (LAC) region, we are taking a participative approach to investigate and promote stewardship and self-governance in SSF, seeking best practices and success stories that could be replicated elsewhere. The region will also focus on promoting sustainable livelihoods for coastal communities. Key activities include workshops and stakeholder meetings, facilitation of policy dialogue and networking, as well as assessment of local capacity needs and training. Currently, LAC members are putting together publications that examine key issues and best practices concerning SSF in the region, with a first focus on ecosystem stewardship. Other planned deliverables include a comparative analysis, a regional profile of the top research issues on SSF, and a synthesis of SSF knowledge in LAC.

Relevance:

30.00%

Publisher:

Abstract:

The world of communication has changed quickly in the last decade, resulting in a rapid increase in the pace of people's lives. This is due to the explosion of mobile communication and the internet, which has now reached all levels of society. With such pressure for access to communication there is an increased demand for bandwidth. Photonic technology is the right solution for high-speed networks that have to supply wide bandwidth to new communication service providers. In particular, this Ph.D. dissertation deals with DWDM optical packet-switched networks. The topic raises a huge number of problems, from the physical layer up to the transport layer. Here the subject is tackled from the network-level perspective. The long-term solution represented by optical packet switching has been explored in recent years together with the Network Research Group at the Department of Electronics, Computer Science and Systems of the University of Bologna. Several national and international projects supported this research, such as the Network of Excellence (NoE) e-Photon/ONe, funded by the European Commission in the Sixth Framework Programme, and the INTREPIDO project (End-to-end Traffic Engineering and Protection for IP over DWDM Optical Networks), funded by the Italian Ministry of Education, University and Scientific Research. Optical packet switching for DWDM networks is studied at the single-node level as well as at the network level. In particular, the techniques discussed are meant to be implemented in a long-haul transport network that connects local and metropolitan networks around the world. The main issues faced are contention resolution in an asynchronous, variable-packet-length environment, adaptive routing, wavelength conversion and node architecture. Characteristics that a network must guarantee, such as quality of service and resilience, are also explored at both the node and network level. Results are mainly evaluated via simulation and through analysis.
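As a toy illustration of one of the issues listed above, the sketch below shows a single contention-resolution decision at a node output port, with tunable wavelength conversion as the fallback; the function, its parameters and the drop-on-failure policy are simplifying assumptions, not the node architectures studied in the thesis.

# One contention-resolution decision: keep the arrival wavelength if free, otherwise
# convert to any idle wavelength if a converter is available, otherwise drop.
def resolve_contention(arrival_wavelength, busy_wavelengths, num_wavelengths,
                       converters_free):
    # Returns the wavelength used for forwarding, or None if the packet is dropped.
    if arrival_wavelength not in busy_wavelengths:
        return arrival_wavelength                      # no contention
    if converters_free > 0:
        for w in range(num_wavelengths):               # try any idle wavelength
            if w not in busy_wavelengths:
                return w                               # converted and forwarded
    return None                                        # contention loss

print(resolve_contention(0, {0, 1}, 4, converters_free=1))        # 2 (converted)
print(resolve_contention(0, {0, 1, 2, 3}, 4, converters_free=1))  # None (dropped)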

Relevance:

30.00%

Publisher:

Abstract:

In this work, focus measures based on local binary patterns are presented. Local binary patterns (LBP) have been introduced in computer vision tasks such as texture classification and face recognition. In applications where recognition is based on LBP, a computational saving can be achieved by also using LBP in the focus measures. The behavior of the proposed measures is studied to test whether they fulfill the properties of focus measures, and a comparison with some well-known focus measures is then carried out in different scenarios.
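A minimal sketch of an LBP-style focus measure, assuming the plain 8-neighbour LBP operator and using the variance of the code image as the focus score; this is an illustrative variant, not necessarily the exact measures proposed in this work.

# 8-neighbour local binary pattern codes and a simple LBP-based focus score.
import numpy as np

def lbp_codes(image):
    # 8-bit LBP code for every interior pixel of a 2-D grayscale image.
    img = np.asarray(image, dtype=float)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << np.uint8(bit)
    return codes

def lbp_focus_measure(image):
    # Illustrative focus score: variance of the LBP code image (higher = sharper).
    return float(np.var(lbp_codes(image)))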

Relevance:

30.00%

Publisher:

Abstract:

The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects are Networks on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs, which is installed in the Sony PlayStation III, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, etc. SoC manufacturers such as ST Microelectronics, Samsung and Philips, as well as universities such as the University of Bologna, M.I.T. and Berkeley, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers in the switch of design methodology and speed up the development of new NoC-based systems on chip. In this thesis we first give an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose: • a detailed simulation-based analysis of the Spidergon NoC, an ST Microelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world. Here we propose a detailed analysis of this NoC topology and its routing algorithms. Furthermore, we propose Equalized, a new routing algorithm designed to optimize the use of the resources of the network while also increasing its performance; • a methodology flow based on modified, publicly available tools that, combined, can be used to design, model and analyze any kind of System on Chip; • a detailed analysis of an ST Microelectronics proprietary transport-level protocol that the author of this thesis helped to develop; • a simulation-based comprehensive comparison of different network interface designs proposed by the author and the researchers at the AST lab, in order to integrate shared-memory and message-passing based components on a single System on Chip; • a powerful and flexible solution to address the timing closure exception issue in the design of synchronous Networks on Chip. Our solution is based on relay-station repeaters and allows the power and area demands of NoC interconnects to be reduced while also reducing their buffer needs; • a solution to simplify the design of NoCs while also increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual-channel-based routers with multiple, flexible, small Multi Plane ones. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when resources are scarce. This thesis has been written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
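For illustration, the sketch below implements an across-first style of shortest-path routing on a Spidergon-like topology (a ring of N nodes with across links to the diametrically opposite node); the function name, the N/4 threshold and the tie-breaking rule are assumptions made for this example, not the algorithms analysed in the thesis.

# Across-first next-hop choice on an N-node Spidergon-like topology (N even).
def spidergon_next_hop(current, destination, n):
    # Next node on a path from current to destination, nodes numbered 0..n-1.
    if current == destination:
        return current
    clockwise = (destination - current) % n          # hops going clockwise
    counter = (current - destination) % n            # hops going counter-clockwise
    if min(clockwise, counter) > n // 4:
        return (current + n // 2) % n                # take the across link first
    if clockwise <= counter:
        return (current + 1) % n                     # clockwise ring hop
    return (current - 1) % n                         # counter-clockwise ring hop

# Example: route from node 0 to node 7 on a 16-node topology.
node, dest, path = 0, 7, [0]
while node != dest:
    node = spidergon_next_hop(node, dest, 16)
    path.append(node)
print(path)   # [0, 8, 7]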

Relevance:

30.00%

Publisher:

Abstract:

This doctoral work gains deeper insight into the dynamics of knowledge flows within and across clusters, unfolding their features, directions and strategic implications. Alliances, networks and personnel mobility are acknowledged as the three main channels of inter-firm knowledge flows, thus offering three heterogeneous measures to analyze the phenomenon. The interplay between the three channels and the richness of available research methods have allowed for the elaboration of three different papers and perspectives. The common empirical setting is the IT cluster in Bangalore, chosen for its distinguishing features as a high-tech cluster and for its steady yearly two-digit growth around the service-based business model. The first paper deploys both a firm-level and a tie-level analysis, exploring the cases of 4 domestic companies and 2 MNCs active in the cluster, according to a cluster-based perspective. The distinction between business-domain knowledge and technical knowledge emerges from the qualitative evidence and is further confirmed by quantitative analyses at tie level. At firm level, the degree of specialization seems to influence the kind of knowledge shared, while at tie level both the frequency of interaction and the governance mode prove to determine differences in the distribution of knowledge flows. The second paper zooms out and considers the inter-firm networks; focusing in particular on the role of the cluster boundary, internal and external networks are analyzed in their size, long-term orientation and exploration degree. The research method is purely qualitative and allows for the observation of the evolving strategic role of the internal network: from exploitation-based to exploration-based. Moreover, a causal pattern is emphasized, linking the evolution and features of the external network to the evolution and features of the internal network. The final paper addresses the softer and more micro-level side of knowledge flows: personnel mobility. A social capital perspective is developed here, which considers both employee acquisition and employee loss as building inter-firm ties, thus enhancing the company's overall social capital. Negative binomial regression analyses at dyad level test the significant impact of cluster affiliation (cluster firms vs non-cluster firms), industry affiliation (IT firms vs non-IT firms) and foreign affiliation (MNCs vs domestic firms) in shaping the uneven distribution of personnel mobility, and thus of knowledge flows, among companies.
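A minimal sketch of a dyad-level count model of the kind described above, fitted as a negative binomial GLM on affiliation dummies; the data frame, variable names and specification are synthetic placeholders, not the study's dataset or estimates.

# Dyad-level negative binomial regression on affiliation dummies (synthetic data).
import pandas as pd
import statsmodels.api as sm

dyads = pd.DataFrame({
    "moves":        [0, 2, 5, 1, 0, 3, 7, 1],   # people moving between the two firms
    "same_cluster": [0, 1, 1, 0, 0, 1, 1, 0],
    "both_it":      [1, 1, 1, 0, 0, 1, 1, 0],
    "mnc_involved": [0, 0, 1, 1, 0, 0, 1, 1],
})

X = sm.add_constant(dyads[["same_cluster", "both_it", "mnc_involved"]])
model = sm.GLM(dyads["moves"], X, family=sm.families.NegativeBinomial())
print(model.fit().summary())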

Relevance:

30.00%

Publisher:

Abstract:

This dissertation examines the challenges and limits that graph analysis algorithms encounter on distributed architectures made up of personal computers. In particular, it analyzes the behaviour of the PageRank algorithm as implemented in a popular C++ library for distributed graph analysis, the Parallel Boost Graph Library (Parallel BGL). The results presented here show that the Bulk Synchronous Parallel model of parallel programming is ill-suited to an efficient implementation of PageRank on clusters of personal computers. The implementation analyzed in fact exhibited negative scalability: the execution time of the algorithm increases linearly with the number of processors. These results were obtained by running the Parallel BGL PageRank on a cluster of 43 dual-core PCs with 2 GB of RAM each, using several graphs chosen so as to make it easier to identify the variables that influence scalability. Graphs representing different models gave different results, showing that there is a relationship between the clustering coefficient and the slope of the line representing execution time as a function of the number of processors. For example, Erdős–Rényi graphs, which have a low clustering coefficient, represented the worst case in the PageRank tests, whereas Small-World graphs, which have a high clustering coefficient, represented the best case. The size of the graph also showed a particularly interesting influence on execution time: the relationship between the number of nodes and the number of edges was shown to determine the total running time.
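For reference, the PageRank iteration whose distributed (Parallel BGL) implementation is benchmarked above, written as plain single-machine Python for clarity; the damping factor, iteration count and toy graph are illustrative.

# Plain power-iteration PageRank on an adjacency list.
def pagerank(adjacency, damping=0.85, iterations=50):
    # adjacency: dict mapping each node to the list of its successor nodes.
    nodes = list(adjacency)
    rank = {v: 1.0 / len(nodes) for v in nodes}
    for _ in range(iterations):
        new_rank = {v: (1.0 - damping) / len(nodes) for v in nodes}
        for v, successors in adjacency.items():
            if successors:
                share = damping * rank[v] / len(successors)
                for w in successors:
                    new_rank[w] += share
            else:                                   # dangling node: spread uniformly
                for w in nodes:
                    new_rank[w] += damping * rank[v] / len(nodes)
        rank = new_rank
    return rank

print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))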

Relevance:

30.00%

Publisher:

Abstract:

In this thesis the use of widefield imaging techniques and VLBI observations with a limited number of antennas is explored. I present techniques to efficiently and accurately image extremely large UV datasets. Very large VLBI datasets must be reduced into multiple, smaller datasets if today's imaging algorithms are to be used to image them. I present a procedure for accurately shifting the phase centre of a visibility dataset. This procedure has been thoroughly tested and found to be almost two orders of magnitude more accurate than existing techniques. Errors have been found at the level of one part in 1.1 million. These are unlikely to be measurable except in the very largest UV datasets. Results of a four-station VLBI observation of a field containing multiple sources are presented. A 13-gigapixel image was constructed to search for sources across the entire primary beam of the array by generating over 700 smaller UV datasets. The source 1320+299A was detected and its astrometric position with respect to the calibrator J1329+3154 is presented. Various techniques for phase calibration and imaging across this field are explored, including using the detected source as an in-beam calibrator and peeling of distant confusing sources from VLBI visibility datasets. A range of issues pertaining to wide-field VLBI has been explored, including: parameterising the wide-field performance of VLBI arrays; estimating the sensitivity across the primary beam for both homogeneous and heterogeneous arrays; applying techniques such as mosaicing and primary beam correction to VLBI observations; quantifying the effects of time-average and bandwidth smearing; and calibration and imaging of wide-field VLBI datasets. The performance of a computer cluster at the Istituto di Radioastronomia in Bologna has been characterised with regard to its ability to correlate using the DiFX software correlator. Using existing software it was possible to characterise the network speed, particularly for MPI applications. The capabilities of the DiFX software correlator, running on this cluster, were measured for a range of observation parameters and were shown to be commensurate with the generic performance parameters measured. The feasibility of an Italian VLBI array has been explored, with discussion of the infrastructure required, the performance of such an array, possible collaborations, and the science which could be achieved. Results from a 22 GHz calibrator survey are also presented: 21 out of 33 sources were detected on a single baseline between two Italian antennas (Medicina to Noto). The results and discussions presented in this thesis suggest that wide-field VLBI is a technique whose time has finally come. Prospects for exciting new science are discussed in the final chapter.
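A minimal sketch of the core operation in shifting a phase centre, rotating each visibility by the geometric phase of the direction-cosine offset (dl, dm); sign conventions differ between packages, and the accurate procedure developed in the thesis involves more than this single rotation.

# Phase-rotate visibilities to a new phase centre offset by (dl, dm).
import numpy as np

def shift_phase_centre(vis, u, v, w, dl, dm):
    # vis, u, v, w: arrays, with (u, v, w) in wavelengths; dl, dm: offsets in
    # direction cosines relative to the original phase centre.
    dn = np.sqrt(1.0 - dl**2 - dm**2) - 1.0
    phase = -2.0j * np.pi * (u * dl + v * dm + w * dn)
    return vis * np.exp(phase)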

Relevance:

30.00%

Publisher:

Abstract:

The term Ambient Intelligence (AmI) refers to a vision of the future of the information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments, improving the quality of life of the occupants and enhancing the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors are embedded in the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). WSNs promise to revolutionize the interaction between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce the interference on the physical phenomena sensed and allows easy and low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth that must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before it is transmitted due to the scarce bandwidth capability of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which has the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must however be carefully minimized. Imaging could therefore play, and already plays, an important role in sensing devices for ambient intelligence.
Computer vision can for instance be used for recognising persons and objects and for recognising behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. In real time, these approaches are a recently opened field of research. In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics which are outlined below. Although the design of the sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered, among them: • a small form factor to reduce node intrusiveness; • low power consumption to reduce battery size and extend node lifetime; • low cost for widespread diffusion. These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, with which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low-Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low-Power Video Sensor Nodes and Video Processing Algorithms: in comparison to scalar sensors, such as temperature, pressure, humidity, velocity, and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely a sophisticated classification algorithm and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first one is based on a low-cost, low-power FPGA+microcontroller system-on-chip; the second one is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate in a continuous fashion with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented.
Featuring such intelligence, these nodes would be able to cope with tasks such as the recognition of unattended bags in airports or persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal surveillance: in several setups the use of wired video cameras may not be possible. For this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community. Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. Such an approach has been shown to extend node lifetime and may result in continuous operation of the node. Being low cost, passive (thus low power) and presenting a limited form factor, PIR sensors are well suited for WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras and for improving power consumption. We have used an adaptive controller, Model Predictive Control (MPC), to improve the system's performance, outperforming naive power management policies.
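A minimal sketch of the PIR-triggered duty-cycling idea described above, in which the camera and radio stay off until the PIR sensor fires and a simple energy check gates the active window; the power figures, thresholds and function names are illustrative assumptions, not the node's firmware.

# One iteration of a PIR-gated duty-cycling loop (placeholder driver callables).
import time

IDLE_POWER_MW, ACTIVE_POWER_MW = 5.0, 500.0      # illustrative power figures

def run_node_once(pir_triggered, battery_mwh, capture_and_classify,
                  active_window_s=10, min_energy_mwh=50):
    # Returns a rough estimate of the remaining energy after this iteration.
    if not pir_triggered() or battery_mwh < min_energy_mwh:
        time.sleep(1)                             # stay in low-power idle
        return battery_mwh - IDLE_POWER_MW / 3600.0
    deadline = time.time() + active_window_s
    while time.time() < deadline:                 # camera + radio powered
        capture_and_classify()                    # e.g. SVM-based human detection
    return battery_mwh - ACTIVE_POWER_MW * active_window_s / 3600.0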

Relevance:

30.00%

Publisher:

Abstract:

Development aid involves a complex network of numerous and extremely heterogeneous actors. Nevertheless, all actors seem to speak the same 'development jargon' and to display a congruence that extends from the donor, via the professional consultant, to the village chief. And although the ideas about what counts as 'good' and 'bad' aid have constantly changed over time, with new paradigms and policies sprouting every few years, the apparent congruence between actors remains more or less unchanged. How can this be explained? Is it a strategy of all actors to get into the pocket of the donor, or are the social dynamics in development aid more complex? When a new development paradigm appears, where does it come from and how does it gain support? Is this support really homogeneous? To answer these questions, a multi-sited ethnography was conducted in the sector of water-related development aid, with a focus on three paradigms that are currently hegemonic in this sector: Integrated Water Resources Management, Capacity Building, and Adaptation to Climate Change. The sites of inquiry were the headquarters of a multilateral organization, the headquarters of a development NGO, and the Inner Niger Delta in Mali. The research shows that paradigm shifts do not happen overnight but that new paradigms have long lines of descent. Moreover, they require a lot of work from actors in order to become hegemonic; the actors need to create a tight network of support. Each actor, however, interprets the paradigms in a slightly different way, depending on their position in the network. They implant their own interests in their interpretation of the paradigm (the actors 'translate' their interests), regardless of whether they are the donor, a mediator, or the aid recipient. These translations are necessary to cement and reproduce the network.

Relevance:

30.00%

Publisher:

Abstract:

Deep-focus earthquakes, which occur in the upper mantle at depths of around 400 km, are usually linked to the pressure-dependent polymorphic phase transition from olivine (α-phase) to spinel (β-phase) that takes place at the same depth. It is still unclear, however, how the phase transition relates to the mechanical failure of the mantle material. At present, essentially two models are discussed, which hold responsible either the microstructures produced by the phase transition or the rheological changes that the transition induces in the mantle rock. Because the natural material is inaccessible, investigations of the olivine→spinel transformation are entirely restricted to theoretical considerations, high-pressure experiments and numerical simulations. The central aim of this dissertation was to develop a working computer model for simulating the microstructures produced by the phase transition. The computer model was then applied to investigate the microstructural evolution of spinel grains and the parameters that control it. The thesis is therefore divided into two parts: the first part (Chapters 2 and 3) covers the physical laws and the basic operation of the computer model, which is based on combining equations for the kinetic reaction rate with laws of non-equilibrium thermodynamics under non-hydrostatic conditions. The computer model extends a spring network of the software latte from the program package elle. The most important parameter is the normal stress on the grain surface of spinel. In addition, the program takes the latent heat of the reaction, the surface energy and the low viscosity of mantle material into account as further essential parameters in the calculation of the reaction kinetics. The growth behaviour and the fractal dimension of simulated spinel grains are in good agreement with spinel structures from high-pressure experiments. In the second part of the thesis the computer model is applied to examine the evolution of the surface structure of spinel grains under various conditions. The so-called 'anticrack theory of faulting', which explains the catastrophic course of the olivine→spinel transformation in olivine-bearing material under differential stress through stress concentrations, was examined with the computer model. The corresponding mechanism could not be confirmed. Instead, surface structures that resemble anticracks can be explained by impurities in the material (Chapter 4). A series of simulations was devoted to deriving the most important parameters controlling the reaction in monomineralic olivine (Chapters 5 and 6). The principal influences on the grain shape of spinel turned out to be the principal normal stresses on the system, heterogeneities in the host mineral and the viscosity. Subsequently, the nucleation and growth of spinel in polymineralic mineral assemblages were investigated (Chapter 7). The reaction rate of the olivine→spinel transformation and the development of spinel networks and clusters are considerably accelerated by the presence of non-reactive minerals such as garnet or pyroxene. The formation of spinel networks has the potential to significantly affect the mechanical properties of mantle rock, whether through the formation of potential shear zones or through framework building. This localization of spinel growth in mantle rocks may therefore offer a new explanation for deep-focus earthquakes.
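For orientation, a hedged sketch of the kind of rate law such a model typically combines, interface-controlled growth kinetics with a driving force that includes the normal stress on the grain surface (the key parameter noted above) and a surface-energy contribution; the exact equations used in the thesis may differ.

% Illustrative form only: growth rate from transition-state kinetics, with a
% reaction driving force that contains chemical, normal-stress and surface terms.
\dot{x} \;\propto\; \exp\!\left(-\frac{\Delta H_a}{RT}\right)
        \left[\,1 - \exp\!\left(-\frac{\Delta G_r}{RT}\right)\right],
\qquad
\Delta G_r \;=\; \Delta G_{\mathrm{chem}}(P,T) \;+\; \sigma_n\,\Delta V \;+\; \gamma\,\frac{\mathrm{d}A}{\mathrm{d}n}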

Relevance:

30.00%

Publisher:

Abstract:

Information processing and storage in the brain may be represented by oscillations and cell assemblies. Here we address the question of how individual neurons associate to form neural networks and exhibit spontaneous electrical activity. To this end, we dissected the neonatal brain at three different levels: acute 1-mm-thick brain slices, cultured organotypic 350-µm-thick brain slices and dissociated neuronal cultures. The spatio-temporal properties of neural activity were investigated using 60-channel micro-electrode arrays (MEAs), and the cell assemblies were studied using a template-matching method. We find local non-propagating as well as large-scale propagating spontaneous oscillatory activity in acute slices, spontaneous network activity characterized by synchronized burst discharges in organotypic cultured slices, and autonomous bursting behaviour in dissociated neuronal cultures. Furthermore, repetitive spike patterns emerge after one week in dissociated neuronal cultures and dramatically increase in number, complexity and occurrence in the second week. Our data indicate that neurons can self-organize, assemble into neural networks, exhibit spontaneous oscillations, and give rise to spatio-temporal activation patterns. The spontaneous oscillations and repetitive spike patterns may serve fundamental functions for information processing and storage in the brain.
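A minimal sketch of template matching on binned multi-electrode data, counting exact occurrences of a candidate spatio-temporal pattern; the binary binning, the exact-match criterion and the toy arrays are illustrative assumptions, not the template-matching method used in the study.

# Count exact occurrences of a spatio-temporal template in binned spike data.
import numpy as np

def count_template_matches(binned_spikes, template):
    # binned_spikes, template: 2-D 0/1 arrays shaped (channels, time bins).
    channels, t_len = template.shape
    matches = 0
    for start in range(binned_spikes.shape[1] - t_len + 1):
        window = binned_spikes[:channels, start:start + t_len]
        if np.array_equal(window, template):
            matches += 1
    return matches

activity = np.array([[0, 1, 0, 1, 0, 1, 0],
                     [1, 0, 1, 0, 1, 0, 1]])
template = np.array([[1, 0],
                     [0, 1]])
print(count_template_matches(activity, template))   # 3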

Relevance:

30.00%

Publisher:

Abstract:

Mainstream hardware is becoming parallel, heterogeneous, and distributed, on every desk, in every home and in every pocket. As a consequence, in recent years software has been taking an epochal turn toward concurrency, distribution and interaction, pushed by the evolution of hardware architectures and the growing availability of networks. This calls for introducing further abstraction layers on top of those provided by classical mainstream programming paradigms, to tackle more effectively the new complexities that developers face in everyday programming. A convergence is recognizable in the mainstream toward the adoption of the actor paradigm as a means to unite object-oriented programming and concurrency. Nevertheless, we argue that the actor paradigm can only be considered a good starting point for a more comprehensive response to such a fundamental and radical change in software development. Accordingly, the main objective of this thesis is to propose Agent-Oriented Programming (AOP) as a high-level, general-purpose programming paradigm, a natural evolution of actors and objects, introducing a further level of human-inspired concepts for programming software systems, meant to simplify the design and programming of concurrent, distributed, reactive/interactive programs. To this end, in the dissertation we first construct the required background by studying the state of the art of both actor-oriented and agent-oriented programming, and then focus on the engineering of integrated programming technologies for developing agent-based systems in their classical application domains: artificial intelligence and distributed artificial intelligence. We then shift the perspective, moving from the development of intelligent software systems toward general-purpose software development. Using the expertise gained during the background phase, we introduce a general-purpose programming language named simpAL, which is rooted in general principles and practices of software development and, at the same time, provides an agent-oriented level of abstraction for the engineering of general-purpose software systems.
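For illustration, a minimal actor in plain Python, showing the starting point (a private mailbox plus one-message-at-a-time processing) on top of which the agent-oriented abstractions are proposed; this is not simpAL code and it omits the higher-level, human-inspired concepts that the thesis introduces.

# A minimal actor: private mailbox, messages processed one at a time, no shared state.
import queue
import threading
import time

class Actor:
    def __init__(self):
        self._mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        self._mailbox.put(message)          # asynchronous, non-blocking send

    def _run(self):
        while True:
            self.receive(self._mailbox.get())

    def receive(self, message):
        raise NotImplementedError

class Printer(Actor):
    def receive(self, message):
        print("got:", message)

p = Printer()
p.send("hello")
p.send("world")
time.sleep(0.1)                             # let the daemon thread drain the mailbox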