901 results for Computer network resources
Abstract:
This directory provides information related to the Iowa Computer-Assisted Network (ICAN) sites and their associated libraries. ICAN is an interlibrary loan network which uses IBM-type computers to generate and fill requests for library resources.
Abstract:
Understanding the dynamics of blood cells is crucial for discovering biological mechanisms, developing new efficient drugs, designing sophisticated microfluidic devices, and improving diagnostics. In this work, we focus on the dynamics of red blood cells in microvascular flow. Microvascular blood flow resistance has a strong impact on cardiovascular function and tissue perfusion. The flow resistance in microcirculation is governed by the flow behavior of blood through a complex network of vessels, where the distribution of red blood cells across vessel cross-sections may be significantly distorted at vessel bifurcations and junctions. We investigate the development of blood flow and its resistance, starting from a dispersed configuration of red blood cells, in simulations for different hematocrits, flow rates, vessel diameters, and aggregation interactions between red blood cells. Initially dispersed red blood cells migrate toward the vessel center, leading to the formation of a cell-free layer near the wall and to a decrease of the flow resistance. The development of the cell-free layer appears to be nearly universal when scaled with a characteristic shear rate of the flow, which allows an estimation of the length of a vessel required for full flow development, $l_c \approx 25D$, with vessel diameter $D$. Thus, the potential effect of red blood cell dispersion at vessel bifurcations and junctions on the flow resistance may be significant in vessels which are shorter than or comparable to the length $l_c$. The presence of aggregation interactions between red blood cells generally leads to a reduction of blood flow resistance. The development of the cell-free-layer thickness looks similar for the cases with and without aggregation interactions, although attractive interactions result in larger cell-free-layer plateau values. However, because the aggregation forces are short-ranged, at high enough shear rates ($\bar{\dot{\gamma}} \gtrsim 50~\text{s}^{-1}$) aggregation of red blood cells does not bring a significant change to the blood flow properties. We also develop a simple theoretical model which is able to describe the converged cell-free-layer thickness with respect to flow rate, assuming steady-state flow. The model is based on the balance between a lift force on red blood cells, due to cell-wall hydrodynamic interactions, and a shear-induced effective pressure, due to cell-cell interactions in flow. We expect that these results can also be used to better understand the flow behavior of other suspensions of deformable particles such as vesicles, capsules, and cells. Finally, we investigate segregation phenomena in blood as a two-component suspension under Poiseuille flow, consisting of red blood cells and target cells. The spatial distribution of particles in blood flow is very important; for example, in the case of nanoparticle drug delivery, the particles need to come close to the microvessel walls in order to adhere and bring the drug to a target position within the microvasculature. Here we consider that segregation can be described as a competition between shear-induced diffusion and the lift force that pushes every soft particle in a flow away from the wall. To investigate the segregation we use, on the one hand, 2D DPD simulations of red blood cells and target cells of different sizes, and, on the other hand, the Fokker-Planck equation for the steady state. For the equation we measure the force profile, particle distribution, and diffusion constant across the channel.
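A minimal sketch of this balance (with notation assumed here rather than taken from the abstract: $n(y)$ the target-cell concentration across the channel, $D(y)$ the shear-induced diffusivity, and $v_L(y)$ the lift velocity away from the wall): in steady state the wall-normal flux vanishes,
$$ J(y) = -D(y)\,\frac{\partial n}{\partial y} + v_L(y)\,n(y) = 0 \quad\Longrightarrow\quad n(y) \propto \exp\!\left(\int_0^{y} \frac{v_L(y')}{D(y')}\,dy'\right), $$
so a strong lift relative to diffusion concentrates cells near the center, while a large shear-induced diffusivity, which grows with hematocrit and shear rate, flattens the profile and lets smaller target cells be displaced toward the walls.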
We compare simulation results with those from the Fokker-Planck equation and find very good correspondence between the two approaches. Moreover, we investigate the diffusion behavior of target particles for different hematocrit values and shear rates. Our simulation results indicate that the diffusion constant increases with increasing hematocrit and depends linearly on the shear rate. The third part of the study describes the development of a simulation model of complex vascular geometries. This model is important for reproducing the vascular systems of small pieces of tissue, which may be obtained from MRI or microscope images. The simulation model of complex vascular systems can be divided into three parts: modeling the geometry, developing in- and outflow boundary conditions, and decomposing the simulation domain for efficient computation. We have found that for the in- and outflow boundary conditions it is better to use an SDPD fluid than a DPD one, because of the density fluctuations of the latter along the channel. During flow in a straight channel, it is difficult to control the density of the DPD fluid. The SDPD fluid does not have this shortcoming, even in more complex channels with many branches and in- and outflows, because the force acting on the particles also depends on the local density of the fluid.
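As a sketch of why this helps (these are the standard SDPD/SPH relations, assumed here rather than quoted from the thesis): each SDPD particle interpolates its own density, $\rho_i = \sum_j m_j W(r_{ij})$, and the conservative force follows from an equation of state $p(\rho)$,
$$ \mathbf{F}_{ij}^{\mathrm{cons}} = -\,m_i m_j\left(\frac{p_i}{\rho_i^2} + \frac{p_j}{\rho_j^2}\right)\frac{\partial W}{\partial r_{ij}}\,\mathbf{e}_{ij}, $$
so any local compression raises $\rho_i$ and hence $p_i$, producing a restoring force. In DPD the conservative force amplitude is a fixed coefficient, independent of the local density, so density fluctuations along branched channels are not corrected.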
Abstract:
Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still surround many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip based processors the network might get congested and the cores might work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to 45X speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, those few cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed on the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from the nominal values. This necessitates efficient calibration techniques to be applied before the sensor values are used. In addition, cores in modern many-core systems support dynamic voltage and frequency scaling. Thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose software-based auto-calibration approach is also proposed to calibrate thermal sensors across a range of voltage levels.
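The thesis's balancing algorithm is not spelled out in the abstract; as a minimal sketch of the dynamic, pull-based idea it describes (all names and the toy fault kernel below are hypothetical), workers fetch fault-simulation jobs from a shared queue, so faster cores naturally take on more work than slower or congested ones:

import multiprocessing as mp

def simulate_fault(fault_id):
    # Stand-in for the per-fault simulation kernel (hypothetical).
    return fault_id % 7 == 0          # pretend: True means fault detected

def worker(jobs, results):
    # Each worker pulls the next fault as soon as it is free, so load
    # balances itself across cores running at different speeds.
    while True:
        fault_id = jobs.get()
        if fault_id is None:          # sentinel: no more work
            break
        results.put((fault_id, simulate_fault(fault_id)))

if __name__ == "__main__":
    n_workers, n_faults = 4, 1000
    jobs, results = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(jobs, results))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    for f in range(n_faults):
        jobs.put(f)
    for _ in procs:
        jobs.put(None)                # one sentinel per worker
    detected = sum(results.get()[1] for _ in range(n_faults))
    for p in procs:
        p.join()
    print(f"{detected} of {n_faults} faults detected")

On the SCC itself the queue would be built on the chip's on-die message passing rather than OS-level shared memory, but the pull-based pattern is the load-balancing mechanism.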
Abstract:
In today's big data world, data is being produced in massive volumes, at great velocity, and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks, and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership, and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage, and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built, and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has been traditionally used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
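To illustrate the neighborhood-centric model (this is not NSCALE's actual API, which the abstract does not give), user code receives each multi-hop neighborhood as a subgraph and operates on it directly; a toy ego-network analysis with networkx:

import networkx as nx

# Toy graph standing in for a large social network.
G = nx.karate_club_graph()

def analyze(ego, subgraph):
    # Neighborhood-level user logic sees a whole subgraph, not the
    # state of a single vertex: e.g., density of the 2-hop ego network.
    return nx.density(subgraph)

# Declaratively select the subgraphs of interest (2-hop neighborhoods
# of high-degree vertices), then run the program on each of them.
egos = [v for v, d in G.degree() if d >= 10]
results = {v: analyze(v, nx.ego_graph(G, v, radius=2)) for v in egos}
print(results)

In a vertex-centric framework the same computation would need several message-passing rounds just to assemble each neighborhood, which is precisely the overhead NSCALE's subgraph extraction avoids.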
Abstract:
The Internet of Things (IoT) is still in its infancy and has attracted much interest in many industrial sectors including medical fields, logistics tracking, smart cities, and automobiles. However, as a paradigm, it is susceptible to a range of significant intrusion threats. This paper presents a threat analysis of the IoT and uses an Artificial Neural Network (ANN) to combat these threats. A multi-layer perceptron, a type of supervised ANN, is trained using internet packet traces, then assessed on its ability to thwart Distributed Denial of Service (DDoS/DoS) attacks. This paper focuses on the classification of normal and threat patterns on an IoT network. The ANN procedure is validated against a simulated IoT network. The experimental results demonstrate 99.4% accuracy, with various DDoS/DoS attacks successfully detected.
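As a minimal sketch of such a classifier (the features and data below are invented; the paper's actual packet-trace features are not listed in the abstract), a multi-layer perceptron trained on synthetic per-flow statistics to separate benign traffic from DDoS floods:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic per-flow features: [packets/s, mean packet size, distinct ports].
normal = rng.normal([50, 500, 3], [20, 150, 2], size=(500, 3))
attack = rng.normal([5000, 100, 40], [1500, 40, 10], size=(500, 3))
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 500)        # 0 = benign, 1 = DDoS/DoS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("test accuracy:", clf.score(scaler.transform(X_te), y_te))

The headline 99.4% figure comes from the paper's own traces and simulated IoT network, not from a toy setup like this one.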
Abstract:
A casual study of the hydrological map of Uganda would convince every serious fisherman and fish-eater that he is most favoured to be in Uganda. The extent and distribution of the country's aquatic system, plus the rich variety of fish species therein, promise a fishery potential of considerable magnitude. The open waterways, comprising the Uganda portions of Lakes Victoria, Albert, and Edward; Lakes Kyoga and George; plus the minor lakes Wamala, Kijanebarora, Mutanda, etc., occupy about 15% of the total surface area (91,000 mi²; Depart. Land Survey, 1962). Most of the nation's fish supplies currently come from this source. 1.2. A rich network of permanent and seasonal rivers and streams, filling and/or emptying various water systems, covers most of Uganda. This aquatic network is associated with a fish fauna whose immense significance as a source of protein is perhaps better appreciated by the local subsistence fisherman and consumer than by the fisheries scientist and manager in this country. Many species of this fish fauna have strong affinities with the open water systems, while some are typically riverine. 1.3. Then there are wetlands, composed mainly of expanses of swamp but including some areas of bog. These cover about 2% of the country. While the variety of fish fauna found here is limited by the rather hostile nature of the environment (comparatively de-oxygenated under a canopy of dense stands of emergent vegetation), several specialised fishes, e.g. Clarias spp. and Protopterus aethiopicus (Kamongo), occur here. The availability of permanent and seasonal sources of water, well distributed throughout most areas of Uganda, opens up immense potential for a variety of aquaculture practices. However, while active exploitation of much of these fishery resources is currently underway, important questions regarding the magnitudes of the various resource potentials and dynamics, and about suitable levels and modes of exploitation, are yet unanswered. These gaps in knowledge about the fishery resources of Uganda would hinder the formulation of adequate development and management schemes. This short paper examines some of the above problems and suggests some approaches towards balanced exploitation and management of the fisheries of Uganda.
Abstract:
Securing e-health applications in the context of the Internet of Things (IoT) is challenging. Indeed, resource scarcity in such environments hinders the implementation of existing standard-based protocols. Among these protocols, MIKEY (Multimedia Internet KEYing) aims at establishing security credentials between two communicating entities. However, the existing MIKEY modes fail to meet IoT specificities. In particular, the pre-shared key mode is energy efficient, but suffers from severe scalability issues. On the other hand, asymmetric modes such as the public key mode are scalable, but are highly resource consuming. To address this issue, we combine two previously proposed approaches to introduce a new hybrid MIKEY mode. Relying on a cooperative approach, a set of third parties is used to offload the heavy computational operations from the constrained nodes. In this way, the pre-shared key mode is used in the constrained part of the network, while the public key mode is used in the unconstrained part. Preliminary results show that our proposed mode is energy preserving while its security properties are kept intact.
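The abstract does not detail the message flow; as a loose illustration of the hybrid idea only (this is not MIKEY, and every key and role below is invented), the constrained node performs nothing but symmetric operations against its pre-shared key, while a third party carries out the expensive asymmetric exchange on its behalf:

import hashlib
import hmac
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Unconstrained part: the third party and the remote server run the
# costly public-key exchange (X25519 here) in place of the node.
tp_priv = X25519PrivateKey.generate()
srv_priv = X25519PrivateKey.generate()
shared = tp_priv.exchange(srv_priv.public_key())
assert shared == srv_priv.exchange(tp_priv.public_key())

# Constrained part: the node holds only a pre-shared key with the third
# party and derives the session key with a single HMAC computation.
psk = b"node-tp-preinstalled-key"           # pre-shared key (hypothetical)
session_key = hmac.new(psk, shared, hashlib.sha256).digest()
print("session key:", session_key.hex())

In a real deployment the third party would forward the exchange result to the node over the PSK-protected channel; here that hand-off is elided to keep the sketch short.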
Abstract:
The biological immune system is a robust, complex, adaptive system that defends the body from foreign pathogens. It is able to categorize all cells (or molecules) within the body as self-cells or non-self cells. It does this with the help of a distributed task force that has the intelligence to take action from a local and also a global perspective, using its network of chemical messengers for communication. There are two major branches of the immune system. The innate immune system is an unchanging mechanism that detects and destroys certain invading organisms, whilst the adaptive immune system responds to previously unknown foreign cells and builds a response to them that can remain in the body over a long period of time. This remarkable information-processing biological system has caught the attention of computer science in recent years. A novel computational intelligence technique, inspired by immunology, has emerged, called Artificial Immune Systems. Several concepts from the immune system have been extracted and applied to the solution of real-world science and engineering problems. In this tutorial, we briefly describe the immune system metaphors that are relevant to existing Artificial Immune Systems methods. We then show illustrative real-world problems suitable for Artificial Immune Systems and give a step-by-step algorithm walkthrough for one such problem. A comparison of Artificial Immune Systems to other well-known algorithms, areas for future work, tips & tricks, and a list of resources round this tutorial off. It should be noted that as Artificial Immune Systems is still a young and evolving field, there is not yet a fixed algorithm template, and hence actual implementations might differ somewhat from time to time and from the examples given here.
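One classic AIS algorithm of the kind walked through in such tutorials is negative selection: generate random detectors, censor any that match "self" samples, and let the survivors flag non-self data. A minimal sketch (the radius, data, and geometry below are invented for illustration):

import numpy as np

rng = np.random.default_rng(1)
SELF_RADIUS = 0.12                    # match threshold (made up)

# "Self" samples: normal behaviour, here points near the unit-square center.
self_set = rng.normal(0.5, 0.08, size=(200, 2))

# Censoring: keep only random detectors that match no self sample.
detectors = []
while len(detectors) < 50:
    d = rng.uniform(0, 1, size=2)
    if np.min(np.linalg.norm(self_set - d, axis=1)) > SELF_RADIUS:
        detectors.append(d)
detectors = np.array(detectors)

def is_nonself(x):
    # An item is flagged when any surviving detector lies within range.
    return bool(np.min(np.linalg.norm(detectors - x, axis=1)) <= SELF_RADIUS)

print(is_nonself(np.array([0.50, 0.50])))   # near self: expected False
print(is_nonself(np.array([0.95, 0.05])))   # far from self: likely True

This self/non-self discrimination is what the tutorial's immune metaphors map onto real problems such as anomaly and intrusion detection.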
Abstract:
Network Intrusion Detection Systems (NIDS) are computer systems which monitor a network with the aim of discerning malicious from benign activity on that network. While a wide range of approaches have met varying levels of success, most IDSs rely on having access to a database of known attack signatures which are written by security experts. Nowadays, in order to solve problems with false positive alerts, correlation algorithms are used to add additional structure to sequences of IDS alerts. However, such techniques are of no help in discovering novel attacks or variations of known attacks, something the human immune system (HIS) is capable of doing in its own specialised domain. This paper presents a novel immune algorithm for application to the IDS problem. The goal is to discover packets containing novel variations of attacks covered by an existing signature base.
Abstract:
Over the last decade, the success of social networks has significantly reshaped how people consume information. Recommendation of content based on user profiles is well received. However, as users become dominantly mobile, little is done to consider the impacts of the wireless environment, especially capacity constraints and changing channel conditions. In this dissertation, we investigate a centralized wireless content delivery system, aiming to optimize overall user experience given the capacity constraints of the wireless networks, by deciding what contents to deliver, when, and how. We propose a scheduling framework that incorporates content-based reward and deliverability. Our approach utilizes the broadcast nature of wireless communication and the social nature of content through multicasting and precaching. Results indicate this novel joint optimization approach outperforms existing layered systems that separate recommendation and delivery, especially when the wireless network is operating at maximum capacity. Utilizing a limited number of transmission modes, we significantly reduce the complexity of the optimization. We also introduce the design of a hybrid system to handle transmissions for both system-recommended contents ('push') and active user requests ('pull'). Further, we extend the joint optimization framework to a wireless infrastructure with multiple base stations. The problem becomes much harder in that there are many more system configurations, including but not limited to power allocation and how resources are shared among the base stations ('out-of-band', in which base stations transmit with dedicated spectrum resources and thus no interference; and 'in-band', in which they share the spectrum and need to mitigate interference). We propose a scalable two-phase scheduling framework: 1) each base station obtains delivery decisions and resource allocation individually; 2) the system consolidates the decisions and allocations, reducing redundant transmissions. Additionally, if social network applications could provide predictions of how social contents disseminate, the wireless networks could schedule the transmissions accordingly and significantly improve dissemination performance by reducing delivery delay. We propose a novel method utilizing: 1) hybrid systems to handle active disseminating requests; and 2) predictions of dissemination dynamics from the social network applications. This method can mitigate the performance degradation of content dissemination due to wireless delivery delay. Results indicate that our proposed system design is both efficient and easy to implement.
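A toy sketch of the single-cell scheduling core (rewards, costs, and the capacity budget are invented; the dissertation's formulation also covers deliverability, precaching, and channel state): within a frame's capacity budget, greedily multicast the contents with the highest reward per unit of capacity, since one multicast transmission serves every interested user at once:

# Greedy knapsack-style selection of contents to multicast in one frame.
contents = [                      # (name, content-based reward, capacity cost)
    ("news_clip", 40.0, 5.0),
    ("viral_video", 90.0, 20.0),
    ("podcast", 25.0, 8.0),
    ("live_score", 30.0, 2.0),
]
CAPACITY = 25.0                   # frame capacity budget (made up)

schedule, used = [], 0.0
for name, reward, cost in sorted(contents, key=lambda c: c[1] / c[2],
                                 reverse=True):
    if used + cost <= CAPACITY:   # highest reward-per-capacity first
        schedule.append(name)
        used += cost
print(schedule, f"capacity used: {used}/{CAPACITY}")

The reward of a multicast is summed over all interested users while its capacity cost is paid once, which is what lets the joint recommendation-plus-delivery optimization beat layered designs near capacity.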
Abstract:
The need for high temporal and spatial resolution precipitation data for hydrological analyses has been discussed in several studies. Although rain gauges provide valuable information, a very dense rain gauge network is costly. As a result, several new ideas have emerged to help estimate areal rainfall with higher temporal and spatial resolution. Rabiei et al. (2013) observed that moving cars, called RainCars (RCs), can potentially be a new source of data for measuring rainfall amounts. The optical sensors used in that study are designed for operating the windscreen wipers and showed promising results for rainfall measurement purposes. Their measurement accuracy has been quantified in laboratory experiments. Explicitly considering those errors, the main objective of this study is to investigate the benefit of using RCs for estimating areal rainfall. For that, computer experiments are carried out in which radar rainfall is considered as the reference and the other sources of data, i.e. RCs and rain gauges, are extracted from the radar data. Comparing the quality of areal rainfall estimation by RCs with that by rain gauges and with the reference data helps to quantify the benefit of the RCs. The value of this additional data source is assessed not only for areal rainfall estimation performance, but also for use in hydrological modeling. The results show that RCs with the measurement errors derived from the laboratory experiments provide useful additional information for areal rainfall estimation as well as for hydrological modeling. Even when uncertainties higher than those obtained in the laboratory are assumed for the RCs, their use remains practical up to a certain level.
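A minimal sketch of the kind of areal estimate involved (inverse-distance weighting; the study's actual interpolation method is not stated in the abstract, and all coordinates and values below are invented), combining a few fixed gauges with many noisier moving-car readings:

import numpy as np

rng = np.random.default_rng(2)

# Observations: (x, y, rain mm/h). Gauges are few but accurate;
# RainCars are many but carry extra sensor noise.
gauges = np.array([[0.2, 0.3, 5.1], [0.8, 0.7, 3.4]])
cars_xy = rng.uniform(0, 1, size=(30, 2))
true_field = lambda p: 4.0 + 2.0 * p[:, 0] - 1.5 * p[:, 1]   # synthetic truth
cars = np.column_stack([cars_xy, true_field(cars_xy) + rng.normal(0, 0.8, 30)])
obs = np.vstack([gauges, cars])

def idw(grid_pt, obs, power=2.0):
    # Inverse-distance-weighted estimate at one grid point.
    d = np.linalg.norm(obs[:, :2] - grid_pt, axis=1) + 1e-9
    w = d ** -power
    return np.sum(w * obs[:, 2]) / np.sum(w)

# Areal rainfall: average the interpolated field over a coarse grid.
grid = np.array([[x, y] for x in np.linspace(0, 1, 10)
                 for y in np.linspace(0, 1, 10)])
areal = np.mean([idw(p, obs) for p in grid])
print(f"areal rainfall estimate: {areal:.2f} mm/h")

Down-weighting RC observations according to their laboratory-derived error variance would be the natural refinement suggested by the abstract.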
Abstract:
A combined Short-Term Learning (STL) and Long-Term Learning (LTL) approach to solving mobile robot navigation problems is presented and tested in both real and simulated environments. The LTL consists of rapid simulations that use a Genetic Algorithm to derive diverse sets of behaviours. These sets are then transferred to an idiotypic Artificial Immune System (AIS), which forms the STL phase, and the system is said to be seeded. The combined LTL-STL approach is compared with using STL only, and with using a hand-designed controller. In addition, the STL phase is tested with the idiotypic mechanism turned off. The results provide substantial evidence that the best option is the seeded idiotypic system, i.e. the architecture that merges LTL with an idiotypic AIS for the STL. They also show that structurally different environments can be used for the two phases without compromising transferability.
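A compact sketch of the long-term-learning phase (the fitness function and behaviour encoding are invented; the paper's actual representation is not given in the abstract): a GA evolves a population of behaviour parameter vectors in cheap simulation, and the final diverse set seeds the short-term learner:

import numpy as np

rng = np.random.default_rng(3)

def fitness(b):
    # Hypothetical simulated navigation score for a behaviour vector.
    return -np.sum((b - np.array([0.7, 0.2, 0.5])) ** 2)

pop = rng.uniform(0, 1, size=(40, 3))        # 40 behaviours, 3 parameters
for _ in range(100):                         # rapid simulated generations
    scores = np.array([fitness(b) for b in pop])
    parents = pop[np.argsort(scores)[-20:]]  # truncation selection
    cut = rng.integers(1, 3)                 # one-point crossover
    kids = np.concatenate([parents[:10, :cut], parents[10:, cut:]], axis=1)
    kids += rng.normal(0, 0.05, kids.shape)  # mutation
    pop = np.vstack([parents, kids])

# The evolved, diverse behaviour set then seeds the idiotypic AIS (STL).
seed_set = pop[np.argsort([fitness(b) for b in pop])[-10:]]
print(seed_set.round(2))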
Abstract:
This study examines the services provided by the bookmobile of SINABI-Public Libraries in rural communities visited in Costa Rica during 2009 and 2010, according to the sample selected for the presentation of a proposed Mobile Library Network for Costa Rica. The country has very heterogeneous populations, and the populations in unfavorable geographical areas (rural or urban-fringe areas) and in areas without a library service or cultural institution have specific information needs. Because of their circumstances they cannot exercise the right to information, while urban areas have greater influence and the social advantage of easy access to various information resources. Mobile library services are presented as an ideal tool to deliver library services to any population, mainly those in remote communities and in a vulnerable state, such as rural areas. A bookmobile is defined as any means of transport (buses, trains, boats, motorcycles, animals, etc.) that carries documentary material.