908 results for Dynamic system
Abstract:
When teaching students with visual impairments, educators generally rely on tactile tools to depict visual mathematical topics. Tactile media, such as embossed paper and simple manipulable materials, are typically used to convey graphical information. Although these tools are easy to use and relatively inexpensive, they are solely tactile and are not modifiable. Dynamic and interactive technologies such as pin matrices and haptic pens are also commercially available, but tend to be more expensive and less intuitive. This study aims to bridge the gap between easy-to-use tactile tools and dynamic, interactive technologies in order to facilitate the haptic learning of mathematical concepts. We developed a haptic assistive device using a Tanvas electrostatic touchscreen that provides the user with multimodal (haptic, auditory, and visual) output. This research comprises three methodological steps: 1) a systematic literature review of the state of the art in the design and testing of tactile and haptic assistive devices, 2) a user-centered system design, and 3) testing of the system’s effectiveness via a usability study. The electrostatic touchscreen exhibits promise as an assistive device for displaying visual mathematical elements via the haptic modality.
Abstract:
Shipping noise is a threat to marine wildlife. Grey seals are benthic foragers and thus experience acoustic noise throughout the water column, which makes them a good model species for a case study of the potential impacts of shipping noise. We used ship track data from the Celtic Sea, seal track data, and a coupled ocean-acoustic modelling system to assess the noise exposure of grey seals along their tracks. We found that the animals experience step changes in sound levels of up to ~20 dB at a frequency of 125 Hz, and ~10 dB on average over 10–1000 Hz, when they dive through the thermocline, particularly during summer. Our results showed large seasonal differences in the noise levels experienced by the seals. These results reveal the actual noise exposure experienced by the animals and could help in marine spatial planning.
Abstract:
Deployment of low-power basestations within cellular networks can potentially increase both capacity and coverage. However, such deployments require efficient resource allocation schemes for managing interference from the low-power and macro basestations that are located within each other’s transmission range. In this dissertation, we propose novel and efficient dynamic resource allocation algorithms in the frequency, time, and space domains. We show that the proposed algorithms perform better than the current state-of-the-art resource management algorithms. In the first part of the dissertation, we propose an interference management solution in the frequency domain. We introduce a distributed frequency allocation scheme that shares frequencies between macro and low-power pico basestations, and guarantees a minimum average throughput to users. The scheme seeks to minimize the total number of frequencies needed to honor the minimum throughput requirements. We evaluate our scheme using detailed simulations and show that it performs on par with the centralized optimum allocation. Moreover, our proposed scheme outperforms a static frequency reuse scheme and the centralized optimal partitioning between the macro and pico basestations. In the second part of the dissertation, we propose a time-domain solution to the interference problem. We consider the problem of maximizing the alpha-fairness utility over heterogeneous wireless networks (HetNets) by jointly optimizing the user association, wherein each user is associated with one transmission point (TP) in the network, and the activation fractions of all TPs. The activation fraction of a TP is the fraction of the frame duration for which it is active, and together these fractions influence the interference seen in the network. To address this joint optimization problem, which we show is NP-hard, we propose an alternating optimization based approach wherein the activation fractions and the user association are optimized in an alternating manner. The subproblem of determining the optimal activation fractions is solved using a provably convergent auxiliary function method, while the subproblem of determining the user association is solved via a simple combinatorial algorithm. Meaningful performance guarantees are derived in either case. Simulation results over a practical HetNet topology reveal the superior performance of the proposed algorithms and underscore the significant benefits of the joint optimization. In the final part of the dissertation, we propose a space-domain solution to the interference problem. We consider the problem of maximizing system utility by optimizing over the set of user and TP pairs in each subframe, where each user can be served by multiple TPs. To address this optimization problem, which is NP-hard, we propose a solution scheme based on a difference-of-submodular-functions optimization approach. We evaluate our scheme using detailed simulations and show that it performs on par with a much more computationally demanding difference-of-convex-functions optimization scheme. Moreover, the proposed scheme performs within a reasonable percentage of the optimal solution. We further demonstrate the advantage of the proposed scheme by studying its performance under variations in network topology parameters.
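For reference, the alpha-fairness utility mentioned in the second part is commonly defined as follows (a standard formulation, given for orientation rather than quoted from the dissertation):

```latex
U_\alpha(x) =
\begin{cases}
  \dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \ne 1,\\[6pt]
  \log x, & \alpha = 1,
\end{cases}
```

where x is a user's long-term average throughput; α = 0 recovers sum-throughput maximization, α = 1 corresponds to proportional fairness, and α → ∞ approaches max-min fairness.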
Abstract:
In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs as well as the continuous stream of information produced by the entities in these graphs make them dynamic in nature. Examples include social networks where users post status updates, images, and videos; phone call networks where nodes may send text messages or place phone calls; road traffic networks where the traffic behavior of the road segments changes constantly; and so on. There is tremendous value in storing, managing, and analyzing such dynamic graphs and deriving meaningful insights in real time. However, the majority of the work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on analysis tasks, and the challenges in building efficient systems that can support such tasks at large scale. In this dissertation, I design a unified streaming graph data management framework and develop prototype systems to support increasingly complex tasks on dynamic graphs. In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate, and whether to do eager or lazy replication, in order to minimize network communication and support low-latency querying. In the second part, I study parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhood. I build my system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables sharing of partial aggregates across different queries and also allows partial pre-computation of the aggregates to minimize query latencies and increase throughput. Finally, I extend the framework to support continuous detection and analysis of activity-based subgraphs, where subgraphs can be specified using both graph structure and activity conditions on the nodes. The query specification tasks in my system are expressed using a set of active structural primitives, which allows the query evaluator to use a set of novel optimization techniques, thereby achieving high throughput. Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
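A minimal sketch of how a read/write-frequency-driven replication decision could look (hypothetical names and thresholds; the dissertation's actual hybrid policy is more involved):

```python
from dataclasses import dataclass

@dataclass
class NodeStats:
    """Per-node access counters maintained by the partition manager (illustrative)."""
    reads: int = 0   # remote read requests observed for this node
    writes: int = 0  # updates applied to this node

def replication_decision(stats: NodeStats, replicate_threshold: float = 2.0,
                         eager_threshold: float = 10.0) -> str:
    """Decide, per node, whether to replicate and how to propagate updates.

    Returns one of 'no-replica', 'lazy', or 'eager'. The thresholds are
    hypothetical tuning knobs, not values from the dissertation.
    """
    if stats.writes == 0:
        ratio = float("inf") if stats.reads else 0.0
    else:
        ratio = stats.reads / stats.writes

    if ratio < replicate_threshold:
        return "no-replica"   # write-heavy node: remote reads are cheaper than sync
    if ratio < eager_threshold:
        return "lazy"         # moderate read load: push updates in the background
    return "eager"            # read-heavy node: keep replicas synchronously fresh

# Example: a node with 120 reads and 4 writes would be replicated eagerly.
print(replication_decision(NodeStats(reads=120, writes=4)))
```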
Abstract:
Power system engineers face a double challenge: to operate electric power systems within narrow stability and security margins, and to maintain high reliability. There is an acute need to better understand the dynamic nature of power systems in order to be prepared for critical situations as they arise. Innovative measurement tools, such as phasor measurement units, can capture not only the slow variation of the voltages and currents but also the underlying oscillations in a power system. Such access to dynamic data provides both a strong motivation and a useful tool for exploring dynamic-data-driven applications in power systems. To fulfill this goal, this dissertation focuses on the following three areas: developing accurate dynamic load models and updating variable parameters based on measurement data, applying advanced nonlinear filtering concepts and technologies to real-time identification of power system models, and addressing computational issues by implementing the balanced truncation method. By obtaining more realistic system models, together with timely updated parameters and consideration of stochastic influences, we can form an accurate portrait of the ongoing phenomena in an electric power system, and hence further improve state estimation, stability analysis, and real-time operation.
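Balanced truncation, mentioned above as the model-reduction tool, can be sketched for a stable linear state-space model as follows (a generic textbook square-root implementation using SciPy, not the dissertation's own code):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Reduce a stable LTI system (A, B, C) to order r by balanced truncation.

    Standard procedure: compute the controllability and observability Gramians,
    balance them, and keep the r states with the largest Hankel singular values.
    """
    # Gramians: A Wc + Wc A^T + B B^T = 0  and  A^T Wo + Wo A + C^T C = 0
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

    # Square-root balancing via Cholesky factors and an SVD
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)

    S_inv_sqrt = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt.T[:, :r] @ S_inv_sqrt        # maps reduced state to full state
    Tinv = S_inv_sqrt @ U[:, :r].T @ Lo.T    # maps full state to reduced state

    return Tinv @ A @ T, Tinv @ B, C @ T, s  # reduced (A, B, C) and Hankel values

# Example: reduce a random stable 6-state model to 2 states.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) - 7 * np.eye(6)   # shifted to ensure stability
B = rng.standard_normal((6, 1)); C = rng.standard_normal((1, 6))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print(hsv)  # Hankel singular values indicate how much each state contributes
```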
Abstract:
Even though recommender systems are already widely used in several application areas, there is still a lack of studies in the accessibility research field. One of the attempts to apply the benefits of recommender systems to accessibility needs is Vulcanus. The Vulcanus recommender system uses similarity analysis to compare users’ trails. In this way, it is possible to take advantage of a user’s past behavior to distribute personalized content and services. Vulcanus combines concepts from ubiquitous computing, such as user profiles, context awareness, trail management, and similarity analysis. It uses two different approaches for trail similarity analysis: resource patterns and category patterns. In this work we performed an asymptotic analysis, identifying the complexity of Vulcanus’ algorithm. Furthermore, we propose improvements achieved through dynamic programming: the average case is improved by using a bottom-up approach that allows many unnecessary comparisons to be skipped. With these improvements, Vulcanus 2.0 is presented with a better average-case scenario.
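As an illustration of the bottom-up dynamic-programming idea, here is a generic longest-common-subsequence similarity over two trails (illustrative only, not the actual Vulcanus 2.0 algorithm or its similarity measure):

```python
def trail_similarity(trail_a, trail_b):
    """Bottom-up DP similarity between two trails (sequences of visited resources).

    Computes the length of the longest common subsequence, normalised by the
    shorter trail's length.
    """
    n, m = len(trail_a), len(trail_b)
    if n == 0 or m == 0:
        return 0.0

    # dp[i][j] = LCS length of trail_a[:i] and trail_b[:j], filled bottom-up
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if trail_a[i - 1] == trail_b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend the smaller subproblem
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])

    return dp[n][m] / min(n, m)

# Example: two users visiting overlapping sequences of resources.
print(trail_similarity(["lobby", "lab", "library", "cafe"],
                       ["lobby", "library", "cafe", "gym"]))  # 0.75
```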
Abstract:
Permeability of a rock is a dynamic property that varies spatially and temporally. Fractures provide the most efficient channels for fluid flow and thus contribute directly to the permeability of the system. Fractures usually form as a result of a combination of tectonic stresses, gravity (i.e. lithostatic pressure), and fluid pressures. High pressure gradients alone can cause fracturing, a process termed hydrofracturing, which can determine caprock (seal) stability or reservoir integrity. Fluids also transport mass and heat, and are responsible for the formation of veins by precipitating minerals within open fractures. Veining (healing) thus directly influences the rock’s permeability. Upon deformation, these closed fractures (veins) can refracture and the cycle starts again. This fracturing-healing-refracturing cycle is a fundamental part of studying the deformation dynamics and permeability evolution of rock systems. Such study is generally accompanied by fracture network characterization focusing on the network topology that determines network connectivity. Fracture characterization allows quantitative and qualitative data on fractures to be acquired and forms an important part of reservoir modeling. This thesis highlights the importance of fracture healing and of veins’ mechanical properties for the deformation dynamics. It shows that permeability varies spatially and temporally, and that healed systems (veined rocks) should not be treated as fractured systems (rocks without veins). Field observations also demonstrate the influence of contrasting mechanical properties, in addition to the complexities of vein microstructures that can form in low-porosity, low-permeability layered sequences. The thesis also presents graph theory as a characterization method for obtaining statistical measures of evolving network connectivity, and proposes which measures a good reservoir should exhibit in order to combine potentially large permeability with robustness against healing. The results presented in the thesis can have applications in hydrocarbon and geothermal reservoir exploration, the mining industry, underground waste disposal, CO2 injection, and groundwater modeling.
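A minimal sketch of the graph-theoretic characterization idea, assuming the fracture network is abstracted as a graph whose nodes are fracture intersections or tips and whose edges are fracture segments (a hypothetical representation, using standard networkx statistics rather than the thesis's specific measures):

```python
import networkx as nx

def network_connectivity_measures(segments):
    """Compute simple topology measures for a fracture network.

    `segments` is a list of (node_u, node_v) pairs, where nodes are fracture
    intersections or tips.
    """
    G = nx.Graph()
    G.add_edges_from(segments)

    degrees = [d for _, d in G.degree()]
    return {
        "segments": G.number_of_edges(),
        "nodes": G.number_of_nodes(),
        "connected_components": nx.number_connected_components(G),
        "mean_degree": sum(degrees) / len(degrees),
        # Fraction of nodes in the largest connected cluster: a simple proxy
        # for how close the network is to percolating across the sample.
        "largest_cluster_fraction": len(max(nx.connected_components(G), key=len))
                                    / G.number_of_nodes(),
    }

# Example: a small network where healing has split one cluster into two.
print(network_connectivity_measures([(1, 2), (2, 3), (3, 1), (4, 5)]))
```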
Abstract:
The use of artificial immune systems in intrusion detection is an appealing concept for two reasons. Firstly, the human immune system provides the human body with a high level of protection from invading pathogens, in a robust, self-organised and distributed manner. Secondly, current techniques used in computer security are not able to cope with the dynamic and increasingly complex nature of computer systems and their security. It is hoped that biologically inspired approaches in this area, including the use of immune-based systems, will be able to meet this challenge. Here we collate the algorithms used, the development of the systems, and the outcomes of their implementation. We provide an introduction to and review of the key developments within this field, in addition to making suggestions for future research.
Abstract:
The generation of functional, vascularized tissues is a key challenge for the field of tissue engineering. Before clinical implantation of tissue engineered bone constructs can succeed, in vitro fabrication needs to address limitations in large-scale tissue development, including controlled osteogenesis and an inadequate vasculature network to prevent necrosis of large constructs. The tubular perfusion system (TPS) bioreactor is an effective culturing method to augment osteogenic differentiation and maintain the viability of human mesenchymal stem cell (hMSC)-seeded scaffolds while they are developed in vitro. To further enhance this process, we developed a novel osteogenic growth factor delivery system for dynamically cultured hMSCs using microparticles encapsulated in three-dimensional alginate scaffolds. In light of this increased differentiation, we characterized the endogenous cytokine distribution throughout the TPS bioreactor. An advantageous effect in the ‘outlet’ portion of the uniaxial growth chamber was discovered, attributable to the system’s downstream circulation and the unique modular aspect of the scaffolds. This trait allowed us to carefully tune the differentiation behavior of specific cell populations. We applied the knowledge gained from the growth profile of the TPS bioreactor to culture a high-volume bone composite in a 3D-printed femur mold. This resulted in a tissue engineered bone construct with a volume of 200 cm³, a 20-fold increase over previously reported sizes. We demonstrated high viability of the cultured cells throughout the culture period as well as early signs of osteogenic differentiation. To take one step closer toward a viable implant and to minimize tissue necrosis after implantation, we designed a composite construct by coculturing endothelial cells (ECs) and differentiating hMSCs, encouraging prevascularization and anastomosis of the graft with the host vasculature. We discovered the necessity of cell-to-cell proximity between the two cell types, as well as a preference for the natural cell-binding capabilities of hydrogels such as collagen. Notably, the results indicated increased osteogenic and angiogenic potential of the encapsulated cells when dynamically cultured in the TPS bioreactor, suggesting a synergistic effect between coculture and applied shear stress. This work highlights the feasibility of fabricating a high-volume, prevascularized tissue engineered bone construct for the regeneration of a critical size defect.
Abstract:
The share of variable renewable energy in electricity generation has grown exponentially during recent decades, and with the heightened pursuit of environmental targets, the trend is set to continue at an increased pace. The two most important resources, wind and insolation, are both intermittent, creating a need for regulation and posing a threat to grid stability. One possibility for dealing with the imbalance between demand and generation is to store electricity temporarily, which was addressed in this thesis by implementing a dynamic model of adiabatic compressed air energy storage (CAES) with the Apros dynamic simulation software. Based on a literature review, the existing models were found, due to their simplifications, to be insufficient for studying transient situations, and despite its importance, the investigation of part-load operation has not yet been possible with satisfactory precision. As a key result of the thesis, the cycle efficiency at the design point was simulated to be 58.7%, which correlates well with values reported in the literature and was validated through analytical calculations. The performance at part load was validated against models in the literature, showing good agreement. By introducing wind resource and electricity demand data to the model, grid operation of the CAES plant was studied. To enable dynamic operation, start-up and shutdown sequences were approximated in a dynamic environment for, as far as is known, the first time, and a user component for compressor variable guide vanes (VGV) was implemented. Even in its current state, the modularly designed model offers a framework for numerous studies. The validity of the model is limited by the accuracy of the VGV correlations at part load; in addition, heat losses to the thermal energy storage must be implemented to enable longer simulations. More extensive use of forecasts is an important development target if system operation is to be optimised in the future.
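For context, the round-trip (cycle) efficiency reported above is conventionally defined for an adiabatic CAES plant as the ratio of electrical energy recovered during discharge to electrical energy consumed during charging (a standard definition, not a formula quoted from the thesis):

```latex
\eta_{\text{cycle}}
= \frac{E_{\text{el,out}}}{E_{\text{el,in}}}
= \frac{\int_{\text{discharge}} P_{\text{turbine}}(t)\,\mathrm{d}t}
       {\int_{\text{charge}} P_{\text{compressor}}(t)\,\mathrm{d}t}
\approx 0.587 \ \text{at the design point.}
```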
Abstract:
The use of artificial immune systems in intrusion detection is an appealing concept for two reasons. Firstly, the human immune system provides the human body with a high level of protection from invading pathogens, in a robust, self-organised and distributed manner. Secondly, current techniques used in computer security are not able to cope with the dynamic and increasingly complex nature of computer systems and their security. It is hoped that biologically inspired approaches in this area, including the use of immune-based systems, will be able to meet this challenge. Here we review the algorithms used, the development of the systems, and the outcomes of their implementation. We provide an introduction to and analysis of the key developments within this field, in addition to making suggestions for future research.
Abstract:
Over the past decade, Surface Plasmon Resonance (SPR) techniques have been applied to the measurement of numerous analytes. In this article, an SPR biosensor system deployed from an oceanographic vessel was used to measure dissolved domoic acid (DA), a common and harmful phycotoxin produced by certain microalgae species belonging to the genus Pseudo-nitzschia. During the biosensor deployment, concentrations of Pseudo-nitzschia cells were very low over the study area and measured DA concentrations were below the detection limit. However, the in situ operational detection limit of the system was established using calibrated seawater solutions spiked with DA. The system could detect the toxin at concentrations as low as 0.1 ng mL⁻¹ and presented a linear dynamic range from 0.1 ng mL⁻¹ to 2.0 ng mL⁻¹. This sensor showed promise for in situ detection of DA.
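A minimal sketch of how a linear calibration over the reported dynamic range could be used to convert sensor response into concentration (the response values and the linear fit are hypothetical illustrations; the deployed system's actual calibration procedure is its own):

```python
import numpy as np

# Hypothetical calibration data: DA concentrations (ng/mL) of spiked seawater
# standards and the corresponding SPR responses (arbitrary response units).
conc = np.array([0.1, 0.5, 1.0, 1.5, 2.0])
response = np.array([12.0, 55.0, 110.0, 168.0, 221.0])

# Fit the linear calibration curve: response = slope * conc + intercept
slope, intercept = np.polyfit(conc, response, deg=1)

def to_concentration(r):
    """Invert the calibration to estimate DA concentration from a response.

    Valid only within the linear dynamic range (0.1-2.0 ng/mL); estimates below
    the 0.1 ng/mL detection limit are reported as None ('not detected').
    """
    c = (r - intercept) / slope
    return c if c >= 0.1 else None

print(to_concentration(130.0))  # estimated DA concentration for a measured response
```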
Abstract:
In this work, we introduce a new class of numerical schemes for rarefied gas dynamics problems described by collisional kinetic equations. The idea consists of reformulating the problem using a micro-macro decomposition and then solving the microscopic part with asymptotic-preserving Monte Carlo methods. We consider two types of decompositions, the first leading to the Euler system of gas dynamics and the second to the Navier-Stokes equations for the macroscopic part. In addition, the particle method that solves the microscopic part is designed in such a way that the global scheme becomes computationally less expensive as the solution approaches the equilibrium state, in contrast to standard methods for kinetic equations, whose computational cost increases with the number of interactions. At the same time, the statistical error due to the particle part of the solution decreases as the system approaches the equilibrium state. This causes the method to degenerate to the sole solution of the macroscopic hydrodynamic equations (Euler or Navier-Stokes) in the limit of an infinite number of collisions. In the last part, we show the behavior of this new approach in comparison to standard Monte Carlo techniques for solving the kinetic equation, testing it on different problems that typically arise in rarefied gas dynamics simulations.
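For orientation, the micro-macro decomposition referred to here typically splits the distribution function into a local equilibrium and a non-equilibrium fluctuation (a standard formulation in nondimensional form with unit gas constant, not quoted from the paper itself):

```latex
f(x, v, t) = M[\rho, u, T](v) + g(x, v, t), \qquad
M[\rho, u, T](v) = \frac{\rho}{(2\pi T)^{d/2}}
\exp\!\left(-\frac{|v - u|^{2}}{2T}\right),
```

where M is the local Maxwellian determined by the macroscopic density ρ, velocity u, and temperature T, d is the velocity-space dimension, and g is the microscopic (non-equilibrium) part handled by the asymptotic-preserving Monte Carlo solver while the moments evolve according to the Euler or Navier-Stokes system.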