922 results for Dynamic manufacturing networks
Abstract:
This work clarifies the relationship between network structure (topology) and behavior (information transmission and synchronization) in active networks, e.g. neural networks. As an application, we show how to determine a network topology that is optimal for information transmission. By optimal, we mean that the network can transmit a large amount of information, possesses a large number of communication channels, and is robust under large variations of the network coupling configuration. This theoretical approach is general and does not depend on the particular dynamics of the elements forming the network, since the network topology can be determined by finding a Laplacian matrix (the matrix that describes the connections and the coupling strengths among the elements) whose eigenvalues satisfy certain special conditions. To illustrate our ideas and theoretical approaches, we use neural networks of electrically coupled chaotic Hindmarsh-Rose neurons.
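The Laplacian construction mentioned above can be made concrete with a small sketch (illustrative only; the specific eigenvalue conditions derived in the paper are not reproduced here):

```python
import numpy as np

def laplacian(adjacency, strengths=None):
    """Build the (optionally weighted) Laplacian matrix L = D - A of a
    coupling graph, where D is the diagonal matrix of node degrees."""
    A = np.asarray(adjacency, dtype=float)
    if strengths is not None:
        A = A * strengths  # element-wise coupling strengths
    D = np.diag(A.sum(axis=1))
    return D - A

# Ring of 4 coupled elements (e.g. neurons), unit coupling strength.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
L = laplacian(A)
eigs = np.sort(np.linalg.eigvalsh(L))
# The smallest eigenvalue of a connected graph's Laplacian is always 0;
# the spread of the remaining eigenvalues governs synchronizability.
print(eigs)  # [0. 2. 2. 4.]
```

Searching over candidate topologies then amounts to checking whether this spectrum meets the desired conditions.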
Abstract:
Large-scale wireless ad hoc networks of computers, sensors, PDAs, etc. (i.e. nodes) are revolutionizing connectivity and leading to a paradigm shift from centralized systems to highly distributed and dynamic environments. An example of ad hoc networks are sensor networks, which are usually composed of small units able to sense and transmit elementary data to a sink, where they are further processed by an external machine. Recent improvements in the memory and computational power of sensors, together with reductions in energy consumption, are rapidly changing the potential of such systems, moving attention towards data-centric sensor networks. A plethora of routing and data management algorithms have been proposed for network path discovery, ranging from broadcasting/flooding-based approaches to those using global positioning systems (GPS). We studied WGrid, a novel decentralized infrastructure that organizes wireless devices in an ad hoc manner, where each node has one or more virtual coordinates through which both message routing and data management occur without reliance on either flooding/broadcasting operations or GPS. The resulting ad hoc network does not suffer from the dead-end problem, which occurs in geographic-based routing when a node is unable to locate a neighbor closer to the destination than itself. WGrid provides multidimensional data management capability, since nodes' virtual coordinates can act as a distributed database without requiring any special implementation or reorganization. Any kind of data (both single- and multidimensional) can be distributed, stored and managed. We show how a location service can easily be implemented so that any search is reduced to a simple query, as for any other data type. WGrid was then extended by adopting a replication methodology; we called the resulting algorithm WRGrid.
Just like WGrid, WRGrid acts as a distributed database without requiring any special implementation or reorganization, and any kind of data can be distributed, stored and managed. We evaluated the benefits of replication on data management and found, from experimental results, that it can halve the average number of hops in the network. The direct consequences are a significant improvement in energy consumption and a better workload balance among sensors (number of messages routed by each node). Moreover, thanks to the replicas, whose number can be chosen arbitrarily, the resulting sensor network can tolerate sensor disconnections/connections due to failures without data loss. Another extension of WGrid is W*Grid, which strongly improves network recovery from link and/or device failures caused by crashes, battery exhaustion or temporary obstacles. W*Grid guarantees, by construction, at least two disjoint paths between each pair of nodes. This implies that recovery in W*Grid occurs without broadcast transmissions, guaranteeing robustness while drastically reducing energy consumption. An extensive set of simulations shows the efficiency, robustness and traffic load of the resulting networks under several scenarios of device density and number of coordinates. Performance has been compared against existing algorithms in order to validate the results.
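As a rough illustration of the kind of coordinate-based forwarding WGrid relies on, the toy below routes greedily over tree-shaped virtual coordinates. The distance function, topology and coordinate scheme are simplified assumptions for illustration, not WGrid's actual algorithm (which, unlike plain greedy geographic routing, guarantees dead-end freedom):

```python
def coord_distance(a, b):
    """Toy distance between two virtual coordinates (binary strings):
    hops up to the common prefix in the coordinate tree, then down."""
    p = 0
    while p < min(len(a), len(b)) and a[p] == b[p]:
        p += 1
    return (len(a) - p) + (len(b) - p)

def greedy_route(topology, src, dst):
    """Forward greedily to the neighbor whose virtual coordinate is
    closest to the destination's; returns the hop-by-hop path."""
    path, current = [src], src
    while current != dst:
        nxt = min(topology[current], key=lambda n: coord_distance(n, dst))
        if coord_distance(nxt, dst) >= coord_distance(current, dst):
            break  # this is the dead end plain geographic routing hits
        path.append(nxt)
        current = nxt
    return path

# Tiny invented network: coordinates are tree positions, links follow the tree.
topology = {
    "0":   ["00", "01"],
    "00":  ["0", "000"],
    "01":  ["0"],
    "000": ["00"],
}
print(greedy_route(topology, "01", "000"))  # ['01', '0', '00', '000']
```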
Abstract:
Nowadays, computing is migrating from traditional high-performance and distributed computing to pervasive and utility computing based on heterogeneous networks and clients. The current trend suggests that future IT services will rely on distributed resources and on fast communication of heterogeneous content. The success of this new range of services is directly linked to the effectiveness of the infrastructure in delivering them. The communication infrastructure will be an aggregation of different technologies, even though the current trend suggests the emergence of a single IP-based transport service. Optical networking is a key technology for answering the increasing requests for dynamic bandwidth allocation and for configuring multiple topologies over the same physical-layer infrastructure; however, optical networks today are still far from letting users directly configure and offer network services, and need to be enriched with more user-oriented functionalities. Current Control Plane architectures only facilitate efficient end-to-end connectivity provisioning and cannot meet future network service requirements, e.g. the coordinated control of resources. The overall objective of this work is to improve the usability and accessibility of the services provided by the optical network. More precisely, the definition of a service-oriented architecture is the enabling technology that allows user applications to benefit from advanced services over an underlying dynamic optical layer. Defining a service-oriented networking architecture based on advanced optical network technologies gives users and applications access to abstracted levels of information about the advanced network services on offer. This thesis addresses the problem of defining such a Service Oriented Architecture and its relevant building blocks, protocols and languages.
In particular, this work focuses on the use of the SIP protocol as an inter-layer signalling protocol, which defines the Session Plane in conjunction with the Network Resource Description language. In addition, an advanced optical network must accommodate high data bandwidth at different granularities. Currently, two main technologies are emerging to drive the development of the future optical transport network: Optical Burst Switching and Optical Packet Switching. The two technologies promise to provide all-optical burst or packet switching, respectively, instead of the current circuit switching. However, the electronic domain is still present in the scheduler's forwarding and routing decisions. Because of the high optical transmission rates, the burst or packet scheduler faces a difficult challenge; consequently, a high-performance, timing-focused design of both the memory and the forwarding logic is needed. This open issue is faced in this thesis by proposing a highly efficient implementation of a burst and packet scheduler. The main novelty of the proposed implementation is that the scheduling problem is turned into the simple calculation of a min/max function whose complexity is almost independent of the traffic conditions.
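The min/max formulation can be illustrated with a latest-available-channel style burst scheduler (a generic sketch of this class of scheduler; the thesis's actual hardware data structures and function are not shown here):

```python
def schedule_burst(horizons, arrival, duration):
    """Schedule a burst arriving at `arrival` with length `duration`.

    horizons[ch] is the time at which channel ch becomes free. Among
    channels free before the burst arrives (horizon <= arrival), pick
    the one with the largest horizon -- a single max() -- which
    minimizes the idle void left on that channel. Returns the chosen
    channel index, or None if the burst must be dropped or buffered.
    """
    candidates = [(h, ch) for ch, h in enumerate(horizons) if h <= arrival]
    if not candidates:
        return None
    _, ch = max(candidates)
    horizons[ch] = arrival + duration  # channel busy until burst ends
    return ch

horizons = [0.0, 3.0, 5.0]  # per-channel free times
print(schedule_burst(horizons, arrival=4.0, duration=2.0))  # 1
```

Note the decision is a single max over a fixed-size list, independent of how many bursts were scheduled previously, which is what makes the complexity nearly traffic-independent.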
Abstract:
Selective oxidation is one of the simplest functionalization methods, and essentially all monomers used in manufacturing artificial fibers and plastics are obtained by catalytic oxidation processes. Formally, oxidation is considered an increase in the oxidation number of the carbon atoms, so reactions such as dehydrogenation, ammoxidation, cyclization or chlorination are all oxidation reactions. In this field, most processes for the synthesis of important chemicals use vanadium oxide-based catalysts. These catalytic systems are used either in the form of multicomponent mixed oxides and oxysalts, e.g. in the oxidation of n-butane (V/P/O) and of benzene (supported V/Mo/O) to maleic anhydride, or in the form of supported metal oxides, e.g. in the manufacture of phthalic anhydride by o-xylene oxidation, of sulphuric acid by oxidation of SO2, in the reduction of NOx with ammonia and in the ammoxidation of alkyl aromatics. In addition, supported vanadia catalysts have also been investigated for the oxidative dehydrogenation of alkanes to olefins, the oxidation of pentane to maleic anhydride and the selective oxidation of methanol to formaldehyde or methyl formate [1]. During my PhD I focused on two gas-phase selective oxidation reactions. The work was done at the Department of Industrial Chemistry and Materials (University of Bologna) in collaboration with Polynt SpA. Polynt is a leading company in the development, production and marketing of catalysts for gas-phase oxidation. In particular, I studied the catalytic systems for n-butane oxidation to maleic anhydride (fluid-bed technology) and for o-xylene oxidation to phthalic anhydride. Both reactions are catalyzed by vanadium-based systems, but the catalysts are completely different. Part A is dedicated to the study of the V/P/O catalyst for n-butane selective oxidation, while in Part B the results of an investigation on TiO2-supported V2O5, the catalyst for o-xylene oxidation, are shown.
In Part A, a general introduction to the importance of maleic anhydride, its uses, the industrial processes and the catalytic system is given. The reaction is the only industrial direct oxidation of a paraffin to a chemical intermediate. Maleic anhydride is produced by n-butane oxidation using either fixed-bed or fluid-bed technology; in both cases the catalyst is vanadyl pyrophosphate (VPP). Notwithstanding the good performance, the yield does not exceed 60%, and the system is continuously studied to improve activity and selectivity. The main open problem is understanding the real active phase at work under reaction conditions. Several articles deal with the role of different crystalline and/or amorphous vanadium/phosphorus (VPO) compounds. In all cases, bulk VPP is assumed to constitute the core of the active phase, while two different hypotheses have been formulated concerning the catalytic surface. In one case the development of surface amorphous layers that play a direct role in the reaction is described; in the second, specific planes of crystalline VPP are assumed to contribute to the reaction pattern, with the redox process occurring reversibly between VPP and VOPO4. Both hypotheses are also supported by in-situ characterization techniques, but the experiments were performed with different catalysts and probably under slightly different working conditions. Due to the complexity of the system, these differences could be the cause of the contradictions present in the literature. Supposing that a key role could be played by the P/V ratio, I prepared, characterized and tested two samples with different P/V ratios. Transformations occurring on the catalytic surfaces under different conditions of temperature and gas-phase composition were studied by means of in-situ Raman spectroscopy, in order to investigate the changes that VPP undergoes during reaction.
The goal is to understand which kind of compound constituting the catalyst surface is the most active and selective for the butane oxidation reaction, and also which features the catalyst should possess to ensure the development of this surface (e.g. catalyst composition). On the basis of the results of this study, it could be possible to design a new catalyst that is more active and selective than the present ones. In fact, the second topic investigated is the possibility of reproducing the surface active layer of VPP on a support. In general, supporting the active phase is a way to improve the mechanical features of a catalyst and to overcome problems such as the possible development of local hot-spot temperatures, which could cause a decrease in selectivity at high conversion, and the high cost of the catalyst. In the literature it is possible to find various works dealing with the development of supported catalysts, but in general the intrinsic characteristics of VPP are worsened by the chemical interaction between the active phase and the support. Moreover, all these works deal with supporting VPP itself; my work, on the contrary, is an attempt to build up a V/P/O active layer on the surface of a zirconia support by thermal treatment of a precursor obtained by impregnation with a V5+ salt and with H3PO4. In-situ Raman analysis during the thermal treatment, as well as reactivity tests, were used to investigate the parameters that may influence the generation of the active phase. Part B is devoted to the study of o-xylene oxidation to phthalic anhydride; industrially, the reaction is carried out in the gas phase using as catalyst a supported system formed by V2O5 on TiO2.
The V/Ti/O system is quite complex; different vanadium species can be present on the titania surface, depending on the vanadium content and on the titania surface area: (i) V species chemically bound to the support via oxo bridges (isolated V in octahedral or tetrahedral coordination, depending on the hydration degree), (ii) a polymeric species spread over the titania, and (iii) bulk vanadium oxide, either amorphous or crystalline. The different species can have different catalytic properties; therefore, changing the relative amount of the V species can be a way to optimize the catalytic performance of the system. For this reason, samples containing increasing amounts of vanadium were prepared and tested in the oxidation of o-xylene, with the aim of finding a correlation between V/Ti/O catalytic activity and the amount of the different vanadium species. The second part deals with the role of a gas-phase promoter. The catalytic surface can change under working conditions; the high temperatures and a different gas-phase composition can also affect the formation of the different V species. Furthermore, in industrial practice the vanadium oxide-based catalysts need the addition of gas-phase promoters to the feed stream which, although they do not play a direct role in the reaction stoichiometry, lead to a considerable improvement of catalytic performance when present. The starting point of my investigation is the possibility that steam, a component always present in the environment of oxidation reactions, could cause changes in the nature of the catalytic surface under reaction conditions. For this reason, the dynamic phenomena occurring at the surface of a 7 wt% V2O5-on-TiO2 catalyst in the presence of steam were investigated by means of Raman spectroscopy. Moreover, a correlation between the amount of the different vanadium species and catalytic performance was sought. Finally, the role of dopants was studied.
The industrial V/Ti/O system contains several dopants; the nature and relative amount of the promoters may vary depending on the catalyst supplier and on the technology employed for the process, either a single-bed or a multi-layer catalytic fixed bed. Promoters have a quite remarkable effect on both activity and selectivity to phthalic anhydride. Their role is crucial, and proper control of the relative amount of each component is fundamental for process performance. Furthermore, it cannot be excluded that the same promoter may play different roles depending on the reaction conditions (temperature, composition of the gas phase, etc.). The reaction network of phthalic anhydride formation is very complex and includes several parallel and consecutive reactions; for this reason, a proper understanding of the role of each dopant cannot be separated from an analysis of the reaction scheme. One of the most important promoters at the industrial level, always present in catalytic formulations, is Cs. It is known that Cs plays an important role in the selectivity to phthalic anhydride, but the reasons for this phenomenon are not entirely clear. Therefore the effect of Cs on the reaction scheme has been investigated at two different temperatures, with the aim of identifying in which step of the reaction network this promoter plays its role.
Abstract:
Multi-Processor SoC (MPSoC) design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnect. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. The scaling down of process technologies has increased process and dynamic variations as well as transistor wearout. Because of this, delay variations increase and impact the performance of MPSoCs. The interconnect architecture in MPSoCs becomes a single point of failure, as it connects all other components of the system. A faulty processing element may be shut down entirely, but the interconnect architecture must be able to tolerate partial failures and variations and keep operating at some performance, power or latency overhead. This dissertation focuses on techniques at different levels of abstraction to address reliability and variability issues in on-chip interconnection networks. By showing the test results of a GALS NoC test chip, this dissertation motivates the need for techniques to detect and work around manufacturing faults and process variations in the MPSoC interconnection infrastructure. As a physical design technique, we propose the bundle-routing framework as an effective way to route the Network-on-Chip's global links. For architecture-level design, two cases are addressed: (i) intra-cluster communication, where we propose a low-latency interconnect with variability robustness; (ii) inter-cluster communication, where online functional testing with a reliable NoC configuration is proposed. We also propose dual-Vdd as an orthogonal way of compensating variability at the post-fabrication stage. This is an alternative to the design techniques, since it enforces compensation at the post-silicon stage.
Abstract:
This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks. After a general overview of sensor networks, the energy problem is introduced, dividing the different energy-reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middleware for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus then shifts to in-network aggregation techniques, used to reduce the data sent by network nodes and prolong the network lifetime as much as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis then moves from compression implemented on single nodes to CS for signal ensembles, trying to exploit the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared against a common set of data gathered from real deployments. The best trade-off between reconstruction quality and power consumption is then investigated. The use of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group-sparsity CS (GS-CS) is compared to another well-known technique for the reconstruction of signals from highly sub-sampled versions. The two frameworks are compared on a real dataset, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
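A minimal CS reconstruction can be sketched with Orthogonal Matching Pursuit (a generic sparse solver for illustration, not the specific mixed algorithm evaluated in the thesis):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse vector x from
    m < n linear measurements y = Phi @ x."""
    residual, support, coef = y.copy(), [], np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))  # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 64, 40, 3                          # length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
x_hat = omp(Phi, Phi @ x, k)
print(float(np.linalg.norm(x - x_hat)) < 1e-6)
```

The node only ever transmits the m-dimensional measurement vector; reconstruction runs at the sink, which is where the energy saving comes from.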
Abstract:
Natural hydraulic fracturing is an important and widespread process in all parts of the Earth's crust. By creating hydraulic connectivity, it influences effective permeability and fluid transport across several orders of magnitude. Fracture formation is both highly dynamic and highly complex. The dynamics stem from the strong interaction of tectonic and hydraulic processes, while the complexity arises from the potential dependence of the poroelastic properties on fluid pressure and fracturing. Hydraulic fracture formation consists of three phases: 1) nucleation, 2) time-dependent quasi-static growth for as long as the fluid pressure exceeds the tensile strength of the rock, and 3) in heterogeneous rocks, the influence of layers with different mechanical or sedimentary properties on fracture propagation. Mechanical heterogeneity produced by pre-existing fractures and rock deformation also strongly influences the course of growth. The direction of fracture propagation is either determined by the linking of low-tensile-strength discontinuities in the region ahead of the fracture front, or propagation can stop when the fracture meets high-strength discontinuities. These interactions produce a fracture network of complex geometry that reflects the local deformation history and the dynamics of the underlying physical processes.

Natural hydraulic fracturing has substantial implications for academic and commercial questions in various fields of the geosciences. Since the 1950s, hydraulic fracturing has been used to increase the permeability of gas and oil reservoirs.
Field observations, isotope studies, laboratory experiments and numerical analyses confirm the decisive role of the fluid-pressure gradient, in combination with poroelastic effects, for the local stress state and for the conditions under which hydraulic fractures form and propagate. To keep the problem computationally tractable, most numerical hydromechanical models assume predefined fracture geometries with constant fluid pressure for the coupling between fluid and propagating fractures. Since natural rocks are rarely structured so simply, these models are generally not very effective in analyzing this complex process. In particular, they underestimate the feedback of poroelastic effects and coupled fluid-rock processes, i.e. the evolution of pore pressure as a function of rock failure and vice versa.

In this work, a two-dimensional coupled poro-elasto-plastic computer model is developed for the qualitative, and partly quantitative, analysis of the role of localized or homogeneously distributed fluid pressures in the dynamic propagation of hydraulic fractures and the simultaneous evolution of effective permeability. The program is computationally efficient, describing the fluid dynamics with a Darcy-type pressure-diffusion equation without redundant components. It also takes into account the Biot compressibility of porous rocks, implemented in order to determine the controlling parameters in the mechanics of hydraulic fracturing in various geological scenarios with homogeneous and heterogeneous sedimentary sequences. The results show that in closed systems the fluid-pressure gradient leads to local perturbations of the homogeneous stress field. Depending on the boundary conditions, these perturbations can cause a reorientation of fracture propagation.
Through their effect on the local stress state, high pressure gradients can also produce bedding-parallel fracturing or slip in undrained heterogeneous media. An example of particular importance is the evolution of accretionary wedges, where highly dynamic tectonic activity together with extreme pore pressures generates strong local stress-field perturbations, leading to a highly complex structural evolution including vertical and horizontal hydraulic fracture networks. The transport properties of the rocks are strongly controlled by the dynamic evolution of local permeabilities through tensile fractures and faults. There may be a close relationship between the formation of graben structures and large-scale fluid migration.

The consistency between the simulation results and previous experimental studies indicates that the numerical scheme described is well suited to the qualitative analysis of hydraulic fractures. The scheme also has drawbacks when it comes to the quantitative analysis of fluid flow through induced fracture surfaces in deformed rocks. It is furthermore recommended to extend the presented numerical scheme with coupling to thermo-chemical processes, in order to investigate dynamic problems associated with the growth of vein fillings in hydraulic fractures.
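The Darcy-type pressure diffusion at the core of such a model can be sketched in one dimension (an explicit finite-difference toy with invented parameters, far simpler than the coupled poro-elasto-plastic code described above):

```python
import numpy as np

def diffuse_pressure(p, kappa, dx, dt, steps):
    """Explicit finite-difference integration of the Darcy-type
    pressure-diffusion equation dp/dt = kappa * d2p/dx2 on a closed
    (no-flow) 1-D domain."""
    p = p.copy()
    r = kappa * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable for r > 1/2"
    for _ in range(steps):
        lap = np.empty_like(p)
        lap[1:-1] = p[2:] - 2 * p[1:-1] + p[:-2]
        lap[0] = p[1] - p[0]          # no-flow boundary conditions
        lap[-1] = p[-2] - p[-1]
        p += r * lap
    return p

p0 = np.zeros(50)
p0[25] = 1.0                          # localized fluid-pressure pulse
p1 = diffuse_pressure(p0, kappa=1.0, dx=1.0, dt=0.4, steps=200)
# In the closed system the total pressure is conserved while the
# pulse spreads out and its peak decays.
print(round(float(p1.sum()), 6), float(p1.max()) < 1.0)  # 1.0 True
```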
Abstract:
Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one and not as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense; this in turn depends on the trajectory of the same parameters over previous time instants. In this work, dynamic constraint models are proposed to translate commanded into actually achieved air-handling parameters. These models enable the optimization to be realistic in a transient sense. The air-handling system is treated as a linear second-order system with PD control, whose parameters are extracted from real transient data. This model is shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. Emission predictions based on the air-handling parameters predicted by the dynamic constraint model are shown not to differ significantly from the corresponding emissions based on measured air-handling parameters.
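A sketch of such a dynamic constraint model, with illustrative parameters (the real ones are extracted from transient data, and the actual controller structure may differ):

```python
import numpy as np

def dynamic_constraint(command, wn, zeta, dt):
    """Discrete simulation of a linear second-order system
        y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u
    used as a dynamic constraint translating commanded values u into
    achievable values y (semi-implicit Euler integration)."""
    y, v, achieved = 0.0, 0.0, []
    for u in command:
        a = wn**2 * (u - y) - 2 * zeta * wn * v
        v += a * dt
        y += v * dt
        achieved.append(y)
    return np.array(achieved)

# Step command: the achieved trajectory lags the commanded one,
# which is exactly what keeps the optimizer transient-realistic.
cmd = np.ones(2000)
ach = dynamic_constraint(cmd, wn=5.0, zeta=0.9, dt=0.005)
print(ach[10] < 0.5, abs(ach[-1] - 1.0) < 0.01)  # True True
```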
Resumo:
Heart rate variability (HRV) exhibits fluctuations characterized by a power law behavior of its power spectrum. The interpretation of this nonlinear HRV behavior, resulting from interactions between extracardiac regulatory mechanisms, could be clinically useful. However, the involvement of intrinsic variations of pacemaker rate in HRV has scarcely been investigated. We examined beating variability in spontaneously active incubating cultures of neonatal rat ventricular myocytes using microelectrode arrays. In networks of mathematical model pacemaker cells, we evaluated the variability induced by the stochastic gating of transmembrane currents and of calcium release channels and by the dynamic turnover of ion channels. In the cultures, spontaneous activity originated from a mobile focus. Both the beat-to-beat movement of the focus and beat rate variability exhibited a power law behavior. In the model networks, stochastic fluctuations in transmembrane currents and stochastic gating of calcium release channels did not reproduce the spatiotemporal patterns observed in vitro. In contrast, long-term correlations produced by the turnover of ion channels induced variability patterns with a power law behavior similar to those observed experimentally. Therefore, phenomena leading to long-term correlated variations in pacemaker cellular function may, in conjunction with extracardiac regulatory mechanisms, contribute to the nonlinear characteristics of HRV.
Abstract:
Following the last two years' workshops on dynamic languages at the ECOOP conference, the Dyla 2007 workshop was a successful and popular event. As its name implies, the workshop's focus was on dynamic languages and their applications. Topics and discussions at the workshop included macro expansion mechanisms, extensions of the method lookup algorithm, language interpretation, reflection and languages for mobile ad hoc networks. The main goal of this workshop was to bring together different dynamic-language communities and to favour cross-community interaction. Dyla 2007 was organised as a full-day meeting, partly devoted to the presentation of submitted position papers and partly to tool demonstrations. All accepted papers can be downloaded from the workshop's web site. In this report, we provide an overview of the presentations and a summary of the discussions.
Abstract:
Reliable data transfer is one of the most difficult tasks to accomplish in multihop wireless networks. Traditional transport protocols like TCP suffer severe performance degradation over multihop networks given the noisy nature of wireless media and the unstable connectivity conditions in place. The success of TCP in wired networks motivates its extension to wireless networks. A crucial challenge TCP faces over these networks is how to operate smoothly with the 802.11 wireless MAC protocol, which implements a retransmission mechanism at the link level in addition to short RTS/CTS control frames for avoiding collisions. These features render the transmission of TCP acknowledgments (ACKs) quite costly: data and ACK packets incur similar medium-access overheads despite the much smaller size of the ACKs. In this paper, we further evaluate our dynamic adaptive strategy for reducing ACK-induced overhead and the consequent collisions. Our approach mirrors the sender side's congestion control: the receiver is self-adaptive, delaying more ACKs under non-constrained channels and fewer otherwise. This improves not only throughput but also power consumption. Simulation evaluations show significant improvements in several scenarios.
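The receiver-side adaptation can be caricatured with an AIMD-style rule (a toy sketch; the paper's actual adaptation criterion and window bounds are not reproduced here):

```python
def next_ack_window(window, loss, min_w=1, max_w=4):
    """Toy receiver-side adaptation of the delayed-ACK window: delay
    more ACKs (larger window) while the channel looks unconstrained,
    and fall back to ACKing every packet as soon as a loss hints at
    contention, mirroring sender-side congestion control."""
    if loss:
        return min_w               # back off: ACK every packet again
    return min(window + 1, max_w)  # probe: delay one more ACK

w, history = 1, []
for loss in [False, False, False, True, False]:  # per-interval loss events
    w = next_ack_window(w, loss)
    history.append(w)
print(history)  # [2, 3, 4, 1, 2]
```

Fewer ACK transmissions mean fewer medium acquisitions and RTS/CTS exchanges, which is where both the throughput and the power gains come from.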
Abstract:
Dynamic spectrum access (DSA) aims at utilizing spectral opportunities in both the time and frequency domains at any given location, which arise due to variations in spectrum usage. Recently, cognitive radios (CRs) have been proposed as a means of implementing DSA. In this work we focus on resource management in overlaid cognitive radio networks (CRNs). We formulate resource allocation strategies for CRNs as mathematical optimization problems. Specifically, we focus on two key problems in resource management: sum-rate maximization and maximization of the number of admitted users. Since both problems are NP-hard due to the presence of binary assignment variables, we propose novel graph-based algorithms to solve them optimally. Further, we analyze the impact of location awareness on the network performance of CRNs by considering three cases: full location awareness, partial location awareness and no location awareness. Our results clearly show that location awareness has a significant impact on the performance of overlaid CRNs and leads to an increase in spectrum utilization efficiency.
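The binary-assignment structure that makes these problems hard can be seen in a tiny brute-force version of sum-rate maximization (for illustration only; the work proposes graph-based algorithms precisely to avoid this exhaustive search, and the rate model here is invented):

```python
from itertools import permutations

def max_sum_rate(rates):
    """Brute-force the binary channel-assignment problem: give each
    secondary user at most one channel (and each channel at most one
    user) so that the sum rate is maximized. rates[u][c] is the
    achievable rate of user u on channel c (0 where the channel is
    unavailable at u's location)."""
    users, channels = len(rates), len(rates[0])
    best, best_assign = 0.0, None
    for perm in permutations(range(channels), users):
        total = sum(rates[u][c] for u, c in enumerate(perm))
        if total > best:
            best, best_assign = total, perm
    return best, best_assign

rates = [[3.0, 1.0, 0.0],
         [2.0, 4.0, 1.0],
         [0.0, 2.0, 5.0]]
print(max_sum_rate(rates))  # (12.0, (0, 1, 2))
```

The search space grows factorially with the number of channels, which is why polynomial graph-based reformulations matter.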
Resumo:
Sensor networks have been an active research area in the past decade due to the variety of their applications. Many research studies have been conducted to solve the problems underlying the middleware services of sensor networks, such as self-deployment, self-localization, and synchronization. With the provided middleware services, sensor networks have grown into a mature technology to be used as a detection and surveillance paradigm for many real-world applications. The individual sensors are small in size. Thus, they can be deployed in areas with limited space to make unobstructed measurements in locations where the traditional centralized systems would have trouble to reach. However, there are a few physical limitations to sensor networks, which can prevent sensors from performing at their maximum potential. Individual sensors have limited power supply, the wireless band can get very cluttered when multiple sensors try to transmit at the same time. Furthermore, the individual sensors have limited communication range, so the network may not have a 1-hop communication topology and routing can be a problem in many cases. Carefully designed algorithms can alleviate the physical limitations of sensor networks, and allow them to be utilized to their full potential. Graphical models are an intuitive choice for designing sensor network algorithms. This thesis focuses on a classic application in sensor networks, detecting and tracking of targets. It develops feasible inference techniques for sensor networks using statistical graphical model inference, binary sensor detection, events isolation and dynamic clustering. The main strategy is to use only binary data for rough global inferences, and then dynamically form small scale clusters around the target for detailed computations. This framework is then extended to network topology manipulation, so that the framework developed can be applied to tracking in different network topology settings. 
Finally, the system was tested in both simulated and real-world environments. The simulations were performed on various network topologies, from regularly distributed to randomly distributed networks. The results show that the algorithm performs well in randomly distributed networks and hence requires minimal deployment effort. The experiments were carried out in both corridor and open-space settings. An in-home fall-detection system was simulated with real-world settings: it was set up with 30 Bumblebee radars and 30 ultrasonic sensors driven by TI EZ430-RF2500 boards scanning a typical 800 sq ft apartment. The Bumblebee radars were calibrated to detect the falling of a human body, and the two-tier tracking algorithm was used on the ultrasonic sensors to track the location of elderly occupants.
Resumo:
Diseases are believed to arise from dysregulation of biological systems (pathways) perturbed by environmental triggers. Biological systems as a whole are not just the sum of their components; rather, they are ever-changing, complex, dynamic systems that evolve over time in response to internal and external perturbation. In the past, biologists have mainly focused on studying either the functions of isolated genes or the steady states of small biological pathways. However, it is systems dynamics that play the essential role in giving rise to the cellular functions and dysfunctions, such as growth, differentiation, division, and apoptosis, that cause disease. Biological phenomena of the entire organism are determined not only by steady-state characteristics of the biological systems but also by their intrinsic dynamic properties, including stability, transient response, and controllability, which determine how the systems maintain their function and performance under a broad range of random internal and external perturbations. As a proof of principle, we examine signal transduction pathways and genetic regulatory pathways as biological systems. We employ state-space equations, widely used in systems science, to model biological systems, and use expectation-maximization (EM) algorithms and Kalman filtering to estimate the parameters of the models. We apply the developed state-space models to human fibroblasts obtained from the autoimmune fibrosing disease scleroderma, and then perform dynamic analysis of a partial TGF-beta pathway in both normal and scleroderma fibroblasts stimulated by silica. We find that the TGF-beta pathway under silica perturbation shows significant differences in dynamic properties between normal and scleroderma fibroblasts. Our findings may open a new avenue for exploring the functions of cells and the mechanisms operative in disease development.
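The state-space/Kalman machinery referred to above can be sketched in its simplest scalar form. This is a generic textbook filter, not the study's pathway model: it assumes a linear model x_t = a·x_{t-1} + w, y_t = c·x_t + v with invented parameters a, c, q, r (which in the actual work would be estimated by EM from expression data).

```python
def kalman_filter(ys, a=1.0, c=1.0, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_t = a*x_{t-1} + w (var q),
    y_t = c*x_t + v (var r). Returns the filtered state estimates."""
    x, p, estimates = x0, p0, []
    for y in ys:
        # predict step: propagate state and variance through the model
        x_pred, p_pred = a * x, a * p * a + q
        # update step: correct with the new observation
        k = p_pred * c / (c * p_pred * c + r)   # Kalman gain
        x = x_pred + k * (y - c * x_pred)
        p = (1 - k * c) * p_pred
        estimates.append(x)
    return estimates

# Noisy observations of a state near 1.0; estimates converge toward it.
est = kalman_filter([1.0, 1.1, 0.9, 1.0])
```

In the study's setting, the hidden states would correspond to pathway component activities and the EM algorithm would alternate between this filtering/smoothing step and re-estimating the model parameters.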
Resumo:
In this paper, we present the Cellular Dynamic Simulator (CDS) for simulating diffusion and chemical reactions within crowded molecular environments. CDS is based on a novel event-driven algorithm specifically designed for precise calculation of the timing of collisions, reactions, and other events for each individual molecule in the environment. Generic mesh-based compartments allow the creation or import of very simple or highly detailed cellular structures in a 3D environment. Multiple levels of compartments and static obstacles can be used to create a dense environment that mimics cellular boundaries and the intracellular space. The CDS algorithm takes into account volume exclusion and molecular crowding, which may impact signaling cascades in small subcellular compartments such as dendritic spines. With CDS, we can simulate simple enzyme reactions, aggregation, channel transport, as well as highly complicated chemical reaction networks of both freely diffusing and membrane-bound multi-protein complexes. The components of CDS are defined generically, so the simulator can be applied to a wide range of environments in terms of scale and level of detail. Through an initialization GUI, a simple simulation environment can be created and populated within minutes, yet the tool is powerful enough to design complex 3D cellular architectures; it also allows visual confirmation of the environment construction prior to execution by the simulator. This paper describes the CDS algorithm and its design and implementation, and provides an overview of the available features, whose utility is highlighted in demonstrations.
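The core of any event-driven simulator of this kind is a priority queue of future events keyed by their exact firing times; the simulation jumps from event to event instead of advancing by fixed time steps. The following is a minimal sketch in that spirit, not CDS's actual implementation; the event labels and times are invented.

```python
import heapq

def run(events, t_end):
    """Process events (time, label) in time order up to t_end, as an
    event-driven simulator would. Returns the processed events in order."""
    heap = list(events)
    heapq.heapify(heap)      # min-heap keyed by event time
    log = []
    while heap and heap[0][0] <= t_end:
        t, label = heapq.heappop(heap)
        log.append((t, label))
        # A real simulator would, at this point, recompute the future
        # collision/reaction times of the molecules affected by this event
        # and push the updated events back onto the heap.
    return log

log = run([(0.7, "reaction"), (0.2, "collision"), (1.5, "membrane-crossing")],
          t_end=1.0)
```

The payoff of this design is accuracy: each collision or reaction is handled at its precisely computed time, which matters in crowded compartments where fixed-step methods can miss or misorder interactions.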