952 results for Network architecture
Abstract:
In this paper, a learning algorithm for adjusting the weight coefficients of the Cascade Neo-Fuzzy Neural Network (CNFNN) in sequential mode is introduced. The architecture is similar in structure to the Cascade-Correlation Learning Architecture proposed by S.E. Fahlman and C. Lebiere, but differs from it in the type of artificial neurons. The CNFNN consists of neo-fuzzy neurons, which can be adjusted using high-speed linear learning procedures. The proposed CNFNN is characterized by a high learning rate and a small required training sample, and its operation can be described by fuzzy linguistic "if-then" rules, providing "transparency" of the obtained results compared with conventional neural networks. The use of an online learning algorithm allows input data to be processed sequentially in real time.
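To make the linearity argument concrete, here is a minimal sketch of a single neo-fuzzy neuron trained online with one LMS-style step per sample; the triangular membership grid, learning rate and all names are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

class NeoFuzzyNeuron:
    """One neo-fuzzy neuron: y = sum_i sum_j w[i, j] * mu_ij(x_i)."""

    def __init__(self, n_inputs, n_mf=5, lo=0.0, hi=1.0, eta=0.1):
        self.centers = np.linspace(lo, hi, n_mf)   # uniform triangular MF grid (assumption)
        self.width = self.centers[1] - self.centers[0]
        self.w = np.zeros((n_inputs, n_mf))        # one weight per (input, MF) pair
        self.eta = eta                             # learning rate (assumption)

    def _memberships(self, xi):
        # Triangular memberships; at most two neighbours fire per input.
        return np.clip(1.0 - np.abs(self.centers - xi) / self.width, 0.0, None)

    def predict(self, x):
        self.mu = np.array([self._memberships(xi) for xi in x])
        return float(np.sum(self.mu * self.w))

    def update(self, x, target):
        # The output is linear in w, so a single LMS step adjusts the
        # weights from one sample, enabling sequential (online) learning.
        err = target - self.predict(x)
        self.w += self.eta * err * self.mu
        return err
```

Because at most two memberships fire per input, each update touches only a handful of weights, which is what makes the sequential linear procedure fast.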
Abstract:
In a Ubiquitous Consumer Wireless World (UCWW) environment, the provision, administration and management of authentication, authorization and accounting (AAA) policies and business services are handled by third-party AAA service providers (3P-AAA-SPs), which are independent of the wireless access network providers (ANPs). In this environment the consumer can freely choose any suitable ANP based on his/her own preferences. This new AAA infrastructural arrangement necessitates assessing the impact on, and re-thinking the design, structure and location of, 'charging and billing' (C&B) functions and services. This paper addresses C&B issues in UCWW, proposing potential architectural solutions for C&B realization. Implementation approaches for these novel solutions, together with a software testbed for validation and performance evaluation, are also addressed.
Abstract:
Fibre-to-the-premises (FTTP) has long been sought as the ultimate solution to satisfy the demand for broadband access in the foreseeable future and to offer distance-independent data rates within access network reach. However, currently deployed FTTP networks have in most cases only replaced the transmission medium, without improving the overall architecture, resulting in deployments that are only cost-efficient in densely populated areas (effectively increasing the digital divide). In addition, the large potential increase in access capacity cannot be matched by a similar increase in core capacity at competitive cost, effectively moving the bottleneck from the access to the core. DISCUS is a European Integrated Project that, building on optical-centric solutions such as long-reach passive optical access and a flat optical core, aims to deliver a cost-effective architecture for ubiquitous broadband services. One of the key features of the project is its end-to-end approach, which promises to deliver a complete network design and a conclusive analysis of its economic viability. © 2013 IEEE.
Abstract:
There is growing pressure to ensure that future broadband networks are both super-fast and ubiquitously available to all users without the need for large government subsidies; this requires a radical change to network architectures. © OSA 2013.
Abstract:
Purpose: The human retinal vasculature has been demonstrated to exhibit fractal, or statistically self-similar, properties. Fractal analysis offers a simple quantitative method to characterise the complexity of the branching vessel network in the retina, and several methods have been proposed to quantify these fractal properties. Methods: Twenty-five healthy volunteers underwent retinal photography, retinal oximetry and ocular biometry. A robust method to evaluate the fractal properties of the retinal vessels is proposed; it consists of manual vessel segmentation and box counting of 50-degree retinal photographs centred on the fovea. Results: Data are presented on the associations between the fractal properties of the retinal vessels and various functional properties of the retina. Conclusion: Fractal properties of the retina could offer a promising tool to assess the risk and prognostic factors that define retinal disease. Work remains to adopt a standardised protocol for assessing the fractal properties of the retina and to further demonstrate their association with disease processes.
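As an illustration of the box-counting step, the sketch below estimates the fractal dimension of a binary vessel-segmentation mask; the box sizes and function name are assumptions, not the authors' standardised protocol:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal dimension of a binary mask (True = vessel pixel)."""
    h, w = mask.shape
    counts = []
    for s in sizes:
        # Count boxes of side s that contain at least one vessel pixel.
        n = sum(mask[i:i + s, j:j + s].any()
                for i in range(0, h, s)
                for j in range(0, w, s))
        counts.append(n)
    # Slope of log(count) vs log(1/size) approximates the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

For healthy retinal vasculature, dimensions near 1.7 are commonly reported in the literature.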
Abstract:
As traffic congestion continues to worsen in large urban areas, solutions are urgently sought. However, transportation planning models, which estimate traffic volumes on transportation network links, are often unable to realistically consider travel time delays at intersections. Introducing signal controls in models often results in significant and unstable changes in network attributes, which, in turn, leads to instability of the models, while ignoring the effect of delays at intersections makes the model output inaccurate and unable to predict travel time. To represent traffic conditions in a network more accurately, planning models should be capable of arriving at a network solution based on travel costs that are consistent with the intersection delays due to signal controls. This research attempts to achieve this goal by optimizing signal controls and estimating intersection delays accordingly, which are then used in traffic assignment; simultaneous optimization of traffic routing and signal controls has not previously been accomplished in real-world applications of traffic assignment. To this end, a delay model dealing with five major types of intersections has been developed using artificial neural networks (ANNs). An ANN architecture consists of interconnected artificial neurons and may be used either to gain an understanding of biological neural networks or to solve artificial intelligence problems without necessarily creating a model of a real biological system. The ANN delay model has been trained using extensive simulations based on TRANSYT-7F signal optimizations. The delay estimates produced by the ANN delay model have percentage root-mean-squared errors (%RMSE) of less than 25.6%, which is satisfactory for planning purposes; larger prediction errors are typically associated with severely oversaturated conditions. A combined system has also been developed that includes the ANN delay-estimating model and a user-equilibrium (UE) traffic assignment model. The combined system employs the Frank-Wolfe method to achieve a convergent solution. Because the ANN delay model provides no derivatives of the delay function, a Mesh Adaptive Direct Search (MADS) method is applied to assist in and expedite the iterative process of the Frank-Wolfe method. The performance of the combined system confirms that convergence of the solution is achieved, although the global optimum may not be guaranteed.
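As a rough sketch of how such a combined system can iterate with a derivative-free delay model, the loop below pairs Frank-Wolfe with a coarse grid step search standing in for MADS; `all_or_nothing`, `link_cost` and the fixed iteration count are illustrative assumptions, not the dissertation's implementation:

```python
import numpy as np

def frank_wolfe_ue(x0, all_or_nothing, link_cost, n_iter=50):
    """x0: initial link flows; link_cost(x) -> per-link costs (a black box,
    e.g. an ANN delay model offering no derivatives); all_or_nothing(c) ->
    auxiliary flows from shortest-path loading under costs c."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        y = all_or_nothing(link_cost(x))   # linearised subproblem
        d = y - x                          # descent direction
        # Derivative-free step search (MADS stand-in): pick the alpha where
        # the Beckmann directional derivative c(x + a*d) . d is closest to
        # zero, i.e. the equilibrium step along d.
        alphas = np.linspace(0.0, 1.0, 21)
        gaps = [abs(np.dot(link_cost(x + a * d), d)) for a in alphas]
        x = x + alphas[int(np.argmin(gaps))] * d
    return x
```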
Abstract:
Modern data centers host hundreds of thousands of servers to achieve economies of scale. Such a huge number of servers creates challenges for the data center network (DCN) to provide proportionally large bandwidth. In addition, the deployment of virtual machines (VMs) in data centers raises the requirements for efficient resource allocation and fine-grained resource sharing. Further, the large number of servers and switches in the data center consumes significant amounts of energy; even though servers become more energy-efficient with various energy-saving techniques, the DCN still accounts for 20% to 50% of the energy consumed by the entire data center. The objective of this dissertation is to enhance DCN performance as well as its energy efficiency by conducting optimizations on both the host and network sides. First, as the DCN demands huge bisection bandwidth to interconnect all the servers, we propose a parallel packet switch (PPS) architecture that directly processes variable-length packets without segmentation-and-reassembly (SAR). The proposed PPS achieves large bandwidth by combining the switching capacities of multiple fabrics, and it further improves switch throughput by avoiding the padding bits required by SAR. Second, since certain resource demands of a VM are bursty and stochastic in nature, we propose the Max-Min Multidimensional Stochastic Bin Packing (M3SBP) algorithm to satisfy both deterministic and stochastic demands in VM placement. M3SBP calculates an equivalent deterministic value for the stochastic demands and maximizes the minimum resource utilization ratio of each server. Third, to provide the necessary traffic isolation for VMs that share the same physical network adapter, we propose the Flow-level Bandwidth Provisioning (FBP) algorithm. By reducing the flow scheduling problem to multiple stages of packet queuing problems, FBP guarantees the provisioned bandwidth and delay performance for each flow. Finally, although DCNs are typically provisioned with full bisection bandwidth, traffic demonstrates fluctuating patterns, so we propose a joint host-network optimization scheme to enhance the energy efficiency of DCNs during off-peak traffic hours. The proposed scheme utilizes a unified representation method that converts the VM placement problem into a routing problem and employs depth-first and best-fit search to find efficient paths for flows.
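A hedged sketch of the "equivalent deterministic value" idea follows, assuming normally distributed demands, a 5% overflow target and a greedy max-min placement rule; it illustrates the principle rather than the dissertation's exact M3SBP algorithm:

```python
from statistics import NormalDist

def equivalent_demand(mean, std, overflow_prob=0.05):
    # Smallest deterministic value d with P(demand > d) <= overflow_prob,
    # assuming a normal demand model (the 5% target is an assumption).
    return mean + NormalDist().inv_cdf(1.0 - overflow_prob) * std

def place_vm(servers, vm_demands):
    """servers: dicts with per-resource 'cap' and 'used' lists;
    vm_demands: one (mean, std) pair per resource dimension.
    Greedy max-min rule: choose the feasible server whose minimum
    post-placement utilisation ratio is largest."""
    demand = [equivalent_demand(m, s) for m, s in vm_demands]
    best, best_score = None, -1.0
    for srv in servers:
        used = [u + d for u, d in zip(srv['used'], demand)]
        if any(u > c for u, c in zip(used, srv['cap'])):
            continue                      # violates a capacity: skip server
        score = min(u / c for u, c in zip(used, srv['cap']))
        if score > best_score:
            best, best_score = srv, score
    return best
```

Converting each stochastic demand into a single percentile value lets a conventional bin-packing feasibility check handle deterministic and stochastic demands uniformly.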
Abstract:
Dynamically reconfigurable time-division multiplexing (TDM) dense wavelength division multiplexing (DWDM) long-reach passive optical networks (PONs) can support the reduction of nodes and network interfaces by enabling a fully meshed flat optical core. In this paper we demonstrate the flexibility of the TDM-DWDM PON architecture, which can enable the convergence of multiple service types on a single physical layer. Heterogeneous services and modulation formats, i.e., residential 10G PON channels, a dedicated 100G business channel and wireless fronthaul, are demonstrated co-existing on the same long-reach TDM-DWDM PON system, with up to 100 km reach, 512 users and an emulated system load of 40 channels, employing amplifier nodes with either erbium-doped fiber amplifiers (EDFAs) or semiconductor optical amplifiers (SOAs). For the first time, end-to-end software-defined networking (SDN) management of the access and core network elements is also implemented and integrated with the PON physical layer in order to demonstrate two service use cases: a fast protection mechanism with end-to-end service restoration in the case of a primary link failure, and dynamic wavelength allocation (DWA) in response to an increased traffic demand.
Abstract:
Monitoring and tracking of IP traffic flows are essential for network services (e.g., packet forwarding). Packet header lookup, which determines the predefined matching action for each incoming flow, is the main part of flow identification. In this paper, an improved header lookup and flow rule update solution is investigated. A detailed study of several well-known lookup algorithms reveals that searching each packet header field individually and combining the results achieves high lookup speed and flexibility. The proposed hybrid lookup architecture comprises various lookup algorithms, selected based on the user applications and system requirements.
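One common way to realise "search each field, then combine" is bit-vector intersection, sketched below with exact-match tables and wildcards; real designs would use longest-prefix and range lookups per field, so treat the structure, names and rule encoding as assumptions:

```python
def build_tables(rules, fields=('src', 'dst', 'proto', 'dport')):
    """rules: list of dicts mapping field -> exact value (missing = wildcard).
    Each field table maps a value to a bitmask of the rules accepting it."""
    tables = {f: {} for f in fields}
    for i, rule in enumerate(rules):
        for f in fields:
            key = rule.get(f, '*')
            tables[f][key] = tables[f].get(key, 0) | (1 << i)
    return tables

def lookup(tables, pkt):
    """Search each header field independently, then AND the per-field
    bitmasks; pkt keys must be a subset of the table fields."""
    match = -1                                  # all bits set
    for field, value in pkt.items():
        tbl = tables[field]
        match &= tbl.get(value, 0) | tbl.get('*', 0)
        if match == 0:
            return None                         # no rule matches
    return (match & -match).bit_length() - 1    # lowest set bit = highest priority
```

Updating a flow rule then only touches the per-field entries that the rule references, which is one reason per-field decomposition also eases rule updates.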
Abstract:
Network security monitoring remains a challenge. As global networks scale up in terms of traffic volume and speed, effective attribution of cyber attacks is increasingly difficult. The problem is compounded by a combination of other factors, including the architecture of the Internet, multi-stage attacks and increasing volumes of non-productive traffic. This paper proposes to shift the focus of security monitoring from the source to the target: simply put, resources devoted to detection and attribution should be redeployed to efficiently monitor for targeting and prevention of attacks. The effort of detection should aim to determine whether a node is under attack and, if so, effectively prevent the attack. This paper contributes by systematically reviewing the structural, operational and legal reasons underlying this argument, and presents empirical evidence to support a shift away from attribution in favour of a target-centric monitoring approach. A carefully deployed set of experiments is presented and a detailed analysis of the results is provided.
Abstract:
Safety on public transport is a major concern for the relevant authorities. We address this issue by proposing an automated surveillance platform which combines data from video, infrared and pressure sensors. Data homogenisation and integration are achieved by a distributed architecture based on communication middleware that resolves interconnection issues, thereby enabling data modelling. A common-sense knowledge base models and encodes knowledge about public-transport platforms and the actions and activities of passengers. Trajectory data from passengers are modelled as a time-series of human activities, and common-sense knowledge and rules are then applied to detect inconsistencies or errors in the data interpretation. Lastly, the rationality that characterises human behaviour is also captured through a bottom-up Hierarchical Task Network planner that, along with common sense, corrects misinterpretations to explain passenger behaviour. The system is validated using a simulated bus saloon scenario as a case study. Eighteen video sequences were recorded with up to six passengers, and four metrics were used to evaluate performance. The system, with an accuracy greater than 90% for each of the four metrics, was found to outperform both a rule-based system and a system containing planning alone.
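As a toy illustration of applying common-sense rules to an activity time-series, the fragment below flags transitions that violate a hand-written transition model; the activity labels and allowed transitions are invented for illustration and are not the paper's knowledge base:

```python
# Invented common-sense transition model: which activity may follow which.
ALLOWED = {
    'board': {'walk', 'stand'},
    'walk':  {'sit', 'stand', 'alight'},
    'stand': {'walk', 'sit'},
    'sit':   {'stand'},
}

def inconsistencies(track):
    """track: list of (timestamp, activity) pairs. Flag transitions that
    violate the model, e.g. 'sit' reported directly after 'board'."""
    flags = []
    for (t0, a0), (t1, a1) in zip(track, track[1:]):
        if a1 != a0 and a1 not in ALLOWED.get(a0, set()):
            flags.append((t1, f"{a0} -> {a1} violates transition rules"))
    return flags
```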
Abstract:
Wireless sensor networks (WSNs) differ from conventional distributed systems in many respects. The resource limitations of sensor nodes and the ad-hoc communication and topology of the network, coupled with an unpredictable deployment environment, are difficult non-functional constraints that must be carefully taken into account when developing software systems for a WSN. Thus, more research needs to be done on designing, implementing and maintaining software for WSNs. This thesis aims to contribute to research in this area by presenting an approach to WSN application development that improves the reusability, flexibility and maintainability of the software. Firstly, we present a programming model and software architecture aimed at describing WSN applications independently of the underlying operating system and hardware. The proposed architecture is described and realized using the Model-Driven Architecture (MDA) standard in order to achieve satisfactory levels of encapsulation and abstraction when programming sensor nodes. In addition, we study different non-functional constraints of WSN applications and propose two approaches to optimize an application to satisfy these constraints. A real prototype framework was built to demonstrate the solutions developed in the thesis. The framework implements the programming model and the multi-layered software architecture as components; a graphical interface, code generation components and supporting tools are also included to help developers design, implement, optimize and test WSN software. Finally, we evaluate and critically assess the proposed concepts. Two case studies are provided to support the evaluation. The first case study, a framework evaluation, is designed to assess the ease with which novice and intermediate users can develop correct and power-efficient WSN applications, the portability level achieved by developing applications at a high level of abstraction, and the estimated overhead due to usage of the framework in terms of the footprint and executable code size of the application. In the second case study, we discuss the design, implementation and optimization of a real-world application named TempSense, in which a sensor network is used to monitor the temperature within an area.
Abstract:
The World Bank proposes good governance as the strategy for correcting the ills of bad governance and facilitating development in developing countries (Carayannis, Pirzadeh & Popescu, 2012; Hilyard & Wilks, 1998; Leftwich, 1993; World Bank, 1989). From this perspective, institutional reform and a more inclusive public-policy arena are two critical strategies aimed at establishing good governance, according to the Bank and the other Bretton Woods institutions. The problem is that many of these developing countries lack the institutional architecture that such new measures presuppose. This thesis studies and explains how a developing state, the Commonwealth of Dominica, embarked on a bill targeting integrity in the public service. That law, the Integrity in Public Office (IPO) Act, was passed in 2003 and implemented in 2008. The thesis analyses the power relations among the dominant actors surrounding the evolution of the law, and accordingly employs a combination of social network analysis techniques and qualitative research to answer the main question: why did the state develop and implement the current design of the IPO (2003)? This question is all the more significant when we consider that, contrary to existing research on the subject, Dominica's IPO diverges considerably in structure from the ideal-type IPO. We argue that "rational" actors, aware of their structural position in a network of actors, used their power resources to shape the institution so that it serves their interests and those of their allies. We further hypothesise, first, that the choice of a specialised anti-corruption agency and the subsequent design of that institution reflect the preferences of the dominant actors who participated in its creation, and second, as a rival hypothesis, that the features of alternative models of public-integrity institutions are those of the non-dominant actors. Our results are mixed. The power game was limited to a small group of dominant actors who sought to use the creation of the law to secure their legitimacy and political survival. Unsurprisingly, no actor advanced an alternative model. We therefore conclude that the law is the product of a partisan power game. This research responds to the scarcity of research on the design of public-integrity institutions, which largely seems to favour an organisational and structural bias. Moreover, by studying the subject from the standpoint of power relations (power itself viewed from both agential and structural angles), the thesis brings conceptual, methodological and analytical rigour to the discourse on the creation of these institutions by studying their genesis from both agential and structural perspectives. Furthermore, the results strengthen our ability to predict when, and with what intensity, an actor will deploy its power resources.