869 results for NETWORK DESIGN PROBLEMS
Abstract:
Today's healthcare system relies on numerous information technologies, referred to here as TIS (Technologies de l'Information en Santé, health information technologies). These have given rise to new forms of physician-patient interaction and have made the so-called "patient-centred" therapeutic approach more complex. TIS promise greater efficiency and increased patient satisfaction through a better understanding of the illness on the patient's part. However, they can also become sources of conflict for healthcare professionals, given their use outside clinical encounters and their tendency to act as communication barriers during consultations. This research studies the design criteria needed to develop a TIS that can improve the physician-patient relationship and thereby facilitate communication and strengthen the therapeutic alliance. The study follows a user-centred approach and therefore seeks to understand the needs and expectations of physicians and patients. By examining new approaches in healthcare and existing TIS, it was possible to understand the context and the users' communication needs, both of which are essential to the user-centred process. Patients' low retention of what the physician says is a major communication barrier, as is time pressure. The research shows that adding a virtual popularization tool can, through visual media (such as models, 3D animations and drawings), greatly support the physician-patient relationship.
Abstract:
In this thesis, we study several fundamental problems in financial and actuarial mathematics, along with their applications. The thesis consists of three contributions dealing mainly with risk measure theory, the capital allocation problem and fluctuation theory. In Chapter 2, we construct new coherent risk measures and study capital allocation within the framework of collective risk theory. To this end, we introduce the family of Cumulative Entropic Risk Measures. Chapter 3 studies the optimal portfolio problem for the Entropic Value at Risk when returns are modelled by a jump-diffusion process. In Chapter 4, we generalize the notion of natural risk statistics to the multivariate setting. This non-trivial extension yields multivariate risk measures built from financial and insurance data. Chapter 5 introduces the concepts of drawdown and speed of depletion in ruin theory. We study these concepts for risk models described by a family of spectrally negative Lévy processes.
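For reference, the Entropic Value at Risk studied in Chapter 3 is commonly defined (in one standard parametrization, following Ahmadi-Javid; the thesis may use a different convention) through the moment generating function of the loss X:

\[
\mathrm{EVaR}_{1-\alpha}(X) \;=\; \inf_{z>0} \frac{1}{z}\,\ln\!\left(\frac{M_X(z)}{\alpha}\right),
\qquad M_X(z) = \mathbb{E}\!\left[e^{zX}\right], \quad \alpha \in (0,1],
\]

which is the tightest upper bound on Value at Risk and Conditional Value at Risk obtainable from the Chernoff inequality.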
Abstract:
Rapid developments in fields such as fibre optic communication engineering and integrated optical electronics have expanded interest in, and raised expectations for, guided-wave optics, in which optical waveguides and optical fibres play a central role. The technology of guided-wave photonics now plays a role in generating information (guided-wave sensors) and processing information (spectral analysis, analog-to-digital conversion and other optical communication schemes) in addition to its original application of transmitting information (fibre optic communication). Passive and active polymer devices have generated much research interest recently because of the versatility of the fabrication techniques and the potential applications in two important areas: short-distance communication networks and special-functionality optical devices such as amplifiers, switches and sensors. Polymer optical waveguides and fibres are often designed to have large cores, 10-1000 micrometres in diameter, to facilitate easy connection and splicing. Large-diameter polymer optical fibres, being less fragile and far easier to work with than glass fibres, are attractive in sensing applications. Sensors using commercial plastic optical fibres are based on ideas already used in silica glass sensors, but exploit the flexible and cost-effective nature of the plastic optical fibre for harsh environments and throw-away sensors. In the field of photonics, considerable attention is centred on the use of polymer waveguides and fibres, as they have great potential for creating all-optical devices. By attaching organic dyes to the polymer system, a variety of optical functions can be incorporated. Organic dye-doped polymer waveguides and fibres are potential candidates for solid-state gain media. High-power, high-gain optical amplification in organic dye-doped polymer waveguide amplifiers is possible because of the extremely large emission cross sections of dyes. An extensive choice of organic dye dopants is also available, enabling amplification across a wide range of the visible region.
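As a textbook-level illustration (not a result reported here) of why the large emission cross sections of dyes matter, the small-signal gain of a dye-doped waveguide amplifier of length L can be estimated as

\[
G \;=\; \exp\!\big[(\sigma_e N^{*} - \alpha)\,L\big],
\]

where \(\sigma_e\) is the stimulated-emission cross section, \(N^{*}\) the excited-state dye population density and \(\alpha\) the background propagation loss; with \(\sigma_e\) of the order of \(10^{-16}\,\mathrm{cm}^2\) for typical dyes, modest excited-state densities already yield large gain per unit length.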
Abstract:
This thesis deals with the use of simulation as a problem-solving tool for a few logistic system related problems, more specifically transport terminals. Transport terminals are key elements in the supply chains of industrial systems. One of the problems related to the use of simulation is the multiplicity of models needed to study different problems; there is a need for conceptual-modelling methodologies that help reduce the number of models needed. Three different logistic terminal systems, viz. a railway yard, the container terminal of a port and an airport terminal, were selected as cases for this study. The standard methodology for simulation development was followed: system study and data collection, conceptual model design, detailed model design and development, model verification and validation, experimentation and analysis of results, and reporting of findings. We found that the systems could be classified into tightly pre-scheduled, moderately pre-scheduled and unscheduled systems. Three types of simulation models (called TYPE 1, TYPE 2 and TYPE 3) of various terminal operations were developed in the simulation package Extend; all were discrete-event simulation models. The simulation models were successfully used to help solve strategic, tactical and operational problems related to three important logistic terminals, as set out in our objectives. As a contribution to conceptual modelling, we have demonstrated that grouping problems into operational, tactical and strategic classes and matching them with tightly pre-scheduled, moderately pre-scheduled and unscheduled systems is a workable approach that reduces the number of models needed to study different terminal-related problems.
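As a minimal illustration of the discrete-event approach behind these terminal models (the actual models were built in Extend; the code below is a toy sketch with illustrative names and parameters, not taken from the thesis), a single-berth terminal with random arrivals and service times can be simulated with an event queue:

import heapq, random

def simulate_berth(arrival_rate=0.5, service_rate=0.7, horizon=1000.0, seed=1):
    # Toy discrete-event model: one berth, exponential inter-arrival and service times.
    rng = random.Random(seed)
    events = [(rng.expovariate(arrival_rate), "arrival")]
    queue, busy, served, total_wait = [], False, 0, 0.0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrival"))
            if busy:
                queue.append(t)                     # ship waits for the berth
            else:
                busy = True
                heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
        else:                                       # departure frees the berth
            served += 1
            if queue:
                total_wait += t - queue.pop(0)      # waiting time of the next ship
                heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
            else:
                busy = False
    return served, total_wait / max(served, 1)

print(simulate_berth())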
Abstract:
Antennas are necessary and vital components of communication and radar systems, but sometimes their inability to adjust to new operating scenarios can limit system performance. Reconfigurable antennas can adapt to changing system requirements or environmental conditions and provide additional levels of functionality that may result in wider instantaneous frequency bandwidths, more extensive scan volumes, and radiation patterns with more desirable side-lobe distributions. Their agility and diversity have opened new horizons for different types of applications, especially cognitive radio, Multiple Input Multiple Output (MIMO) systems, satellites and many others. Reconfigurable antennas satisfy the requirements for increased functionality, such as direction finding, beam steering, radar, and command and control, within a confined volume. The intelligence associated with reconfigurable antennas revolves around the switching mechanisms utilized. In the present work, we have investigated frequency-reconfigurable polarization-diversity antennas using two methods: first, by using low-loss, high-isolation switches such as PIN diodes, so that the antenna can be structurally reconfigured to keep the elements near their resonant dimensions for different frequency bands and/or polarizations; second, by incorporating variable capacitors (varactors) to overcome many of the problems faced in using switches and their biasing. The performance of these designs has been studied using standard simulation tools from industry and academia and has been experimentally verified. Antenna design guidelines are also deduced by accounting for the resonances. One of the major contributions of the thesis lies in the analysis of the designed antennas using FDTD-based numerical computation to validate their performance.
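For readers unfamiliar with the FDTD validation step, the sketch below shows the core Yee update loop in one dimension (free space, soft Gaussian source). It is a generic textbook scheme, not the computation used in the thesis; grid size, time steps and source parameters are arbitrary.

import numpy as np

def fdtd_1d(n_cells=200, n_steps=500, source_cell=100):
    ez = np.zeros(n_cells)          # electric field
    hy = np.zeros(n_cells)          # magnetic field
    for t in range(n_steps):
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])                 # H update (Courant factor 0.5)
        ez[1:]  += 0.5 * (hy[1:] - hy[:-1])                 # E update
        ez[source_cell] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source
    return ez

print(fdtd_1d()[:5])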
Abstract:
Rapid changes in the technological environment of marine logistics and the increasing integration of waterborne, air and land transport systems have fostered a revolution in the design and operation of transport vehicles, cargo-handling technology, and terminal facilities. This in turn has caused major changes in the functions and uses of ports. From the literature, it was found that these changes have been very slow in the case of Indian ports and that the performance of port operations was poor when compared with similar ports in the same region. It was also found that very few studies have been conducted to identify the reasons for the slow improvement in the performance of Indian major ports. In this thesis, an attempt is made to identify the operational problems of Indian major ports and to analyse the reasons for them. Some solutions are also proposed using management tools.
Abstract:
One of the major applications of underwater acoustic sensor networks (UWASN) is ocean environment monitoring. Employing data mules is an energy-efficient way of collecting data from the underwater sensor nodes in such a network. A data mule node, such as an autonomous underwater vehicle (AUV), periodically visits the stationary nodes to download data. By conserving the power required for data transmission over long distances to a remote data sink, this approach extends the network lifetime. In this paper we propose a new MAC protocol to support a single mobile data mule node that collects the data sensed by the sensor nodes in periodic runs through the network. In this approach, the nodes need to perform only short-distance, single-hop transmission to the data mule; the protocol design discussed in this paper is motivated by the need to support such an application. The proposed protocol is a hybrid that combines schedule-based access among the stationary nodes with handshake-based access to support mobile data mules. The new protocol, RMAC-M, is developed as an extension of the energy-efficient MAC protocol R-MAC by extending the R-MAC slot time to include a contention part for handshake-based data transfer. The mobile node uses a beacon to signal its presence to all nearby nodes, which can then handshake with it for data transfer. Simulation results show that the new protocol provides efficient support for a mobile data mule node while preserving the advantages of R-MAC, such as energy efficiency and fairness.
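The sketch below illustrates the slot idea described above: a scheduled window in the style of R-MAC, followed by a contention window in which a node that has heard the mule's beacon backs off and hand-shakes. All names, fields and timing values are illustrative assumptions, not the RMAC-M specification.

from dataclasses import dataclass, field
from typing import List
import random

@dataclass
class SensorNode:
    node_id: int
    buffer: List[str] = field(default_factory=list)

    def on_slot(self, scheduled_s: float, contention_s: float, beacon_heard: bool) -> str:
        # Scheduled part: R-MAC style single-hop transfer among stationary nodes (omitted here).
        report = f"node {self.node_id}: scheduled window {scheduled_s:.1f}s"
        if beacon_heard and self.buffer:
            # Contention part appended in RMAC-M: random backoff, then handshake with the mule.
            backoff = random.uniform(0.0, contention_s)
            report += f"; handshake with mule after {backoff:.2f}s backoff, upload {len(self.buffer)} msgs"
            self.buffer.clear()
        return report

nodes = [SensorNode(i, buffer=["sample"] * (i + 1)) for i in range(3)]
for n in nodes:
    print(n.on_slot(scheduled_s=1.0, contention_s=0.5, beacon_heard=True))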
Abstract:
Wireless sensor networks monitor their surrounding environment for the occurrence of some anticipated phenomenon. Most research on sensor networks considers static deployment of the sensor nodes. Mobility of sensor nodes adds an extra dimension of complexity, which poses interesting and challenging problems. Node mobility is a very important aspect in the design of effective routing algorithms for mobile wireless networks. In this work we present the impact of different mobility models on the performance of wireless sensor networks. The routing characteristics of various routing protocols for ad-hoc networks were studied under different mobility models. Performance metrics such as end-to-end delay, throughput and routing load were considered, and their variation under mobility models such as Freeway and RPGM (Reference Point Group Mobility) was studied. This work is useful for characterizing routing protocols according to the mobility patterns of the sensors.
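As a quick sketch of the kind of group mobility referred to above (RPGM), the toy model below moves a group leader randomly and keeps the members within a bounded deviation of the leader; speeds, deviations and the area size are illustrative values, not parameters from this work.

import random

def rpgm_step(leader, members, speed=1.0, deviation=5.0, area=100.0):
    def clamp(v):
        return min(max(v, 0.0), area)
    lx = clamp(leader[0] + random.uniform(-speed, speed))    # leader (reference point) moves
    ly = clamp(leader[1] + random.uniform(-speed, speed))
    new_members = [(clamp(lx + random.uniform(-deviation, deviation)),
                    clamp(ly + random.uniform(-deviation, deviation)))
                   for _ in members]                          # members stay near the leader
    return (lx, ly), new_members

leader, members = (50.0, 50.0), [(0.0, 0.0)] * 4
for _ in range(3):
    leader, members = rpgm_step(leader, members)
print(leader, members)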
Abstract:
Genetic programming is known to provide good solutions for many problems, such as the evolution of network protocols and distributed algorithms. In such cases it is usually a hardwired module of a design framework that assists the engineer in optimizing specific aspects of the system to be developed, and it provides its results in a fixed format through an internal interface. In this paper we show how the utility of genetic programming can be increased remarkably by isolating it as a component and integrating it into the model-driven software development process. Our genetic programming framework produces XMI-encoded UML models that can easily be loaded into widely available modeling tools, which in turn possess code generation as well as additional analysis and test capabilities. We use the evolution of a distributed election algorithm as an example to illustrate how genetic programming can be combined with model-driven development. This example clearly illustrates an advantage of our approach: the generation of source code in different programming languages.
Abstract:
Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. During recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the wanted global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process whose solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process step by step selects the most promising solution candidates and modifies and combines them with mutation and crossover operators. This way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations called Rule-based Genetic Programming (RBGP, eRBGP) designed by us. We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches have been developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and was, in most cases, superior to the other representations.
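The evolutionary process described here follows the usual generational pattern; the sketch below is a generic skeleton of that loop (names, rates and the selection scheme are illustrative, not the exact configuration used in the thesis). The evaluate callback would run a candidate program in several randomized network simulations and return an objective value to minimize.

import random

def evolve(random_program, evaluate, mutate, crossover,
           pop_size=100, generations=50, elite=2, mutation_rate=0.2):
    population = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=evaluate)       # lower objective value = better
        parents = ranked[: pop_size // 2]               # truncation selection (illustrative)
        children = list(ranked[:elite])                 # elitism: keep the best unchanged
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = crossover(a, b)                     # combine two promising candidates
            if random.random() < mutation_rate:
                child = mutate(child)                   # random variation
            children.append(child)
        population = children
    return min(population, key=evaluate)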
Abstract:
This robot has low natural frequencies of vibration. Insights into the problems of designing joint and link flexibility are discussed. The robot has three flexible rotary actuators and two flexible, interchangeable links, and is controlled by three independent processors on a VMEbus. Results from experiments on the control of residual vibration for different types of robot motion are presented. Impulse prefiltering and slowly accelerating moves are compared and shown to be effective at reducing residual vibration.
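For context, the best-known form of the impulse prefiltering mentioned above is the two-impulse Zero-Vibration (ZV) shaper; the sketch below computes its impulse times and amplitudes for a given flexible mode (textbook formulas, not the authors' implementation, and the example frequency and damping are made up).

import math

def zv_shaper(natural_freq_hz, damping_ratio):
    wn = 2.0 * math.pi * natural_freq_hz
    wd = wn * math.sqrt(1.0 - damping_ratio ** 2)             # damped natural frequency
    k = math.exp(-damping_ratio * math.pi / math.sqrt(1.0 - damping_ratio ** 2))
    amplitudes = [1.0 / (1.0 + k), k / (1.0 + k)]             # impulse amplitudes (sum to 1)
    times = [0.0, math.pi / wd]                               # second impulse half a damped period later
    return list(zip(times, amplitudes))

print(zv_shaper(2.0, 0.02))   # e.g. a 2 Hz mode with 2 % damping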
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies and ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second determining how the available unit is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential zero-compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
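A schematic form of such a two-stage model (written here in our own notation as an illustration; the paper's exact specification may differ) is a binary incidence stage deciding which parts are non-zero, followed by a logistic-normal stage for the conditional composition of the non-zero parts:

\[
Z_{ij} \sim \mathrm{Bernoulli}(p_{ij}), \qquad \operatorname{logit}(p_{ij}) = \eta_{ij},
\]
\[
\operatorname{alr}(x_i \mid Z_i) = \left(\ln\frac{x_{i1}}{x_{ik}}, \ldots, \ln\frac{x_{i,k-1}}{x_{ik}}\right) \sim \mathcal{N}_{k-1}(\mu, \Sigma),
\]

where the additive log-ratio transform is taken over the non-zero parts of composition \(x_i\), and the two stages may be independent or hierarchically dependent, as in the two models proposed.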
Abstract:
We present a system for dynamic network resource configuration in environments with bandwidth reservation. The proposed system is completely distributed and automates the mechanisms for adapting the logical network to the offered load. The system is able to dynamically manage a logical network such as a virtual path network in ATM or a label switched path network in MPLS or GMPLS. The system design and implementation are based on a multi-agent system (MAS) which makes the decisions about when and how to change a logical path. Despite the lack of a centralised global network view, results show that the MAS manages the network resources effectively, reducing the connection blocking probability and therefore achieving better utilisation of network resources. We also include details of its architecture and implementation.
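A toy version of the per-path decision an agent might make is sketched below; thresholds and action names are invented for illustration and are not the paper's algorithm.

def agent_decision(utilisation, recent_blocked_requests, high=0.9, low=0.4):
    # Decide when and how to change a logical path (virtual path / label switched path).
    if utilisation > high or recent_blocked_requests > 0:
        return "request_increase"   # negotiate more bandwidth with neighbouring agents
    if utilisation < low:
        return "offer_release"      # give capacity back for other logical paths
    return "hold"

print(agent_decision(0.95, 0), agent_decision(0.2, 0))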
Abstract:
We present a system for dynamic network resource configuration in environments with bandwidth reservation and path restoration mechanisms. Our focus is on the dynamic bandwidth management results, although the main goal of the system is the integration of the different mechanisms that manage the reserved paths (bandwidth, restoration, and spare capacity planning); the objective is to avoid conflicts between these mechanisms. The system is able to dynamically manage a logical network such as a virtual path network in ATM or a label switched path network in MPLS. The system is designed to be modular, in the sense that it can be activated or deactivated, and it can be applied to only a sub-network. The system design and implementation are based on a multi-agent system (MAS). We also include details of its architecture and implementation.
Abstract:
The work presented in this paper belongs to the power quality knowledge area and deals with voltage sags in power transmission and distribution systems. Propagating throughout the power network, voltage sags can cause numerous problems for domestic and industrial loads, at considerable financial cost. To impose penalties on the responsible party and to improve monitoring and mitigation strategies, sags must be located in the power network. With this objective, this paper proposes a new method for associating a sag waveform with its origin in transmission and distribution networks. It solves this problem by developing hybrid methods that employ multiway principal component analysis (MPCA) as a dimensionality reduction tool. MPCA re-expresses the sag waveforms in a new subspace using just a few scores. We train some well-known classifiers with these scores and use them for the classification of future sags. The capabilities of the proposed method for dimensionality reduction and classification are examined using real data gathered from three substations in Catalonia, Spain. The classification rates obtained confirm the effectiveness of the developed hybrid methods as new tools for sag classification.
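A minimal sketch of the unfold-then-reduce-then-classify pipeline follows (MPCA on an unfolded three-way array is computed here as ordinary PCA, which is the standard construction; the data are synthetic stand-ins and the classifier choice is arbitrary, so none of this reproduces the paper's experiments).

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
waveforms = rng.normal(size=(120, 3, 256))     # sags x channels x samples (synthetic)
labels = rng.integers(0, 2, size=120)          # e.g. 0 = transmission-side origin, 1 = distribution-side

X = waveforms.reshape(len(waveforms), -1)      # unfold the three-way array (core MPCA step)
model = make_pipeline(PCA(n_components=5), KNeighborsClassifier())
model.fit(X[:80], labels[:80])                 # train a classifier on the scores
print("hold-out accuracy:", model.score(X[80:], labels[80:]))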