991 results for Burroughs D-machine (Computer)


Relevance: 100.00%

Abstract:

This paper presents a study of connection availability in GMPLS over optical transport networks (OTN), taking into account different network topologies. Two basic path protection schemes are considered and compared with the unprotected case. The selected topologies are heterogeneous in geographic coverage, network diameter, link lengths, and average node degree. Connection availability is computed from the reliability data of physical components using a well-known network availability model. The results show several correspondences between suitable path protection algorithms and particular network topology characteristics.
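
The abstract does not spell out the availability model; a minimal sketch of the standard series/parallel formulation (an assumption here, not necessarily the paper's exact model) is: an unprotected path is available only if every component on it is available, while a 1+1 protected connection fails only if both the working and backup paths are down.

```python
from functools import reduce

def path_availability(component_availabilities):
    """Series model: a path is up only if every link/node on it is up."""
    return reduce(lambda a, b: a * b, component_availabilities, 1.0)

def protected_availability(working, backup):
    """Dedicated path protection: the connection fails only if both the
    working and the backup path are down (parallel model)."""
    aw = path_availability(working)
    ab = path_availability(backup)
    return 1.0 - (1.0 - aw) * (1.0 - ab)

# Hypothetical per-hop availabilities derived from component reliability data.
working = [0.9995, 0.9990, 0.9995]           # 3-hop working path
backup = [0.9990, 0.9985, 0.9990, 0.9990]    # longer, disjoint backup path

print(f"no protection:   {path_availability(working):.6f}")
print(f"path protection: {protected_availability(working, backup):.6f}")
```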

Relevance: 100.00%

Abstract:

Most network operators have considered reducing the label spaces of Label Switched Routers (LSRs) (i.e., the number of labels that can be used) as a means of simplifying the management of the underlying Virtual Private Networks (VPNs) and, hence, reducing operational expenditure (OPEX). This letter discusses the problem of reducing the label space in Multiprotocol Label Switching (MPLS) networks using label merging, better known as MultiPoint-to-Point (MP2P) connections. Because of their origins in IP, MP2P connections have been assumed to be tree-shaped, with Label Switched Paths (LSPs) as branches. For this reason, previous works by many authors affirm that the problem of minimizing the label space using MP2P in MPLS - the Merging Problem - cannot be solved optimally with a polynomial algorithm (it is NP-complete), since it involves a hard decision problem. In this letter, however, the Merging Problem is analyzed from the perspective of MPLS, and it is deduced that tree shapes in MP2P connections are irrelevant. By discarding this tree-shape assumption, label merging can be performed in polynomial time. Based on how MPLS signaling works, this letter proposes an algorithm to compute the minimum number of labels using label merging: the Full Label Merging algorithm. In conclusion, we reclassify the Merging Problem as polynomial-time solvable instead of NP-complete. In addition, simulation experiments confirm that, without the tree-branch selection problem, more labels can be saved.
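
A minimal sketch of the core idea (an illustration, not the paper's exact Full Label Merging algorithm): once tree shape is ignored, all LSPs at a node that belong to the same forwarding equivalence class (FEC) can share one merged label, so the merged label space is simply the count of distinct FECs per node. The toy topology and FEC encoding below are assumptions.

```python
from collections import defaultdict

def label_space(lsps):
    """Count labels per node without merging (one label per LSP per hop)
    and with MP2P label merging (one label per FEC per node).
    `lsps` is a list of (route, fec) pairs; a route is a node sequence."""
    unmerged = defaultdict(int)
    merged = defaultdict(set)
    for route, fec in lsps:
        for node in route[1:]:        # labels are allocated downstream of the ingress
            unmerged[node] += 1
            merged[node].add(fec)     # all LSPs of one FEC share a merged label
    return sum(unmerged.values()), sum(len(f) for f in merged.values())

# Hypothetical LSPs: three flows converging on egress E (same FEC).
lsps = [
    (["A", "C", "D", "E"], "E"),
    (["B", "C", "D", "E"], "E"),
    (["F", "D", "E"], "E"),
]
total, merged_total = label_space(lsps)
print(f"labels without merging: {total}, with merging: {merged_total}")  # 8 vs 3
```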

Relevance: 100.00%

Abstract:

This paper focuses on QoS routing with protection in an MPLS network over an optical layer. In this multi-layer scenario, each layer deploys its own fault management methods. A partially protected optical layer is proposed, with the rest of the network protected at the MPLS layer. New protection schemes that avoid duplicated protection are proposed. Moreover, this paper also introduces a new traffic classification based on the level of reliability. The failure impact is evaluated in terms of recovery time as a function of the traffic class. The proposed schemes also include a novel variation of minimum interference routing and shared segment backup computation. A complete set of experiments shows that the proposed schemes are more efficient than previous ones in terms of the resources used to protect the network, the failure impact, and the request rejection ratio.

Relevance: 100.00%

Abstract:

All-Optical Label Swapping (AOLS) es una tecnología clave para la implementación de nodos de conmutación completamente óptica de paquetes. Sin embargo, el costo de su desarrollo es proporcional al tamaño del espacio de etiquetas (label space). Debido a que los principios de funcionamiento de AOLS son casos particulares de los del MultiProtocol Label Switching (MPLS), esta tesis estudia métodos generales, aplicables a ambos, con el propósito de reducir el espacio de etiquetas tanto como sea posible. Modelos de programación lineal entera y heurísticas son propuestos para el caso en el que se permite apilar una etiqueta extra. Encontramos que cerca del 50% del espacio de etiquetas puede ser reducido, si se permite colocar una etiqueta extra en la pila. Además, particularmente para AOLS, encontramos que se puede reducir el espacio de etiquetas cerca al 25% si se duplica la capacidad de los enlaces y se permite re-encaminar el tráfico.
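
The thesis's ILP models are not reproduced in the abstract. As a hedged illustration of the one-extra-label idea, the toy heuristic below tunnels LSPs that share a common path segment under one stacked label, so each interior node of the segment needs a single tunnel label instead of one label per LSP; the routes and the savings rule are assumptions for illustration only.

```python
def segment_savings(routes, segment):
    """Labels saved at the interior nodes of `segment` when every route
    that fully contains it is carried in one tunnel (one extra stacked
    label, pushed/popped at the tunnel endpoints)."""
    def contains(route, seg):
        m = len(seg)
        return any(route[i:i + m] == seg for i in range(len(route) - m + 1))

    k = sum(contains(r, segment) for r in routes)
    interior = max(len(segment) - 2, 0)
    # At each interior node, k per-LSP labels collapse into 1 tunnel label.
    return interior * (k - 1) if k > 1 else 0

routes = [
    ["A", "C", "D", "E", "G"],
    ["B", "C", "D", "E", "H"],
    ["F", "C", "D", "E", "I"],
]
print(segment_savings(routes, ["C", "D", "E"]))  # 3 LSPs share C-D-E -> saves 2
```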

Relevance: 100.00%

Abstract:

Mode of access: Internet.

Relevance: 100.00%

Abstract:

The objective of this work is to use algorithms known as Boltzmann Machines to reconstruct and classify patterns such as images. This algorithm has a structure similar to that of an Artificial Neural Network, but its network nodes make stochastic, probabilistic decisions. This work presents the theoretical framework of the main Artificial Neural Networks, the General Boltzmann Machine algorithm, and a variation of this algorithm known as the Restricted Boltzmann Machine. Computer simulations are performed comparing the Backpropagation Artificial Neural Network with the General Boltzmann Machine and Restricted Boltzmann Machine algorithms. Through these simulations, the execution times of the described algorithms and the bit hit percentage of trained patterns that are later reconstructed are analyzed. Finally, binary images with and without noise are used to train the Restricted Boltzmann Machine algorithm; these images are then reconstructed and classified according to the bit hit percentage of the reconstruction. The Boltzmann Machine algorithms were able to classify the trained patterns and showed excellent reconstruction results with fast runtimes, and can therefore be used in applications such as image recognition.
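
A minimal NumPy sketch of a Restricted Boltzmann Machine trained with one-step contrastive divergence (CD-1), a standard training rule; the toy patterns and hyperparameters are assumptions, not the thesis's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal binary RBM trained with one-step contrastive divergence."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible bias
        self.c = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def train_step(self, v0):
        # Positive phase: sample hidden units given the data.
        ph0 = sigmoid(v0 @ self.W + self.c)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step back to the visible layer.
        pv1 = sigmoid(h0 @ self.W.T + self.b)
        ph1 = sigmoid(pv1 @ self.W + self.c)
        # CD-1 gradient approximation.
        self.W += self.lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
        self.b += self.lr * (v0 - pv1)
        self.c += self.lr * (ph0 - ph1)

    def reconstruct(self, v):
        h = sigmoid(v @ self.W + self.c)
        return sigmoid(h @ self.W.T + self.b)

# Toy 'images': flattened binary patterns.
patterns = np.array([[1, 1, 0, 0, 1, 1], [0, 0, 1, 1, 0, 0]], dtype=float)
rbm = RBM(n_visible=6, n_hidden=4)
for _ in range(2000):
    for v in patterns:
        rbm.train_step(v)

noisy = np.array([1, 1, 0, 0, 1, 0], dtype=float)   # corrupted first pattern
recon = (rbm.reconstruct(noisy) > 0.5).astype(int)
print("bit hit percentage:", 100 * np.mean(recon == patterns[0]))
```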

Relevance: 100.00%

Abstract:

Because of their high efficacy, long lifespan, and environmentally friendly operation, LED lighting devices have become more and more popular in every part of our lives, such as ornamental/interior lighting, outdoor lighting, and floodlighting. The LED driver is the most critical part of an LED lighting fixture: it heavily affects the purchase cost, the operating cost, and the light quality. The goal is to design a high-efficiency, low-component-cost, flicker-free LED driver. The conventional single-stage LED driver achieves low cost and high efficiency, but it inevitably produces significant twice-line-frequency light flicker, which adversely affects our health. The conventional two-stage LED driver achieves flicker-free LED driving at the expense of significantly higher component cost, greater design complexity, and lower efficiency. The basic ripple cancellation LED driving method is proposed in chapter three. It achieves high efficiency and low component cost, like the single-stage LED driver, while also providing flicker-free LED driving. The basic ripple cancellation LED driver is the foundation of the entire thesis. As the research evolved, two further ripple cancellation LED drivers were developed to improve different aspects of the basic design. The primary-side-controlled ripple cancellation LED driver is proposed in chapter four to further reduce the cost of the control circuit. It eliminates the secondary-side compensation circuit and an opto-coupler while maintaining flicker-free LED driving. A potential integrated primary-side controller could be designed based on the proposed LED driving method. The energy-channeling ripple cancellation LED driver is proposed in chapter five to further reduce the cost of the power stage circuit. In the previous two ripple cancellation LED drivers, an additional DC-DC converter is needed to achieve ripple cancellation; in the energy-channeling design, a power transistor successfully replaces the separate DC-DC converter, achieving lower cost. Detailed analysis supports the theory of the proposed ripple cancellation LED drivers, and simulations and experiments verify them.
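
A minimal simulation of the ripple cancellation principle (an illustration, not the thesis's circuit): a single-stage driver's output carries a twice-line-frequency ripple on top of the DC LED current, and an ideal series cancellation stage injects the anti-phase ripple so the LED current stays flat. The waveforms and the percent-flicker metric are assumptions for illustration.

```python
import numpy as np

f_line = 60.0                       # line frequency (Hz); ripple sits at 2*f_line
t = np.linspace(0, 0.1, 10_000)     # 100 ms window

I_dc = 0.7                                          # desired DC LED current (A)
ripple = 0.2 * np.sin(2 * np.pi * 2 * f_line * t)   # twice-line-frequency ripple

i_single_stage = I_dc + ripple       # single-stage driver: flickers at 120 Hz
i_cancel = i_single_stage - ripple   # ideal ripple cancellation stage output

def flicker_pct(i):
    """Percent flicker: (max - min) / (max + min)."""
    return 100 * (i.max() - i.min()) / (i.max() + i.min())

print(f"single-stage flicker: {flicker_pct(i_single_stage):.1f}%")
print(f"with cancellation:    {flicker_pct(i_cancel):.1f}%")
```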

Relevance: 100.00%

Abstract:

Atrial fibrillation (AF) is a major global health issue, as it is the most prevalent sustained supraventricular arrhythmia. Catheter-based ablation of selected parts of the atria is considered an effective treatment for AF. The main objective of this research is to analyze atrial intracardiac electrograms (IEGMs) and extract insightful information for ablation therapy. Throughout this thesis we propose several computationally efficient algorithms that take streams of IEGMs from different atrial sites as input signals, sequentially analyze them in various domains (e.g., time and frequency), and create a color-coded three-dimensional map of the atria to be used in ablation therapy.
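
The abstract does not detail the per-site analysis; a sketch of one standard frequency-domain step in AF mapping, dominant frequency estimation via the FFT, is shown below (an assumed illustration, not the thesis's exact pipeline). The per-site value it returns is the kind of scalar that could be color-coded onto a 3-D atrial map.

```python
import numpy as np

def dominant_frequency(iegm, fs, band=(3.0, 15.0)):
    """Return the dominant frequency (Hz) of one IEGM channel:
    the FFT magnitude peak within the physiological AF band."""
    spectrum = np.abs(np.fft.rfft(iegm - np.mean(iegm)))
    freqs = np.fft.rfftfreq(len(iegm), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(spectrum[mask])]

# Hypothetical 1 kHz recording: a 7 Hz atrial activation plus noise.
fs = 1000.0
t = np.arange(0, 4.0, 1.0 / fs)
iegm = (np.sin(2 * np.pi * 7.0 * t)
        + 0.3 * np.random.default_rng(1).standard_normal(t.size))
print(f"dominant frequency: {dominant_frequency(iegm, fs):.1f} Hz")  # ~7.0
```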

Relevance: 100.00%

Abstract:

Aberrant behavior of biological signaling pathways has been implicated in diseases such as cancer. Therapies have been developed to target proteins in these networks in the hope of curing the illness or bringing about remission. However, identifying targets for drug inhibition that exhibit a good therapeutic index has proven challenging, since signaling pathways have a large number of components and many interconnections such as feedback, crosstalk, and divergence. Unfortunately, characteristics of these pathways such as redundancy, feedback, and drug resistance reduce the efficacy of single-target therapy and necessitate the use of more than one drug to target multiple nodes in the system. However, choosing multiple targets with a high therapeutic index poses further challenges, since the combinatorial search space can be huge. To cope with the complexity of these systems, computational tools such as ordinary differential equations have been used to successfully model some of these pathways. Regrettably, building these models requires experimentally measured initial concentrations of the components and reaction rates, which are difficult to obtain and, for very large networks, may simply not be available. Fortunately, other modeling tools exist which, though not as powerful as ordinary differential equations, do not need rates and initial conditions to model signaling pathways; Petri nets and graph theory are among them. In this thesis, we introduce a methodology based on Petri net siphon analysis and graph network centrality measures for identifying prospective targets for single- and multiple-drug therapies. In this methodology, potential targets are first identified in the Petri net model of a signaling pathway using siphon analysis. Then, graph-theoretic centrality measures are employed to prioritize the candidate targets. An algorithm is also developed to check whether the candidate targets are able to disable the intended outputs in the graph model of the system. We implement structural and dynamical models of the ErbB1-Ras-MAPK pathways and use them to assess and evaluate this methodology. The identified drug targets, single and multiple, correspond to clinically relevant drugs. Overall, the results suggest that this methodology, using siphons and centrality measures, shows promise in identifying and ranking drug targets. Since the methodology only uses the structural information of the signaling pathways and does not need initial conditions and dynamical rates, it can be utilized in larger networks.
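
A sketch of the graph-theoretic half of such a methodology (the Petri net siphon analysis is omitted): rank candidate nodes by betweenness centrality with networkx, then check whether removing a candidate set disconnects the pathway input from the intended output. The toy pathway edges are assumptions for illustration, not the thesis's model.

```python
import networkx as nx

# Toy signaling pathway as a directed graph (hypothetical edges).
G = nx.DiGraph([
    ("ligand", "ErbB1"), ("ErbB1", "Ras"), ("Ras", "Raf"),
    ("Raf", "MEK"), ("MEK", "ERK"), ("ErbB1", "PI3K"), ("PI3K", "ERK"),
])

# Prioritize candidate targets by betweenness centrality.
centrality = nx.betweenness_centrality(G)
ranked = sorted(centrality, key=centrality.get, reverse=True)
print("ranked candidates:", ranked)

def disables_output(graph, targets, source="ligand", output="ERK"):
    """True if knocking out `targets` cuts every path from source to output."""
    H = graph.copy()
    H.remove_nodes_from(targets)
    return not (source in H and output in H and nx.has_path(H, source, output))

print(disables_output(G, ["Ras"]))           # False: PI3K branch survives
print(disables_output(G, ["Ras", "PI3K"]))   # True: multi-drug combination works
print(disables_output(G, ["ErbB1"]))         # True: single upstream target
```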

Relevance: 100.00%

Abstract:

The map representation of an environment should be selected based on its intended application. For example, a geometrically accurate map describing the Euclidean space of an environment is not necessarily the best choice if only a small subset of its features is required. One possible subset is the orientations of the flat surfaces in the environment, represented by a special parameterization of normal vectors called axes. Devoid of positional information, the entries of an axis map form a non-injective relationship with the flat surfaces in the environment, so that physically distinct flat surfaces are represented by a single axis. This drastically reduces the complexity of the map, yet retains important information about the environment that can be used in meaningful applications in both two and three dimensions. This thesis presents axis mapping, an algorithm that accurately and automatically estimates an axis map of an environment from sensor measurements collected by a mobile platform. Furthermore, two major applications of axis maps are developed and implemented. First, the LiDAR compass is a heading estimation algorithm that compares measurements of axes with an axis map of the environment. Pairing the LiDAR compass with simple translation measurements forms the basis for an accurate two-dimensional localization algorithm. It is shown that this algorithm eliminates the growth of heading error in both indoor and outdoor environments, resulting in accurate localization over long distances. Second, in the context of geotechnical engineering, a three-dimensional axis map is called a stereonet, which is used as a tool to examine the strength and stability of a rock face. Axis mapping provides a novel approach to creating accurate stereonets safely, rapidly, and inexpensively compared with established methods. The non-injective property of axis maps is leveraged to probabilistically describe the relationships between non-sequential measurements of the rock face. The automatic estimation of stereonets was tested in three separate outdoor environments. It is shown that axis mapping can accurately estimate stereonets while improving safety, requiring significantly less time and effort, and lowering costs compared with traditional and current state-of-the-art approaches.
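
A sketch of the heading-correction idea behind a LiDAR compass, as suggested by the abstract (an illustration, not the thesis's algorithm): measured surface normals are folded into axes on [0, 180) degrees, and the heading error is the offset that best aligns the measured axes with the axis map. Doubling the angles makes the 180-degree axis ambiguity cancel out.

```python
import numpy as np

def fold(theta):
    """Parameterize a normal direction as an axis: angles that differ by
    180 degrees describe the same flat surface."""
    return np.mod(theta, np.pi)

def heading_offset(map_axes, measured_axes):
    """Estimate heading error as the circular mean of per-axis differences,
    computed on doubled angles so the axis ambiguity vanishes."""
    diff = 2.0 * (fold(measured_axes) - fold(map_axes))
    return 0.5 * np.angle(np.mean(np.exp(1j * diff)))

map_axes = np.radians([0.0, 90.0, 45.0])   # axes stored in the axis map
true_err = np.radians(3.0)                 # hypothetical heading drift
noise = np.radians(0.2) * np.random.default_rng(2).standard_normal(3)
measured = fold(map_axes + true_err + noise)

print(np.degrees(heading_offset(map_axes, measured)))  # ~3.0 degrees
```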

Relevance: 100.00%

Abstract:

Spectral unmixing (SU) is a technique for characterizing the mixed pixels of hyperspectral images measured by remote sensors. Most existing spectral unmixing algorithms are developed using linear mixing models. Since the number of endmembers/materials present in each mixed pixel is normally small compared with the total number of endmembers (the dimension of the spectral library), the problem becomes sparse. This thesis introduces sparse hyperspectral unmixing methods for the linear mixing model in two different scenarios. In the first scenario, the library of spectral signatures is assumed to be known, and the main problem is to find the minimum number of endmembers subject to a reasonably small approximation error. Mathematically, the corresponding problem is the $\ell_0$-norm problem, which is NP-hard. Our main contribution in the first part of the thesis is to find more accurate and reliable approximations of the $\ell_0$-norm term and to propose sparse unmixing methods based on these approximations. The resulting methods show considerable improvements in reconstructing the fractional abundances of endmembers compared with state-of-the-art methods, such as lower reconstruction errors. In the second part of the thesis, the first scenario (i.e., the dictionary-aided semiblind unmixing scheme) is generalized to the blind unmixing scenario, in which the library of spectral signatures is also estimated. We apply the nonnegative matrix factorization (NMF) method to propose new unmixing methods, owing to its notable advantages, such as enforcing the nonnegativity constraints on the two decomposed matrices. Furthermore, we introduce new cost functions based on statistical and physical features of the spectral signatures of materials (SSoM) and of hyperspectral pixels, such as the collaborative property of hyperspectral pixels and the mathematical representation of the concentrated energy of SSoM in the first few subbands. Finally, we introduce sparse unmixing methods for the blind scenario and evaluate the efficiency of the proposed methods via simulations on synthetic and real hyperspectral data sets. The results show considerable improvements in estimating the spectral library of materials and their fractional abundances, such as smaller values of the spectral angle distance (SAD) and the abundance angle distance (AAD).
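
A sketch of the semiblind (known-library) scenario using a common baseline rather than the thesis's $\ell_0$ approximations: nonnegative least squares via scipy's `nnls`, with tiny abundances thresholded away to expose the sparse support. The library, pixel, and threshold are synthetic assumptions.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)

# Hypothetical spectral library A: 50 bands x 12 candidate endmembers.
A = np.abs(rng.normal(size=(50, 12)))

# Synthesize a mixed pixel from 3 endmembers (sparse ground truth).
x_true = np.zeros(12)
x_true[[1, 4, 9]] = [0.5, 0.3, 0.2]
y = A @ x_true + 0.001 * rng.standard_normal(50)

# Nonnegative least squares, then prune tiny abundances (sparse support).
x_hat, _ = nnls(A, y)
x_hat[x_hat < 0.01] = 0.0
x_hat /= x_hat.sum()       # renormalize to the abundance sum-to-one constraint

print("support:", np.nonzero(x_hat)[0])          # ideally [1, 4, 9]
print("abundances:", np.round(x_hat[x_hat > 0], 3))
```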

Relevance: 100.00%

Abstract:

The problem of decentralized sequential detection is studied in this thesis, where the local sensors are memoryless, receive independent observations, and get no feedback from the fusion center. In addition to the traditional criteria of detection delay and error probability, we introduce a new constraint: the number of communications between the local sensors and the fusion center. This metric reflects both the cost of establishing communication links and the overall energy consumption over time. A new formulation for communication-efficient decentralized sequential detection is proposed in which the overall detection delay is minimized under constraints on both the error probabilities and the communication cost. Two types of problems are investigated under this formulation: decentralized hypothesis testing and decentralized change detection. In the former case, an asymptotically person-by-person optimum detection framework is developed, where the fusion center performs a sequential probability ratio test based on dependent observations. The proposed algorithm utilizes not only the statistics reported by the local sensors but also the reporting times. The asymptotic relative efficiency of the proposed algorithm with respect to the centralized strategy is expressed in closed form. When the probabilities of false alarm and missed detection are close to one another, a reduced-complexity algorithm is proposed based on a Poisson arrival approximation. In addition, decentralized change detection with a communication cost constraint is investigated. A person-by-person optimum change detection algorithm is proposed, in which the transmissions of sensing reports are modeled as a Poisson process. The optimum threshold value is obtained through dynamic programming. An alternative method with a simpler fusion rule is also proposed, in which the threshold values are determined by a combination of sequential detection analysis and constrained optimization. For both the decentralized hypothesis testing and the change detection problems, the tradeoffs in parameter choices are investigated through Monte Carlo simulations.
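
For reference, a sketch of the classical Wald sequential probability ratio test that underlies the fusion rule (the thesis's dependent-observation variant is not reproduced here); the thresholds follow Wald's approximations from the target error probabilities, and the Gaussian model is an assumption for illustration.

```python
import numpy as np

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald SPRT for H0: N(mu0, sigma^2) vs H1: N(mu1, sigma^2).
    Returns (decision, number of samples used)."""
    A = np.log((1 - beta) / alpha)    # accept-H1 threshold
    B = np.log(beta / (1 - alpha))    # accept-H0 threshold
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood ratio increment of one Gaussian observation.
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2
        if llr >= A:
            return "H1", n
        if llr <= B:
            return "H0", n
    return "undecided", len(samples)

rng = np.random.default_rng(4)
obs = rng.normal(1.0, 1.0, size=1000)          # data actually drawn under H1
print(sprt(obs, mu0=0.0, mu1=1.0, sigma=1.0))  # typically ('H1', ~10-20)
```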

Relevance: 50.00%

Abstract:

In this work, we explore the feasibility of giving machines the ability to predict, in a human-computer interaction (HCI) context, a user's emotion, as well as its intensity, instantaneously and for a wide variety of situations. More specifically, an application called the emotional machine was developed, capable of "understanding" the meaning of a situation based on the Ortony, Clore and Collins (OCC) theoretical model of emotion appraisal. This machine is also able to predict users' emotional reactions by combining improved versions of k-nearest neighbours and neural networks. An empirical procedure was carried out to acquire data; these data provided consistent knowledge to the chosen learning algorithms and made it possible to test the machine's performance. The results obtained show that the proposed emotional machine is capable of producing good predictions. Such an achievement could encourage its future use in fields that exploit automatic emotion recognition.
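
A sketch of the k-nearest-neighbours half of such a predictor (the improved variants and the exact OCC feature encoding are not given in the abstract): situations are encoded as appraisal feature vectors and emotion intensity is predicted as the distance-weighted average over the k nearest labelled situations. All features and data below are hypothetical.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Distance-weighted k-nearest-neighbour regression of emotion intensity."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-9)      # closer situations weigh more
    return float(np.sum(w * y_train[nearest]) / np.sum(w))

# Hypothetical OCC-style appraisal features per situation:
# [desirability, unexpectedness, agent_blameworthiness]
X = np.array([
    [0.9, 0.1, 0.0],   # pleasant, expected          -> high joy intensity
    [0.8, 0.7, 0.0],   # pleasant surprise           -> very high
    [0.1, 0.2, 0.8],   # undesirable, other's fault  -> low joy
])
y = np.array([0.7, 0.9, 0.1])          # labelled joy intensities

print(knn_predict(X, y, np.array([0.85, 0.4, 0.0]), k=2))  # ~0.8
```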

Relevance: 40.00%

Abstract:

Computed tomography has been used to image and reconstruct in 3-D an Egyptian mummy from the collection of the British Museum. This study of Tjentmutengebtiu, a priestess from the 22nd dynasty (945-715 BC), revealed invaluable information of a scientific, Egyptological and palaeopathological nature without mutilation or destruction of the painted cartonnage case or linen wrappings. Precise details of the removal of the brain through the nasal cavity and of the viscera from the abdominal cavity were obtained. The nature and composition of the false eyes were investigated. Detailed analysis of the teeth provided a much closer approximation of age at death. The identification of the materials used for the various amulets, including that of the figures placed in the viscera, was graphically demonstrated using this technique.