857 results for clustering and QoS-aware routing


Relevance: 100.00%

Abstract:

Discrete, microscopic lesions develop in the brain in a number of neurodegenerative diseases. These lesions may not be randomly distributed in the tissue but may exhibit a spatial pattern, i.e., a departure from randomness towards regularity or clustering. The spatial pattern of a lesion may reflect its development in relation to other brain lesions or to neuroanatomical structures; hence, a study of spatial pattern may help to elucidate the pathogenesis of a lesion. A number of statistical methods can be used to study the spatial patterns of brain lesions, ranging from simple tests of whether the distribution of a lesion departs from random to more complex methods which can detect clustering and the size, distribution and spacing of clusters. This paper reviews the uses and limitations of these methods as applied to neurodegenerative disorders, and in particular to senile plaque formation in Alzheimer's disease.
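The simplest test in this family can be sketched as follows: the variance-to-mean ratio (index of dispersion) of lesion counts over contiguous quadrats. Under complete spatial randomness the counts are approximately Poisson, so the ratio is near 1; values well above 1 suggest clustering, well below 1 regularity. The quadrat counts here are hypothetical illustrative data, not measurements from the paper.

```python
def index_of_dispersion(counts):
    # variance-to-mean ratio of quadrat counts; ~1 under randomness
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    return var / mean

clustered = [0, 0, 9, 1, 0, 8, 0, 0, 7, 0]   # lesions piled into few quadrats
regular   = [2, 3, 2, 3, 2, 3, 2, 3, 2, 3]   # lesions spread evenly

print(index_of_dispersion(clustered))  # >> 1: departure towards clustering
print(index_of_dispersion(regular))    # << 1: departure towards regularity
```

In practice the ratio is compared against a chi-square distribution to decide whether the departure from randomness is significant.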

Relevance: 100.00%

Abstract:

Objective: Recently, much research has proposed using nature-inspired algorithms to perform complex machine learning tasks. Ant colony optimization (ACO) is one such algorithm, based on swarm intelligence and derived from a model inspired by the collective foraging behavior of ants. Taking advantage of ACO traits such as self-organization and robustness, this paper investigates ant-based algorithms for gene expression data clustering and associative classification. Methods and material: An ant-based clustering algorithm (Ant-C) and an ant-based association rule mining algorithm (Ant-ARM) are proposed for gene expression data analysis. The proposed algorithms make use of the natural behavior of ants, such as cooperation and adaptation, to allow for a flexible, robust search for a good candidate solution. Results: Ant-C has been tested on three datasets selected from the Stanford Genomic Resource Database and achieved relatively high accuracy compared to other classical clustering methods. Ant-ARM has been tested on the acute lymphoblastic leukemia (ALL)/acute myeloid leukemia (AML) dataset and generated about 30 classification rules with high accuracy. Conclusions: Ant-C can generate an optimal number of clusters without incorporating any other algorithms such as K-means or agglomerative hierarchical clustering. For associative classification, while well-known algorithms such as Apriori, FP-growth and Magnum Opus are unable to mine any association rules from the ALL/AML dataset within a reasonable period of time, Ant-ARM is able to extract associative classification rules.
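The pheromone mechanism that Ant-C and Ant-ARM (and ACO generally) rely on can be sketched in a toy form: ants choose options in proportion to pheromone, good choices are reinforced, and pheromone evaporates. All names, rates and qualities below are illustrative, not the paper's actual algorithms.

```python
import random

random.seed(0)                        # reproducible toy run
quality = {"A": 0.9, "B": 0.2}        # hypothetical solution qualities
tau = {"A": 1.0, "B": 1.0}            # initial pheromone on each option
rho = 0.1                             # evaporation rate

for _ in range(200):                  # 200 ant decisions
    total = tau["A"] + tau["B"]
    # probabilistic choice proportional to pheromone
    choice = "A" if random.random() < tau["A"] / total else "B"
    for opt in tau:                   # pheromone evaporates everywhere
        tau[opt] *= 1 - rho
    tau[choice] += quality[choice]    # reinforce the chosen option

print(tau["A"] > tau["B"])            # search concentrates on the better option
```

The same positive-feedback loop, applied to cluster assignments or rule components instead of a two-way choice, is what lets ant-based methods converge without a preset number of clusters.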

Relevance: 100.00%

Abstract:

Modern compute systems continue to evolve towards increasingly complex, heterogeneous and distributed architectures. At the same time, functionality and performance are no longer the only concerns when developing applications for such systems; additional concerns such as flexibility, power efficiency, resource usage, reliability and cost are becoming increasingly important. This raises the question not only of how to efficiently develop applications for such systems, but also of how to cope with dynamic changes in application behaviour or the system environment. The EPiCS project aims to address these aspects by exploring self-awareness and self-expression. Self-awareness allows systems and applications to gather and maintain information about their current state and environment, and to reason about their behaviour. Self-expression enables systems to adapt their behaviour autonomously to changing conditions. Innovations in EPiCS are based on the systematic integration of research in concepts and foundations, customisable hardware/software platforms and operating systems, and self-aware networking and middleware infrastructure. The developed technologies are validated in three application domains: computational finance, distributed smart cameras and interactive mobile media systems. © 2012 IEEE.

Relevance: 100.00%

Abstract:

Limited energy is a big challenge for large-scale wireless sensor networks (WSN). Previous research shows that modulation scaling is an efficient technique for reducing energy consumption. However, the impact of modulation scaling on packet delivery latency and loss has not been considered, and may adversely affect application quality. In this paper, we study this problem and propose control schemes that minimize energy consumption while ensuring application quality. We first analyze the relationship between modulation scaling and energy consumption, end-to-end delivery latency and packet loss ratio. With this analytical model, we develop a centralized control scheme that adaptively adjusts the modulation levels in order to minimize energy consumption while ensuring application quality. To improve the scalability of the centralized scheme, we also propose a distributed control scheme, in which the sink sends the differences between the required and measured application qualities to the sensors, and the sensors update their modulation levels using this feedback together with local information. Experimental results show the effectiveness of the control schemes in saving energy and guaranteeing QoS; the schemes adapt efficiently to time-varying requirements on application quality. Copyright © 2005 The Institute of Electronics, Information and Communication Engineers.
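The latency/energy trade-off behind modulation scaling can be sketched as follows: more bits per symbol (b) shortens transmission time, but the required transmit power grows roughly like 2^b - 1, so energy per bit rises. The cost constants, symbol rate and level set below are illustrative assumptions, not the paper's analytical model.

```python
def energy_per_bit(b, c_circuit=1.0, c_rf=0.2):
    # circuit energy scales with on-time (~1/b); RF energy like (2^b - 1)/b
    return c_circuit / b + c_rf * (2 ** b - 1) / b

def tx_time(bits, b, symbol_rate=1e6):
    # seconds needed to send `bits` bits at b bits per symbol
    return bits / (b * symbol_rate)

def min_level_meeting_deadline(bits, deadline, levels=(2, 3, 4, 5, 6)):
    # pick the lowest modulation level whose latency meets the deadline;
    # for these constants, energy per bit is non-decreasing in b
    for b in levels:
        if tx_time(bits, b) <= deadline:
            return b
    return max(levels)

b = min_level_meeting_deadline(bits=8000, deadline=2e-3)
print(b, energy_per_bit(b))  # b = 4 is the cheapest level meeting 2 ms here
```

A centralized controller would run this kind of selection per flow against the end-to-end latency budget; the distributed variant replaces the exact deadline with the quality-difference feedback from the sink.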

Relevance: 100.00%

Abstract:

Classification of MHC molecules into supertypes in terms of peptide-binding specificities is an important issue, with direct implications for the development of epitope-based vaccines with wide population coverage. In view of the extremely high MHC polymorphism (948 class I and 633 class II HLA alleles), the experimental solution of this task is presently impossible. In this study, we describe a bioinformatics strategy for classifying MHC molecules into supertypes using information drawn solely from three-dimensional protein structure. Two chemometric techniques, hierarchical clustering and principal component analysis, were used independently on a set of 783 HLA class I molecules to identify supertypes based on structural similarities and molecular interaction fields calculated for the peptide binding site. Eight supertypes were defined: A2, A3, A24, B7, B27, B44, C1, and C4. The two techniques gave 77% consensus, i.e., 605 HLA class I alleles were classified in the same supertype by both methods. The proposed strategy allowed “supertype fingerprints” to be identified. Thus, the A2 supertype fingerprint is Tyr9/Phe9, Arg97, and His114 or Tyr116; the A3, Tyr9/Phe9/Ser9, Ile97/Met97 and Glu114 or Asp116; the A24, Ser9 and Met97; the B7, Asn63 and Leu81; the B27, Glu63 and Leu81; the B44, Ala81; the C1, Ser77; and the C4, Asn77.
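One of the two chemometric techniques used above, agglomerative (hierarchical) clustering, can be sketched stdlib-only as follows. Each point stands for a hypothetical low-dimensional descriptor of a binding site (real inputs would be structural similarity and molecular interaction fields); merging stops at the desired supertype count.

```python
def dist(a, b):
    # Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def agglomerate(points, n_clusters):
    clusters = [[i] for i in range(len(points))]    # start: one point each
    while len(clusters) > n_clusters:
        best = None
        # find the pair of clusters with smallest average linkage
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = sum(dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)              # merge the closest pair
    return clusters

# two obvious groups of binding-site "fingerprints" (illustrative data)
pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
print(sorted(sorted(c) for c in agglomerate(pts, 2)))  # -> [[0, 1, 2], [3, 4, 5]]
```

Cutting the merge tree at different heights, rather than at a fixed cluster count, is what lets the dendrogram itself suggest the number of supertypes.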

Relevance: 100.00%

Abstract:

The INFRAWEBS project [INFRAWEBS] considers the usage of semantics for the complete lifecycle of Semantic Web processes, which represent complex interactions between Semantic Web Services. One of the main initiatives in the Semantic Web is the WSMO framework, which aims at describing the various aspects related to Semantic Web Services in order to enable the automation of Web Service discovery, composition, interoperation and invocation. In this paper a conceptual architecture for a BPEL-based INFRAWEBS editor is proposed, intended to construct part of the WSMO descriptions of Semantic Web Services. The semantic description of Web Services has to cover Data, Functional, Execution and QoS semantics. The representation of Functional semantics can be achieved by adding the service functionality to the process description. The architecture relies on a functional (operational) semantics of the Business Process Execution Language for Web Services (BPEL4WS) and uses the abstract state machine (ASM) paradigm. This allows the dynamic properties of process descriptions to be expressed in terms of partially ordered transition rules and transformed to the WSMO framework.
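The ASM idea of guarded transition rules over a state can be illustrated with a tiny sketch: each rule is a (guard, update) pair, and the machine repeatedly fires an enabled rule until none applies. The BPEL-like activity names are hypothetical, not the INFRAWEBS model.

```python
state = {"activity": "receive", "replied": False}

# partially ordered transition rules: (guard, update) pairs
rules = [
    (lambda s: s["activity"] == "receive",
     lambda s: s.update(activity="invoke")),
    (lambda s: s["activity"] == "invoke",
     lambda s: s.update(activity="reply")),
    (lambda s: s["activity"] == "reply" and not s["replied"],
     lambda s: s.update(replied=True, activity="done")),
]

def step(s):
    for guard, update in rules:
        if guard(s):                # fire the first enabled rule
            update(s)
            return True
    return False                    # no rule enabled: machine terminated

while step(state):
    pass
print(state)  # {'activity': 'done', 'replied': True}
```

Because the rules are declarative, the same state-transition description can be inspected and translated into another formalism, which is the property the paper exploits when mapping BPEL processes to WSMO descriptions.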

Relevance: 100.00%

Abstract:

Next-generation networks are likely to be non-uniform in all their aspects, including the number of lightpaths carried per link, the number of wavelengths per link, the number of fibres per link, the asymmetry of the links, and traffic flows. Routing and wavelength allocation models generally assume that the optical network is uniform and that the number of wavelengths per link is constant. In practice, however, some nodes and links carry heavy traffic and additional wavelengths are needed on those links. We study a wavelength-routed optical network based on the UK JANET topology in which traffic demands between nodes are assumed to be non-uniform. We investigate how network capacity can be increased by locating congested links and suggesting cost-effective upgrades. Different traffic demand patterns, hop distances, numbers of wavelengths per link, and routing algorithms are considered. Numerical results show that a 95% increase in network capacity is possible by overlaying fibre on just 5% of existing links. We conclude that non-uniform traffic allocation can be exploited to localize traffic in nodes and links deep in the network core, where provisioning additional resources can efficiently and cost-effectively increase network capacity. © 2013 IEEE.
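The upgrade heuristic described above can be sketched as: route all demands, count lightpaths per link, and overlay fibre only on the small fraction of links carrying the most traffic. The toy paths and the upgrade fraction below are illustrative, not the JANET data or the paper's exact procedure.

```python
from collections import Counter

# routed lightpaths as link sequences (e.g. precomputed shortest paths)
paths = [
    [("A", "B"), ("B", "C")],
    [("A", "B"), ("B", "D")],
    [("A", "B"), ("B", "C")],
    [("E", "B"), ("B", "C")],
]

# lightpath load per link
load = Counter(link for path in paths for link in path)

def links_to_upgrade(load, fraction=0.25):
    # overlay fibre on only the top `fraction` of links by load
    k = max(1, round(len(load) * fraction))
    return [link for link, _ in load.most_common(k)]

print(links_to_upgrade(load))  # the hot core links around node B
```

In this toy topology the links into and out of node B carry almost all lightpaths, mirroring the paper's observation that congestion concentrates deep in the network core.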

Relevance: 100.00%

Abstract:

We investigated the diversity pattern of nine Swiss stone pine (Pinus cembra L.) populations along the Carpathian range, including the High Tatras, using six chloroplast DNA microsatellites (cpSSR). Our aim was to detect genetically distinct regions by clustering populations, and to trace possible historical colonization routes. The two most distant populations in the investigated geographical range are situated about 500 km apart (air distance). We found that the most diverse populations are situated at the two edges of the investigated range, in the Retezat Mts. (South Carpathians) and the High Tatras, and that diversity decreases towards the populations of the Eastern Carpathians. Hierarchical clustering and NMDS revealed that the populations of the South Carpathians together with the Tatras form a distinct cluster, significantly separated from those of the Eastern Carpathians. Moreover, based on the most variable chloroplast microsatellites, the four populations of the two range edges are not significantly different. Our results, supported also by palynological and late-glacial macrofossil evidence, indicate refugial territories within the Retezat Mts. that conserved a rich haplotype composition. From this refugial territory Pinus cembra might have colonized the Eastern Carpathians, accompanied by a gradual decrease in population diversity. Populations of the High Tatras might have played the same role in the colonization of the Carpathians, as positive correlation was detected among populations lying 280 km apart, the maximum distance between neighbouring populations.
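The population-diversity comparison above rests on a standard measure for cpSSR data, Nei's unbiased haplotype diversity H = n/(n-1) * (1 - sum p_i^2). A minimal sketch, with hypothetical haplotype samples standing in for a rich range-edge population and a depauperate interior one:

```python
from collections import Counter

def haplotype_diversity(haplotypes):
    # Nei's unbiased haplotype diversity from a list of sampled haplotypes
    n = len(haplotypes)
    freqs = [c / n for c in Counter(haplotypes).values()]
    return n / (n - 1) * (1 - sum(p * p for p in freqs))

edge_pop     = ["h1", "h2", "h3", "h4", "h1", "h5"]  # haplotype-rich (edge-like)
interior_pop = ["h1", "h1", "h1", "h1", "h2", "h1"]  # depauperate interior stand

print(haplotype_diversity(edge_pop) > haplotype_diversity(interior_pop))
```

A gradient of decreasing H away from a refugium, as computed here per population, is exactly the signal interpreted above as a colonization route.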

Relevance: 100.00%

Abstract:

The present study examines the extent to which blacks are segregated in the suburban community of Coconut Grove, Florida. Hypersegregation, or the general tendency for blacks and whites to live apart, was examined in terms of four distinct dimensions: evenness, exposure, clustering, and concentration. Together, these dimensions define the geographic traits of the target area; alone, these indices cannot capture the multi-dimensional levels of segregation and, therefore, by themselves underestimate the severity of segregation and isolation in this community. This study takes a contemporary view of segregation in a Dade County community to see if segregation is the catalyst for the sometimes-cited violent response of blacks. The study yields results that support the information in the literature review and the thesis research questions sections, namely that the blacks within the Grove do respond violently to the negative effects that racial segregation causes. This thesis is unique in two ways: it examines segregation in a suburban environment rather than an urban inner city, and it presents a responsive analysis of the individuals studied, rather than relying only on demographic and statistical data.
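The evenness dimension mentioned above is conventionally measured by the index of dissimilarity, D = 1/2 * sum|b_i/B - w_i/W| over areal units i; D is the share of either group that would have to move to equalize the two distributions. A minimal sketch on hypothetical tract counts (not the study's data):

```python
def dissimilarity(black, white):
    # index of dissimilarity across parallel tract counts
    B, W = sum(black), sum(white)
    return 0.5 * sum(abs(b / B - w / W) for b, w in zip(black, white))

# four hypothetical tracts, with blacks concentrated in tract 0
black = [80, 10, 5, 5]
white = [5, 30, 35, 30]

print(dissimilarity(black, white))  # ~0.75: closer to 1 means more segregated
```

Exposure, clustering and concentration each have analogous indices; the study's point is that no single one of them suffices on its own.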

Relevance: 100.00%

Abstract:

Over the past decades, hospitality design has lost sight of its basic goals of providing the guest with safe, pleasant, convenient accommodations and providing the owner with a facility which can be operated efficiently and profitably over the life of the structure. The author offers the acronym GENIAL (Guest, Environment, Needs, Interiors, Accessibility, and Long-term) as a means of keeping owners, developers, managers, and designers aware of the desired goals of the facility throughout its design and development. The author believes that the use of this acronym will promulgate designs more attuned to guest and owner/operator needs, resulting in increased guest satisfaction and increased profitability.

Relevance: 100.00%

Abstract:

The advent of smart TVs has reshaped the TV-consumer interaction by combining TVs with mobile-like applications and access to the Internet. However, consumers are still unable to seamlessly interact with the content being streamed. An example of such a limitation is TV shopping, in which a consumer purchases a product or item displayed in the current TV show. Currently, consumers can only stop the current show and attempt to find a similar item on the Web or in an actual store. It would be more convenient if the consumer could interact with the TV to purchase interesting items. Towards the realization of TV shopping, this dissertation proposes a scalable multimedia content processing framework. Two main challenges in TV shopping are addressed: the efficient detection of products in the content stream, and the retrieval of similar products given a consumer-selected product. The proposed framework consists of three components. The first component performs computationally and temporally aware multimedia abstraction to select a reduced number of frames that summarize the important information in the video stream. By both reducing the number of frames and taking into account the computational cost of the subsequent detection phase, this component allows the efficient detection of products in the stream. The second component realizes the detection phase. It executes scalable product detection using multi-cue optimization: additional information cues are formulated into an optimization problem that allows the detection of complex products, i.e., those that do not have a rigid form and can appear in various poses. After the second component identifies products in the video stream, the consumer can select an interesting one for which similar ones must be located in a product database. To this end, the third component of the framework consists of an efficient, multi-dimensional, tree-based indexing method for multimedia databases. The proposed index mechanism serves as the backbone of the search; moreover, it efficiently bridges the semantic gap and perception-subjectivity issues during the retrieval process to provide more relevant results.
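The tree-based multi-dimensional indexing idea in the third component can be sketched with a tiny k-d tree over product feature vectors, queried for the nearest neighbour of a consumer-selected product. The 2-D descriptors below are hypothetical; the dissertation's index and features are richer than this sketch.

```python
def build(points, depth=0):
    # recursively split on alternating axes at the median point
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    # best is (squared distance, point); prune subtrees that cannot win
    if node is None:
        return best
    d = sum((a - b) ** 2 for a, b in zip(node["point"], query))
    if best is None or d < best[0]:
        best = (d, node["point"])
    diff = query[node["axis"]] - node["point"][node["axis"]]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, query, best)
    if diff ** 2 < best[0]:          # far side may still hold a closer point
        best = nearest(far, query, best)
    return best

catalog = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
tree = build(catalog)
print(nearest(tree, (9, 2))[1])  # -> (8, 1)
```

The pruning step is what makes tree indexes sublinear on average, which is the efficiency argument for using them as the retrieval backbone.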

Relevance: 100.00%

Abstract:

Cloud computing realizes the long-held dream of converting computing capability into a type of utility. It has the potential to fundamentally change the landscape of the IT industry and our way of life. However, as cloud computing expands substantially in both scale and scope, ensuring its sustainable growth is a critical problem. Service providers have long been suffering from high operational costs, especially those associated with the skyrocketing power consumption of large data centers. Meanwhile, although efficient power/energy utilization is indispensable for the sustainable growth of cloud computing, service providers must also satisfy users' quality of service (QoS) requirements. This problem becomes even more challenging considering the increasingly stringent power/energy and QoS constraints, as well as other factors such as the highly dynamic, heterogeneous, and distributed nature of the computing infrastructures. In this dissertation, we study the problem of delay-sensitive cloud service scheduling for the sustainable development of cloud computing. We first focus on the development of scheduling methods for delay-sensitive cloud services on a single server, with the goal of maximizing a service provider's profit. We then extend our study to scheduling cloud services in distributed environments. In particular, we develop a queue-based model and derive efficient request dispatching and processing decisions in a multi-electricity-market environment to improve profits for service providers. We next study a problem of multi-tier service scheduling: by carefully assigning sub-deadlines to the service tiers, our approach can significantly improve resource usage efficiency with statistically guaranteed QoS. Finally, we study the power-conscious resource provisioning problem for service requests with different QoS requirements. By properly sharing computing resources among different requests, our method statistically guarantees all QoS requirements with a minimized number of powered-on servers and thus minimized power consumption. The significance of our research is that it is part of an integrated effort from both industry and academia to ensure the sustainable growth of cloud computing as it continues to evolve and change our society profoundly.
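The multi-electricity-market dispatching idea can be sketched as: send a request to the data center with the cheapest current power price among those whose queueing delay still meets the QoS deadline. The sketch uses the textbook M/M/1 delay 1/(mu - lambda); all rates, prices and names are illustrative, not the dissertation's model.

```python
def mm1_delay(arrival, service):
    # mean M/M/1 response time; infinite when the queue is unstable
    return float("inf") if arrival >= service else 1.0 / (service - arrival)

def dispatch(centers, extra_load, deadline):
    # centers: {name: (current arrival rate, service rate, power price)}
    feasible = [
        (price, name) for name, (lam, mu, price) in centers.items()
        if mm1_delay(lam + extra_load, mu) <= deadline
    ]
    return min(feasible)[1] if feasible else None  # cheapest feasible site

centers = {
    "east":  (80.0, 100.0, 0.12),   # moderately priced, moderately loaded
    "west":  (40.0, 100.0, 0.15),   # lightly loaded but expensive power
    "north": (90.0, 100.0, 0.08),   # cheapest power, but nearly saturated
}
print(dispatch(centers, extra_load=15.0, deadline=0.2))  # -> 'east'
```

Note that the cheapest site ("north") is rejected because the added load would violate the delay constraint, which is exactly the QoS/cost tension the dissertation studies.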

Relevance: 100.00%

Abstract:

A Wireless Sensor Network (WSN) consists of devices distributed over an area in order to monitor physical variables such as temperature, pressure, vibration, motion and environmental conditions in places where wired networks would be difficult or impractical to implement, for example, industrial applications of difficult access, monitoring and control of on-shore or off-shore oil wells, and monitoring of large agricultural and animal farming areas, among others. To be viable, a WSN should meet important requirements such as low cost, low latency and, especially, low power consumption. To ensure these requirements, however, these networks have limited resources and are eventually used in hostile environments, leading to high failure rates, such as segmented routing and message loss, reducing efficiency and potentially compromising the entire network. This work presents FTE-LEACH, a fault-tolerant and energy-efficient routing protocol that maintains efficiency in communication and data dissemination. The protocol was developed based on the IEEE 802.15.4 standard and is suitable for industrial networks with limited energy resources.
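The LEACH family on which FTE-LEACH builds rotates the energy-hungry cluster-head role: in round r, a node that has not yet served in the current epoch becomes head with probability T = P / (1 - P * (r mod 1/P)). A minimal sketch of that election, with illustrative parameters (the fault-tolerance additions of FTE-LEACH are not modelled):

```python
import random

def threshold(P, r):
    # LEACH election threshold: rises through the epoch so nodes that
    # have not yet served become certain to serve by its last round
    return P / (1 - P * (r % round(1 / P)))

def simulate(n_nodes=100, P=0.1, rounds=10, seed=42):
    rng = random.Random(seed)
    served = set()                      # nodes already heads this epoch
    heads_per_round = []
    for r in range(rounds):
        if r % round(1 / P) == 0:
            served.clear()              # new epoch: all nodes eligible again
        heads = [n for n in range(n_nodes)
                 if n not in served and rng.random() < threshold(P, r)]
        served.update(heads)
        heads_per_round.append(len(heads))
    return heads_per_round

print(simulate())  # roughly P*n heads per round; each node serves once per epoch
```

Rotating the role this way spreads the energy cost of aggregation and long-range transmission evenly, which is the main energy-efficiency lever such protocols offer.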

Relevance: 100.00%

Abstract:

The spread of wireless networks and the growing proliferation of mobile devices require the development of mobility control mechanisms to support the different traffic demands under different network conditions. A major obstacle to developing this kind of technology is the complexity involved in handling all the information about the large number of Moving Objects (MO), as well as the entire signaling overhead required to manage these procedures in the network. Although several initiatives have been proposed by the scientific community to address this issue, they have not proved effective, since they depend on a particular request from the MO that is responsible for triggering the mobility process. Moreover, they are often guided only by wireless-medium statistics, such as the Received Signal Strength Indicator (RSSI) of the candidate Point of Attachment (PoA). This work therefore seeks to develop, evaluate and validate a sophisticated communication infrastructure for Wireless Networking for Moving Objects (WiNeMO) systems by making use of the flexibility provided by the Software-Defined Networking (SDN) paradigm, where network functions are easily and efficiently deployed by integrating the OpenFlow and IEEE 802.21 standards. For benchmarking purposes, the analysis covered both control and data plane aspects, demonstrating that the proposal significantly outperforms typical IP-based SDN with QoS-enabled capabilities, allowing the network to handle multimedia traffic with optimal Quality of Service (QoS) transport and acceptable Quality of Experience (QoE) over time.
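The criticism of RSSI-only handover decisions can be illustrated with a sketch: a weighted multi-criteria score that also accounts for PoA load can prefer a less congested attachment point over the strongest signal. The weights, normalization and candidate values are illustrative assumptions, not the WiNeMO controller logic.

```python
def rssi_only(candidates):
    # legacy policy: attach to the strongest signal, ignoring load
    return max(candidates, key=lambda c: c["rssi"])

def multi_criteria(candidates, w_rssi=0.5, w_load=0.5):
    # normalize RSSI (assumed in [-100, -30] dBm) and load (0..1, lower better)
    def score(c):
        return w_rssi * (c["rssi"] + 100) / 70 + w_load * (1 - c["load"])
    return max(candidates, key=score)

candidates = [
    {"name": "PoA-1", "rssi": -45, "load": 0.95},   # strong but congested
    {"name": "PoA-2", "rssi": -60, "load": 0.10},   # decent signal and idle
]
print(rssi_only(candidates)["name"], multi_criteria(candidates)["name"])
```

An SDN controller is well placed to run such a policy because, unlike the MO itself, it sees the load and QoS state of every candidate PoA.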

Relevance: 100.00%

Abstract:

Current and future applications pose new requirements that the Internet architecture is not able to satisfy, such as Mobility, Multicast, Multihoming and Bandwidth Guarantee. The Internet architecture has limitations which do not allow all future requirements to be covered, and new architectures have been proposed that consider these requirements when a communication is established. ETArch (Entity Title Architecture) is a new clean-slate Internet architecture, able to use the application's requirements for each communication, and flexible enough to work with several layers. Routing plays an important role on the Internet, because it decides the best way to forward primitives through the network. In the Future Internet, all requirements depend on routing: routing is responsible for deciding the best path and, in the future, a better route may consider Mobility aspects or Energy Consumption, for instance. At the dawn of ETArch, routing had not been defined. This work provides intra- and inter-domain routing algorithms to be used in ETArch. It is assumed that the route should be defined completely before data starts to flow, to ensure that the requirements are met. On the Internet, routing has two distinct functions: (i) running specific algorithms to define the best route; and (ii) forwarding data primitives to the correct link. In the traditional Internet architecture, both routing functions are performed in all routers every time a packet arrives. This work allows the complete route to be defined before the communication starts, as in telecommunication systems. The work defines routing for ETArch, and experiments were performed to demonstrate the viability of control-plane routing. The initial setup before a communication takes longer, but afterwards only the forwarding of primitives is performed, saving processing time.
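The control-plane idea described above, computing the complete route before any data primitive is forwarded, can be sketched with plain Dijkstra over link costs. The topology is illustrative; an ETArch route computation could weigh mobility or energy metrics instead of simple hop cost.

```python
import heapq

def dijkstra(graph, src, dst):
    # returns (total cost, full node path) before any data is sent
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path          # complete route known up front
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []            # destination unreachable

graph = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 5},
    "C": {"D": 1},
}
print(dijkstra(graph, "A", "D"))  # -> (4, ['A', 'B', 'C', 'D'])
```

Once the path is installed in the forwarding elements, each hop only performs function (ii), forwarding, which is the source of the per-packet processing saving claimed above.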