978 results for Biology, Microbiology|Biology, Bioinformatics|Biology, Virology|Computer Science


Relevance: 100.00%

Abstract:

In this paper we survey the most relevant results for the priority-based schedulability analysis of real-time tasks, for both fixed and dynamic priority assignment schemes. We give emphasis to worst-case response time analysis in non-preemptive contexts, which is fundamental for communication schedulability analysis. We define an architecture to support priority-based scheduling of messages at the application process level of a specific fieldbus communication network, the PROFIBUS. The proposed architecture improves the worst-case response time of messages, overcoming the limitation of first-come-first-served (FCFS) PROFIBUS queue implementations.
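
The emphasis on worst-case response-time analysis can be illustrated with the classic fixed-point recurrence for preemptive fixed-priority scheduling, sketched below in Python; the non-preemptive variants surveyed in the paper extend it with a blocking term for lower-priority messages that have already started transmission. This is a generic textbook illustration, not the PROFIBUS-specific analysis.

from math import ceil

def response_time(C, T, i):
    """Classic worst-case response-time recurrence for preemptive fixed-priority
    scheduling: R = C[i] + sum over higher-priority tasks j of ceil(R/T[j]) * C[j].
    Tasks are indexed in decreasing priority order (index 0 = highest priority).
    Assumes the iteration converges, i.e. the task set is not overloaded."""
    R = C[i]
    while True:
        R_next = C[i] + sum(ceil(R / T[j]) * C[j] for j in range(i))
        if R_next == R:
            return R
        R = R_next

# Example: three tasks with (C, T) in milliseconds, highest priority first.
C, T = [1, 2, 3], [5, 10, 20]
print([response_time(C, T, i) for i in range(3)])   # -> [1, 3, 7]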

Relevance: 100.00%

Abstract:

Handoff processes, the events in which mobile nodes select the best available access point to transfer data, have been well studied in cellular and WiFi networks. However, wireless sensor networks (WSN) pose a new set of challenges due to their simple low-power radio transceivers and constrained resources. This paper proposes smart-HOP, a handoff mechanism tailored for mobile WSN applications. This work provides two important contributions. First, it demonstrates the intrinsic relationship between handoffs and the transitional region. The evaluation shows that handoffs perform best when operating in the transitional region, as opposed to the more reliable connected region. Second, the results reveal that proper fine-tuning of the parameters in the transitional region can reduce handoff delays by two orders of magnitude, from seconds to tens of milliseconds.
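
As a rough illustration of the kind of mechanism involved (not smart-HOP's actual algorithm, and the threshold, margin, and window values below are invented), a threshold-plus-hysteresis handoff trigger can be sketched as follows; these are exactly the parameters whose tuning in the transitional region the paper shows to matter.

from collections import deque

class HandoffTrigger:
    """Hypothetical RSSI-based handoff trigger with averaging and hysteresis."""
    def __init__(self, low_thresh=-90.0, margin=5.0, window=5):
        self.low_thresh = low_thresh       # dBm level (transitional region) that starts a handoff
        self.margin = margin               # hysteresis margin a candidate AP must exceed
        self.samples = deque(maxlen=window)  # window of samples to smooth fast fading

    def update(self, serving_rssi, best_candidate_rssi):
        """Return True when the node should hand off to the candidate access point."""
        self.samples.append(serving_rssi)
        avg = sum(self.samples) / len(self.samples)
        return avg < self.low_thresh and best_candidate_rssi > avg + self.margin

trigger = HandoffTrigger()
print(trigger.update(serving_rssi=-93.0, best_candidate_rssi=-82.0))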

Relevance: 100.00%

Abstract:

Managing the physical and compute infrastructure of a large data center is an embodiment of a Cyber-Physical System (CPS). The physical parameters of the data center (such as power, temperature, pressure, and humidity) are tightly coupled with computations, even more so in upcoming data centers, where the location of workloads can vary substantially due, for example, to workloads being moved in a cloud infrastructure hosted in the data center. In this paper, we describe a data collection and distribution architecture that enables gathering physical parameters of a large data center at very high temporal and spatial resolution of the sensor measurements. We believe this is an important characteristic for enabling more accurate heat-flow models of the data center and, with them, opportunities to optimize energy consumption. Having a high-resolution picture of the data center conditions also enables minimizing local hotspots, performing more accurate predictive maintenance (pending failures in cooling and other infrastructure equipment can be detected more promptly), and more accurate billing. We detail this architecture and define the structure of the underlying messaging system that is used to collect and distribute the data. Finally, we show the results of a preliminary study of a typical data center radio environment.
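
As a hypothetical illustration of what such a messaging system might carry (the field and topic names below are assumptions, not the paper's actual schema), each sensor reading could be published as a small self-describing message keyed by its physical location:

import json, time
from dataclasses import dataclass, asdict

@dataclass
class SensorReading:
    sensor_id: str      # unique id of the sensing node
    rack: str           # physical location (row/rack) giving spatial resolution
    kind: str           # "temperature", "humidity", "pressure", "power", ...
    value: float
    unit: str
    timestamp: float    # seconds since epoch, giving temporal resolution

def to_message(reading):
    """Map a reading to a (topic, payload) pair for a publish/subscribe broker."""
    topic = f"datacenter/{reading.rack}/{reading.kind}"
    return topic, json.dumps(asdict(reading)).encode("utf-8")

topic, payload = to_message(SensorReading("s-042", "row3-rack7", "temperature",
                                           27.5, "C", time.time()))
print(topic, payload)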

Relevance: 100.00%

Abstract:

Significant research efforts are being devoted to Body Area Networks (BAN) due to their potential for revolutionizing healthcare practices. Energy-efficiency and communication reliability are critically important for these networks. In an experimental study with three different mote platforms, we show that changes in human body shadowing as well as those in the relative distance and orientation of nodes caused by the common human body movements can result in significant fluctuations in the received signal strength within a BAN. Furthermore, regular movements, such as walking, typically manifest in approximately periodic variations in signal strength. We present an algorithm that predicts the signal strength peaks and evaluate it on real-world data. We present the design of an opportunistic MAC protocol, named BANMAC, that takes advantage of the periodic fluctuations of the signal strength to achieve high reliability even with low transmission power.
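
The peak-prediction idea can be sketched as follows: estimate the dominant period of the signal-strength trace (e.g. the walking gait) via autocorrelation and extrapolate one period beyond the most recent peak. This is a minimal illustration of the concept, not the algorithm evaluated in the paper.

import numpy as np

def predict_next_peak(rssi, sample_period):
    """rssi: signal-strength samples taken every sample_period seconds.
    Returns the estimated time from 'now' until the next signal-strength peak."""
    x = np.asarray(rssi, dtype=float) - np.mean(rssi)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]    # autocorrelation, lags >= 0
    neg = np.where(ac < 0)[0]                            # skip the lag-0 lobe
    start = int(neg[0]) if neg.size else 1
    lag = start + int(np.argmax(ac[start:]))             # dominant period, in samples
    last_peak = int(np.argmax(x[-lag:])) + len(x) - lag  # index of the most recent peak
    return (last_peak + lag - (len(x) - 1)) * sample_period

# Synthetic example: a 1.2 s walking period sampled every 50 ms.
t = np.arange(0, 5, 0.05)
trace = -70 + 8 * np.sin(2 * np.pi * t / 1.2) + np.random.normal(0, 1, t.size)
print(predict_next_peak(trace, 0.05))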

Relevance: 100.00%

Abstract:

This paper presents a spatial econometrics analysis of the number of road accidents with victims in the smallest administrative divisions of Lisbon, considering as a baseline a log-Poisson model for environmental factors. Spatial correlation is investigated both in the data alone and in the residuals of the baseline model, without and with spatially autocorrelated and spatially lagged terms. In all cases, no spatial autocorrelation was detected.
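
The abstract does not name the test statistic used; the standard choice for detecting spatial autocorrelation in this setting is Moran's I, given here only for reference:

I = \frac{n}{\sum_i \sum_j w_{ij}} \cdot
    \frac{\sum_i \sum_j w_{ij}\,(x_i - \bar{x})(x_j - \bar{x})}{\sum_i (x_i - \bar{x})^2}

where x_i is the accident count (or baseline-model residual) in administrative division i, w_{ij} is the spatial weight between divisions i and j, and values of I close to its expectation E[I] = -1/(n-1) indicate the absence of spatial autocorrelation reported above.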

Relevance: 100.00%

Abstract:

Radio Link Quality Estimation (LQE) is a fundamental building block for Wireless Sensor Networks, namely for reliable deployment, resource management, and routing. Existing LQEs (e.g. PRR, ETX, Fourbit, and LQI) are based on a single link property, leading to inaccurate estimation. In this paper, we propose F-LQE, which estimates link quality on the basis of four link quality properties: packet delivery, asymmetry, stability, and channel quality. Each of these properties is defined in linguistic terms, the natural language of Fuzzy Logic. The overall quality of the link is specified as a fuzzy rule whose evaluation returns the membership of the link in the fuzzy subset of good links. Values of the membership function are smoothed using an EWMA filter to improve stability. An extensive experimental analysis shows that F-LQE outperforms existing estimators.
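
A minimal sketch of the fuzzy-combination-plus-smoothing idea is given below; the simple min combination rule, the smoothing weight, and the initial estimate are placeholders, not F-LQE's published definitions.

def fuzzy_and(*memberships):
    """Evaluate the rule 'link is good' as the minimum of its antecedents (a common fuzzy AND)."""
    return min(memberships)

def ewma(prev, new, alpha=0.6):
    """Exponentially weighted moving average used to smooth the estimate."""
    return alpha * prev + (1 - alpha) * new

class FLQELike:
    def __init__(self):
        self.estimate = 1.0   # start optimistic; placeholder choice
    def update(self, mu_delivery, mu_symmetry, mu_stability, mu_channel):
        """Each argument is a membership degree in [0, 1] for one link property."""
        good = fuzzy_and(mu_delivery, mu_symmetry, mu_stability, mu_channel)
        self.estimate = ewma(self.estimate, good)
        return self.estimate

est = FLQELike()
print(est.update(0.9, 0.8, 0.7, 0.95))   # smoothed membership in the subset of good links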

Relevance: 100.00%

Abstract:

Consider the problem of scheduling sporadically arriving tasks with implicit deadlines using Earliest-Deadline-First (EDF) on a single processor. The system may undergo changes in its operational modes, and therefore the characteristics of the task set may change at run-time. We consider a well-established, previously published mode-change protocol and show that if every mode utilizes at most 50% of the processing capacity, then all deadlines are met. We also show that there exists a task set that misses a deadline although its utilization exceeds 50% by just an arbitrarily small amount. Finally, we present, for a relevant special case, an exact schedulability test for EDF with mode changes.
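
The 50% condition can be checked mechanically, as in the sketch below with made-up task parameters; note this is only the sufficient utilization test stated above, not the exact schedulability test the paper derives for its special case.

def mode_utilization(tasks):
    """tasks: list of (C, T) pairs with implicit deadlines (D = T)."""
    return sum(C / T for C, T in tasks)

modes = {
    "cruise":  [(1, 10), (2, 20), (3, 40)],   # U = 0.275
    "landing": [(2, 10), (4, 20), (2, 25)],   # U = 0.48
}
print(all(mode_utilization(ts) <= 0.5 for ts in modes.values()))   # True -> all deadlines met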

Relevance: 100.00%

Abstract:

Consider the problem of scheduling real-time tasks on a multiprocessor with the goal of meeting deadlines. Tasks arrive sporadically and have implicit deadlines, that is, the deadline of a task is equal to its minimum inter-arrival time. Consider this problem to be solved with global static-priority scheduling. We present a priority-assignment scheme with the property that if at most 38% of the processing capacity is requested then all deadlines are met.
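
Read as an admission test with m identical processors, the guarantee says a sporadic task set is schedulable by the proposed priority assignment whenever it requests at most 38% of the total capacity; the sketch below, with made-up parameters, checks exactly that condition (the priority-assignment scheme itself is defined in the paper).

def admits(tasks, m, bound=0.38):
    """tasks: list of (C, T) pairs with implicit deadlines; m: number of processors."""
    return sum(C / T for C, T in tasks) <= bound * m

print(admits([(2, 10), (3, 20), (4, 40)], m=2))   # U = 0.45 <= 0.76 -> True
print(admits([(6, 10), (8, 20)], m=2))            # U = 1.00 >  0.76 -> False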

Relevance: 100.00%

Abstract:

Traditional Real-Time Operating Systems (RTOS) are not designed to accommodate application-specific requirements. They address the general case, and the application must co-exist with any limitations imposed by such a design. For modern real-time applications, this limits the quality of service offered to the end-user. Research in this field has shown that it is possible to develop dynamic systems where adaptation is the key to success. However, adaptation requires full knowledge of the system state. To overcome this, we propose a framework to gather data and interact with the operating system, extending the traditional POSIX trace model with a partial reflective model. Such a combination preserves the trace mechanism semantics while creating a powerful platform for developing new dynamic systems, with little impact on the system and without complex changes to the kernel source code.
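
Purely as a conceptual sketch (the names, methods, and the Python setting are all hypothetical; the actual framework builds on the POSIX trace API inside the RTOS), the combination of tracing and reflection amounts to letting the application observe trace events and act back on scheduler state:

class ReflectiveTracer:
    """Application-visible stream of trace events, in the spirit of the POSIX trace model."""
    def __init__(self):
        self._handlers = []
    def subscribe(self, handler):
        self._handlers.append(handler)
    def emit(self, event):
        for handler in self._handlers:
            handler(event)

class SchedulerReflection:
    """Reflective view of the scheduler: inspect state and request adaptations."""
    def __init__(self, priorities):
        self.priorities = priorities            # task name -> current priority
    def adjust_priority(self, task, prio):
        self.priorities[task] = prio            # adaptation decided by the application

tracer = ReflectiveTracer()
sched = SchedulerReflection({"sensor": 5, "logger": 10})
tracer.subscribe(lambda ev: sched.adjust_priority("logger", 7)
                 if ev == "deadline_miss:logger" else None)
tracer.emit("deadline_miss:logger")
print(sched.priorities)                         # {'sensor': 5, 'logger': 7}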

Relevance: 100.00%

Abstract:

In global scientific experiments with collaborative scenarios involving multinational teams, there are significant challenges related to data access; in particular, moving data to other regions or clouds is often precluded by constraints on latency costs, data privacy, and data ownership. Furthermore, each site processes local data sets using specialized algorithms and produces intermediate results that are useful as inputs to applications running on remote sites. This paper shows how to model such collaborative scenarios as a scientific workflow implemented with AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic), a decentralized framework offering a feasible solution to run the workflow activities on distributed data centers in different regions without the need for large data movements. The AWARD workflow activities are independently monitored and dynamically reconfigured and steered by different users, namely by hot-swapping the algorithms to enhance the computation results or by changing the workflow structure to support feedback dependencies, where an activity receives feedback output from a successor activity. A real implementation of one practical scenario and its execution on multiple data centers of the Amazon Cloud is presented, including experimental results with steering by multiple users.
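
The hot-swapping idea can be illustrated with the toy sketch below (not AWARD's actual API): a workflow activity keeps consuming tokens from its input queue while its processing algorithm is replaced at run time by a steering user.

import threading, queue

class Activity:
    def __init__(self, algorithm):
        self._algorithm = algorithm
        self._lock = threading.Lock()
        self.inbox, self.outbox = queue.Queue(), queue.Queue()

    def swap_algorithm(self, new_algorithm):
        """Hot-swap: subsequent tokens are processed by the new algorithm."""
        with self._lock:
            self._algorithm = new_algorithm

    def step(self):
        token = self.inbox.get()
        with self._lock:
            result = self._algorithm(token)
        self.outbox.put(result)

act = Activity(lambda x: x * 2)
act.inbox.put(21); act.step()
act.swap_algorithm(lambda x: x + 1)          # steering by a user at run time
act.inbox.put(21); act.step()
print(act.outbox.get(), act.outbox.get())    # 42 22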

Relevance: 100.00%

Abstract:

With advances in computer science and information technology, computing systems are becoming increasingly complex, with an increasing number of heterogeneous components. They are thus becoming more difficult to monitor, manage, and maintain, a process well known to be labor-intensive and error-prone. In addition, traditional approaches to system management struggle to keep up with rapidly changing environments. There is a need for automatic and efficient approaches to monitor and manage complex computing systems. In this paper, we propose an innovative framework for scheduling system management that combines the Autonomic Computing (AC) paradigm, Multi-Agent Systems (MAS), and Nature-Inspired Optimization Techniques (NIT). Additionally, we consider the resolution of realistic problems: the scheduling of a Cutting and Treatment Stainless Steel Sheet Line is evaluated. Results show that the proposed approach has advantages when compared with other scheduling systems.
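
Minimal illustration only (made-up shop data, not the paper's system): a nature-inspired heuristic, here plain simulated annealing, minimizing total tardiness on one machine. In the proposed framework such an optimizer would be invoked by the autonomic/multi-agent layer whenever monitoring detects a disturbance.

import math, random

def total_tardiness(order, jobs):
    """jobs: name -> (processing_time, due_date); order: job sequence on one machine."""
    t = late = 0
    for name in order:
        p, d = jobs[name]
        t += p
        late += max(0, t - d)
    return late

def anneal(order, jobs, iters=2000):
    cur, best = list(order), list(order)
    for k in range(iters):
        i, j = random.sample(range(len(cur)), 2)
        cand = list(cur)
        cand[i], cand[j] = cand[j], cand[i]            # swap two jobs
        delta = total_tardiness(cand, jobs) - total_tardiness(cur, jobs)
        temp = max(1.0 - k / iters, 1e-6)              # simple cooling schedule
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            cur = cand
            if total_tardiness(cur, jobs) < total_tardiness(best, jobs):
                best = list(cur)
    return best

jobs = {"A": (4, 5), "B": (2, 3), "C": (6, 20), "D": (3, 9)}
print(anneal(["A", "B", "C", "D"], jobs))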

Relevance: 100.00%

Abstract:

Dissertation presented to obtain a Master's degree in Computer Science

Relevance: 100.00%

Abstract:

Computational Intelligence (CI) includes four main areas: Evolutionary Computation (genetic algorithms and genetic programming), Swarm Intelligence, Fuzzy Systems, and Neural Networks. This article shows how CI techniques go beyond the strict limits of the Artificial Intelligence field and can help solve real problems from distinct engineering areas: Mechanical, Computer Science, and Electrical Engineering.

Relevance: 100.00%

Abstract:

Thesis submitted to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfillment of the requirements for the degree of Master in Computer Science