Abstract:
Scheduling of constrained-deadline sporadic task systems on multiprocessor platforms is an area that has received much attention in the recent past. It is widely believed that finding an optimal scheduler is hard, and therefore most studies have focused on developing algorithms with good processor utilization bounds. These algorithms can be broadly classified into two categories: partitioned scheduling, in which tasks are statically assigned to individual processors, and global scheduling, in which each task is allowed to execute on any processor in the platform. In this paper we consider a third, more general, approach called cluster-based scheduling. In this approach each task is statically assigned to a processor cluster, tasks in each cluster are globally scheduled among themselves, and clusters in turn are scheduled on the multiprocessor platform. We develop techniques to support such cluster-based scheduling algorithms, and also consider properties that minimize the total processor utilization of individual clusters. In the last part of the paper, we develop new virtual cluster-based scheduling algorithms. For implicit-deadline sporadic task systems, we develop an optimal scheduling algorithm that is neither Pfair nor ERfair. We also show that the processor utilization bound of us-edf{m/(2m−1)} can be improved by using virtual clustering. Since neither the partitioned nor the global strategy dominates the other, cluster-based scheduling is a natural direction for research towards achieving improved processor utilization bounds.
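The abstract above contrasts partitioned, global and cluster-based scheduling. As an illustration only, and not the authors' algorithm, the following Python sketch shows the structural idea of the cluster-based approach: tasks are statically mapped to clusters (here with an assumed worst-fit heuristic on utilization), and scheduling within each cluster would then be global.

```python
# Illustrative sketch of cluster-based assignment (not the paper's algorithm).
# Tasks are statically mapped to clusters; within a cluster they would be
# scheduled globally. The worst-fit heuristic and the simple capacity check
# below are assumptions made for the example.

def assign_to_clusters(utilizations, clusters):
    """clusters: list of processor counts, e.g. [2, 2] for two 2-core clusters."""
    load = [0.0] * len(clusters)          # current total utilization per cluster
    assignment = [[] for _ in clusters]   # task indices per cluster
    for tid, u in sorted(enumerate(utilizations), key=lambda t: -t[1]):
        # worst-fit: put the task in the cluster with most remaining capacity
        best = max(range(len(clusters)), key=lambda c: clusters[c] - load[c] - u)
        if load[best] + u > clusters[best]:
            raise ValueError(f"task {tid} (u={u}) does not fit")
        load[best] += u
        assignment[best].append(tid)
    return assignment, load

tasks = [0.6, 0.5, 0.4, 0.4, 0.3, 0.2]     # per-task utilizations (assumed)
print(assign_to_clusters(tasks, clusters=[2, 2]))
```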
Abstract:
The problem of providing hybrid wired/wireless communications for factory automation systems is still an open issue, notwithstanding the fact that some solutions already exist. This paper describes the role of simulation tools in the validation and performance analysis of two wireless extensions for the PROFIBUS protocol. In one of them, the Intermediate Systems, which connect wired and wireless network segments, operate as repeaters; in the other, the Intermediate Systems operate as bridges. We also describe how the analytical approach proposed for these kinds of networks can be used to set some network parameters and to guarantee the real-time behaviour of the system. Additionally, we compare the simulation results for the bridge-based solution with the analytical results.
Abstract:
Variations of manufacturing process parameters and environmental aspects may affect the quality and performance of composite materials, which consequently affects their structural behaviour. Reliability-based design optimisation (RBDO) and robust design optimisation (RDO) search for safe structural systems with minimal variability of response when subjected to uncertainties in material design parameters. An approach that simultaneously considers reliability and robustness is proposed in this paper. Depending on a given reliability index imposed on composite structures, a trade-off is established between the performance targets and robustness. Robustness is expressed in terms of the coefficient of variation of the constrained structural response weighted by its nominal value. The normed Pareto front is built and the point nearest to the origin is taken as the best solution of the bi-objective optimisation problem.
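The selection step described above, taking the point of the normed Pareto front nearest to the origin as the best compromise, can be sketched as follows; the candidate designs and the two objectives are placeholder assumptions, not data from the paper.

```python
# Illustrative sketch of picking the best compromise on a bi-objective
# Pareto front by minimum distance to the origin of the normalised
# objective space. Candidate values are placeholders, not the paper's data.
import math

# (objective_1, objective_2) pairs, both to be minimised; e.g. a performance
# target and a robustness measure such as the weighted coefficient of variation.
designs = [(1.0, 0.30), (1.2, 0.22), (1.5, 0.15), (2.0, 0.14), (1.1, 0.28)]

def pareto_front(points):
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

def nearest_to_origin(front):
    f1 = [p[0] for p in front]; f2 = [p[1] for p in front]
    def norm(v, lo, hi):
        return 0.0 if hi == lo else (v - lo) / (hi - lo)
    return min(front, key=lambda p: math.hypot(norm(p[0], min(f1), max(f1)),
                                               norm(p[1], min(f2), max(f2))))

front = pareto_front(designs)
print("front:", front, "compromise:", nearest_to_origin(front))
```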
Abstract:
Multiprocessors, particularly in the form of multicores, are becoming standard building blocks for executing reliable software, but their use for applications with hard real-time requirements is non-trivial. Well-known real-time scheduling algorithms from the uniprocessor context (Rate-Monotonic [1] or Earliest-Deadline-First [1]) do not perform well on multiprocessors. For this reason the real-time systems research community has produced new algorithms specifically for multiprocessors. Meanwhile, a proposal [2] exists for extending the Ada language with new basic constructs that can be used to implement new real-time scheduling algorithms; the family of task-splitting algorithms is one of those emphasised in the proposal [2]. Consequently, assessing whether existing task-splitting multiprocessor scheduling algorithms can be implemented with these constructs is paramount. In this paper we present a list of state-of-the-art task-splitting multiprocessor scheduling algorithms and, for each of them, detailed Ada code that uses the new constructs.
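The Ada code referred to in the abstract is not reproduced here. Purely as an illustration of the task-splitting idea, the following Python sketch assigns tasks to processors and splits the utilization of a task that does not fit; the first-fit heuristic and the unit capacity are assumptions, not any of the surveyed algorithms.

```python
# Illustrative sketch of semi-partitioned ("task-splitting") assignment:
# tasks are placed on processors in order of arrival, and a task that does
# not fit entirely on the current processor is split between it and the
# next one. This is a generic illustration only.

def split_assign(utilizations, num_cpus, cap=1.0):
    cpus = [[] for _ in range(num_cpus)]   # (task_id, share) pairs per CPU
    free = [cap] * num_cpus
    cpu = 0
    for tid, u in enumerate(utilizations):
        remaining = u
        while remaining > 1e-9:
            if cpu >= num_cpus:
                raise ValueError("task set does not fit")
            share = min(remaining, free[cpu])
            if share > 0:
                cpus[cpu].append((tid, round(share, 3)))
                free[cpu] -= share
                remaining -= share
            if free[cpu] <= 1e-9:          # processor full: the task splits over
                cpu += 1                   # to the next processor
    return cpus

print(split_assign([0.7, 0.6, 0.5, 0.2], num_cpus=2))
```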
Abstract:
In this paper we propose a framework for mobile applications with Quality of Service (QoS) requirements, such as voice or video, capable of supporting distributed, migration-capable, QoS-enabled applications on top of the Android operating system.
Abstract:
Temporal isolation is an increasingly relevant concern, in particular for ARINC 653 and virtualisation-based systems. Traditional approaches like the rate-based scheduling framework RBED do not take into account the impact of preemptions in terms of the loss of working set in the acceleration hardware (e.g. caches). While some improvements have been suggested in the literature, they are overly heavy in the presence of small high-priority tasks such as interrupt service routines. In this paper we propose an approach enabling adaptive assessment of this preemption delay within a temporal isolation framework, with special consideration of the capabilities and limitations of the approach.
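The abstract does not detail the assessment mechanism. The sketch below is only one way such accounting could look, under the assumption that a reservation pays an adaptively estimated cache-related preemption delay out of its budget; it is not the authors' mechanism.

```python
# Minimal sketch (not the authors' mechanism): account a cache-related
# preemption delay (CRPD) estimate against a reservation's budget and
# adapt the estimate from observed delays with an exponential average.

class Reservation:
    def __init__(self, budget, crpd_estimate=0.0, alpha=0.25):
        self.budget = budget              # remaining execution budget
        self.crpd = crpd_estimate         # current preemption-delay estimate
        self.alpha = alpha                # adaptation weight

    def charge_preemption(self):
        """Called when this reservation preempts another one."""
        self.budget -= self.crpd          # pay the expected cache-refill cost

    def observe_delay(self, measured):
        """Update the estimate from a measured post-preemption slowdown."""
        self.crpd = (1 - self.alpha) * self.crpd + self.alpha * measured

isr = Reservation(budget=0.5, crpd_estimate=0.02)
isr.charge_preemption()
isr.observe_delay(0.015)
print(round(isr.budget, 3), round(isr.crpd, 4))
```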
Abstract:
Developing an efficient server-based real-time scheduling solution that supports dynamic task-level parallelism is now relevant even to the desktop and embedded domains, and no longer only to the high-performance computing market niche. This paper proposes a novel approach that combines the constant-bandwidth server abstraction with a work-stealing load-balancing scheme which, while ensuring isolation among tasks, enables a task to be executed on more than one processor at a given time instant.
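A minimal sketch of the two ingredients named above, a constant-bandwidth server and work-stealing deques, is given below; the data structures and the way they are combined are illustrative assumptions rather than the paper's design.

```python
# Minimal sketch of the two ingredients named in the abstract: a
# constant-bandwidth server (budget Q every period P) and per-worker
# double-ended queues with work stealing. The way they are combined here
# is an illustrative assumption only.
from collections import deque
import random

class CBServer:
    def __init__(self, Q, P, now=0.0):
        self.Q, self.P = Q, P
        self.budget, self.deadline = Q, now + P

    def consume(self, executed):
        self.budget -= executed
        if self.budget <= 0:              # budget exhausted: recharge and
            self.budget += self.Q         # postpone the server deadline
            self.deadline += self.P

class Worker:
    def __init__(self, wid):
        self.wid, self.jobs = wid, deque()

    def next_job(self, workers):
        if self.jobs:
            return self.jobs.pop()        # own work: LIFO end of the deque
        victims = [w for w in workers if w is not self and w.jobs]
        if victims:
            return random.choice(victims).jobs.popleft()   # steal: FIFO end
        return None

srv = CBServer(Q=2.0, P=10.0)
srv.consume(2.5)                          # one overrun of the budget
print(srv.budget, srv.deadline)           # 1.5 20.0

workers = [Worker(0), Worker(1)]
workers[0].jobs.extend(["j0", "j1", "j2"])
print(workers[1].next_job(workers))       # worker 1 steals "j0"
```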
Abstract:
The availability of small, inexpensive sensor elements enables the deployment of large wired or wireless sensor networks for feeding control systems. Unfortunately, the need to transmit a large number of sensor measurements over a network negatively affects the timing parameters of the control loop. This paper presents a solution to this problem by representing sensor measurements with an approximate representation: an interpolation of sensor measurements as a function of space coordinates. A priority-based medium access control (MAC) protocol is used to select the sensor messages with high information content. Thus, the information from a large number of sensor measurements is conveyed within a few messages. This approach greatly reduces the time for obtaining a snapshot of the environment state and therefore supports the real-time requirements of feedback control loops.
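As a rough illustration of this idea (the plane model, the residual-based priority and the top-k selection are assumptions, not the paper's protocol), the following Python sketch fits an interpolation over space coordinates and keeps only the most informative measurements.

```python
# Illustrative sketch of the idea in the abstract: approximate the field of
# sensor readings by an interpolation over space coordinates and let only the
# most "informative" measurements (largest deviation from the current model)
# be transmitted. The plane model and the top-k rule are assumptions.
import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(50, 2))                 # sensor positions
values = 2.0 + 0.3 * xy[:, 0] - 0.1 * xy[:, 1] + rng.normal(0, 0.05, 50)
values[7] += 1.5                                      # a local anomaly

# Fit a plane v ~ a + b*x + c*y by least squares (the approximate representation).
A = np.column_stack([np.ones(len(xy)), xy])
coef, *_ = np.linalg.lstsq(A, values, rcond=None)

# "Information content" = deviation from the model; a priority-based MAC would
# let the k largest-deviation messages win the medium.
residual = np.abs(values - A @ coef)
k = 3
winners = np.argsort(residual)[-k:][::-1]
print("transmitted sensors:", winners, "residuals:", residual[winners].round(3))
```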
Abstract:
This paper proposes a dynamic scheduler that supports the coexistence of guaranteed and non-guaranteed bandwidth servers in order to handle soft tasks’ overloads efficiently by making additional capacity available from two sources: (i) residual capacity allocated but unused when jobs complete in less than their budgeted execution time; and (ii) capacity stolen from inactive non-isolated servers used to schedule best-effort jobs. The effectiveness of the proposed approach in reducing the mean tardiness of periodic jobs is demonstrated through extensive simulations. The achieved results become even more significant when tasks’ computation times have a large variance.
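The two capacity sources can be sketched as follows; the pooling policy shown is an illustrative assumption, not the paper's exact reclaiming rules.

```python
# Minimal sketch of the two extra capacity sources described in the abstract:
# (i) residual budget left when a job finishes early and (ii) capacity taken
# from currently inactive, non-isolated (best-effort) servers. The pooling
# policy below is an illustrative assumption only.

class Server:
    def __init__(self, name, budget, isolated=True, active=True):
        self.name, self.budget = name, budget
        self.isolated, self.active = isolated, active

def reclaimable_capacity(servers, finished_jobs):
    extra = 0.0
    for srv, used in finished_jobs:          # (server, executed time) pairs
        extra += max(0.0, srv.budget - used) # (i) residual of early completions
    for srv in servers:
        if not srv.isolated and not srv.active:
            extra += srv.budget              # (ii) steal from inactive servers
    return extra

servers = [Server("hard", 2.0), Server("soft", 1.5),
           Server("best-effort", 1.0, isolated=False, active=False)]
finished = [(servers[0], 1.2)]               # a job that finished 0.8 early
print("capacity available to overloaded soft servers:",
      reclaimable_capacity(servers, finished))   # 0.8 + 1.0 = 1.8
```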
Abstract:
The paper presents the central discourse of the knowledge-based society. Already in the 1960s, the debate on the industrial society raised the question of whether a paradigm shift towards a knowledge-based society could be observed. Some prominent authors foresaw ‘knowledge’ displacing ‘labour’ and ‘capital’ as the main driving forces of capitalist development. Today, on the political level and in many scientific disciplines, the assumption that we are already living in a knowledge-based society seems obvious. Although we still do not have a theory of the knowledge-based society, and a methodological gap remains concerning the empirical indicators, the vision of a knowledge-based society determines at least the perception of Western societies. In a first step the author pinpoints the assumptions about the knowledge-based society on three levels: the societal, the organisational and the individual level. These assumptions rest on the following topics: a) the role of information and communication technologies; b) the dynamic development of globalisation as an ‘evolutionary’ process; c) the increasing importance of knowledge management within organisations; d) the changing role of the state in economic processes. Not only the differentiation between the levels but also the revision of the assumptions of a knowledge-based society shows that the ‘topics raised in the debates’ cannot be considered the results of a profound societal paradigm shift. What does seem striking, however, is the normative and virtual shift towards a concept of modernity that strongly focuses on the role of technology as a driving force, as well as on the global economic markets, which have to be accepted. Therefore, according to the official debate, the successful adaptation to these processes seems the only way to reach the knowledge-based society. Analysing the societal changes on the three levels, the label ‘knowledge-based society’ can be viewed critically. Therefore the main question posed by Theodor W. Adorno at the 16th Congress of Sociology in 1968 has not lost its relevance: facing the societal changes, he asked whether we are still living in the industrial society or already in a post-industrial state. Thinking about the knowledge-based society in terms of these two options would enrich the whole debate with respect to social inequality, political and economic exclusion processes and, not least, the power relationships between social groups.
Abstract:
The advent of Wireless Sensor Network (WSN) technologies is paving the way for a panoply of new ubiquitous computing applications, some of them with critical requirements. In the ART-WiSe framework, we are designing a two-tiered communication architecture for supporting real-time and reliable communications in WSNs. Within this context, we have been developing a test-bed application for testing, validating and demonstrating our theoretical findings: a search&rescue/pursuit-evasion application. Basically, a WSN deployment is used to detect, localize and track a target robot, and a station controls a rescuer/pursuer robot until it gets close enough to the target robot. This paper describes how this application was engineered, focusing particularly on the implementation of the localization mechanism.
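The abstract does not spell out the localization algorithm. As a generic baseline only, a weighted-centroid estimate over the detecting nodes is sketched below; it is not necessarily the mechanism implemented in ART-WiSe.

```python
# Generic illustration only: estimate the target position as the centroid of
# the detecting sensor nodes weighted by detection strength. This is a common
# WSN baseline, not necessarily the ART-WiSe localization mechanism.

def weighted_centroid(detections):
    """detections: list of ((x, y), strength) from nodes that saw the target."""
    total = sum(s for _, s in detections)
    x = sum(p[0] * s for p, s in detections) / total
    y = sum(p[1] * s for p, s in detections) / total
    return x, y

detections = [((0.0, 0.0), 0.9), ((1.0, 0.0), 0.6), ((0.0, 1.0), 0.5)]
print(weighted_centroid(detections))   # estimate biased towards strong readings
```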
Abstract:
With the advent of wearable sensing and mobile technologies, biosignals have seen a growing number of application areas, leading to the collection of large volumes of data. One of the difficulties in dealing with these data sets, and in the development of automated machine learning systems that use them as input, is the lack of reliable ground-truth information. In this paper we present a new web-based platform for the visualization, retrieval and annotation of biosignals by non-technical users, aimed at improving the process of ground-truth collection for biomedical applications. Moreover, a novel extendable and scalable data representation model and persistence framework is presented. The results of the experimental evaluation with prospective users further confirm the potential of the presented framework.
Abstract:
In this paper we demonstrate an add/drop filter based on SiC technology. Tailoring of the channel bandwidth and wavelength is experimentally demonstrated. The concept is extended to implement a 1-by-4 wavelength division multiplexer with channel separation in the visible range. The device consists of a p-i'(a-SiC:H)-n/p-i(a-Si:H)-n heterostructure. Several monochromatic pulsed lights, separately or in a polychromatic mixture, illuminated the device. Independent tuning of each channel is performed by a steady-state violet bias superimposed from either the front or the back side. Results show that a front background enhances the light-to-dark sensitivity of the long- and medium-wavelength channels and strongly quenches the others, while a back violet background has the opposite behaviour. This nonlinearity provides the possibility of selectively removing or adding wavelengths. An optoelectronic model is presented which explains the light filtering properties of the add/drop filter under different optical bias conditions.
Abstract:
The growing heterogeneity of networks, devices and consumption conditions calls for flexible and adaptive video coding solutions. The compression power of the HEVC standard and the benefits of the distributed video coding paradigm allow the design of novel scalable coding solutions with improved error robustness and low encoding complexity while still achieving competitive compression efficiency. In this context, this paper proposes a novel scalable video coding scheme using an HEVC Intra compliant base layer and a distributed coding approach in the enhancement layers (EL). This design inherits the HEVC compression efficiency while providing low encoding complexity at the enhancement layers. The temporal correlation is exploited at the decoder to create the EL side information (SI) residue, an estimation of the original residue. The EL encoder sends only the data that cannot be inferred at the decoder, thus exploiting the correlation between the original and SI residues; however, this correlation must be characterized with an accurate correlation model to obtain coding efficiency improvements. Therefore, this paper proposes a correlation modeling solution to be used at both the encoder and the decoder, without requiring a feedback channel. Experimental results confirm that the proposed scalable coding scheme has lower encoding complexity and provides BD-Rate savings of up to 3.43% in comparison with the HEVC Intra scalable extension under development.
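A common choice in distributed video coding, though not necessarily the model proposed in this paper, is to treat the difference between the original and SI residues as Laplacian with a parameter estimated from the sample variance; the sketch below illustrates only that estimation step.

```python
# In distributed video coding the difference between the original residue and
# the side-information (SI) residue is commonly modelled as Laplacian with
# parameter alpha = sqrt(2 / variance). This sketch illustrates that generic
# estimation step; the paper's actual correlation model may differ.
import numpy as np

rng = np.random.default_rng(1)
original_residue = rng.normal(0, 6, size=10_000)
si_residue = original_residue + rng.laplace(0, 3, size=10_000)   # noisy estimate

diff = original_residue - si_residue
alpha = np.sqrt(2.0 / np.var(diff))      # Laplacian scale from sample variance
print(f"estimated Laplacian alpha: {alpha:.4f}")
```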
Abstract:
In recent years, several solutions have been proposed to extend PROFIBUS in order to support wired and wireless stations in the same network. In this paper we compare two of those solutions: one in which the interconnection between wired and wireless stations is made by repeaters, and another in which it is made by bridges. The comparison is both qualitative and numerical, based on simulation models of both architectures.