949 results for Time constraints
Abstract:
Common embedded systems are typically designed under tight resource constraints. Static designs are often chosen to address very specific use cases. In contrast, a dynamic design must be used if the system is to supply a real-time service whose input may contain factors of indeterminism. Adding new functionality to these systems thus often comes at the cost of higher development time, testing effort and cost, since new functionality pushes the system's complexity and dynamics to a higher level. Usually, these systems have to adapt themselves to evolving requirements and changing service requests. In this perspective, run-time monitoring of system behaviour becomes an important requirement, allowing the actual scheduling progress and resource utilization to be captured dynamically. For this to succeed, operating systems need to expose their internal behaviour and state, making them available to external applications, usually through a run-time monitoring mechanism. However, such a mechanism can impose a burden on the system itself if not used wisely. In this paper we explore this problem and propose a framework intended to provide this run-time mechanism while achieving code separation, run-time efficiency and flexibility for the final developer.
Abstract:
This paper appears in International Journal of Projectics, Vol. 4(1), pp. 39-49.
Abstract:
This paper addresses the self-scheduling problem of a thermal power producer participating as a price-taker in a pool-based electricity market, holding bilateral contracts and subject to emission constraints. An approach based on stochastic mixed-integer linear programming is proposed for solving the self-scheduling problem. Uncertainty regarding the electricity price is considered through a set of scenarios computed by simulation and scenario reduction. Thermal units are modelled by variable costs, start-up costs and technical operating constraints, such as forbidden operating zones, ramp up/down limits and minimum up/down time limits. A requirement on emission allowances to mitigate the carbon footprint is modelled by a stochastic constraint. Supply functions for different emission allowance levels are assessed in order to establish the optimal bidding strategy. A case study is presented to illustrate the usefulness and proficiency of the proposed approach in supporting bidding strategies. (C) 2014 Elsevier Ltd. All rights reserved.
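The abstract does not reproduce the paper's stochastic MILP model. As a loose illustration only of the underlying idea of price-taker self-scheduling against price scenarios, a toy brute-force version with hypothetical parameters (a single unit, on/off decisions only, no ramp or minimum up/down time constraints) might look like:

```python
from itertools import product

def self_schedule(scenarios, probs, pmin, pmax, cost, startup):
    """Brute-force self-scheduling of one thermal unit as a price-taker.

    Enumerates on/off schedules over the horizon; when on, the unit runs
    at pmax if the scenario price exceeds its marginal cost, else at pmin.
    Maximizes expected profit over the price scenarios.
    All parameters here are illustrative, not from the paper."""
    T = len(scenarios[0])
    best_u, best_profit = None, float("-inf")
    for u in product((0, 1), repeat=T):          # all on/off schedules
        profit = 0.0
        for prices, w in zip(scenarios, probs):  # expectation over scenarios
            for t in range(T):
                if u[t]:
                    p = pmax if prices[t] > cost else pmin
                    profit += w * (prices[t] - cost) * p
                    if t == 0 or not u[t - 1]:   # unit was just switched on
                        profit -= w * startup
        if profit > best_profit:
            best_u, best_profit = u, profit
    return best_u, best_profit
```

For example, with two equiprobable price scenarios [10, 50, 60] and [10, 40, 55], marginal cost 30, pmin 10, pmax 100 and start-up cost 500, the optimal schedule keeps the unit off in the loss-making first hour and commits it for the last two. A real instance would replace the enumeration with a MILP solver.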
Abstract:
Energy resource scheduling is becoming increasingly important as more distributed generators and electric vehicles are connected to the distribution network. This paper proposes a methodology to be used by Virtual Power Players (VPPs) for energy resource scheduling in smart grids, considering day-ahead, hour-ahead and real-time horizons. The method assumes that energy resources are managed by a VPP which establishes contracts with their owners. The full AC power flow calculation included in the model takes network constraints into account. In this paper, distribution function errors are used to simulate variations between time horizons and to measure the performance of the proposed methodology. A 33-bus distribution network with a large number of distributed resources is used.
Abstract:
Many-core platforms are an emerging technology in the real-time embedded domain. These devices offer various options for power savings and cost reductions, and contribute to overall system flexibility; however, issues such as unpredictability, scalability and analysis pessimism are serious challenges to their integration into this area. The focus of this work is on many-core platforms using a limited migrative model (LMM). LMM is an approach based on the fundamental concepts of the multi-kernel paradigm, which is a promising step towards scalable and predictable many-cores. In this work, we formulate the problem of real-time application mapping on a many-core platform using LMM, and propose a three-stage method to solve it. An extended version of the existing analysis is used to assure that derived mappings (i) guarantee the fulfilment of timing constraints posed on worst-case communication delays of individual applications, and (ii) provide an environment in which to perform load balancing, e.g. for energy/thermal management, fault tolerance and/or performance reasons.
Abstract:
The 30th ACM/SIGAPP Symposium on Applied Computing (SAC 2015), Embedded Systems track. 13 to 17 April 2015, Salamanca, Spain.
Abstract:
23rd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing (PDP 2015). 4 to 6 March 2015, Turku, Finland.
Abstract:
Dissertation submitted to obtain the degree of Master in Computational Logic
Abstract:
In this paper we develop methods for estimation and forecasting in large time-varying parameter vector autoregressive models (TVP-VARs). To overcome computational constraints with likelihood-based estimation of large systems, we rely on Kalman filter estimation with forgetting factors. We also draw on ideas from the dynamic model averaging literature and extend the TVP-VAR so that its dimension can change over time. A final extension lies in the development of a new method for estimating, in a time-varying manner, the parameter(s) of the shrinkage priors commonly used with large VARs. These extensions are operationalized through the use of forgetting factor methods and are thus computationally simple. An empirical application involving forecasting inflation, real output, and interest rates demonstrates the feasibility and usefulness of our approach.
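The forgetting-factor device the abstract relies on replaces an explicit state-noise covariance with a simple inflation of the predicted covariance, which is what keeps the filtering cheap. A minimal single-equation sketch (not the paper's TVP-VAR code; the forgetting factor `lam`, the diffuse prior and the noise variance are illustrative assumptions):

```python
import numpy as np

def ff_kalman(y, X, lam=0.99, sigma2=1.0):
    """Filter time-varying regression coefficients with a forgetting factor.

    State:       beta_t = beta_{t-1} + eta_t  (random walk)
    Observation: y_t = x_t' beta_t + eps_t,  Var(eps_t) = sigma2
    Forgetting:  P_{t|t-1} = P_{t-1|t-1} / lam, so no explicit state-noise
                 covariance has to be specified or estimated.
    """
    T, k = X.shape
    beta = np.zeros(k)
    P = np.eye(k) * 10.0              # diffuse prior on the coefficients
    betas = np.empty((T, k))
    for t in range(T):
        x = X[t]
        P_pred = P / lam              # inflate covariance instead of adding Q
        S = x @ P_pred @ x + sigma2   # prediction-error variance
        K = P_pred @ x / S            # Kalman gain
        beta = beta + K * (y[t] - x @ beta)
        P = P_pred - np.outer(K, x @ P_pred)
        betas[t] = beta
    return betas
```

With `lam` close to one the filter discounts old observations exponentially (effective memory of roughly 1/(1-lam) periods), so a slowly drifting coefficient is tracked with a small, controllable lag.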
Abstract:
This paper investigates dynamic completeness of financial markets in which the underlying risk process is a multi-dimensional Brownian motion and the dividends of the risky securities are geometric Brownian motions. A sufficient condition, that the instantaneous dispersion matrix of the relative dividends is non-degenerate, was established recently in the literature for single-commodity, pure-exchange economies with many heterogeneous agents, under the assumption that the intermediate flows of all dividends, utilities, and endowments are analytic functions. For the current setting, a different mathematical argument, in which analyticity is not needed, shows that a slightly weaker condition suffices for general pricing kernels. That is, dynamic completeness obtains irrespective of preferences, endowments, and other structural elements (such as whether or not the budget constraints include only pure exchange, whether or not the time horizon is finite with lump-sum dividends available on the terminal date, etc.).
Abstract:
Eusociality is taxonomically rare, yet associated with great ecological success. Surprisingly, studies of environmental conditions favouring eusociality are often contradictory. Harsh conditions associated with increasing altitude and latitude seem to favour increased sociality in bumblebees and ants, but the reverse pattern is found in halictid bees and polistine wasps. Here, we compare the life histories and distributions of populations of 176 species of Hymenoptera from the Swiss Alps. We show that differences in altitudinal distributions and development times among social forms can explain these contrasting patterns: highly social taxa develop more quickly than intermediate social taxa, and are thus able to complete the reproductive cycle in shorter seasons at higher elevations. This dual impact of altitude and development time on sociality illustrates that ecological constraints can elicit dynamic shifts in behaviour, and helps explain the complex distribution of sociality across ecological gradients.
Abstract:
BACKGROUND: The comparison of complete genomes has revealed surprisingly large numbers of conserved non-protein-coding (CNC) DNA regions. However, the biological function of CNC remains elusive. CNC differ in two aspects from conserved protein-coding regions. They are not conserved across phylum boundaries, and they do not contain readily detectable sub-domains. Here we characterize the persistence length and time of CNC and conserved protein-coding regions in the vertebrate and insect lineages. RESULTS: The persistence length is the length of a genome region over which a certain level of sequence identity is consistently maintained. The persistence time is the evolutionary period during which a conserved region evolves under the same selective constraints. Our main findings are: (i) Insect genomes contain 1.60 times less conserved information than vertebrates; (ii) Vertebrate CNC have a higher persistence length than conserved coding regions or insect CNC; (iii) CNC have shorter persistence times as compared to conserved coding regions in both lineages. CONCLUSION: Higher persistence length of vertebrate CNC indicates that the conserved information in vertebrates and insects is organized in functional elements of different lengths. These findings might be related to the higher morphological complexity of vertebrates and give clues about the structure of active CNC elements. Shorter persistence time might explain the previously puzzling observations of highly conserved CNC within each phylum, and of a lack of conservation between phyla. It suggests that CNC divergence might be a key factor in vertebrate evolution. Further evolutionary studies will help to relate individual CNC to specific developmental processes.
Abstract:
Background: With the emergence of influenza H1N1v the world is facing its first 21st-century global pandemic. Severe Acute Respiratory Syndrome (SARS) and avian influenza H5N1 prompted the development of pandemic preparedness plans. National systems of public health law are essential for public health stewardship and for the implementation of public health policy [1]. International coherence will contribute to effective regional and global responses. However, little research has been undertaken on how law works as a tool for disease control in Europe. With co-funding from the European Union, we investigated the extent to which laws across Europe support or constrain pandemic preparedness planning, and whether national differences are likely to constrain control efforts. Methods: We undertook a survey of national public health laws across 32 European states using a questionnaire designed around a disease scenario based on pandemic influenza. Questionnaire results were reviewed in workshops, analysing how differences between national laws might support or hinder regional responses to pandemic influenza. Respondents examined the impact of national laws on the movement of information, goods, services and people across borders in a time of pandemic; the capacity for surveillance, case detection, case management and community control; the deployment of strategies of prevention, containment, mitigation and recovery; and the identification of commonalities and disconnects across states. Results: The results of this study show differences across Europe in the extent to which national pandemic policy and pandemic plans have been integrated with public health laws. We found significant differences in legislation and in the legitimacy of strategic plans. States differ in the range and nature of intervention measures authorized by law, the extent to which borders could be closed to the movement of persons and goods during a pandemic, and access to healthcare for non-resident persons. Some states propose the use of emergency powers that might potentially override human rights protections, while other states propose to limit interventions to those authorized by public health laws. Conclusion: These differences could create problems for European strategies if an evolving influenza pandemic results in more serious public health challenges or, indeed, if a novel disease other than influenza emerges with pandemic potential. There is insufficient understanding across Europe of the role and importance of law in pandemic planning. States need to build capacity in public health law to support disease prevention and control policies. Our research suggests that states would welcome further guidance from the EU on the management of a pandemic, and guidance to assist in greater commonality of legal approaches across states.
Abstract:
The absolute necessity of obtaining 3D information about structured and unknown environments in autonomous navigation considerably reduces the set of sensors that can be used. Knowing, at each instant, the position of the mobile robot with respect to the scene is indispensable, and this information must be obtained in the least possible computing time. Stereo vision is an attractive and widely used method, but it is rather ill-suited to building fast 3D surface maps, due to the correspondence problem. The spatial and temporal correspondence among images can be alleviated using a method based on structured light. This relationship can be found directly by codifying the projected light, so that each imaged region of the projected pattern carries the information needed to solve the correspondence problem. We present the most significant techniques of recent years concerning the coded structured light method.
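One classic way of codifying the projected light in this line of work is temporal binary coding of projector columns with Gray codes, so that a camera pixel can recover which column illuminates it by reading off the bits over successive frames. A minimal sketch of that idea (an illustration, not code from the survey itself):

```python
def gray_code_patterns(width, n_bits):
    """Binary Gray-code column patterns for structured light.

    Each projector column gets a unique n_bits-long Gray code; projecting
    the bit planes one at a time lets a camera pixel recover the column it
    observes, solving the stereo correspondence problem directly."""
    patterns = []
    for b in range(n_bits):
        # bit plane b (MSB first) of the Gray code of each column index
        row = [((c ^ (c >> 1)) >> (n_bits - 1 - b)) & 1 for c in range(width)]
        patterns.append(row)
    return patterns

def decode_column(bits):
    """Recover a column index from the bits observed at one pixel (MSB first)."""
    g = 0
    for bit in bits:
        g = (g << 1) | bit
    # Gray-to-binary: XOR of all right shifts of the Gray value
    b = g
    g >>= 1
    while g:
        b ^= g
        g >>= 1
    return b
```

Gray codes are preferred over plain binary here because adjacent columns differ in a single bit, which makes the decoding robust to small localisation errors at stripe boundaries.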
Abstract:
A compositional time series is obtained when a compositional data vector is observed at different points in time. Inherently, then, a compositional time series is a multivariate time series with important constraints on the variables observed at any instance in time. Although this type of data frequently occurs in situations of real practical interest, a trawl through the statistical literature reveals that research in the field is very much in its infancy and that many theoretical and empirical issues still remain to be addressed. Any appropriate statistical methodology for the analysis of compositional time series must take into account the constraints which are not allowed for by the usual statistical techniques available for analysing multivariate time series. One general approach to analyzing compositional time series consists in the application of an initial transform to break the positive and unit sum constraints, followed by the analysis of the transformed time series using multivariate ARIMA models. In this paper we discuss the use of the additive log-ratio, centred log-ratio and isometric log-ratio transforms. We also present results from an empirical study designed to explore how the selection of the initial transform affects subsequent multivariate ARIMA modelling as well as the quality of the forecasts.
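The three transforms named in the abstract have standard closed forms. As a hedged illustration (not the paper's own code), for a composition x of D strictly positive parts, with the ilr built from a Helmert-type orthonormal basis, one common choice among many:

```python
import numpy as np

def alr(x):
    """Additive log-ratio: log of each part relative to the last part."""
    x = np.asarray(x, dtype=float)
    return np.log(x[..., :-1] / x[..., -1:])

def clr(x):
    """Centred log-ratio: log of each part relative to the geometric mean.
    The resulting coordinates sum to zero (a residual constraint)."""
    x = np.asarray(x, dtype=float)
    g = np.exp(np.mean(np.log(x), axis=-1, keepdims=True))
    return np.log(x / g)

def ilr(x):
    """Isometric log-ratio: the clr projected onto an orthonormal basis of
    its zero-sum hyperplane, giving D-1 unconstrained coordinates."""
    x = np.asarray(x, dtype=float)
    D = x.shape[-1]
    V = np.zeros((D, D - 1))
    for i in range(D - 1):
        V[: i + 1, i] = 1.0 / (i + 1)       # balance of first i+1 parts...
        V[i + 1, i] = -1.0                  # ...against the next part
        V[:, i] *= np.sqrt((i + 1) / (i + 2))  # normalise the column
    return clr(x) @ V
```

After either transform the D-1 (alr, ilr) or D (clr, with a singular covariance) coordinates are real-valued and unconstrained, so multivariate ARIMA machinery can be applied to the transformed series and forecasts back-transformed to the simplex.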