864 results for Sharable Content Object Reference Model (SCORM)
Abstract:
Future distribution systems will have to deal with an intensive penetration of distributed energy resources, ensuring reliable and secure operation according to the smart grid paradigm. SCADA (Supervisory Control and Data Acquisition) is an essential infrastructure for this evolution. This paper proposes a new conceptual design for an intelligent SCADA with a decentralized, flexible approach that is adaptive to the context (context awareness). This SCADA model is used to support the energy resource management undertaken by a distribution network operator (DNO). Resource management considers all the involved costs, power flows, and electricity prices, allowing the use of network reconfiguration and load curtailment. Locational Marginal Prices (LMPs) are evaluated and used in specific situations to apply Demand Response (DR) programs on a global or a local basis. The paper includes a case study using a 114-bus distribution network and load demand based on real data.
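As a rough illustration of the LMP-triggered demand response described above, the following Python sketch plans per-bus load curtailment when the local price crosses a threshold; the threshold, curtailment fraction, and data layout are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch: trigger demand response (DR) per bus when the
# locational marginal price (LMP) exceeds a threshold. The threshold,
# curtailment fraction, and data layout are illustrative assumptions.

LMP_THRESHOLD = 90.0     # EUR/MWh, assumed trigger level
CURTAIL_FRACTION = 0.15  # assumed share of flexible load to shed

def plan_demand_response(buses):
    """buses: list of dicts with 'id', 'lmp' (EUR/MWh), 'load' (MW)."""
    actions = []
    for bus in buses:
        if bus["lmp"] > LMP_THRESHOLD:
            # Local DR: curtail a fraction of the flexible load at this bus.
            actions.append({"bus": bus["id"],
                            "curtail_mw": CURTAIL_FRACTION * bus["load"]})
    return actions

# Example usage with made-up values for three buses of a feeder.
buses = [{"id": 1, "lmp": 85.0, "load": 2.0},
         {"id": 2, "lmp": 110.0, "load": 3.5},
         {"id": 3, "lmp": 95.0, "load": 1.2}]
print(plan_demand_response(buses))
```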
Abstract:
This paper proposes a novel business model to support media content personalisation: an agent-based business-to-business (B2B) brokerage platform for media content producer and distributor businesses. Distributors aim to provide viewers with a personalised content experience, and producers wish to ensure that their media objects are watched by as many targeted viewers as possible. In this scenario, viewers and media objects (main programmes and candidate objects for insertion) have profiles and, in the case of main programme objects, are annotated with placeholders representing personalisation opportunities, i.e., locations for the insertion of personalised media objects. The MultiMedia Brokerage (MMB) platform is a multiagent, multilayered brokerage composed of agents that act as sellers and buyers of viewer stream timeslots and/or media objects on behalf of the registered businesses. These agents engage in negotiations to select the media objects that best match the current programme and viewer profiles.
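To make the matching step concrete, here is a minimal sketch of how a broker agent might rank candidate media objects for a placeholder by profile similarity; the profile structure, weights, and scoring rule are assumptions for illustration only, not the MMB platform's actual mechanism.

```python
# Hypothetical sketch of the matching step a broker agent might perform:
# rank candidate media objects for a placeholder by profile similarity.

def similarity(profile_a, profile_b):
    """Jaccard overlap of tag sets, a crude stand-in for profile matching."""
    tags_a, tags_b = set(profile_a["tags"]), set(profile_b["tags"])
    return len(tags_a & tags_b) / max(len(tags_a | tags_b), 1)

def rank_candidates(viewer, programme, candidates):
    """Score each candidate object against both viewer and programme profiles."""
    scored = [(0.6 * similarity(c, viewer) + 0.4 * similarity(c, programme), c)
              for c in candidates]
    return [c for _, c in sorted(scored, key=lambda x: x[0], reverse=True)]

# Example usage with invented profiles.
viewer = {"tags": ["sports", "cars"]}
programme = {"tags": ["cars", "travel"]}
candidates = [{"id": "ad1", "tags": ["cars"]},
              {"id": "ad2", "tags": ["cooking"]}]
print(rank_candidates(viewer, programme, candidates))
```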
Abstract:
Moving towards autonomous operation and management of increasingly complex open distributed real-time systems poses very significant challenges. This is particularly true when reaction to events must occur in a timely and predictable manner while guaranteeing Quality of Service (QoS) constraints imposed by users, the environment, or applications. In these scenarios, the system should be able to maintain a globally feasible QoS level while allowing individual nodes to autonomously adapt under different constraints of resource availability and input quality. This paper shows how decentralised coordination of a group of autonomous interdependent nodes can emerge with little communication, based on the robust self-organising principles of feedback. Positive feedback is used to reinforce the selection of the new desired global service solution, while negative feedback discourages nodes from acting in a greedy fashion, since this adversely impacts the service levels provided at neighbouring nodes. The proposed protocol is general enough to be used in a wide range of scenarios characterised by a high degree of openness and dynamism, where coordination tasks need to be time-dependent. As the reported results demonstrate, it requires fewer messages to be exchanged and reaches a globally acceptable near-optimal solution faster than other available approaches.
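A minimal sketch of one coordination round on a single node, under assumed gains and update rule (the abstract does not give the protocol's equations): positive feedback pulls the local QoS level toward the proposed global solution, while negative feedback damps moves that overshoot the neighbourhood.

```python
# Hypothetical sketch of one feedback-based coordination round on a node.
# Gains alpha/beta and the update rule are illustrative assumptions,
# not the protocol proposed in the paper.

def update_local_qos(local_qos, proposed_global, neighbour_qos,
                     alpha=0.5, beta=0.3):
    # Positive feedback: move toward the proposed global service level.
    pull = alpha * (proposed_global - local_qos)
    # Negative feedback: damp greedy changes that overshoot the
    # neighbourhood mean, which would hurt neighbouring service levels.
    mean_neighbour = sum(neighbour_qos) / len(neighbour_qos)
    damp = beta * (local_qos - mean_neighbour)
    return local_qos + pull - damp

# Example: a node iterating toward a proposed global QoS level of 0.8.
qos = 0.4
for step in range(5):
    qos = update_local_qos(qos, proposed_global=0.8, neighbour_qos=[0.5, 0.6])
    print(f"round {step}: local QoS level = {qos:.3f}")
```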
Abstract:
In this work, dynamical systems theory is used as a theoretical language and tool to design a distributed control architecture for a team of three robots that must transport a large object and simultaneously avoid collisions with either static or dynamic obstacles. The robots have no prior knowledge of the environment. The dynamics of behavior is defined over a state space of behavioral variables, heading direction and path velocity. Task constraints are modeled as attractors (i.e., asymptotically stable states) of the behavioral dynamics. For each robot, these attractors are combined into a vector field that governs the behavior. By design, the parameters are tuned so that the behavioral variables are always very close to the corresponding attractors. Thus, the behavior of each robot is controlled by a time series of asymptotically stable states. Computer simulations support the validity of the dynamical model architecture.
Abstract:
In this paper, dynamical systems theory is used as a theoretical language and tool to design a distributed control architecture for a team of two robots that must transport a large object and simultaneously avoid collisions with obstacles (either static or dynamic). This work extends the previous work with two robots (see [1] and [5]). Here, however, we demonstrate that it is possible to simplify the architecture presented in [1] and [5] and reach an equally stable global behavior. The robots have no prior knowledge of the environment. The dynamics of behavior is defined over a state space of behavior variables, heading direction and path velocity. Task constraints are modeled as attractors (i.e., asymptotically stable states) of a behavioral dynamics. For each robot, these attractors are combined into a vector field that governs the behavior. By design, the parameters are tuned so that the behavioral variables are always very close to the corresponding attractors. Thus, the behavior of each robot is controlled by a time series of asymptotically stable states. Computer simulations support the validity of the dynamical model architecture.
Abstract:
Dynamical systems theory is used here as a theoretical language and tool to design a distributed control architecture for a team of two mobile robots that must transport a long object and simultaneously avoid obstacles. In this approach, modeling is done at the level of behaviors. A “dynamics” of behavior is defined over a state space of behavioral variables (heading direction and path velocity). The environment is also modeled in these terms, by representing task constraints as attractors (i.e., asymptotically stable states) or repellers (i.e., unstable states) of the behavioral dynamics. For each robot, attractors and repellers are combined into a vector field that governs the behavior. The resulting dynamical systems that generate the behavior of the robots may be nonlinear. By design, the systems are tuned so that the behavioral variables are always very close to one attractor. Thus, the behavior of each robot is controlled by a time series of asymptotically stable states. Computer simulations support the validity of our dynamical model architecture.
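The three robot-transport abstracts above share the same behavioural dynamics. A typical formulation in this framework, written here as an assumption since the abstracts give no explicit equations, combines an attractor at the target direction with repellers at obstacle directions:

```latex
% Typical behavioural dynamics for the heading direction phi (assumed form):
% the target direction psi_tar contributes an attractor, each obstacle
% direction psi_obs,i a repeller with angular range sigma_i.
\[
\dot{\phi} \;=\; -\lambda_{\mathrm{tar}}\,
\sin\!\bigl(\phi - \psi_{\mathrm{tar}}\bigr)
\;+\; \sum_{i}\lambda_{\mathrm{obs},i}\,
\bigl(\phi - \psi_{\mathrm{obs},i}\bigr)\,
\exp\!\left[-\,\frac{\bigl(\phi - \psi_{\mathrm{obs},i}\bigr)^{2}}{2\sigma_{i}^{2}}\right]
\]
% For lambda_tar > 0, the target term has an asymptotically stable state at
% phi = psi_tar; each obstacle term has an unstable state at phi = psi_obs,i.
```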
Abstract:
The performance of the Weather Research and Forecasting (WRF) model in wind simulation was evaluated under different numerical and physical options for an area of Portugal located in complex terrain and characterized by its significant wind energy resource. Grid nudging and integration time were the tested numerical options. Since the goal is to simulate the near-surface wind, the physical parameterization schemes under evaluation were those regarding the boundary layer. The influences of local terrain complexity and simulation domain resolution on the model results were also studied. Data from three wind measuring stations located within the chosen area were compared with the model results in terms of Root Mean Square Error, Standard Deviation Error, and Bias. Wind speed histograms, occurrence and energy wind roses were also used for model evaluation. Overall, the model accurately reproduced the local wind regime, despite a significant underestimation of the wind speed. The wind direction is reasonably simulated by the model, especially in wind regimes with a clear dominant sector; in the presence of low wind speeds, however, the characterization of the wind direction (observed and simulated) is very subjective and led to higher deviations between simulations and observations. Within the tested options, results show that the use of grid nudging in simulations that do not exceed an integration time of 2 days is the best numerical configuration, and that the parameterization set composed of the MM5, Yonsei University, and Noah physical schemes is the most suitable for this site. Results were poorer in sites with higher terrain complexity, mainly due to limitations of the terrain data supplied to the model. Increasing the simulation domain resolution alone is not enough to significantly improve the model performance. Results suggest that error minimization in the wind simulation can be achieved by testing and choosing a suitable numerical and physical configuration for the region of interest, together with the use of high-resolution terrain data, if available.
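The three evaluation metrics named above are standard; a minimal Python sketch of their computation, on made-up observed and simulated wind speeds rather than the study's data:

```python
# Minimal sketch of the three evaluation metrics named above, computed
# between observed and simulated wind speeds (sample values invented).
import numpy as np

def bias(obs, sim):
    return np.mean(sim - obs)

def rmse(obs, sim):
    return np.sqrt(np.mean((sim - obs) ** 2))

def stdev_error(obs, sim):
    # Standard deviation of the error, i.e. the RMSE with the bias removed.
    return np.std(sim - obs)

obs = np.array([5.1, 6.3, 4.8, 7.0])   # measured wind speed, m/s
sim = np.array([4.6, 5.9, 4.9, 6.2])   # simulated wind speed, m/s
print(f"Bias={bias(obs, sim):.2f}  RMSE={rmse(obs, sim):.2f}  "
      f"SDE={stdev_error(obs, sim):.2f}")
```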
Abstract:
Wind resource evaluation at two sites located in Portugal was performed using the mesoscale modelling system Weather Research and Forecasting (WRF) and the wind resource analysis tool commonly used within the wind power industry, the Wind Atlas Analysis and Application Program (WAsP) microscale model. Wind measurement campaigns were conducted at the selected sites, allowing for a comparison between in situ measurements and simulated wind in terms of flow characteristics and energy yield estimates. Three different methodologies were tested, aiming to provide an overview of their benefits and limitations for wind resource estimation. In the first methodology, the mesoscale model acts as a “virtual” wind measuring station: wind data was computed by WRF for both sites and inserted directly as input in WAsP. In the second approach, the same procedure was followed, but the terrain influences induced by the mesoscale model's low-resolution terrain data were removed from the simulated wind data. In the third methodology, the simulated wind data is extracted at the top of the planetary boundary layer for both sites, aiming to assess whether the use of geostrophic winds (which, by definition, are not influenced by the local terrain) can bring any improvement in the models' performance. The results obtained for these methodologies were compared with those resulting from in situ measurements, in terms of mean wind speed, Weibull probability density function parameters, and production estimates, considering the installation of one wind turbine at each site. Results showed that the second approach produces values closest to the measured ones, and fairly acceptable deviations were found using this coupling technique in terms of estimated annual production. However, mesoscale output should not be used directly in wind farm siting projects, mainly due to the poor resolution of the mesoscale model terrain data. Instead, the use of mesoscale output in microscale models should be seen as a valid alternative to in situ data, mainly for preliminary wind resource assessments, although the application of mesoscale and microscale coupling in areas with complex topography should be done with extreme caution.
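As a hedged illustration of the final analysis step, the following sketch fits a Weibull distribution to a synthetic wind speed series and integrates it against a toy power curve to estimate annual production; the power curve and data are invented, not the study's.

```python
# Hypothetical sketch: Weibull fit of a wind speed series plus a crude
# annual energy production (AEP) estimate. Power curve and data invented.
import numpy as np
from scipy import stats

# Synthetic wind speed series, m/s (stand-in for a measurement campaign).
speeds = np.random.default_rng(0).weibull(2.0, 5000) * 8.0

# Fit Weibull shape k and scale A, location fixed at 0 as is usual for wind.
k, _, A = stats.weibull_min.fit(speeds, floc=0)

def power_curve(v):
    """Toy turbine power curve in kW (assumed, not a real machine)."""
    return np.where((v >= 3) & (v < 12), 2000 * (v - 3) / 9,
                    np.where((v >= 12) & (v < 25), 2000.0, 0.0))

# Integrate power curve against the fitted Weibull density over one year.
v = np.linspace(0.0, 30.0, 1000)
pdf = stats.weibull_min.pdf(v, k, scale=A)
aep_mwh = np.sum(power_curve(v) * pdf) * (v[1] - v[0]) * 8760 / 1000
print(f"k={k:.2f}  A={A:.2f} m/s  AEP ~ {aep_mwh:.0f} MWh")
```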
Abstract:
The problem of selecting suppliers/partners is a crucial and important part of the decision-making process for companies that intend to perform competitively in their area of activity. The selection of a supplier/partner is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. Nevertheless, it is a critical process that significantly affects the operational performance of each company. In this work, five broad selection criteria were identified: Quality, Financial, Synergies, Cost, and Production System. Within these criteria, five sub-criteria were also included. After identifying the criteria, a survey was prepared and companies were contacted in order to understand which factors carry more weight in their decisions when choosing partners. Once the results were interpreted and the data processed, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or Value Analysis. The goal of the paper is to supply a selection reference model that can serve as an orientation/pattern for decision making in the supplier/partner selection process.
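A minimal sketch of the linear weighting model described above: each supplier receives the weighted sum of its criterion ratings. The weights and ratings below are illustrative, not the survey's actual numbers.

```python
# Minimal sketch of a linear weighting model for supplier selection.
# Weights and ratings are illustrative assumptions.

CRITERIA = ["Quality", "Financial", "Synergies", "Cost", "Production System"]
WEIGHTS = {"Quality": 0.30, "Financial": 0.15, "Synergies": 0.10,
           "Cost": 0.25, "Production System": 0.20}  # assumed to sum to 1

def score(ratings):
    """ratings: dict criterion -> rating on a 1-5 scale."""
    return sum(WEIGHTS[c] * ratings[c] for c in CRITERIA)

suppliers = {
    "Supplier A": {"Quality": 4, "Financial": 3, "Synergies": 5,
                   "Cost": 2, "Production System": 4},
    "Supplier B": {"Quality": 3, "Financial": 4, "Synergies": 2,
                   "Cost": 5, "Production System": 3},
}
# Rank suppliers by weighted score, best first.
for name, ratings in sorted(suppliers.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(ratings):.2f}")
```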
Abstract:
A construction project is a group of discernible tasks or activities that are conducted in a coordinated effort to accomplish one or more objectives. Construction projects require varying levels of cost, time, and other resources. To plan and schedule a construction project, activities must be defined sufficiently. The level of detail determines the number of activities contained within the project plan and schedule. Finding feasible schedules that efficiently use scarce resources is therefore a challenging task within project management. In this context, the well-known Resource-Constrained Project Scheduling Problem (RCPSP) has been studied during the last decades. In the RCPSP, the activities of a project have to be scheduled such that the makespan of the project is minimized, observing both the technological precedence constraints and the limitations of the renewable resources required to accomplish the activities. Once started, an activity may not be interrupted. This problem has been extended to a more realistic model, the multi-mode resource-constrained project scheduling problem (MRCPSP), where each activity can be performed in one out of several modes. Each mode of an activity represents an alternative way of combining different levels of resource requirements with a related duration. Each renewable resource, such as manpower and machines, has a limited availability for the entire project. This paper presents a hybrid genetic algorithm for the multi-mode resource-constrained project scheduling problem, in which multiple execution modes are available for each of the activities of the project. The objective function is the minimization of the construction project completion time. To solve the problem, a two-level genetic algorithm is applied, which makes use of two separate levels and extends the parameterized schedule generation scheme. The quality of the schedules is evaluated, and detailed comparative computational results for the MRCPSP are presented, revealing that this approach is a competitive algorithm.
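As a rough sketch of the schedule generation scheme such genetic algorithms typically decode chromosomes with, the following simplified serial scheme handles a single-mode, single-resource toy instance; the paper's MRCPSP decoding and two-level GA are richer than this.

```python
# Simplified serial schedule generation scheme (single mode, one renewable
# resource). The GA would evolve the activity order; here it is fixed.

def serial_sgs(order, durations, demands, preds, capacity):
    """Schedule activities in 'order' (precedence-feasible) at the earliest
    precedence- and resource-feasible start; returns activity -> start."""
    start, usage = {}, {}  # usage[t] = resource units busy at time t
    for act in order:
        # Earliest start respecting precedence constraints.
        t = max((start[p] + durations[p] for p in preds[act]), default=0)
        # Shift right until the resource profile admits the activity.
        while any(usage.get(t + d, 0) + demands[act] > capacity
                  for d in range(durations[act])):
            t += 1
        for d in range(durations[act]):
            usage[t + d] = usage.get(t + d, 0) + demands[act]
        start[act] = t
    return start

# Toy instance: C must follow A; resource capacity is 3 units.
durations = {"A": 2, "B": 3, "C": 2}
demands = {"A": 2, "B": 2, "C": 1}
preds = {"A": [], "B": [], "C": ["A"]}
print(serial_sgs(["A", "B", "C"], durations, demands, preds, capacity=3))
```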
Abstract:
OBJECTIVE To validate a Spanish version of the Test of Gross Motor Development (TGMD-2) for the Chilean population. METHODS Descriptive, cross-sectional, non-experimental validity and reliability study. Four translators, three experts, and 92 Chilean children, from five to 10 years of age, students from a primary school in Santiago, Chile, participated. The Committee of Experts carried out translation, back-translation, and revision processes to determine the translinguistic equivalence and content validity of the test, using the content validity index, in 2013. In addition, a pilot implementation was carried out to determine the reliability of the test in Spanish, using the intraclass correlation coefficient and the Bland-Altman method. We evaluated whether the results presented significant differences when replacing the bat with a racket, using a t-test. RESULTS We obtained a content validity index higher than 0.80 for language clarity and relevance of the TGMD-2 for children. There were significant differences in the object control subtest when comparing the results with bat and racket. The intraclass correlation coefficient for inter-rater, intra-rater, and test-retest reliability was greater than 0.80 in all cases. CONCLUSIONS The TGMD-2 has appropriate content validity to be applied to the Chilean population. The reliability of this test is within the appropriate parameters, and its use could be recommended in this population after the establishment of normative data, setting a further precedent for validation in other Latin American countries.
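A minimal sketch of the Bland-Altman agreement analysis mentioned in the methods, applied to two hypothetical raters' scores (the values are made up, not the study's data):

```python
# Minimal Bland-Altman sketch: bias and limits of agreement between two
# raters' scores. The score values are invented for illustration.
import numpy as np

rater1 = np.array([38, 41, 35, 44, 40, 37], dtype=float)
rater2 = np.array([40, 40, 34, 46, 41, 38], dtype=float)

diff = rater1 - rater2
mean_diff = diff.mean()          # systematic bias between raters
half_width = 1.96 * diff.std(ddof=1)  # half-width of the limits of agreement
print(f"bias={mean_diff:.2f}, limits of agreement: "
      f"[{mean_diff - half_width:.2f}, {mean_diff + half_width:.2f}]")
```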
Abstract:
We consider a Bertrand duopoly model with unknown costs. Each firm aims to choose the price of its product according to the well-known concept of Bayesian Nash equilibrium. The choices are made simultaneously by both firms. In this paper, we suppose that each firm has two different technologies and uses one of them according to a certain probability distribution. The use of one technology or the other affects the unit production cost. We show that this game has exactly one Bayesian Nash equilibrium. We analyse the advantages, for firms and for consumers, of using the technology with the highest production cost versus the one with the lowest production cost. We prove that the expected profit of each firm increases with the variance of its production costs. We also show that the expected price of each good increases with both expected production costs, the effect of the rival's expected production costs being dominated by the effect of the firm's own expected production costs.
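To fix ideas, here is a sketch of the equilibrium condition under an assumed linear differentiated-goods demand (the paper's exact demand specification is not given in the abstract):

```latex
% Assumed linear demand q_i = a - b p_i + d p_j; firm i's unit cost c_i is
% private and takes one of two values with known probabilities. A Bayesian
% Nash equilibrium is a pair of pricing rules p_i*(c_i) such that
\[
p_i^{*}(c_i) \;\in\; \arg\max_{p_i}\;
\bigl(p_i - c_i\bigr)\,
\mathbb{E}_{c_j}\!\bigl[\,a - b\,p_i + d\,p_j^{*}(c_j)\,\bigr],
\qquad i \neq j,
\]
% whose first-order condition yields the linear rule
\[
p_i^{*}(c_i) \;=\; \frac{a + d\,\mathbb{E}\bigl[p_j^{*}(c_j)\bigr] + b\,c_i}{2b},
\]
% consistent with expected prices rising in both firms' expected costs, with
% the own-cost effect dominating (coefficient b c_i enters directly, the
% rival's cost only through the expectation of p_j*).
```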
Abstract:
We consider a trade policy model where the costs of the home firm are private information but can be signaled through the firm's output levels to a foreign competitor and a home policymaker. We study the influences of the non-homogeneity of the goods and of the uncertainty about the home firm's production costs on the firm's signalling strategies. We show that some results obtained for homogeneous goods are not robust under non-homogeneity.
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
In this paper, we study an international market model in which the home government imposes a tariff on imported goods. The model has two stages. In the first stage, the home government chooses an import tariff to maximize a function that depends on the home firm's profit and on the total revenue. Then, the firms engage in either a Cournot or a Stackelberg competition. We compare the results obtained under the three different orders of moves in the firms' decisions.
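A sketch of the two-stage structure under assumed linear functional forms (the abstract does not give them; "total revenue" is read here as the tariff revenue, one plausible interpretation):

```latex
% Stage 1 (assumed form): the home government picks the tariff t to
% maximise a function of the home firm's profit and the tariff revenue,
\[
\max_{t}\; W(t) \;=\; \pi_h\bigl(q_h^{*}(t),\,q_f^{*}(t)\bigr) \;+\; t\,q_f^{*}(t).
\]
% Stage 2: given t, the firms choose outputs q_h, q_f simultaneously
% (Cournot) or sequentially (Stackelberg with either firm leading -- the
% three orders of moves compared in the paper), e.g. with assumed linear
% inverse demand p = a - q_h - q_f and the foreign firm bearing the tariff:
\[
\pi_h = \bigl(a - q_h - q_f - c_h\bigr)\,q_h , \qquad
\pi_f = \bigl(a - q_h - q_f - c_f - t\bigr)\,q_f .
\]
```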