26 results for Cost Over run
at Universidad Politécnica de Madrid
Abstract:
In the last few years, technical debt has been used as a useful means of making the intrinsic cost of internal software quality weaknesses visible. This visibility is made possible by quantifying that cost. Specifically, technical debt is expressed in terms of two main concepts: principal and interest. The principal is the cost of eliminating or reducing the impact of a so-called technical debt item in a software system, whereas the interest is the recurring cost, over a period of time, of not eliminating that item. Previous work on technical debt has mainly focused on estimating principal and interest and on performing a cost-benefit analysis. This cost-benefit analysis makes it possible to determine whether removing technical debt is profitable and to prioritize which technical debt items should be fixed first. Nevertheless, in this previous work technical debt is assumed to be flat over time. The introduction of new factors into the estimation of technical debt may produce non-flat models that allow more accurate predictions. These factors should be used to estimate principal and interest and to perform the cost-benefit analysis related to technical debt. In this paper, we take a step forward by introducing uncertainty about the interest and the time frame as factors, so that it becomes possible to depict a number of possible future scenarios. Estimations obtained without considering the possible evolution of the interest over time may be less accurate, as they consider simplistic scenarios without changes.
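The abstract does not give the authors' estimation model; as an illustrative sketch only, the following Python snippet shows how uncertainty about the interest and the time frame could be combined in a simple Monte Carlo run to depict a range of future cost scenarios (all parameter names and values are hypothetical):

```python
import random

def simulate_debt_scenarios(principal, base_interest, periods,
                            rate_uncertainty=0.3, n_scenarios=1000):
    """Monte Carlo sketch: accumulate uncertain, non-flat interest over a time frame.

    principal        -- cost of fixing the technical debt item now
    base_interest    -- expected recurring cost per period if the item is not fixed
    rate_uncertainty -- assumed spread of the per-period interest drift
    Returns summary statistics of the total cost of *not* fixing the item.
    """
    totals = []
    for _ in range(n_scenarios):
        interest, accumulated = base_interest, 0.0
        for _ in range(periods):
            # Interest is not flat: it drifts by an uncertain factor each period.
            interest *= 1.0 + random.uniform(-rate_uncertainty, rate_uncertainty)
            accumulated += interest
        totals.append(accumulated)
    totals.sort()
    return {"median": totals[len(totals) // 2],
            "p90": totals[int(0.9 * len(totals))]}

# Fixing is profitable in a scenario when accumulated interest exceeds the principal.
scenarios = simulate_debt_scenarios(principal=40.0, base_interest=5.0, periods=12)
print(scenarios, "fix now" if scenarios["median"] > 40.0 else "defer")
```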
Abstract:
Aircraft Operators Companies (AOCs) are always willing to keep the cost of a flight as low as possible. These costs can be modelled as a function of fuel consumption, time of flight and fixed costs (overflight costs, maintenance, etc.), and are strongly dependent on the atmospheric conditions, the presence of winds and the aircraft performance. For this reason, much research effort is being put into the development of numerical and graphical techniques for defining the optimal trajectory. This paper presents a different approach to accommodating AOC preferences, adding value to their activities, through the development of a tool called the aircraft trajectory simulator. This tool is able to simulate the actual flight of an aircraft under the constraints imposed. The simulator is based on a point-mass model of the aircraft. The aim of this paper is to evaluate the errors of a 3DoF aircraft model with BADA data against real data from the Flight Data Recorder (FDR). Therefore, to validate the proposed simulation tool, a comparative analysis of the state variable vector is made between an actual flight and the same flight using the simulator. Finally, an example of a cruise phase is presented, where a conventional levelled flight is compared with a continuous climb flight. The comparison results show the potential benefits of following user-preferred routes for commercial flights.
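As a hedged illustration of the kind of flight cost function mentioned above (fuel, time of flight and fixed costs), the following sketch compares two hypothetical cruise profiles; the prices and flight figures are made-up placeholders, not BADA or FDR data:

```python
def trip_cost(fuel_kg, time_min, fuel_price_per_kg=0.85,
              time_cost_per_min=25.0, fixed_cost=1200.0):
    """Illustrative direct-operating-cost model (all values are assumptions):
    cost = fuel cost + time-related cost + fixed cost (overflight fees, maintenance...).
    """
    return fuel_price_per_kg * fuel_kg + time_cost_per_min * time_min + fixed_cost

# Compare a conventional levelled cruise with a continuous climb cruise
# (numbers are purely illustrative).
levelled = trip_cost(fuel_kg=5200.0, time_min=95.0)
continuous_climb = trip_cost(fuel_kg=5050.0, time_min=97.0)
print(levelled, continuous_climb,
      "prefer climb" if continuous_climb < levelled else "prefer level")
```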
Abstract:
Crop diseases are sometimes related to the irradiance that the crop receives. When an experiment requires the measurement of irradiance, it usually results in an expensive data acquisition system. If many test points have to be checked, the use of traditional sensors increases the cost of the experiment. By using low-cost sensors based on the photovoltaic effect, it is possible to perform a precise irradiance test at a reduced price. This work presents an experiment performed in Ademuz (Valencia, Spain) during September 2011 to check the validity of low-cost sensors based on solar cells.
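Since the short-circuit current of a photovoltaic cell is, to first order, proportional to the incident irradiance, a cheap cell calibrated once against a reference sensor can act as an irradiance probe. The following sketch only illustrates that conversion; the calibration values are hypothetical and temperature effects are neglected:

```python
def irradiance_from_short_circuit_current(i_sc_a, i_sc_stc_a, g_stc_w_m2=1000.0):
    """Estimate irradiance from a solar cell used as a low-cost sensor.

    i_sc_a     -- measured short-circuit current [A]
    i_sc_stc_a -- calibration current under standard test conditions (1000 W/m2) [A]
    The temperature dependence of the short-circuit current is neglected here.
    """
    return g_stc_w_m2 * i_sc_a / i_sc_stc_a

# Hypothetical reading: a cell calibrated at 3.0 A under 1000 W/m2 now reads 1.8 A.
print(irradiance_from_short_circuit_current(1.8, 3.0))  # ~600 W/m2
```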
Abstract:
This paper tackles the optimization of applications in multi-provider hybrid cloud scenarios from an economic point of view. In these scenarios, the great majority of solutions offer the automatic allocation of resources on different cloud providers based on their current prices. Our approach, however, introduces a novel solution by making maximum use of divide and rule. This paper describes a methodology for creating cost-aware cloud applications that can be broken down into the three most important components in cloud infrastructures: computation, network and storage. A real videoconference system has been modified in order to evaluate this idea with both theoretical and empirical experiments. This system has become a widely used tool in several national and European projects for e-learning and collaboration purposes.
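A minimal sketch of the cost-aware decomposition described above, assuming made-up provider price lists: each of the three components (computation, network, storage) is placed with whichever provider currently prices it lowest:

```python
# Illustrative only: the provider price lists are invented, not taken from the paper.
PRICES = {
    "provider_a": {"compute_per_hour": 0.12, "network_per_gb": 0.09, "storage_per_gb_month": 0.025},
    "provider_b": {"compute_per_hour": 0.10, "network_per_gb": 0.12, "storage_per_gb_month": 0.020},
}

def cheapest_allocation(compute_hours, network_gb, storage_gb_months):
    """Break the application into compute, network and storage components and
    place each one with the provider that currently prices it lowest."""
    demand = {"compute_per_hour": compute_hours,
              "network_per_gb": network_gb,
              "storage_per_gb_month": storage_gb_months}
    allocation, total = {}, 0.0
    for component, amount in demand.items():
        provider = min(PRICES, key=lambda p: PRICES[p][component])
        cost = PRICES[provider][component] * amount
        allocation[component] = (provider, round(cost, 2))
        total += cost
    return allocation, round(total, 2)

print(cheapest_allocation(compute_hours=720, network_gb=500, storage_gb_months=200))
```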
Abstract:
Next Generation Networks (NGN) provide telecommunications operators with the possibility to share their resources and infrastructure, facilitate interoperability with other networks, and simplify and unify the management, operation and maintenance of service offerings, thus enabling the fast and cost-effective creation of new personal, broadband, ubiquitous services. Unfortunately, service creation over NGN is far from matching the success of service creation on the Web, especially when it comes to Web 2.0. This paper presents a novel approach to service creation and delivery, with a platform that opens to non-technically skilled users the possibility to create, manage and share their own convergent (NGN-based and Web-based) services. To this end, the business approach to user-generated services is analyzed and the technological bases supporting the proposal are explained.
Abstract:
High-efficiency solar cells working under ultra-high concentration (>1000X) have been shown to be a promising solution to decrease the cost of PV electricity, increase the efficiency and circumvent the material availability restrictions for massive PV penetration. A detailed analysis of the limitations of our current triple-junction solar cell (36.2% at 700X), in the quest to maximize efficiency at 1000X, shows that the main improvements to tackle are: a) implementation of a high band gap tunnel junction; b) an increase of the band gap of the top cell; c) fine tuning of the current matching; d) enhancement of the front contact process. This constitutes our roadmap to reach an efficiency over 41%.
Abstract:
This study investigated the changes in cardiorespiratory response and running performance of 9 male "Talent Identification" (TID) and 6 male Senior Elite (SE) Spanish National Squad triathletes during a specific cycle-run test. The TID and SE triathletes (initial age 15.2±0.7 vs. 23.8±5.6 years, p=0.03; VO2max 77.0±5.6 vs. 77.8±3.6 mL·kg-1·min-1, NS) underwent three tests through the competitive period and the preparatory period, respectively, of two consecutive seasons: Test 1 was an incremental cycle test to determine the ventilatory threshold (Thvent); Test 2 (C-R) was 30 min of constant-load cycling at the Thvent power output followed by a 3-km time trial run; and Test 3 (R) was an isolated 3-km time trial control run, in randomized counterbalanced order. In both seasons the time required to complete the C-R 3-km run was greater than for R in TID (11:09±00:24 vs. 10:45±00:16 min:ss, p<0.01; and 10:24±00:22 vs. 10:04±00:14, p=0.006, for seasons 2005/06 and 2006/07, respectively) and SE (10:15±00:19 vs. 09:45±00:30, p<0.001, and 09:51±00:26 vs. 09:46±00:06, p=0.02, for seasons 2005/06 and 2006/07, respectively). Compared to the first season, completion of the time trial run was faster in the second season (6.6%, p<0.01 and 6.4%, p<0.01, for the C-R and R tests, respectively) only in TID. Changes in post-cycling run performance were accompanied by changes in pacing strategy but only slight or non-significant changes in the cardiorespiratory response. Thus, the negative effect of cycling on performance may persist, independently of the period, over two consecutive seasons in TID and SE triathletes; however, the improvements over time suggest that monitoring running pacing strategy after cycling may be a useful tool to control performance and training adaptations in TID.
Abstract:
In recent decades, university-class small satellites have created many opportunities for space research and professional training while at the same time responding to constrained budgets. In this work the main focus is on developing a simple and rapid structural sizing tool considering the main objectives of a low-cost, university-class microsatellite project. In satellite projects, the structure subsystem is one of the most influential subsystems, as a driver of the cost and the acceptance of the final design. At the first steps of such projects there is no confirmed data regarding the launch vehicle, and in some cases there is not even data for the satellite payload. For these reasons, developing simple sizing tools at the conceptual design phase, in order to obtain an overview of the effect of the different variables, is useful before entering the complex calculations of the detailed design phases. In this study, after developing a simple analytical model of the satellite structure subsystem, a design space is evaluated with practical boundaries considering the mass and dimension constraints of such projects. The results are useful to give initial insight for establishing the system-level structural sizing.
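As a rough illustration of a conceptual-phase sizing sweep (not the authors' analytical model), the following sketch lumps the primary structure into a thin-walled aluminium box column, estimates its first axial natural frequency, and sweeps wall thickness against an assumed launcher stiffness requirement:

```python
import math

def first_axial_frequency_hz(panel_thickness_m, side_m, height_m,
                             payload_mass_kg, youngs_modulus_pa=70e9,
                             density_kg_m3=2700.0):
    """Very rough conceptual-design check: treat the primary structure as a
    thin-walled aluminium box column carrying the payload mass and estimate
    its first axial natural frequency, f = sqrt(k/m) / (2*pi) with k = E*A/L.
    All material and geometry values are assumptions, not data from the paper.
    """
    area = 4.0 * side_m * panel_thickness_m            # cross-section of the 4 walls
    structure_mass = density_kg_m3 * area * height_m
    stiffness = youngs_modulus_pa * area / height_m
    mass = payload_mass_kg + structure_mass / 3.0       # lumped-mass approximation
    return math.sqrt(stiffness / mass) / (2.0 * math.pi), structure_mass

# Sweep the design space: wall thickness vs. an assumed launcher axial requirement (~90 Hz).
for t_mm in (0.5, 1.0, 1.5, 2.0):
    f_hz, m_kg = first_axial_frequency_hz(t_mm / 1000.0, side_m=0.3,
                                          height_m=0.3, payload_mass_kg=15.0)
    print(f"t={t_mm} mm  f1={f_hz:.0f} Hz  structure mass={m_kg:.2f} kg  ok={f_hz > 90.0}")
```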
Abstract:
In overhead conductor rail lines, aluminium beams are usually mounted with a support spacing of between 8 and 12 meters, to limit the maximum vertical deflection at the center of the span. This small support spacing limits the use of overhead conductor rail to tunnels, so it has been used almost exclusively in metropolitan networks, with operating speeds below 110 km/h. Nevertheless, due to the lower maintenance cost of this electrification system, some railway administrations are beginning to install it in some tunnels on long-distance lines, requiring higher operating speeds [1]. Some examples are the Barcelona and Madrid suburban networks (Spain) and recent lines in Turkey and Malaysia. In order to adapt the design of the overhead conductor for higher speeds (V > 160 km/h), particular attention must be paid to the geometry of the conductor rail in critical zones such as overlaps, crossings and, especially, transitions between conductor rail and conventional catenary, since the use of overhead conductor rail is limited to tunnels, as already mentioned. This paper describes simulation techniques developed in order to take these critical zones into account. Furthermore, some specific simulation results are presented that have been used to analyze and optimize the geometry of these special zones in order to obtain better current collection quality in a real suburban network. This paper presents work undertaken by the Railways Technology Research Centre (CITEF), which has over 10 years of experience in railway research [1-4].
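The sensitivity of mid-span deflection to support spacing can be illustrated with the classical simply supported beam formula, delta = 5*w*L^4 / (384*E*I); the stiffness and load values below are assumptions for a generic aluminium conductor rail profile, not data from the paper:

```python
def midspan_deflection_mm(span_m, load_n_per_m, e_pa=70e9, inertia_m4=1.0e-5):
    """Mid-span deflection of a simply supported beam under a uniform load,
    delta = 5*w*L^4 / (384*E*I).  E, I and w are assumed placeholder values
    for an aluminium conductor rail profile plus contact wire."""
    return 5.0 * load_n_per_m * span_m**4 / (384.0 * e_pa * inertia_m4) * 1000.0

# Why support spacing is kept between 8 and 12 m: deflection grows with L^4.
for span in (8.0, 10.0, 12.0, 16.0):
    print(f"L={span:>4} m  deflection={midspan_deflection_mm(span, load_n_per_m=80.0):.1f} mm")
```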
Abstract:
Real-time tritium concentrations in air from an ITER-like reactor source were obtained by coupling the European Centre for Medium-Range Weather Forecasts (ECMWF) numerical model with the Lagrangian atmospheric dispersion model FLEXPART. This ECMWF/FLEXPART tool was analyzed under normal operating conditions in the Western Mediterranean Basin during 45 days in the summer of 2010. Comparison with NORMTRI plumes over the Western Mediterranean Basin showed that the real-time results overestimate the corresponding climatological sequence of tritium concentrations in air at several distances from the reactor. For this purpose, two cloud development patterns were established: the first followed a cyclonic circulation over the Mediterranean Sea, and the second was based on the cloud delivered over the interior of the Iberian Peninsula by another, stabilized circulation corresponding to a high-pressure system. One of the important remaining activities defined at that point was the qualification of the tool. The aim of this paper is to present the ECMWF/FLEXPART products confronted with tritium concentration in air data. For this purpose, a database to develop and validate ECMWF/FLEXPART tritium in both assessments has been selected from a NORMTRI run. Similarities and differences, underestimation and overestimation, with respect to NORMTRI will allow for the refinement of some features of ECMWF/FLEXPART.
Abstract:
Run-of-river hydro power plants usually have little or no water storage capacity, and therefore an adequate control strategy is required to keep the water level in the pond constant. This paper presents a novel technique based on a TSK fuzzy controller to maintain a constant pond head. Its performance is investigated over a wide range of the hill curve of the hydro turbine. The results are compared with the PI controller discussed in [1].
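A minimal Takagi-Sugeno-Kang (TSK) sketch of a pond-head controller is shown below; it is not the controller of the paper, and the membership breakpoints and rule gains are assumptions chosen only to illustrate the rule-blending mechanism:

```python
def tsk_pond_head_controller(error_m, d_error_m_s):
    """Minimal TSK sketch: memberships on the head error ("negative", "zero",
    "positive") with local linear (PD-like) consequents, blended by normalized
    firing strengths, give a gate-opening command in [0, 1].
    error_m     -- pond head set point minus measured head [m]
    d_error_m_s -- rate of change of that error [m/s]
    """
    # Membership degrees (0.5 m breakpoint is an assumption).
    neg = max(0.0, min(1.0, -error_m / 0.5))   # level above set point
    pos = max(0.0, min(1.0, error_m / 0.5))    # level below set point
    zero = max(0.0, 1.0 - neg - pos)

    # Rule consequents: local linear laws, gains are assumptions.
    u_neg = 0.6 - 0.8 * error_m - 0.3 * d_error_m_s   # level too high: open the gate
    u_pos = 0.4 - 0.8 * error_m - 0.3 * d_error_m_s   # level too low: close the gate
    u_zero = 0.5                                      # nominal gate opening

    u = (neg * u_neg + pos * u_pos + zero * u_zero) / (neg + pos + zero)
    return max(0.0, min(1.0, u))   # clamp gate opening to [0, 1]

print(tsk_pond_head_controller(error_m=0.2, d_error_m_s=-0.01))
```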
Abstract:
Modern FPGAs with run-time reconfiguration allow the implementation of complex systems offering the flexibility of software-based solutions combined with the performance of hardware. This combination of characteristics, together with the development of new specific methodologies, makes it feasible to reach new points of the system design space, and makes embedded systems built on these platforms acquire more and more importance. However, the practical exploitation of this technique in fields that have traditionally relied on resource-restricted embedded systems is mainly limited by strict power consumption requirements, by cost, and by the strong dependence of DPR techniques on the specific features of the underlying device technology. In this work, we tackle the previously reported problems by designing a reconfigurable platform based on the low-cost, low-power Spartan-6 FPGA family. The full process of developing the platform is detailed in the paper from scratch. In addition, the implementation of the reconfiguration mechanism, including two profiles, is reported. The first profile is a low-area, low-speed reconfiguration engine based mainly on software functions running on the embedded processor, while the other is a hardware version of the same engine, implemented in the FPGA logic. This reconfiguration hardware block was originally designed for the Virtex-5 family, and its porting process is also described in this work, addressing the interoperability problem among different families.
Abstract:
The main objective of this work is the design and implementation of the digital control stage of a 280 W AC/DC industrial power supply in a single low-cost microcontroller, in order to replace the analog control stage. The switch-mode power supply (SMPS) consists of a PFC boost converter with fixed-frequency operation and a variable-frequency LLC series resonant DC/DC converter. The input voltage range is 85 VRMS-550 VRMS and the output voltage range is 24 V-28 V. A digital controller is especially suitable for this kind of SMPS to implement its multiple functionalities and to keep the efficiency and the performance high over the wide range of input voltages. Additional advantages of digital control are reliability and size. The optimized design and implementation of the digital control stage is presented. Experimental results show the stable operation of the controlled system and an estimation of the cost reduction achieved with the digital control stage.
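As an illustration of the kind of digital voltage loop such a microcontroller could run for the LLC stage (a positive output-voltage error lowers the switching frequency to raise the converter gain), here is a hedged sketch with assumed gains and frequency limits, not values from the paper:

```python
def make_llc_voltage_loop(kp, ki, t_s, f_nom_hz, f_min_hz, f_max_hz):
    """Discrete PI output-voltage loop sketch for an LLC series resonant converter.
    Gains, sampling period and frequency limits are assumptions."""
    state = {"integral": 0.0}

    def step(v_ref, v_out):
        error = v_ref - v_out
        state["integral"] += ki * t_s * error
        f_cmd = f_nom_hz - (kp * error + state["integral"])
        # Anti-windup: clamp the frequency command and back-calculate the integrator.
        f_sat = min(max(f_cmd, f_min_hz), f_max_hz)
        state["integral"] += f_cmd - f_sat
        return f_sat

    return step

loop = make_llc_voltage_loop(kp=2.0e3, ki=4.0e4, t_s=1.0 / 20e3,
                             f_nom_hz=120e3, f_min_hz=80e3, f_max_hz=250e3)
print(loop(24.0, 23.6))  # frequency command in Hz for a 0.4 V error
```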
Abstract:
Although several profiling techniques for identifying performance bottlenecks in logic programs have been developed, they are generally not automatic and in most cases they do not provide enough information for identifying the root causes of such bottlenecks. This complicates using their results for guiding performance improvement. We present a profiling method and tool that provides such explanations. Our profiler associates cost centers to certain program elements and can measure different types of resource-related properties that affect performance, preserving the precedence of cost centers in the call graph. It includes an automatic method for detecting procedures that are performance bottlenecks. The profiling tool has been integrated in a previously developed run-time checking framework to allow verification of certain properties when they cannot be verified statically. The approach allows checking global computational properties which require complex instrumentation tracking information about previous execution states, such as, e.g., that the execution time accumulated by a given procedure is not greater than a given bound. We have built a prototype implementation, integrated it in the Ciao/CiaoPP system and successfully applied it to performance improvement, automatic optimization (e.g., resource-aware specialization of programs), run-time checking, and debugging of global computational properties (e.g., resource usage) in Prolog programs.
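The following sketch (written in Python rather than in the Ciao/CiaoPP system) only illustrates the idea of cost centers whose accumulated cost preserves caller precedence in the call graph, together with an automatic bottleneck check against a time bound; all names are hypothetical:

```python
import time
from collections import defaultdict
from contextlib import contextmanager

# Accumulated cost keyed by (caller cost center, callee cost center), so the
# precedence of cost centers in the call graph is preserved.
_accumulated = defaultdict(float)
_stack = ["<main>"]

@contextmanager
def cost_center(name):
    caller = _stack[-1]
    _stack.append(name)
    start = time.perf_counter()
    try:
        yield
    finally:
        _accumulated[(caller, name)] += time.perf_counter() - start
        _stack.pop()

def bottlenecks(time_bound_s):
    """Automatic detection: cost-center edges whose accumulated time exceeds a
    bound, i.e. a global computational property checked at run time."""
    return [(edge, round(t, 4)) for edge, t in _accumulated.items() if t > time_bound_s]

def slow_predicate(n):
    with cost_center("slow_predicate/1"):
        sum(i * i for i in range(n))

with cost_center("main/0"):
    for _ in range(3):
        slow_predicate(200_000)

print(bottlenecks(time_bound_s=0.01))
```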