13 results for out-of-school time
at Universidad Politécnica de Madrid
Abstract:
Purpose: Sustainable urban mobility policies aim to reduce car use and increase walking, cycling and public transport. However, the transfer from private car to these more sustainable modes is only a real alternative where distances are short and the public transport supply is competitive enough. This paper proposes a methodology to calculate the number of trips that can be transferred from private car to other modes in city centres. Method: The method starts by analysing which kinds of trips cannot change mode (because of purpose, conditions, safety, etc.), and then sets out a process to determine under which conditions trips made by car between given O-D pairs are transferable. The application of demand models then determines which trips fulfil the transferability conditions. The process tests the possibility of transfer sequentially: first to walking, then to cycling and finally to public transport. Results: The methodology is tested through its application to the city of Madrid (Spain), with the result that only some 18% of the trips currently made by car could be made by other modes under the same trip-time conditions and without affecting their characteristics. Of these trips, 75% could be made by public transport, 15% by cycling and 10% on foot. The mode to which trips can be transferred depends on location: city-centre areas are more favourable for walking and cycling, while the outskirts could attract more PT trips. Conclusions: The proposed method has demonstrated its validity for determining the potential for transferring trips from cars to more sustainable modes. At the same time, it is clear that, even in areas with favourable conditions for walking, cycling and PT, the potential for transfer is limited because cars better fulfil the special requirements of some trips and tours.
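A rough Python sketch of this sequential test is given below; the Trip fields, the screening flag and the equal-time-budget rule are illustrative assumptions rather than the paper's actual criteria.

```python
# Hypothetical sketch of the sequential mode-transfer test (illustrative only).
from dataclasses import dataclass

@dataclass
class Trip:
    car_time_min: float    # current door-to-door car time for the O-D pair
    walk_time_min: float   # estimated walking time
    bike_time_min: float   # estimated cycling time
    pt_time_min: float     # estimated public-transport time
    transferable: bool     # passed the purpose/conditions/safety screening

def transfer_mode(trip: Trip, extra_time_min: float = 0.0) -> str:
    """Test transfer sequentially: first walking, then cycling, then PT.
    A mode is accepted only if it keeps the trip within the same time
    budget as the car ("same conditions of trip time")."""
    if not trip.transferable:
        return "car"  # trip screened out (purpose, load, safety, ...)
    budget = trip.car_time_min + extra_time_min
    for mode, time_min in (("walk", trip.walk_time_min),
                           ("bike", trip.bike_time_min),
                           ("public_transport", trip.pt_time_min)):
        if time_min <= budget:
            return mode
    return "car"

print(transfer_mode(Trip(25.0, 60.0, 22.0, 24.0, True)))  # -> "bike"
```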
Abstract:
One major problem of the concurrent multi-path transfer (CMT) scheme in multi-homed mobile networks is that the use of different paths with diverse delays may cause packet reordering among packets of the same flow. In the case of TCP-like protocols, reordering exacerbates the problem by bringing more timeouts and unnecessary retransmissions, which eventually degrades the throughput of connections considerably. To address this issue, we first propose an Out-of-order Scheduling for In-order Arriving (OSIA) scheme, which exploits the sending-time discrepancy to preserve in-order packet arrival. Then, we formulate the optimal traffic scheduling as a constrained optimization problem and derive its closed-form solution by our proposed progressive water-filling solution. We also present an implementation that enforces the optimal scheduling scheme using cascaded leaky buckets with multiple faucets, which provides simple guidelines for maximizing the utilization of the aggregate bandwidth while decreasing the probability of triggering three dupACKs. Compared with previous work, the proposed scheme has lower computational complexity and also provides the possibility of dynamic network adaptability and finer-grain load balancing. Simulation results show that our scheme significantly alleviates reordering and enhances transmission performance.
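The in-order-arrival idea can be sketched as follows; the path delays, the round-robin path choice and the fixed packet spacing are invented for illustration, and the sketch enforces only the arrival-order constraint rather than the paper's optimal water-filling allocation.

```python
# Illustrative OSIA-style scheduler: stagger send times so that packets
# of one flow arrive in order over paths with different delays.
def osia_schedule(num_packets: int, path_delays_ms: list[float],
                  send_gap_ms: float = 1.0) -> list[tuple[int, int, float]]:
    """Return (packet_id, path_id, send_time_ms) with non-decreasing arrivals."""
    schedule = []
    last_arrival = 0.0
    t = 0.0
    for pkt in range(num_packets):
        path = pkt % len(path_delays_ms)  # simple round-robin path choice
        # postpone sending just enough for this packet to land after the
        # previous one (this is the sending-time discrepancy being exploited)
        send_time = max(t, last_arrival - path_delays_ms[path])
        last_arrival = send_time + path_delays_ms[path]
        schedule.append((pkt, path, send_time))
        t += send_gap_ms
    return schedule

# the fast path (10 ms) is held back whenever the slow path (40 ms) is in flight
print(osia_schedule(6, [10.0, 40.0]))
```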
Abstract:
Prestressed structures are susceptible to relaxation losses, which are of significant importance in structural design. After being manufactured, prestressing wires are coiled to make their storage and transportation easier. The possible deleterious effects of this operation on the stress relaxation behavior of prestressing steel wires are usually neglected, although manufacturers and contractors have noticed that when relaxation tests are carried out after long-term storage, relaxation losses are on occasion higher than those measured shortly after manufacturing. The influence of coiling on relaxation losses is checked by means of experimental work and confirmed with a simple analytical model. The results show that some factors, such as initial residual stresses, excessively long storage or storage at high temperatures, can trigger or accentuate this damage. However, it is also shown that if the requirements of the standards (minimum coiling diameters) are fulfilled, these effects can be neglected.
Abstract:
Over the last decade, Grid computing paved the way for a new level of large scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources belonging to several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large scale distributed system, inheriting and expanding the expertise and knowledge obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted, and others were simply discarded or postponed. Regardless of these technical specifics, Grids and clouds together can be considered one of the most important advances in large scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and correct analysis and understanding of the system behavior are needed. Large scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each one of them. When trying to adapt the same procedures to Grid and cloud computing, the vast complexity of these systems can make this task extremely complicated. But the complexity of large scale distributed systems may be only a matter of perspective. It could be possible to understand the behavior of a Grid or cloud as that of a single entity, instead of a set of resources. This abstraction could provide a different understanding of the system, describing large scale behavior and global events that would probably not be detected by analyzing each resource separately. In this work we define a theoretical framework that combines both ideas, multiple resources and single entity, to develop large scale distributed systems management techniques aimed at system performance optimization, increased dependability and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.
Abstract:
In this paper we develop new techniques, which we refer to as Lagrangian descriptors, for revealing geometrical structures in phase space that are valid for aperiodically time-dependent dynamical systems. These quantities are based on the integration, for a finite time, along trajectories of an intrinsic bounded, positive geometrical and/or physical property of the trajectory itself. We discuss a general methodology for constructing Lagrangian descriptors, and we discuss a "heuristic argument" that explains why this method is successful for revealing geometrical structures in the phase space of a dynamical system. We support this argument by explicit calculations on a benchmark problem having a hyperbolic fixed point with stable and unstable manifolds that are known analytically. Several other benchmark examples are considered that allow us to assess the performance of Lagrangian descriptors in revealing invariant tori and regions of shear. Throughout the paper, "side-by-side" comparisons of the performance of Lagrangian descriptors with both finite time Lyapunov exponents (FTLEs) and finite time averages of certain components of the vector field ("time averages") are carried out and discussed. In all cases Lagrangian descriptors are shown to be both more accurate and computationally efficient than these methods. We also perform computations for an explicitly three dimensional, aperiodically time-dependent vector field and for an aperiodically time-dependent vector field defined as a data set. Comparisons with FTLEs and time averages for these examples are also carried out, with similar conclusions as for the benchmark examples.
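A minimal sketch of the basic construction, assuming an arc-length descriptor on a simple autonomous Duffing-type field (the field, horizon and tolerances are illustrative choices, not the paper's benchmarks):

```python
# Illustrative arc-length Lagrangian descriptor (assumed example system).
import numpy as np
from scipy.integrate import solve_ivp

def velocity(t, xy):
    x, y = xy
    return [y, x - x**3]  # example Duffing-type field with a saddle at (0, 0)

def lagrangian_descriptor(x0, y0, tau=10.0):
    """M(x0, y0): arc length of the trajectory through (x0, y0) over [-tau, tau]."""
    def rhs(t, s):
        vx, vy = velocity(t, s[:2])
        return [vx, vy, np.hypot(vx, vy)]  # third state accumulates |v| dt
    m = 0.0
    for t_end in (tau, -tau):  # integrate forward, then backward in time
        sol = solve_ivp(rhs, (0.0, t_end), [x0, y0, 0.0], rtol=1e-8, atol=1e-10)
        m += abs(sol.y[2, -1])  # the backward run accumulates a negative value
    return m

# sharp changes of M across initial conditions flag the saddle's manifolds
print(lagrangian_descriptor(0.01, 0.0), lagrangian_descriptor(1.0, 0.0))
```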
Abstract:
In this paper, a novel approach for obtaining 3D models from video sequences captured with hand-held cameras is addressed. We define a pipeline that robustly deals with different types of sequences and acquisition devices. Our system follows a divide-and-conquer approach: after a frame decimation that pre-conditions the input sequence, the video is split into short-length clips. This allows the reconstruction step to be parallelized, which translates into a reduction in the amount of computational resources required. The short length of the clips allows an intensive search for the best solution at each step of reconstruction, which makes the system more robust. In contrast to other approaches, the feature-tracking process is embedded within the reconstruction loop for each clip. A final registration step merges all the processed clips into the same coordinate frame.
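A schematic sketch of this divide-and-conquer pipeline follows; every helper is a trivial stand-in for the actual decimation, per-clip reconstruction and registration algorithms.

```python
# Schematic of the divide-and-conquer pipeline (all steps are stand-ins).
from concurrent.futures import ProcessPoolExecutor

def decimate(frames, step=2):
    """Pre-condition the sequence (fixed-step stand-in for frame decimation)."""
    return frames[::step]

def split_into_clips(frames, clip_length):
    """Split the decimated video into short-length clips."""
    return [frames[i:i + clip_length] for i in range(0, len(frames), clip_length)]

def reconstruct_clip(clip):
    """Stand-in for per-clip reconstruction (tracking + intensive search)."""
    return {"n_frames": len(clip)}  # would return a partial 3D model

def register_clips(models):
    """Stand-in for merging partial models into one coordinate frame."""
    return models

def reconstruct_video(frames, clip_length=30):
    clips = split_into_clips(decimate(frames), clip_length)
    with ProcessPoolExecutor() as pool:  # clips are independent -> parallel
        partial_models = list(pool.map(reconstruct_clip, clips))
    return register_clips(partial_models)

if __name__ == "__main__":
    print(reconstruct_video(list(range(120))))  # two clips of 30 frames each
```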
Abstract:
Real-time tritium concentrations in air coming from an ITER-like reactor as source were obtained by coupling the European Centre for Medium-Range Weather Forecasts (ECMWF) numerical model with the Lagrangian atmospheric dispersion model FLEXPART. This ECMWF/FLEXPART tool was analyzed under normal operating conditions in the Western Mediterranean Basin during 45 days in the summer of 2010. Comparison with NORMTRI plumes over the Western Mediterranean Basin showed that the real-time results overestimate the corresponding climatological sequence of tritium concentrations in air at several distances from the reactor. For this purpose, two cloud-development patterns were established. The first followed a cyclonic circulation over the Mediterranean Sea, and the second was based on the cloud carried over the interior of the Iberian Peninsula by another, stabilized circulation corresponding to a high-pressure system. One of the important remaining activities defined at that point was the qualification of the tool. The aim of this paper is to present the ECMWF/FLEXPART products confronted with tritium concentration in air data. For this purpose, a database for developing and validating ECMWF/FLEXPART tritium in both assessments has been selected from a NORMTRI run. Similarities and differences, underestimation and overestimation with respect to NORMTRI, will allow for the refinement of some features of ECMWF/FLEXPART.
Abstract:
By combining complex network theory and data mining techniques, we provide objective criteria for the optimization of the functional network representation of generic multivariate time series. In particular, we propose a method for the principled selection of the threshold value for functional network reconstruction from raw data, and for the proper identification of the network indicators that unveil the most discriminative information on the system for classification purposes. We illustrate our method by analysing networks of functional brain activity of healthy subjects and of patients suffering from Mild Cognitive Impairment, an intermediate stage between the expected cognitive decline of normal aging and the more pronounced decline of dementia. We discuss extensions of the scope of the proposed methodology to network engineering purposes, and to other data mining tasks.
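As a minimal illustration, the sketch below reconstructs a functional network by thresholding a correlation matrix; the fixed-link-density rule is a simple stand-in for the principled threshold-selection criterion proposed here.

```python
# Correlation-threshold functional network with a fixed link density
# (the density rule is a stand-in for the paper's selection criterion).
import numpy as np
import networkx as nx

def functional_network(data: np.ndarray, density: float = 0.1) -> nx.Graph:
    """data: (channels, time) array; keeps the strongest |correlation| links."""
    corr = np.abs(np.corrcoef(data))
    np.fill_diagonal(corr, 0.0)
    n = corr.shape[0]
    n_links = int(density * n * (n - 1) / 2)
    iu = np.triu_indices(n, k=1)
    keep = np.argsort(corr[iu])[::-1][:n_links]  # strongest pairs first
    g = nx.Graph()
    g.add_nodes_from(range(n))
    g.add_edges_from(zip(iu[0][keep], iu[1][keep]))
    return g

g = functional_network(np.random.default_rng(3).standard_normal((16, 500)))
print(nx.transitivity(g), nx.average_clustering(g))  # candidate indicators
```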
Abstract:
Natural regeneration is a key ecological process that makes plant persistence possible and, consequently, constitutes an essential element of sustainable forest management. In this respect, natural regeneration in even-aged stands of Pinus pinea L. located in the Spanish Northern Plateau has not always been successfully achieved despite over a century of pine nut-based management. As a result, natural regeneration has recently become a major concern for forest managers at a time when investment in silviculture is being rationalized. The present dissertation aims to provide answers to forest managers on this topic through the development of an integral multistage regeneration model for P. pinea stands in the region. From this model, recommendations for natural regeneration-based silviculture can be derived under present and future climate scenarios. The model structure also makes it possible to detect the likely bottlenecks affecting the process. The integral model consists of five submodels corresponding to each of the subprocesses linking the stages involved in natural regeneration (seed production, seed dispersal, seed germination, seed predation and seedling survival). The outputs of the submodels represent the transitional probabilities between these stages as a function of climatic and stand variables, which in turn are representative of the ecological factors driving regeneration. At the subprocess level, the findings of this dissertation should be interpreted as follows. The scheduling of the shelterwood system currently conducted over low-density stands leads to situations of dispersal limitation from the initial stages of the regeneration period. Concerning predation, predator activity appears to be limited only by the occurrence of severe summer droughts and masting events, the summer being a favourable period for seed survival. Outside this time interval, predators were found to deplete seed crops almost totally. Given that P. pinea dissemination occurs in summer (i.e. the period safe from predation), the likelihood of a seed not being destroyed is conditional on germination occurring prior to the intensification of predator activity. However, the optimal conditions for germination seldom take place, restricting emergence to a few days during the fall. Thus, the window to reach the seedling stage is narrow. In addition, the seedling survival submodel predicts extremely high seedling mortality rates, and therefore only some individuals from large cohorts will be able to persist. These facts, along with the strong climate-mediated masting habit exhibited by P. pinea, reveal that the overall probability of establishment is low. Given this background, current management (low final stand densities resulting from intense thinning and strict felling schedules) constrains the occurrence of enough favourable events to achieve natural regeneration within the current rotation time. Stochastic simulation and optimisation computed through the integral model confirm this circumstance, suggesting that more flexible and progressive regeneration fellings should be conducted. From an ecological standpoint, these results point to a reproductive strategy leading to uneven-aged stand structures, in full accordance with the medium shade-tolerant behaviour of the species. As a final remark, stochastic simulations performed under a climate-change scenario show that regeneration of the species will not be strongly hampered in the future. This resilient behaviour highlights the fundamental ecological role played by P. pinea in demanding areas where other tree species fail to persist.
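The multistage structure can be illustrated with a toy sketch in which the overall establishment probability is the product of the five transitional probabilities; the numbers are placeholders, not estimates from the dissertation.

```python
# Toy multistage establishment model: overall probability as the product
# of the five transitional probabilities (numbers are placeholders).
import numpy as np

stage_probs = {
    "seed_production":   0.6,   # climate-mediated masting in reality
    "seed_dispersal":    0.4,
    "escape_predation":  0.2,   # higher inside the summer window
    "germination":       0.1,   # few favourable days in the fall
    "seedling_survival": 0.05,  # extremely high mortality rates
}

p_establish = float(np.prod(list(stage_probs.values())))
print(f"overall establishment probability: {p_establish:.2e}")

# stochastic simulation: established seedlings from a large seed cohort
rng = np.random.default_rng(5)
print(rng.binomial(100_000, p_establish), "seedlings from 100000 seeds")
```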
Abstract:
Recently, a new recipe for developing and deploying real-time systems has become increasingly adopted in the JET tokamak. Powered by the advent of x86 multi-core technology and the reliability of JET's well established Real-Time Data Network (RTDN) for handling all real-time I/O, an official Linux vanilla kernel has been demonstrated to be able to provide real-time performance to user-space applications that are required to meet stringent timing constraints. In particular, a careful rearrangement of the Interrupt ReQuest (IRQ) affinities together with the kernel's CPU isolation mechanism makes it possible to obtain either soft or hard real-time behavior, depending on the synchronization mechanism adopted. Finally, the Multithreaded Application Real-Time executor (MARTe) framework is used for building applications particularly optimised for exploiting multi-core architectures. In the past year, four new systems based on this philosophy have been installed and are now part of JET's routine operation. The focus of the present work is on the configuration and interconnection of the ingredients that enable these new systems' real-time capability, and on the impact that JET's distributed real-time architecture has on system engineering requirements such as algorithm testing and plant commissioning. Details are given about the common real-time configuration and development path of these systems, followed by a brief description of each system together with results regarding their real-time performance. A cycle-time jitter analysis of a user-space MARTe-based application synchronising over a network is also presented. The goal is to compare its deterministic performance while running on a vanilla kernel and on a Messaging Real-time Grid (MRG) Linux kernel.
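Two of these ingredients, CPU isolation and real-time scheduling, can be illustrated with the Linux-only sketch below; the core number and priority are arbitrary examples, and the isolcpus/IRQ-affinity setup is assumed to happen outside the process.

```python
# Linux-only sketch: pin to an isolated core and request SCHED_FIFO.
import os

ISOLATED_CPU = 3  # assumes this core was reserved at boot (e.g. isolcpus=3)

os.sched_setaffinity(0, {ISOLATED_CPU})        # pin this process to the core
os.sched_setscheduler(0, os.SCHED_FIFO,        # real-time FIFO policy;
                      os.sched_param(80))      # needs root/CAP_SYS_NICE

for _ in range(1000):  # placeholder for the real-time cycle:
    pass               # wait for sync event, run the control code, publish
```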
Abstract:
It is clear that in the near future much broader transmissions in the HF band will replace part of the current narrow-band links. Our personal view is that a true wideband signal is infeasible in this environment, because usage is typically very intensive and may suffer interference from all over the world. Therefore, we envision that dynamic multiband transmissions may provide more satisfactory performance. From the very beginning, we observed that real links with our broadband transceiver suffered interference outside our multiband transmission but within the acquisition bandwidth, degrading the expected performance. We therefore concluded that a mitigation structure is required that operates on severely saturated signals, as the interference may be of much higher power. In this paper we address a procedure based on Higher Order Crossings (HOC) statistics, which are able to extract most of the signal structure when the amplitude is severely distorted, and which allow the estimation of the interference carrier frequency in order to command a variable notch filter that mitigates its effect in the analog domain.
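The intuition can be sketched simply: a hard-clipped sinusoid keeps its zero crossings, so its carrier frequency remains estimable. The toy example below uses the plain first-order crossing count (sample rate, carrier and noise level are invented); HOC statistics generalize this by counting crossings of iterated differences of the signal.

```python
# Zero-crossing frequency estimate on a severely saturated signal (toy values).
import numpy as np

fs = 48_000.0                 # sample rate, Hz (assumed)
f_int = 3_200.0               # hypothetical interference carrier, Hz
t = np.arange(4096) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * f_int * t) + 0.05 * rng.standard_normal(t.size)
x_sat = np.clip(5.0 * x, -1.0, 1.0)    # severe amplitude saturation

signs = np.signbit(x_sat)
crossings = np.count_nonzero(signs[1:] != signs[:-1])
f_est = crossings * fs / (2 * t.size)  # a sinusoid crosses zero 2f times/s
print(f"estimated carrier: {f_est:.0f} Hz")  # close to 3200 Hz
```

In the scheme described above, such an estimate would then steer the variable notch filter in the analog domain.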
Abstract:
We envision that dynamic multiband transmissions taking advantage of receiver diversity (even for collocated antennas with different polarization or radiation patterns) will create a new paradigm for these links, guaranteeing high quality and reliability. However, there are many challenges to face regarding the use of broadband reception, where several strong out-of-band interferers (with respect to the multiband transmission), still within the acquisition band, may dramatically limit the expected performance. In this paper we address this problem by introducing a specific capability of the communication system that is able to mitigate these interferences using analog beamforming principles. Indeed, the joint Higher Order Crossings (HOC) statistics of the Single-Input Multiple-Output (SIMO) system are shown to effectively determine the angle of arrival of the wavefront, even operating over highly distorted signals.
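As a loose, sample-level illustration of direction finding from heavily distorted signals (not the paper's HOC-based estimator), the sketch below recovers the angle of arrival from the relative delay between two antennas using only the signs of the samples; geometry, sample rate and noise are hypothetical.

```python
# Toy angle-of-arrival estimate from sign-only (hard-limited) samples.
import numpy as np

c, f, fs = 3e8, 10e6, 200e6         # speed of light, carrier, sample rate
d = 10.0                            # antenna spacing, m (hypothetical)
theta_true = np.deg2rad(35.0)
delay = d * np.sin(theta_true) / c  # wavefront delay between the two antennas

n = 1 << 14
t = np.arange(n) / fs
rng = np.random.default_rng(1)
x1 = np.sign(np.sin(2 * np.pi * f * t) + 0.2 * rng.standard_normal(n))
x2 = np.sign(np.sin(2 * np.pi * f * (t - delay)) + 0.2 * rng.standard_normal(n))

# integer lag maximizing the sign cross-correlation -> delay estimate
lags = np.arange(-8, 9)
scores = [np.mean(x1[8:-8] * np.roll(x2, -k)[8:-8]) for k in lags]
delay_est = lags[int(np.argmax(scores))] / fs
theta_est = np.degrees(np.arcsin(np.clip(delay_est * c / d, -1.0, 1.0)))
print(f"true {np.degrees(theta_true):.1f} deg, estimated {theta_est:.1f} deg")
```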
Abstract:
Concession contracts in highways often include clauses (for example, a minimum traffic guarantee) that allow for better management of the business risks. The value of these clauses may be important and should be added to the total value of the concession. However, in these cases, traditional valuation techniques, like the NPV (net present value) of the project, are insufficient. An alternative methodology for the valuation of highway concessions is one based on the real options approach. This methodology is generally built on the assumption that traffic volume evolves as a GBM (geometric Brownian motion), which is the hypothesis analyzed in this paper. First, a description of the methodology used for the analysis of the existence of unit roots (i.e., the hypothesis of non-stationarity) is provided. The Dickey-Fuller approach, the most common test for this kind of analysis, has been used. This methodology is then applied to perform a statistical analysis of traffic series on Spanish toll highways. For this purpose, data on the AADT (annual average daily traffic) on a set of highways have been used, with a period of analysis of around thirty years in most cases. The main outcome of the research is that the hypothesis that traffic volume follows a GBM process on Spanish toll highways cannot be rejected. This result is robust, and it can therefore be used as a starting point for the application of real options theory to the assessment of toll highway concessions.
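The unit-root check can be sketched with the augmented Dickey-Fuller test from statsmodels; the synthetic random-walk-with-drift series in logs below stands in for the real AADT data.

```python
# ADF unit-root test on a synthetic log-AADT series (stand-in for real data).
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(7)
years = 30
# random walk with drift in logs: the discrete-time counterpart of a GBM
log_aadt = np.log(20_000) + np.cumsum(0.03 + 0.05 * rng.standard_normal(years))

stat, pvalue, *_ = adfuller(log_aadt, regression="ct")  # constant + trend
print(f"ADF statistic: {stat:.2f}, p-value: {pvalue:.3f}")
# a high p-value means the unit root (hence GBM-like behaviour) is not rejected
```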