879 results for "Computation time delay"

Relevance: 30.00%

Publisher:

Abstract:

Ground Delay Programs (GDPs) are sometimes cancelled before their planned end time, and as a result aircraft are delayed longer than necessary. Recovering this delay usually leads to extra fuel consumption, since aircraft typically depart after absorbing their assigned delay on the ground and must then cruise at more fuel-consuming speeds. Past research has proposed a speed reduction strategy aimed at splitting the GDP-assigned delay between ground and airborne delay while using the same fuel as in nominal conditions. Being airborne earlier, an aircraft can accelerate to its nominal cruise speed and recover part of the GDP delay without incurring extra fuel consumption if the GDP is cancelled earlier than planned. In this paper, all GDP initiatives at San Francisco International Airport during 2006 are studied and grouped by a K-means algorithm into three clusters. The centroids of these three clusters were used to simulate three different GDPs at the airport with a realistic set of inbound traffic and the Future Air Traffic Management Concepts Evaluation Tool (FACET). The amount of delay recoverable with this cruise speed reduction technique, as a function of the GDP cancellation time, was computed and compared with the delay recovered under the current concept of operations. Simulations were conducted in calm wind conditions and without considering a radius of exemption. Results indicate that, in the event the GDP cancels early, aircraft that depart early and fly at the slower speed can recover more delay than under current operations, where all delay is absorbed prior to take-off. The amount of extra delay recovered varies, and is more significant, in relative terms, for GDPs where demand exceeds airport capacity by a relatively small amount.
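As a rough illustration of the clustering step described above, a plain K-means pass over per-GDP feature vectors might look as follows. The features (planned duration, demand excess) and their values are hypothetical stand-ins, not the paper's actual data or implementation:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on 2-D feature vectors (pure Python)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                  + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        # Recompute centroids; keep the old one if a cluster went empty.
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical GDP features: (planned duration in hours, demand excess in flights)
gdps = [(2.0, 10), (2.5, 12), (8.0, 40), (7.5, 38), (4.0, 25), (4.5, 22)]
centroids, clusters = kmeans(gdps, k=3)
```

Each resulting centroid would then parameterise one simulated GDP scenario, as the paper does with FACET.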


SELECTOR is a software package for studying the evolution of multiallelic genes under balancing or positive selection while simulating complex evolutionary scenarios that integrate demographic growth and migration in a spatially explicit population framework. Parameters can be varied both in space and time to account for geographical, environmental, and cultural heterogeneity. SELECTOR can be used within an approximate Bayesian computation estimation framework. We first describe the principles of SELECTOR and validate the algorithms by comparing its outputs for simple models with theoretical expectations. Then, we show how it can be used to investigate genetic differentiation of loci under balancing selection in interconnected demes with spatially heterogeneous gene flow. We identify situations in which balancing selection reduces genetic differentiation between population groups compared with neutrality and explain conflicting outcomes observed for human leukocyte antigen loci. These results and three previously published applications demonstrate that SELECTOR is efficient and robust for building insight into human settlement history and evolution.
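The approximate Bayesian computation framework mentioned above can be sketched generically as rejection sampling: draw parameters from a prior, simulate, and keep draws whose output is close to the observed data. The toy allele-frequency model below is purely illustrative and is not SELECTOR's simulator or API:

```python
import random

def abc_rejection(observed, simulate, prior_sample, distance, eps,
                  n_tries=10000, seed=1):
    """ABC by rejection: keep parameter draws whose simulated
    summary statistic lies within eps of the observed one."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_tries):
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) <= eps:
            accepted.append(theta)
    return accepted

# Toy stand-in: estimate a selection coefficient s from a final
# allele frequency after 20 generations of noisy logistic change.
def simulate(s, rng):
    freq = 0.5
    for _ in range(20):
        freq = min(1.0, max(0.0, freq + s * freq * (1 - freq) + rng.gauss(0, 0.01)))
    return freq

post = abc_rejection(
    observed=0.7,
    simulate=simulate,
    prior_sample=lambda rng: rng.uniform(0.0, 0.2),
    distance=lambda a, b: abs(a - b),
    eps=0.02,
)
```

The accepted draws in `post` approximate the posterior over the selection coefficient; SELECTOR plays the role of `simulate` in such a pipeline.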


The cortisol awakening response (CAR) is typically measured in the domestic setting. Moderate sample timing inaccuracy has been shown to result in erroneous CAR estimates, and such inaccuracy has been shown to partially explain inconsistency in the CAR literature. The need for more reliable measurement of the CAR has recently been highlighted in expert consensus guidelines, which point out that fewer than 6% of published studies provided electronic monitoring of saliva sampling time in the post-awakening period. Analyses of a merged data-set of published studies from our laboratory are presented. To qualify for selection, both time of awakening and collection of the first sample must have been verified by electronic monitoring, and sampling must have commenced within 15 min of awakening. Participants (n = 128) were young (median age of 20 years) and healthy. Cortisol values were determined in the 45 min post-awakening period on 215 sampling days. On 127 days, the delay between verified awakening and collection of the first sample was less than 3 min (‘no delay’ group); on 45 days there was a delay of 4–6 min (‘short delay’ group); on 43 days the delay was 7–15 min (‘moderate delay’ group). Cortisol values for verified sampling times mapped accurately onto the typical post-awakening cortisol growth curve, regardless of whether sampling deviated from the desired protocol timings. This supports incorporating rather than excluding delayed data (up to 15 min) in CAR analyses. For this population the fitted cortisol growth curve equation predicted a mean cortisol awakening level of 6 nmol/l (±1 for 95% CI) and a mean CAR rise of 6 nmol/l (±2 for 95% CI). We also modelled the relationship between real delay and CAR magnitude when the CAR is calculated erroneously by incorrectly assuming adherence to protocol times. Findings supported a curvilinear hypothesis for the effect of sample delay on the CAR: short delays of 4–6 min between awakening and commencement of saliva sampling resulted in an overestimated CAR, whereas moderate delays of 7–15 min were associated with an underestimated CAR. Findings emphasize the need to employ electronic monitoring of sampling accuracy when measuring the CAR in the domestic setting.
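The erroneous-CAR calculation discussed above can be sketched numerically: if sampling actually starts `delay` minutes after awakening but the samples are treated as if taken at protocol times, both the "awakening" and "peak" readings shift along the curve. The logistic curve and its coefficients below are illustrative, not the paper's fitted growth curve, and the sign of the resulting error depends on the curve's shape:

```python
import math

# Hypothetical post-awakening cortisol curve (nmol/l), t in minutes.
def cortisol(t):
    return 6.0 + 6.0 / (1.0 + math.exp(-(t - 25.0) / 6.0))

def true_car(peak_t=45):
    """CAR with perfectly timed samples: rise from awakening to peak_t."""
    return cortisol(peak_t) - cortisol(0)

def erroneous_car(delay, peak_t=45):
    """CAR computed while wrongly assuming protocol adherence:
    both samples are shifted by the unnoticed delay."""
    return cortisol(peak_t + delay) - cortisol(delay)

# Error introduced by unnoticed delays of 0, 5, 10 and 15 minutes.
for d in (0, 5, 10, 15):
    print(d, round(erroneous_car(d) - true_car(), 2))
```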


This paper compares different optimization strategies for minimizing flight and passenger delays at two levels: pre-tactical, with ground delay at the origin airport, and tactical, with airborne delay close to the destination airport. The optimization model is based on the ground holding problem and uses various cost functions. The scenario considered is a busy European airport with realistic traffic volumes. Uncertainty is introduced into the model through passenger allocation, the minimum time required for turnaround, and tactical operations. The performance of the various optimization processes is presented and compared against ration-by-schedule results.
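The ration-by-schedule baseline that the optimization strategies are compared against can be sketched in a few lines: capacity-constrained arrival slots are handed out to flights in scheduled order, and each flight absorbs the difference as ground delay. Flight names, times and slots below are made up for illustration:

```python
def ration_by_schedule(flights, slots):
    """Assign available arrival slots to flights in scheduled order;
    the slot-minus-schedule difference is absorbed as ground delay."""
    assignment = {}
    for (name, sched), slot in zip(sorted(flights, key=lambda f: f[1]),
                                   sorted(slots)):
        assignment[name] = max(0, slot - sched)  # ground delay in minutes
    return assignment

flights = [("AA1", 0), ("BB2", 5), ("CC3", 10)]  # scheduled arrivals (min)
slots = [0, 15, 30]                              # capacity-constrained slots
print(ration_by_schedule(flights, slots))
# {'AA1': 0, 'BB2': 10, 'CC3': 20}
```

The paper's ground holding model replaces this first-scheduled-first-served rule with cost-function-driven assignments that also account for passenger itineraries.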


A regional cross-calibration between the first Delay-Doppler altimetry dataset from CryoSat-2 and a retracked Envisat dataset is presented, in order to test the benefits of Delay-Doppler processing and to extend the Envisat time series in the coastal ocean. The Indonesian Seas are chosen for the calibration, since the availability of altimetry data in this region is particularly valuable given the lack of in-situ measurements and the region's importance for global ocean circulation. The Envisat data in the region are retracked with the Adaptive Leading Edge Subwaveform (ALES) retracker, which has previously been validated and applied successfully to coastal sea level research. The study demonstrates that CryoSat-2 decreases the 1-Hz noise of sea level estimates by 0.3 cm within 50 km of the coast, compared with the ALES-reprocessed Envisat dataset. It also shows that Envisat can be used confidently for detailed oceanographic research after the orbit change of October 2010. Cross-calibration at the crossover points indicates that, in the region of study, a sea state bias correction equal to 5% of the significant wave height is an acceptable approximation for Delay-Doppler altimetry. Analysis of the joint sea level time series reveals the geographic extent of the semiannual signal caused by Kelvin waves during the monsoon transitions, the larger amplitudes of the annual signal due to the Java Coastal Current, and the impact of the strong La Niña event of 2010 on rising sea level trends.
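The 5%-of-wave-height sea state bias approximation reported above amounts to a one-line correction in the altimetric height budget. The sketch below is schematic: real processing applies several further geophysical corrections (ionosphere, wet/dry troposphere, tides) that are omitted here, and the numbers are invented:

```python
def sea_state_bias_correction(swh):
    """Sea state bias modelled as a fixed 5% of significant wave
    height (metres), the approximation found acceptable here for
    Delay-Doppler altimetry."""
    return 0.05 * swh

def sea_surface_height(altitude, observed_range, swh):
    """Schematic height budget: SSH = satellite altitude minus the
    corrected range. Only the sea state bias term is shown."""
    return altitude - (observed_range + sea_state_bias_correction(swh))

# Illustrative values in metres: 800 km orbit, 2 m significant wave height.
print(round(sea_surface_height(800000.0, 799980.0, 2.0), 6))  # 19.9
```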



Due to the variability and stochastic nature of wind power systems, accurate wind power forecasting plays an important role in developing reliable and economic power system operation and control strategies. Because wind variability is stochastic, Gaussian Process regression has recently been introduced to capture the randomness of wind energy. However, Gaussian Process regression suffers from high computational complexity and an inability to adapt to time-varying time-series systems. A variant Gaussian Process for time-series forecasting is introduced in this study to address these issues. The new method is shown to reduce computational complexity and increase prediction accuracy. It is further proved that the forecasting result converges as the number of available data points approaches infinity. A teaching-learning-based optimization (TLBO) method is then used to train the model and accelerate the learning rate. The proposed modelling and optimization method is applied to forecast both the wind power generation of Ireland and that of a single wind farm, demonstrating the effectiveness of the proposed method.
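TLBO itself is a population-based optimizer with two phases per iteration: a teacher phase that pulls learners toward the best solution and away from the population mean, and a learner phase where pairs of learners move toward whichever of the two is better. A minimal sketch on a toy sphere objective (not the paper's GP training problem) might look like this:

```python
import random

def tlbo(objective, bounds, pop_size=20, iters=100, seed=3):
    """Teaching-learning-based optimization (minimization), pure Python."""
    rng = random.Random(seed)
    lo, hi = bounds
    dim = len(lo)
    clip = lambda x: [min(hi[d], max(lo[d], x[d])) for d in range(dim)]
    pop = [[rng.uniform(lo[d], hi[d]) for d in range(dim)]
           for _ in range(pop_size)]
    for _ in range(iters):
        scores = [objective(x) for x in pop]
        teacher = pop[scores.index(min(scores))]
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i, x in enumerate(pop):
            # Teacher phase: move toward the teacher, away from the mean.
            tf = rng.choice((1, 2))  # teaching factor
            cand = clip([x[d] + rng.random() * (teacher[d] - tf * mean[d])
                         for d in range(dim)])
            if objective(cand) < objective(x):
                pop[i] = x = cand
            # Learner phase: learn from a randomly chosen classmate.
            other = pop[rng.randrange(pop_size)]
            sign = 1 if objective(x) < objective(other) else -1
            cand = clip([x[d] + rng.random() * sign * (x[d] - other[d])
                         for d in range(dim)])
            if objective(cand) < objective(x):
                pop[i] = cand
    return min(pop, key=objective)

best = tlbo(lambda x: sum(v * v for v in x), ([-5, -5], [5, 5]))
```

In the paper's setting, `objective` would instead score the variant Gaussian Process model's hyperparameters against training data.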


This report addresses delays to freight shippers. Although the focus is on just-in-time (JIT) businesses, the authors note that non-JIT businesses also suffer delays that impact their productivity. The table of contents lists the following headings:

Chapter 1 - Introduction: a trial application (the Des Moines metropolitan area); structure of the report.
Chapter 2 - Reliability at the forefront of freight transport demand: manufacturing and inventory; just-in-time operations in the U.S.; transportation consequences; summary.
Chapter 3 - JIT operations in Iowa: survey and sample; trucking activity and service; just-in-time truck transportation in Iowa; assessment of factors affecting truck transportation service; summary and conclusions.
Chapter 4 - Travel time uncertainty induced by incidents: a probabilistic model for incident occurrences and durations; calculation of delay; trial application; conclusions.
Chapter 5 - Conclusions and recommendations.
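In the spirit of the probabilistic incident model in Chapter 4, a minimal Monte Carlo sketch of incident-induced delay treats incident occurrences as a Poisson process and incident durations as exponential; all rates and durations below are assumed example values, not the report's calibrated parameters:

```python
import random

def expected_incident_delay(rate_per_hour, mean_duration_min, window_h=1.0,
                            trials=20000, seed=7):
    """Monte Carlo estimate of expected extra delay on a trip:
    incidents arrive as a Poisson process during the travel window,
    and each adds an exponentially distributed delay."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Count incidents hit during the travel window.
        n, t = 0, rng.expovariate(rate_per_hour)
        while t < window_h:
            n += 1
            t += rng.expovariate(rate_per_hour)
        total += sum(rng.expovariate(1.0 / mean_duration_min) for _ in range(n))
    return total / trials

# With 0.5 incidents/hour and a 10-min mean delay each, the expected
# extra delay over a 1-hour trip is about 0.5 * 10 = 5 minutes.
d = expected_incident_delay(0.5, 10.0)
```

The spread of the simulated delays, not just their mean, is what drives the travel time *uncertainty* that JIT shippers care about.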


Thesis (Master's)--University of Washington, 2016-08


We present Dithen, a novel computation-as-a-service (CaaS) cloud platform specifically tailored to the parallel execution of large-scale multimedia tasks. Dithen handles the upload/download of both multimedia data and executable items, the assignment of compute units to multimedia workloads, and the reactive control of the available compute units to minimize the cloud infrastructure cost under deadline-abiding execution. Dithen combines three key properties: (i) the reactive assignment of individual multimedia tasks to available compute units according to availability and predetermined time-to-completion constraints; (ii) optimal resource estimation based on Kalman-filter estimates; (iii) the use of additive-increase multiplicative-decrease (AIMD) algorithms (well known as the congestion-control mechanism of the Transmission Control Protocol) for controlling the number of units servicing workloads. The deployment of Dithen over Amazon EC2 spot instances is shown to be capable of processing more than 80,000 video transcoding, face detection and image processing tasks (equivalent to processing more than 116 GB of compressed data) for less than $1 in EC2 billing cost. Moreover, the proposed AIMD-based control mechanism, in conjunction with the Kalman estimates, is shown to provide more than a 27% reduction in EC2 spot instance cost against methods based on reactive resource estimation. Finally, Dithen is shown to offer a 38% to 500% reduction in billing cost against the current state of the art in CaaS platforms on Amazon EC2 (Amazon Lambda and Amazon Autoscale). A baseline version of Dithen is currently available at dithen.com.
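The AIMD rule referenced above has a very small core: grow the resource pool additively while deadlines are met, shrink it multiplicatively when a violation occurs (or is predicted). The sketch below uses illustrative parameter values and omits the Kalman-filter prediction that Dithen couples it with:

```python
def aimd_controller(units, deadline_ok, add=1, beta=0.5, min_units=1):
    """One AIMD control step over a pool of compute units:
    additive increase while deadlines are met, multiplicative
    decrease on a deadline violation."""
    if deadline_ok:
        return units + add
    return max(min_units, int(units * beta))

units = 8
trace = []
for ok in (True, True, False, True, True, True):
    units = aimd_controller(units, ok)
    trace.append(units)
print(trace)  # [9, 10, 5, 6, 7, 8]
```

The same sawtooth behaviour that lets TCP probe for bandwidth lets the controller probe for the cheapest pool size that still meets deadlines.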


In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs as well as the continuous stream of information produced by the entities in these graphs make them dynamic in nature. Examples include social networks where users post status updates, images, videos, etc.; phone call networks where nodes may send text messages or place phone calls; road traffic networks where the traffic behavior of the road segments changes constantly, and so on. There is a tremendous value in storing, managing, and analyzing such dynamic graphs and deriving meaningful insights in real-time. However, a majority of the work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on the analysis tasks, and the challenges in building efficient systems that can support such tasks at a large scale. In this dissertation, I design a unified streaming graph data management framework, and develop prototype systems to support increasingly complex tasks on dynamic graphs. In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate, and whether to do eager or lazy replication in order to minimize network communication and support low-latency querying. In the second part, I study parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhoods. 
I build my system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables sharing of partial aggregates across different queries, and also allows partial pre-computation of the aggregates to minimize the query latencies and increase throughput. Finally, I extend the framework to support continuous detection and analysis of activity-based subgraphs, where subgraphs could be specified using both graph structure as well as activity conditions on the nodes. The query specification tasks in my system are expressed using a set of active structural primitives, which allows the query evaluator to use a set of novel optimization techniques, thereby achieving high throughput. Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
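The idea of maintaining neighborhood aggregates incrementally, so queries read precomputed values instead of rescanning neighbors, can be sketched very simply. This toy class is illustrative only and is far simpler than the dissertation's aggregation overlay graph (no sharing of partials across queries, no distribution):

```python
from collections import defaultdict

class NeighborhoodAggregator:
    """Each node keeps a running count of stream events in its 1-hop
    neighborhood, updated as events arrive, so a query is a lookup."""
    def __init__(self, edges):
        self.neighbors = defaultdict(set)
        for u, v in edges:
            self.neighbors[u].add(v)
            self.neighbors[v].add(u)
        self.agg = defaultdict(int)

    def on_event(self, node):
        # An event at `node` contributes to its own aggregate
        # and to every neighbor's aggregate.
        self.agg[node] += 1
        for n in self.neighbors[node]:
            self.agg[n] += 1

    def query(self, node):
        return self.agg[node]

g = NeighborhoodAggregator([("a", "b"), ("b", "c")])
for ev in ("a", "b", "b", "c"):
    g.on_event(ev)
print(g.query("b"))  # 4
```

Pushing the update cost to event time is the trade-off that buys low query latency; the overlay graph generalizes this by sharing partial aggregates across overlapping queries.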


We present new methodologies to generate rational function approximations of broadband electromagnetic responses of linear and passive networks of high-speed interconnects, and to construct SPICE-compatible, equivalent circuit representations of the generated rational functions. These new methodologies are driven by the desire to improve the computational efficiency of the rational function fitting process, and to ensure enhanced accuracy of the generated rational function interpolation and its equivalent circuit representation. Toward this goal, we propose two new methodologies for rational function approximation of high-speed interconnect network responses. The first one relies on the use of both time-domain and frequency-domain data, obtained either through measurement or numerical simulation, to generate a rational function representation that extrapolates the input, early-time transient response data to late-time response while at the same time providing a means to both interpolate and extrapolate the used frequency-domain data. The aforementioned hybrid methodology can be considered as a generalization of the frequency-domain rational function fitting utilizing frequency-domain response data only, and the time-domain rational function fitting utilizing transient response data only. In this context, a guideline is proposed for estimating the order of the rational function approximation from transient data. The availability of such an estimate expedites the time-domain rational function fitting process. The second approach relies on the extraction of the delay associated with causal electromagnetic responses of interconnect systems to provide for a more stable rational function process utilizing a lower-order rational function interpolation. A distinctive feature of the proposed methodology is its utilization of scattering parameters. For both methodologies, the approach of fitting the electromagnetic network matrix one element at a time is applied. 
It is shown that, with regard to the computational cost of the rational function fitting process, such an element-by-element rational function fitting is more advantageous than full matrix fitting for systems with a large number of ports. Despite the disadvantage that different sets of poles are used in the rational function of different elements in the network matrix, such an approach provides for improved accuracy in the fitting of network matrices of systems characterized by both strongly coupled and weakly coupled ports. Finally, in order to provide a means for enforcing passivity in the adopted element-by-element rational function fitting approach, the methodology for passivity enforcement via quadratic programming is modified appropriately for this purpose and demonstrated in the context of element-by-element rational function fitting of the admittance matrix of an electromagnetic multiport.
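A small worked example helps make the fitting step concrete: once the poles of a rational approximation are fixed, fitting the residues to sampled response data is a linear least-squares problem. The two-pole toy below (poles, residues and sample points all invented) solves the normal equations by hand; real rational function fitting also iterates on the pole locations and works with complex frequency data:

```python
# Fit residues of f(s) = r1/(s - p1) + r2/(s - p2) with poles fixed.
p1, p2 = -1.0, -3.0
r1_true, r2_true = 2.0, -0.5

s_vals = [0.5 * k for k in range(1, 21)]              # real-axis samples
f_vals = [r1_true / (s - p1) + r2_true / (s - p2) for s in s_vals]

# Normal equations (A^T A) x = A^T b for basis 1/(s-p1), 1/(s-p2).
a11 = sum((1 / (s - p1)) ** 2 for s in s_vals)
a12 = sum((1 / (s - p1)) * (1 / (s - p2)) for s in s_vals)
a22 = sum((1 / (s - p2)) ** 2 for s in s_vals)
b1 = sum(f / (s - p1) for s, f in zip(s_vals, f_vals))
b2 = sum(f / (s - p2) for s, f in zip(s_vals, f_vals))

det = a11 * a22 - a12 * a12
r1_fit = (a22 * b1 - a12 * b2) / det
r2_fit = (a11 * b2 - a12 * b1) / det
print(round(r1_fit, 6), round(r2_fit, 6))  # 2.0 -0.5
```

Element-by-element fitting applies exactly this kind of small solve once per matrix entry, which is why it scales better than fitting the full matrix with a shared pole set for many-port systems.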


Neoadjuvant chemoradiation (CRT) followed by curative surgery remains the standard of care for locally advanced rectal cancer (LARC). The main purpose of this multimodal treatment is to achieve a complete pathological tumor response (ypCR), which is associated with better survival. Delaying surgery after CRT completion seems to increase tumor response and the ypCR rate. Time intervals usually range from 8 to 12 weeks, but maximum tumor regression may not be seen in rectal adenocarcinomas until several months after CRT. On this issue, we report the case of a 52-year-old man with LARC treated with neoadjuvant CRT who developed an acute myocardial infarction one month after CRT completion. The need to extend the interval between CRT and surgery to 17 weeks allowed a curative surgery without morbidity and an unexpected complete tumor response in the resected specimen (given the parameters presented in the pelvic magnetic resonance imaging (MRI) performed 11 weeks after radiotherapy completion).


New-generation embedded systems demand high performance, efficiency and flexibility. Reconfigurable hardware can provide all these features; however, the costly reconfiguration process and the lack of management support have prevented broader use of these resources. To address these issues we have developed a scheduler that deals with task graphs at run time, steering their execution on the reconfigurable resources while applying prefetch and replacement techniques that cooperate to hide most of the reconfiguration delays. In our scheduling environment, task graphs are analyzed at design time to extract useful information. This information is used at run time to obtain near-optimal schedules, escaping local-optimum decisions while carrying out only simple computations. Moreover, we have developed a hardware implementation of the scheduler that applies all the optimization techniques while introducing a delay of only a few clock cycles. In our experiments the scheduler clearly outperforms conventional run-time schedulers based on As-Soon-As-Possible techniques. In addition, our replacement policy, specially designed for reconfigurable systems, achieves almost optimal results regarding both reuse and performance.
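One way design-time task-graph knowledge helps a replacement policy is that the upcoming sequence of needed configurations is (approximately) known, enabling Belady-style eviction. The sketch below is a hypothetical illustration of that idea, not the paper's exact policy:

```python
def choose_victim(loaded, upcoming):
    """Evict the loaded configuration whose next use is furthest in
    the future; configurations never reused are evicted first."""
    def next_use(cfg):
        try:
            return upcoming.index(cfg)
        except ValueError:
            return len(upcoming) + 1  # not needed again: ideal victim
    return max(loaded, key=next_use)

# Reconfigurable units currently hold configurations A, B, C; the
# schedule extracted from the task graph needs B next, then A.
victim = choose_victim(["A", "B", "C"], ["B", "A"])
print(victim)  # C
```

Evicting the configuration with the most distant reuse maximizes the chance that a later task finds its configuration already loaded, which is exactly the reuse the paper's replacement policy targets.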


Thermal and air conditions inside animal housing facilities change throughout the day because of the influence of the external environment. For statistical and geostatistical analyses to be representative, a large number of spatially distributed points across the facility must be monitored. This work proposes that the variation in time of the environmental variables of interest for animal production, monitored inside animal housing facilities, can be modeled accurately from records that are discrete in time. The objective of this work was to develop a numerical method to correct the temporal variations of these environmental variables, transforming the data so that the observations become independent of the time spent during measurement. The proposed method adjusts the values recorded with time delays to the values expected at the exact moment of interest, as if the data had been measured simultaneously at that moment at all spatially distributed points. The numerical correction model was validated for the environmental parameter air temperature: the values corrected by the method did not differ (Tukey test, 5% probability) from the actual values recorded by data loggers.
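The core of such a correction can be sketched as shifting each delayed reading back to a common reference instant using a local rate of change. The linear model and the temperature values below are illustrative assumptions; the paper's actual numerical method may use a different formulation:

```python
def local_rate(t0, v0, t1, v1):
    """Rate of change (units/minute) estimated from two readings."""
    return (v1 - v0) / (t1 - t0)

def correct_to_reference(t_read, value, t_ref, rate):
    """Shift a reading taken at time t_read (minutes) to the common
    reference instant t_ref using the local rate of change."""
    return value + rate * (t_ref - t_read)

# Air temperature read at minute 12 of the measurement round, while the
# reference instant is minute 0; a rate of 0.05 degC/min is estimated
# from two readings at the same point (hypothetical values).
rate = local_rate(0, 24.0, 12, 24.6)
print(round(correct_to_reference(12, 24.6, 0, rate), 6))  # 24.0
```

Applying such a correction to every monitored point makes the spatial dataset behave as if all points had been read simultaneously, which is the premise of the geostatistical analysis.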