844 results for "Optimal time delay"


Relevance: 30.00%

Abstract:

As traffic congestion continues to worsen in large urban areas, solutions are urgently sought. However, transportation planning models, which estimate traffic volumes on transportation network links, are often unable to realistically consider travel time delays at intersections. Introducing signal controls in models often results in significant and unstable changes in network attributes, which, in turn, leads to instability of the models. Ignoring the effect of delays at intersections makes the model output inaccurate and unable to predict travel time. To represent traffic conditions in a network more accurately, planning models should be capable of arriving at a network solution based on travel costs that are consistent with the intersection delays due to signal controls. This research attempts to achieve this goal by optimizing signal controls and estimating intersection delays accordingly, which are then used in traffic assignment. Simultaneous optimization of traffic routing and signal controls has not been accomplished in real-world applications of traffic assignment. To this end, a delay model dealing with five major types of intersections has been developed using artificial neural networks (ANNs). An ANN architecture consists of interconnected artificial neurons and may be used either to gain an understanding of biological neural networks or to solve artificial intelligence problems without necessarily creating a model of a real biological system. The ANN delay model has been trained using extensive simulations based on TRANSYT-7F signal optimizations. The delay estimates produced by the ANN delay model have percentage root-mean-squared errors (%RMSE) below 25.6%, which is satisfactory for planning purposes. Larger prediction errors are typically associated with severely oversaturated conditions.
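As a concrete illustration of the error metric quoted above, here is a minimal sketch of a percentage root-mean-squared error computation. It assumes %RMSE is defined as the RMSE normalised by the mean observed delay (one common definition; the dissertation's exact formula is not given here), and the delay values are hypothetical:

```python
import math

def pct_rmse(predicted, observed):
    """Percentage root-mean-squared error: RMSE of the predictions,
    expressed as a percentage of the mean observed value."""
    n = len(observed)
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)
    mean_obs = sum(observed) / n
    return 100.0 * rmse / mean_obs

# hypothetical delay estimates vs. simulated ground truth (s/vehicle)
pred = [32.0, 45.0, 60.0, 28.0]
obs = [30.0, 50.0, 55.0, 30.0]
print(round(pct_rmse(pred, obs), 1))
```

A value below 25.6% by this measure would count as satisfactory under the criterion quoted in the abstract.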
A combined system has also been developed that includes the artificial neural network (ANN) delay estimating model and a user-equilibrium (UE) traffic assignment model. The combined system employs the Frank-Wolfe method to achieve a convergent solution. Because the ANN delay model provides no derivatives of the delay function, a Mesh Adaptive Direct Search (MADS) method is applied to assist in and expedite the iterative process of the Frank-Wolfe method. The performance of the combined system confirms that the convergence of the solution is achieved, although the global optimum may not be guaranteed.
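The Frank-Wolfe iteration at the heart of such a combined system can be sketched on a toy network. The two parallel links and their linear delay functions below are illustrative assumptions (the actual system uses the ANN delay model, which has no closed form and no derivatives); the all-or-nothing step and the line search are the core of the method:

```python
def fw_assignment(demand=10.0, a=(1.0, 1.5), b=(0.1, 0.05), iters=50):
    """Frank-Wolfe user-equilibrium assignment on a two-link toy network
    with illustrative linear delay functions t_i(x) = a_i + b_i * x."""
    t = lambda x: [a[i] + b[i] * x[i] for i in range(2)]
    x = [demand, 0.0]                      # initial all-or-nothing load
    for _ in range(iters):
        costs = t(x)
        # all-or-nothing assignment onto the currently cheaper link
        y = [demand, 0.0] if costs[0] <= costs[1] else [0.0, demand]
        d = [y[i] - x[i] for i in range(2)]
        # exact line search: bisect for the root of the Beckmann derivative
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2.0
            g = sum((a[i] + b[i] * (x[i] + mid * d[i])) * d[i] for i in range(2))
            if g > 0:
                hi = mid
            else:
                lo = mid
        x = [x[i] + lo * d[i] for i in range(2)]
    return x

flows = fw_assignment()
print([round(f, 2) for f in flows])
```

At equilibrium both links have equal travel times, which is the user-equilibrium condition the combined system converges to; the dissertation replaces the analytic line search with MADS because the ANN delay function is derivative-free.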

Relevance: 30.00%

Abstract:

Piotr Omenzetter and Simon Hoell's work within the Lloyd's Register Foundation Centre for Safety and Reliability Engineering at the University of Aberdeen is supported by Lloyd’s Register Foundation. The Foundation helps to protect life and property by supporting engineering-related education, public engagement and the application of research.


Relevance: 30.00%

Abstract:

Nonlinear distortion in delay-compensated spans with intermediate coupling is studied for the first time. Coupling strengths below -30 dB/100 m allow distortion reduction using shorter compensation lengths and higher delays. For higher coupling strengths, no significant penalty results from shorter compensation lengths.

Relevance: 30.00%

Abstract:

The required receiver time window after propagation through few-mode fibre is studied for a broad range of coupling and mode delay span configurations. Under intermediate coupling, effective mode delay compensation is observed for a compensation period of 25 km.

Relevance: 30.00%

Abstract:

The unprecedented and relentless growth in the electronics industry is feeding the demand for integrated circuits (ICs) with increasing functionality and performance at minimum cost and power consumption. As predicted by Moore's law, ICs are being aggressively scaled to meet this demand. While the continuous scaling of process technology is reducing gate delays, the performance of ICs is being increasingly dominated by interconnect delays. In an effort to improve submicrometer interconnect performance, to increase packing density, and to reduce chip area and power consumption, the semiconductor industry is focusing on three-dimensional (3D) integration. However, volume production and commercial exploitation of 3D integration are not feasible yet due to significant technical hurdles.

At the present time, interposer-based 2.5D integration is emerging as a precursor to stacked 3D integration. All the dies and the interposer in a 2.5D IC must be adequately tested for product qualification. However, since the structure of 2.5D ICs differs from that of traditional 2D ICs, new challenges have emerged: (1) pre-bond interposer testing, (2) lack of test access, (3) limited ability for at-speed testing, (4) high density of I/O ports and interconnects, (5) reduced number of test pins, and (6) high power consumption. This research targets the above challenges, and effective solutions have been developed to test both the dies and the interposer.

The dissertation first introduces the basic concepts of 3D ICs and 2.5D ICs. Prior work on testing of 2.5D ICs is studied. An efficient method is presented to locate defects in a passive interposer before stacking. The proposed test architecture uses e-fuses that can be programmed to connect or disconnect functional paths inside the interposer. The concept of a die footprint is utilized for interconnect testing, and the overall assembly and test flow is described. Moreover, the concept of weighted critical area is defined and utilized to reduce test time. In order to fully determine the location of each e-fuse and the order of functional interconnects in a test path, we also present a test-path design algorithm. The proposed algorithm can generate all test paths for interconnect testing.

In order to test for opens, shorts, and interconnect delay defects in the interposer, a test architecture is proposed that is fully compatible with the IEEE 1149.1 standard and relies on an enhancement of the standard test access port (TAP) controller. To reduce test cost, a test-path design and scheduling technique is also presented that minimizes a composite cost function based on test time and the design-for-test (DfT) overhead in terms of additional through silicon vias (TSVs) and micro-bumps needed for test access. The locations of the dies on the interposer are taken into consideration in order to determine the order of dies in a test path.

To address the scenario of a high density of I/O ports and interconnects, an efficient built-in self-test (BIST) technique is presented that targets the dies and the interposer interconnects. The proposed BIST architecture can be enabled by the standard TAP controller of the IEEE 1149.1 standard. The area overhead introduced by this BIST architecture is negligible; it includes two simple BIST controllers, a linear-feedback shift register (LFSR), a multiple-input signature register (MISR), and some extensions to the boundary-scan cells in the dies on the interposer. With these extensions, all boundary-scan cells can be used for self-configuration and self-diagnosis during interconnect testing. To reduce the overall test cost, a test scheduling and optimization technique under power constraints is described.
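The pattern-generation and response-compaction side of an LFSR/MISR BIST scheme can be sketched in miniature. The 4-bit registers, the x^4 + x^3 + 1 feedback polynomial, and the stuck-at interconnect fault below are all illustrative assumptions, not details from the dissertation:

```python
def lfsr_step(state):
    # 4-bit Fibonacci LFSR, feedback polynomial x^4 + x^3 + 1 (maximal length)
    fb = ((state >> 3) ^ (state >> 2)) & 1
    return ((state << 1) | fb) & 0xF

def misr_step(sig, response):
    # shift the signature register and fold in the parallel response bits
    fb = ((sig >> 3) ^ (sig >> 2)) & 1
    return (((sig << 1) | fb) ^ response) & 0xF

def bist_signature(interconnect, seed=0b1000, n_patterns=15):
    """Drive LFSR patterns through a (hypothetical) interconnect function
    and compact the responses into a MISR signature."""
    state, sig = seed, 0
    for _ in range(n_patterns):
        sig = misr_step(sig, interconnect(state))   # capture the response
        state = lfsr_step(state)                    # next pseudo-random pattern
    return sig

good = bist_signature(lambda v: v)            # fault-free wires pass v through
stuck = bist_signature(lambda v: v & 0b0111)  # line 3 stuck-at-0 fault
print(good != stuck)
```

A defective interconnect produces a signature that differs from the fault-free one (up to the usual small aliasing probability of signature compaction), which is how the BIST controller flags a failure with negligible area cost.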

In order to accomplish testing with a small number of test pins, the dissertation presents two efficient ExTest scheduling strategies that implement interconnect testing between tiles inside a system-on-chip (SoC) die on the interposer while satisfying the practical constraint that the number of required test pins cannot exceed the number of available pins at the chip level. The tiles in the SoC are divided into groups based on the manner in which they are interconnected. In order to minimize the test time, two optimization solutions are introduced. The first solution minimizes the number of input test pins, and the second minimizes the number of output test pins. In addition, two subgroup configuration methods are proposed to generate subgroups inside each test group.

Finally, the dissertation presents a programmable method for shift-clock stagger assignment to reduce power supply noise during SoC die testing in 2.5D ICs. An SoC die in a 2.5D IC is typically composed of several blocks; two neighboring blocks that share the same power rails should not toggle at the same time during shift. Therefore, the proposed programmable method does not assign the same stagger value to neighboring blocks. The positions of all blocks are first analyzed, and the shared boundary length between blocks is then calculated. Based on the positional relationships between the blocks, a mathematical model is presented to derive optimal results for small-to-medium-sized problems. For larger designs, a heuristic algorithm is proposed and evaluated.
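The constraint that neighbouring blocks must not share a stagger value is essentially a graph-colouring problem, which can be sketched with a greedy heuristic. The block layout, the boundary lengths, and the function name `assign_staggers` below are hypothetical, not the dissertation's actual model:

```python
def assign_staggers(blocks, shared_boundary, n_staggers=4):
    """Greedy shift-clock stagger assignment: blocks that share a power-rail
    boundary must not receive the same stagger value. `shared_boundary` maps
    (a, b) pairs to boundary length; 0 or absent means not neighbours."""
    neighbours = {blk: set() for blk in blocks}
    for (a, b), length in shared_boundary.items():
        if length > 0:
            neighbours[a].add(b)
            neighbours[b].add(a)
    # process blocks with the longest total shared boundary first
    order = sorted(blocks, key=lambda blk: -sum(
        shared_boundary.get((blk, n), shared_boundary.get((n, blk), 0))
        for n in neighbours[blk]))
    stagger = {}
    for blk in order:
        used = {stagger[n] for n in neighbours[blk] if n in stagger}
        stagger[blk] = next(s for s in range(n_staggers) if s not in used)
    return stagger

# hypothetical block adjacency with shared boundary lengths
layout = {("A", "B"): 5.0, ("B", "C"): 3.0, ("A", "C"): 2.0, ("C", "D"): 4.0}
s = assign_staggers(["A", "B", "C", "D"], layout)
print(s)
```

The exact mathematical model in the dissertation would replace this greedy pass for small-to-medium designs; the greedy ordering by shared boundary length is one plausible heuristic for larger ones.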

In summary, the dissertation targets important design and optimization problems related to the testing of interposer-based 2.5D ICs. The proposed research has led to theoretical insights, experimental results, and a set of test and design-for-test methods that make testing effective and feasible from a cost perspective.

Relevance: 30.00%

Abstract:

People are always at risk of making errors when they attempt to retrieve information from memory. An important question is how to create the optimal learning conditions so that, over time, the correct information is learned and the number of mistakes declines. Feedback is a powerful tool, both for reinforcing new learning and correcting memory errors. In 5 experiments, I sought to understand the best procedures for administering feedback during learning. First, I evaluated the popular recommendation that feedback is most effective when given immediately, and I showed that this recommendation does not always hold when correcting errors made with educational materials in the classroom. Second, I asked whether immediate feedback is more effective in a particular case—when correcting false memories, or strongly-held errors that may be difficult to notice even when the learner is confronted with the feedback message. Third, I examined whether varying levels of learner motivation might help to explain cross-experimental variability in feedback timing effects: Are unmotivated learners less likely to benefit from corrective feedback, especially when it is administered at a delay? Overall, the results revealed that there is no best “one-size-fits-all” recommendation for administering feedback; the optimal procedure depends on various characteristics of learners and their errors. As a package, the data are consistent with the spacing hypothesis of feedback timing, although this theoretical account does not successfully explain all of the data in the larger literature.

Relevance: 30.00%

Abstract:

With the increasing prevalence and capabilities of autonomous systems as part of complex heterogeneous manned-unmanned environments (HMUEs), an important consideration is the impact of the introduction of automation on the optimal assignment of human personnel. The US Navy implemented optimal staffing techniques in the 1990s and 2000s with a "minimal staffing" approach. The results were poor, leading to the degradation of Naval preparedness. Clearly, another approach to determining optimal staffing is necessary. To this end, the goal of this research is to develop human performance models for use in determining the optimal manning of HMUEs. The human performance models are developed using an agent-based simulation of the aircraft carrier flight deck, a representative safety-critical HMUE. The Personnel Multi-Agent Safety and Control Simulation (PMASCS) simulates and analyzes the effects of introducing generalized maintenance crew skill sets and accelerated failure repair times on the overall performance and safety of the carrier flight deck. A behavioral model of five operator types (ordnance officers, chocks and chains, fueling officers, plane captains, and maintenance operators) is presented here along with an aircraft failure model. The main focus of this work is on maintenance operators and aircraft failure modeling, since they have a direct impact on total launch time, a primary metric for carrier deck performance. With PMASCS I explore the effects of two variables on the total launch time of 22 aircraft: 1) the skill level of maintenance operators and 2) aircraft failure repair times while on the catapult (referred to as Phase 4 repair times). It is found that neither introducing a generic skill set to maintenance crews nor introducing a technology to accelerate Phase 4 aircraft repair times improves the average total launch time of 22 aircraft. An optimal manning level of 3 maintenance crews is found under all conditions, this being the point beyond which additional maintenance crews do not reduce the total launch time. An additional discussion is included of how these results change if operations are relieved of the bottleneck of installing the holdback bar at launch time.

Relevance: 30.00%

Abstract:

Although aspects of power generation of many offshore renewable devices are well understood, their dynamic responses under high wind and wave conditions are still to be investigated in great detail. Output-only statistical markers are important for these offshore devices, since access to the device is limited and information about the exposure conditions and the true behaviour of the devices is generally partial, limited, vague or even absent. The markers can summarise and characterise the behaviour of these devices from their dynamic response, available as time series data. The behaviour may be linear or nonlinear, and consequently a marker that can track changes in structural situations can be quite important. These markers can then be helpful in assessing the current condition of the structure and can indicate possible intervention, monitoring or assessment. This paper considers a Delay Vector Variance based marker for changes in a tension leg platform, tested in an ocean wave basin, for structural changes brought about by single column dampers. The approach is based on the dynamic outputs of the device alone and on the estimation of the nonlinearity of the output signal. The advantages of the selected marker and its response to changing structural properties are discussed. The marker is observed to be important for monitoring the as-deployed structural condition and is sensitive to changes in such conditions. The influence of the exposure conditions of wave loading is also discussed, based only on experimental data.
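The intuition behind a Delay Vector Variance style marker can be sketched as follows. This is a deliberately simplified, illustrative version (a single embedding dimension and neighbourhood radius, no surrogate-data comparison), not the full DVV method used in the paper:

```python
import math, random

def dvv_marker(signal, m=3, rho=0.5):
    """Simplified delay-vector-variance marker: mean variance of the
    targets of each delay vector's neighbourhood, normalised by the
    overall target variance. Near 1 suggests noise-like behaviour;
    well below 1 suggests deterministic (possibly nonlinear) structure."""
    dvs = [signal[i:i + m] for i in range(len(signal) - m)]
    targets = [signal[i + m] for i in range(len(signal) - m)]
    dist = lambda u, v: math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    # neighbourhood radius: a fraction rho of the mean inter-vector distance
    pairs = [dist(dvs[i], dvs[j])
             for i in range(len(dvs)) for j in range(i + 1, len(dvs))]
    radius = rho * sum(pairs) / len(pairs)
    var = lambda xs: sum((x - sum(xs) / len(xs)) ** 2 for x in xs) / len(xs)
    local = []
    for dv in dvs:
        hood = [targets[j] for j, other in enumerate(dvs) if dist(dv, other) <= radius]
        if len(hood) > 1:
            local.append(var(hood))
    return (sum(local) / len(local)) / var(targets)

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(200)]   # stochastic signal
det = [math.sin(0.2 * k) for k in range(200)]      # deterministic signal
print(dvv_marker(noise) > dvv_marker(det))
```

A structural change that alters the nonlinearity of the platform's response would shift this kind of marker, which is what makes it usable as an output-only damage indicator.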

Relevance: 30.00%

Abstract:

This thesis investigates the design of optimal tax systems in dynamic environments. The first essay characterizes the optimal tax system where wages depend on stochastic shocks and work experience. In addition to redistributive and efficiency motives, the taxation of inexperienced workers depends on a second-best requirement that encourages work experience, a social insurance motive and incentive effects. Calibrations using U.S. data yield higher expected optimal marginal income tax rates for experienced workers than for most inexperienced workers. They confirm that the average marginal income tax rate increases (decreases) with age when shocks and work experience are substitutes (complements). Finally, more variability in experienced workers' earnings prospects leads to increasing tax rates, since income taxation acts as a social insurance mechanism. In the second essay, the properties of an optimal tax system are investigated in a dynamic private information economy where labor market frictions create unemployment that destroys workers' human capital. A two-skill-type model is considered where wages and employment are endogenous. I find that the optimal tax system distorts the first-period wages of all workers below their efficient levels, which leads to more employment. The standard no-distortion-at-the-top result no longer holds due to the combination of private information and the destruction of human capital. I show this result analytically under the Maximin social welfare function and confirm it numerically for a general social welfare function. I also investigate the use of a training program and job creation subsidies. The final essay analyzes the optimal linear tax system when there is a population of individuals whose perceptions of savings are linked to their disposable income and their family background through family cultural transmission. Aside from the standard equity/efficiency trade-off, taxes account for the endogeneity of perceptions through two channels. First, taxing labor decreases income, which decreases the perception of savings over time. Second, taxation of savings corrects for workers' misperceptions and thus their savings and labor decisions. Numerical simulations confirm that behavioral issues push labor income taxes upward to finance saving subsidies. Government transfers to individuals are also decreased to finance those same subsidies.

Relevance: 30.00%

Abstract:

Background: Malnutrition has a negative impact on immune function, increasing susceptibility to morbidity and mortality among people living with HIV (PLWH). Evidence indicates that macro- and micronutrient deficiencies (particularly magnesium, selenium, zinc, and vitamin C) impair immune function through the progressive depletion of CD4 T-lymphocyte cells. Objective: To assess the short- and long-term effects of a nutrition-sensitive intervention to delay the progression of human immunodeficiency virus (HIV) to AIDS among people living with HIV in Abuja, Nigeria. Methods: A randomized controlled trial was carried out on 400 PLWH (adult males and females of different religious backgrounds) in Nigeria between January and December 2012. Out of these 400 participants, 100 were randomly selected for the pilot study, which took place over six months (January to June 2012). The participants in the pilot study overlapped to form part of the scale-up participants (n = 400) monitored from June to December 2012. The comparative effect of a daily 354.92 kcal/d optimized meal consumed for six and twelve months was ascertained through the nutritional status and biochemical indices of the study participants (n = 100, pilot intervention), who were and were not taking the intervention meal. The meal consisted of: Glycine max, 50 g (soya bean); Pennisetum americanum, 20 g (millet); Moringa oleifera, 15 g (moringa); Daucus carota spp. sativa, 15 g (carrot). Results: At the end of the six-month intervention, mean CD4 cell counts (cells/mm3) for the Pre-ART and ART test groups increased by 6.31% and 12.12% respectively. Mean mid-upper-arm circumference (MUAC) for the Pre-ART and ART test groups increased by 2.72% and 2.52% within the same period (n = 400). Comparatively, participants who overlapped from the pilot to the scale-up intervention (long-term use, n = 100) were assessed for 12 months. Mean CD4 cell counts (cells/mm3) for the Pre-ART and ART test groups increased by 2.21% and 12.14%. Mean MUAC for the Pre-ART and ART test groups increased by 2.08% and 3.95% respectively. Moreover, Student's t-test analysis suggests a strong association between the intervention meal, MUAC, and CD4 count on long-term use of the optimized meal in the group of participants being treated with antiretroviral therapy (ART) (P<0.05). Conclusion: The results suggest that prolonged consumption of the intervention meal can sustain the gained improvements in the anthropometric and biochemical indices of PLWH in Nigeria.

Relevance: 30.00%

Abstract:

Ground Delay Programs (GDPs) are sometimes cancelled before their initially planned end time, and as a result aircraft are delayed when it is no longer needed. Recovering this delay usually leads to extra fuel consumption, since the aircraft will typically depart after having absorbed their assigned delay on the ground and will therefore need to cruise at more fuel-consuming speeds. Past research has proposed a speed reduction strategy aimed at splitting the GDP-assigned delay between ground and airborne delay while using the same fuel as in nominal conditions. Being airborne earlier, an aircraft can speed up to its nominal cruise speed and recover part of the GDP delay without incurring extra fuel consumption if the GDP is cancelled earlier than planned. In this paper, all GDP initiatives that occurred at San Francisco International Airport during 2006 are studied and grouped by a K-means algorithm into three different clusters. The centroids of these three clusters have been used to simulate three different GDPs at the airport, using a realistic set of inbound traffic and the Future Air Traffic Management Concepts Evaluation Tool (FACET). The amount of delay that can be recovered using this cruise speed reduction technique, as a function of the GDP cancellation time, has been computed and compared with the delay recovered under the current concept of operations. Simulations have been conducted in calm wind conditions and without considering a radius of exemption. Results indicate that when aircraft depart early and fly at the slower speed, they can recover additional delay, compared to current operations where all delay is absorbed prior to take-off, in the event the GDP is cancelled early. The amount of extra delay recovered varies, and is more significant, in relative terms, for GDPs in which the demand exceeding the airport capacity is relatively low.
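The clustering step can be sketched with a plain K-means implementation. The GDP feature vectors below (duration in hours, assigned delay in minutes) are hypothetical stand-ins for whatever attributes the paper actually clustered on:

```python
import random

def kmeans(points, k=3, iters=100, seed=0):
    """Plain K-means over tuples: assign each point to its nearest
    centroid, recompute centroids as cluster means, repeat to convergence."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            d2 = [(sum((a - b) ** 2 for a, b in zip(p, c)), i)
                  for i, c in enumerate(centroids)]
            clusters[min(d2)[1]].append(p)
        new = [tuple(sum(vals) / len(c) for vals in zip(*c)) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids

# hypothetical (duration h, assigned delay min) pairs for GDP events
gdps = [(2, 20), (2.5, 25), (3, 30), (6, 60), (6.5, 55),
        (7, 70), (11, 110), (12, 120), (11.5, 100)]
cents = kmeans(gdps, k=3)
print(sorted(round(c[0], 1) for c in cents))
```

The three resulting centroids play the role of "representative GDPs", which is how the paper obtains the three scenarios it then simulates in FACET.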

Relevance: 30.00%

Abstract:

The cortisol awakening response (CAR) is typically measured in the domestic setting. Moderate sample timing inaccuracy has been shown to result in erroneous CAR estimates, and such inaccuracy partially explains inconsistency in the CAR literature. The need for more reliable measurement of the CAR has recently been highlighted in expert consensus guidelines, which point out that fewer than 6% of published studies provided electronic monitoring of saliva sampling times in the post-awakening period. Analyses of a merged data-set of published studies from our laboratory are presented. To qualify for selection, both the time of awakening and collection of the first sample must have been verified by electronic monitoring, and sampling must have commenced within 15 min of awakening. Participants (n = 128) were young (median age of 20 years) and healthy. Cortisol values were determined in the 45 min post-awakening period on 215 sampling days. On 127 days, the delay between verified awakening and collection of the first sample was less than 3 min ('no delay' group); on 45 days there was a delay of 4–6 min ('short delay' group); on 43 days the delay was 7–15 min ('moderate delay' group). Cortisol values for verified sampling times accurately mapped onto the typical post-awakening cortisol growth curve, regardless of whether sampling deviated from the desired protocol timings. This provides support for incorporating, rather than excluding, delayed data (up to 15 min) in CAR analyses. For this population the fitted cortisol growth curve equation predicted a mean cortisol awakening level of 6 nmol/l (±1 for 95% CI) and a mean CAR rise of 6 nmol/l (±2 for 95% CI). We also modelled the relationship between real delay and CAR magnitude when the CAR is calculated erroneously by incorrectly assuming adherence to protocol times. Findings supported a curvilinear hypothesis for the effects of sampling delay on the CAR. Short delays of 4–6 min between awakening and commencement of saliva sampling resulted in an overestimated CAR, whereas moderate delays of 7–15 min were associated with an underestimated CAR. Findings emphasize the need to employ electronic monitoring of sampling accuracy when measuring the CAR in the domestic setting.
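Fitting a post-awakening cortisol growth curve can be sketched with an ordinary least-squares quadratic, solved via its normal equations. The sample times and cortisol values below are hypothetical numbers chosen to echo the ~6 nmol/l awakening level and ~6 nmol/l rise reported above; the paper's actual fitted curve form is not specified here:

```python
def fit_quadratic(ts, ys):
    """Least-squares fit of c(t) = c0 + c1*t + c2*t^2 via the 3x3
    normal equations, solved with Gaussian elimination."""
    X = [[1.0, t, t * t] for t in ts]
    A = [[sum(X[r][i] * X[r][j] for r in range(len(ts))) for j in range(3)]
         for i in range(3)]
    b = [sum(X[r][i] * ys[r] for r in range(len(ts))) for i in range(3)]
    for col in range(3):                      # forward elimination with pivoting
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            A[r] = [A[r][j] - f * A[col][j] for j in range(3)]
            b[r] -= f * b[col]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                       # back substitution
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c

# hypothetical samples: minutes after awakening vs. cortisol (nmol/l)
t_min = [0, 15, 30, 45]
cort = [6.0, 9.5, 11.5, 12.0]
c0, c1, c2 = fit_quadratic(t_min, cort)
peak_t = -c1 / (2 * c2)                       # peak time of the concave quadratic
rise = (c0 + c1 * peak_t + c2 * peak_t ** 2) - c0
print(round(c0, 2), round(rise, 2))
```

The fitted intercept plays the role of the awakening level and the curve's rise to its peak plays the role of the CAR magnitude, which is the quantity the delay groups above over- or underestimate.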

Relevance: 30.00%

Abstract:

This paper compares different optimization strategies for the minimization of flight and passenger delays at two levels: pre-tactical, with on-ground delay at the origin, and tactical, with airborne delay close to the destination airport. The optimization model is based on the ground holding problem and uses various cost functions. The scenario considered takes place at a busy European airport and includes realistic traffic values. Uncertainty is introduced into the model for passenger allocation, the minimum time required for turnaround, and tactical operations. The performance of the various optimization processes is presented and compared to ratio-by-schedule results.
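The ratio-by-schedule baseline mentioned above can be sketched as first-scheduled-first-served slot allocation. The capacity (one arrival per 10-minute slot) and the flight schedule below are hypothetical:

```python
def ground_holding(scheduled, slot_interval=10):
    """First-scheduled-first-served slot allocation: each aircraft takes the
    earliest free arrival slot at or after its scheduled time, absorbing the
    difference as on-ground delay at the origin rather than airborne holding.
    Times in minutes; capacity is one arrival per slot_interval."""
    next_free = 0
    delays = []
    for t in sorted(scheduled):
        slot = max(t, next_free)
        delays.append(slot - t)
        next_free = slot + slot_interval
    return delays

# hypothetical scheduled arrival times during a capacity-reduced period
sched = [0, 2, 5, 8, 20, 21, 45]
d = ground_holding(sched)
print(d, sum(d))
```

An optimization-based strategy would instead choose the assignment that minimizes a cost function over flight and passenger delays (possibly trading some cheap ground delay for expensive airborne delay), which is what the paper evaluates against this baseline.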

Relevance: 30.00%

Abstract:

A regional cross-calibration between the first Delay-Doppler altimetry dataset from CryoSat-2 and a retracked Envisat dataset is presented here, in order to test the benefits of Delay-Doppler processing and to extend the Envisat time series in the coastal ocean. The Indonesian Seas are chosen for the calibration, since the availability of altimetry data in this region is particularly beneficial due to the lack of in-situ measurements and the region's importance for global ocean circulation. The Envisat data in the region are retracked with the Adaptive Leading Edge Subwaveform (ALES) retracker, which has previously been validated and applied successfully to coastal sea level research. The study demonstrates that CryoSat-2 is able to decrease the 1-Hz noise of sea level estimates by 0.3 cm within 50 km of the coast, when compared to the ALES-reprocessed Envisat dataset. It also shows that Envisat can be confidently used for detailed oceanographic research after the orbit change of October 2010. Cross-calibration at the crossover points indicates that, in the region of study, a sea state bias correction equal to 5% of the significant wave height is an acceptable approximation for Delay-Doppler altimetry. The analysis of the joint sea level time series reveals the geographic extent of the semiannual signal caused by Kelvin waves during the monsoon transitions, the larger amplitudes of the annual signal due to the Java Coastal Current, and the impact of the strong La Niña event of 2010 on rising sea level trends.
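The sea state bias approximation and the crossover comparison can be sketched numerically. All values below are hypothetical, and the sign convention of the correction is an illustrative assumption rather than the paper's exact processing:

```python
def apply_ssb(sla, swh, coeff=0.05):
    """Sea state bias modelled as a fixed fraction of significant wave
    height (5% of SWH, the approximation found acceptable in the study).
    The sign convention (the bias lowers the measured surface, so the
    correction adds it back) is an illustrative assumption."""
    return [h + coeff * w for h, w in zip(sla, swh)]

def crossover_bias(track_a, track_b):
    """Mean sea-level difference at matched crossover points, the basic
    statistic of a regional cross-calibration between two altimeters."""
    diffs = [a - b for a, b in zip(track_a, track_b)]
    return sum(diffs) / len(diffs)

cryosat = [0.10, 0.12, 0.08, 0.11]       # hypothetical SLA at crossovers (m)
envisat_raw = [0.02, 0.05, 0.00, 0.03]   # before sea state bias correction (m)
swh = [1.2, 1.0, 1.4, 1.1]               # significant wave height (m)
envisat = apply_ssb(envisat_raw, swh)
print(round(crossover_bias(cryosat, envisat), 3))
```

In a real cross-calibration the residual crossover bias (after all corrections) quantifies the relative offset between the two missions, allowing the Envisat time series to be merged with the CryoSat-2 one.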