10 results for time-interval-unit
in Aston University Research Archive
Abstract:
Lichenometry is one of many techniques now available for estimating the elapsed time since the exposure of a substratum. Its advantages include the ability to date surfaces exposed during the last 500 years, a time interval in which radiocarbon dating is least efficient, and the provision of a quick, cheap, and relatively accurate date for a substratum.
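The dating idea above can be sketched very simply, assuming a linear growth model with a hypothetical growth rate and colonization lag (real lichenometry uses calibrated, often nonlinear, growth curves for a specific species and region):

```python
# Minimal sketch of lichenometric dating (assumed linear growth model;
# the rate and lag below are illustrative, not from the abstract).

def lichenometric_age(thallus_diameter_mm, growth_rate_mm_per_yr,
                      colonization_lag_yr=10.0):
    """Elapsed time since exposure: colonization lag plus growth time."""
    return colonization_lag_yr + thallus_diameter_mm / growth_rate_mm_per_yr

# Hypothetical: a 44 mm thallus growing at 0.4 mm/yr after a 10-year lag.
print(round(lichenometric_age(44.0, 0.4), 1))  # → 120.0 years
```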
Abstract:
A study of heat pump thermodynamic characteristics was made in the laboratory on a specially designed and instrumented air-to-water heat pump system. The design, using refrigerant R12, was based on the requirement to produce domestic hot water at a temperature of about 50 °C, and the system was assembled in the laboratory. All the experimental data were fed to a microcomputer and stored on disk automatically from the appropriate transducers via amplifiers and 16-channel analogue-to-digital converters. The measurements taken were R12 pressures and temperatures, water and R12 mass flow rates, air speed, fan and compressor input powers, water and air inlet and outlet temperatures, and wet and dry bulb temperatures. The time interval between observations could be varied. The results showed, as expected, that the COP was higher at higher air inlet temperatures and at lower hot water output temperatures. The optimum air speed was found to be that at which the fan input power was about 4% of the condenser heat output. It was also found that hot water can be produced at a temperature higher than the R12 condensing temperature corresponding to the condensing pressure. This was achieved by designing the condenser to take advantage of discharge superheat and by further heating the water using heat recovered from the compressor. Of the input power to the compressor, typically about 85% was transferred to the refrigerant (50% by compression work and 35% by heating of the refrigerant by the cylinder wall), and the remaining 15% was rejected to the cooling medium. The evaporator effectiveness was found to be about 75% and sensitive to the air speed. Using the data collected, a steady-state computer model was developed.
For given input conditions (air inlet temperature, air speed, degree of suction superheat, and water inlet and outlet temperatures), the model is capable of predicting the refrigerant cycle, compressor efficiency, evaporator effectiveness, condenser water flow rate, and system COP.
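The energy bookkeeping reported above can be sketched as follows; the operating-point numbers are illustrative, not the thesis's measured data:

```python
# Sketch of the reported energy split and COP definition (illustrative
# numbers; only the percentage fractions come from the abstract).

def compressor_power_split(w_input):
    """Split compressor electrical input per the reported fractions."""
    to_refrigerant = 0.85 * w_input    # 50% compression + 35% cylinder wall
    compression_work = 0.50 * w_input
    cylinder_wall_heat = 0.35 * w_input
    rejected = 0.15 * w_input          # lost to the cooling medium
    return to_refrigerant, compression_work, cylinder_wall_heat, rejected

def cop(q_condenser, w_compressor, w_fan):
    """System COP: condenser heat output over total electrical input."""
    return q_condenser / (w_compressor + w_fan)

# Hypothetical operating point: 2 kW compressor, fan at ~4% of a 6 kW
# condenser output (the reported optimum air-speed condition).
q_cond, w_comp = 6000.0, 2000.0
w_fan = 0.04 * q_cond
print(round(cop(q_cond, w_comp, w_fan), 2))  # → 2.68
```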
Abstract:
Erbium-doped fibre amplifiers (EDFAs) are a key technology for the design of all-optical communication systems and networks. The superiority of EDFAs lies in their negligible intermodulation distortion across high-speed multichannel signals, low intrinsic losses, slow gain dynamics, and gain over a wide range of optical wavelengths. Owing to the long lifetime of the excited state, EDFAs are subject to cross-gain saturation. The time characteristics of the gain saturation and recovery effects lie between a few hundred microseconds and 10 milliseconds. However, in wavelength division multiplexed (WDM) optical networks with EDFAs, the number of channels traversing an EDFA can change because of a faulty link or a system reconfiguration. It has been found that, owing to the variation in channel number along the EDFA chain, the output powers of the surviving channels can change in a very short time. The power transient is thus one of the problems that deteriorate system performance. In this thesis, the transient phenomenon in wavelength-routed WDM optical networks with EDFA chains was investigated. The task was performed using different input signal powers for circuit-switched networks. A simulator for the EDFA gain dynamic model was developed to compute the magnitude and speed of the power transients in non-self-saturated EDFAs, both single and chained. The dynamic model of the self-saturated EDFA chain and its simulator were also developed to compute the magnitude and speed of the power transients and the optical signal-to-noise ratio (OSNR). We found that the OSNR transient magnitude and speed are a function of both the output power transient and the number of EDFAs in the chain. The OSNR value predicts the level of quality of service in the network. It was found that the power transients for self-saturated and non-self-saturated EDFAs are close in magnitude in gain-saturated EDFA networks.
Moreover, cross-gain saturation also degrades the performance of packet-switched networks because of their varying traffic characteristics. The magnitude and speed of the output power transients increase along the EDFA chain. An investigation was carried out on asynchronous transfer mode (ATM) and WDM Internet protocol (WDM-IP) traffic networks using different traffic patterns based on the Pareto and Poisson distributions. The simulator was used to examine the magnitude and speed of the power transients in Pareto- and Poisson-distributed traffic at different bit rates, with specific focus on 2.5 Gb/s. It was found from numerical and statistical analysis that the power swing increases if the burst-ON/burst-OFF time interval in the packet bursts is long. This is because the gain dynamics are fast during a strong signal pulse or a long-duration pulse, owing to the stimulated-emission avalanche depletion of the excited ions. Thus, an increase in the output power level could lead to error bursts, which affect system performance.
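The channel-drop transient described above can be illustrated with a toy lumped two-level model (all rates below are hypothetical; the thesis's simulator is far more detailed): when channels drop, the stimulated-emission load on the amplifier falls, the inversion rises on a millisecond timescale, and surviving channels see a gain and power excursion.

```python
# Toy lumped model of EDFA gain dynamics. dn2/dt combines pumping,
# spontaneous decay (upper-state lifetime ~10 ms), and signal-induced
# stimulated emission. All rate constants are illustrative assumptions.

TAU = 10e-3     # upper-state lifetime, s
W_PUMP = 250.0  # pump rate, 1/s (hypothetical)
K_SIG = 400.0   # stimulated-emission rate per mW of signal, 1/(s*mW)

def simulate(p_signal_mw, n2_0, t_end, dt=1e-5):
    """Euler-integrate dn2/dt = W_p*(1-n2) - n2/tau - k*P_s*n2."""
    n2, t = n2_0, 0.0
    while t < t_end:
        dn2 = W_PUMP * (1.0 - n2) - n2 / TAU - K_SIG * p_signal_mw * n2
        n2 += dt * dn2
        t += dt
    return n2

# Settle with 8 channels at 0.1 mW each, then drop 4 channels.
n2_full = simulate(0.8, 0.5, 0.1)        # steady state under full load
n2_after = simulate(0.4, n2_full, 0.005)  # 5 ms after the channel drop
print(n2_after > n2_full)  # inversion (hence gain) rises: True
```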
Abstract:
We investigate an application of the method of fundamental solutions (MFS) to the one-dimensional inverse Stefan problem for the heat equation by extending the MFS proposed in [5] for the one-dimensional direct Stefan problem. The sources are placed outside the space domain of interest and in the time interval (-T, T). Theoretical properties of the method, as well as numerical investigations, are included, showing that accurate and stable results can be obtained efficiently with small computational cost.
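The basic MFS mechanics can be sketched for the direct problem: the solution is approximated by a linear combination of heat-kernel fundamental solutions with sources placed outside the space domain and at times spread over (-T, T), and the coefficients are found by least-squares collocation of the boundary and initial data. All parameters (source offsets, counts, the test solution exp(x+t)) are illustrative choices, not the paper's setup:

```python
import numpy as np

T = 1.0               # time horizon; source times spread over (-T, T)
n_col, n_src = 25, 16  # collocation points per side, sources per side

def fund(x, t):
    """1-D heat-equation fundamental solution, zero for t <= 0."""
    x, t = np.broadcast_arrays(np.asarray(x, float), np.asarray(t, float))
    out = np.zeros(x.shape)
    m = t > 0
    out[m] = np.exp(-x[m] ** 2 / (4 * t[m])) / (2 * np.sqrt(np.pi * t[m]))
    return out

u_exact = lambda x, t: np.exp(x + t)  # satisfies u_t = u_xx

# Collocation: both boundaries of [0, 1] for t in (0, T], plus t = 0 data.
tc = np.linspace(T / n_col, T, n_col)
xc = np.linspace(0.0, 1.0, n_col)
col_x = np.concatenate([np.zeros(n_col), np.ones(n_col), xc])
col_t = np.concatenate([tc, tc, np.zeros(n_col)])

# Sources outside the space domain, at times spread over (-T, T).
src_y = np.concatenate([np.full(n_src, -1.0), np.full(n_src, 2.0)])
src_tau = np.tile(np.linspace(-T, T, n_src, endpoint=False), 2)

A = fund(col_x[:, None] - src_y[None, :], col_t[:, None] - src_tau[None, :])
b = u_exact(col_x, col_t)
c, *_ = np.linalg.lstsq(A, b, rcond=None)

# Evaluate the MFS approximation at an interior point.
u_mfs = fund(0.5 - src_y, 0.5 - src_tau) @ c
print(float(abs(u_mfs - u_exact(0.5, 0.5))))  # interior error
```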
Abstract:
In this paper we investigate an application of the method of fundamental solutions (MFS) to transient heat conduction. In almost all of the previously proposed MFS for time-dependent heat conduction the fictitious sources are located outside the time interval of interest. In our case, however, these sources are instead placed outside the space domain of interest in the same manner as is done for stationary heat conduction. A denseness result for this method is discussed and the method is numerically tested, showing that accurate numerical results can be obtained. Furthermore, a test example with boundary singularities shows that it is advisable to remove such singularities before applying the MFS.
Abstract:
Even simple hybrid systems like the classic bouncing ball can exhibit Zeno behaviors. The existence of this type of behavior has so far forced simulators to either ignore some events or risk looping indefinitely. This in turn forces modelers to either insert ad hoc restrictions to circumvent Zeno behavior or to abandon hybrid modeling. To address this problem, we take a fresh look at event detection and localization. A key insight that emerges from this investigation is that an enclosure for a given time interval can be valid independently of the occurrence of a given event. Such an event can then even occur an unbounded number of times, thus making it possible to handle certain types of Zeno behavior.
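The bouncing-ball Zeno behavior mentioned above is easy to exhibit numerically: the inter-bounce flight times form a geometric series, so infinitely many impact events accumulate before a finite "Zeno time". A minimal sketch, assuming an ideal ball with constant coefficient of restitution:

```python
import math

def zeno_time(h, r, g=9.81):
    """Accumulation time of the bounce events (closed-form series sum)."""
    t0 = math.sqrt(2 * h / g)          # time to first impact
    v = r * math.sqrt(2 * g * h)       # rebound speed after first impact
    return t0 + (2 * v / g) / (1 - r)  # geometric series of flight times

def zeno_time_partial(h, r, n, g=9.81):
    """Initial fall plus the first n inter-bounce flight times."""
    t = math.sqrt(2 * h / g)
    v = r * math.sqrt(2 * g * h)
    for _ in range(n):
        t += 2 * v / g
        v *= r
    return t

h, r = 1.0, 0.5  # drop height 1 m, half the speed kept per bounce
print(round(zeno_time(h, r), 4))              # → 1.3546
print(round(zeno_time_partial(h, r, 50), 4))  # converges to the same value
```

An enclosure-based simulator that is valid independently of individual bounce events can step straight over this accumulation point instead of localizing each of the infinitely many impacts.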
Abstract:
Background: The aim was to investigate the effect on the measured amplitude of accommodation, and its repeatability, of using the minus lens technique with the target at distance or near. Methods: Forty-three students (average age: 21.17 ± 1.50 years, 35 female) had their amplitude of accommodation measured with minus lenses on top of their distance correction in a trial frame, with the target at far (6.0 m) or near (0.4 m). The minus lens power was added gradually in steps of 0.25 D. Measurements were taken on two occasions at each distance, separated by a time interval of at least 24 hours. Results: The measured amplitude at six metres was significantly lower than that with the target at 40 cm, by 1.56 ± 1.17 D (p < 0.001), and this varied between individuals (r = 0.716, intraclass correlation coefficient = 0.439). With either target distance, repeated measurement was highly correlated (r > 0.9), but the agreement was better at 6.0 m (±0.74 D) than at 40 cm (±0.92 D). Conclusion: Measurements of the amplitude of accommodation with the minus lens technique using targets at far or near are not comparable, and the difference between the target distances may provide clinically relevant information. © 2013 Optometrists Association Australia.
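Why target distance matters can be sketched with the textbook scoring convention for the minus-lens technique (a hedged sketch, not necessarily this paper's exact protocol): the amplitude is taken as the dioptric demand of the target distance plus the total minus lens power overcome before sustained blur.

```python
# Textbook-convention scoring of the minus-lens amplitude (illustrative;
# the lens endpoint of -6.00 D below is a hypothetical example).

def amplitude_of_accommodation(target_distance_m, minus_lens_total_d):
    demand = 1.0 / target_distance_m          # dioptric demand of the target
    return demand + abs(minus_lens_total_d)   # lenses added in 0.25 D steps

# The same -6.00 D endpoint scored at the two target distances used above:
far = amplitude_of_accommodation(6.0, -6.00)   # ≈ 6.17 D
near = amplitude_of_accommodation(0.4, -6.00)  # = 8.50 D
print(round(near - far, 2))  # → 2.33 D of built-in proximal demand
```

This built-in 2.33 D demand difference is in the same direction as, though larger than, the 1.56 D mean difference the study reports.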
Abstract:
Background. The secondary structure of folded RNA sequences is a good model to map phenotype onto genotype, as represented by the RNA sequence. Computational studies of the evolution of ensembles of RNA molecules towards target secondary structures yield valuable clues to the mechanisms behind adaptation of complex populations. The relationship between the space of sequences and structures, the organization of RNA ensembles at mutation-selection equilibrium, the time of adaptation as a function of the population parameters, the presence of collective effects in quasispecies, or the optimal mutation rates to promote adaptation all are issues that can be explored within this framework. Results. We investigate the effect of microscopic mutations on the phenotype of RNA molecules during their in silico evolution and adaptation. We calculate the distribution of the effects of mutations on fitness, the relative fractions of beneficial and deleterious mutations and the corresponding selection coefficients for populations evolving under different mutation rates. Three different situations are explored: the mutation-selection equilibrium (optimized population) in three different fitness landscapes, the dynamics during adaptation towards a goal structure (adapting population), and the behavior under periodic population bottlenecks (perturbed population). Conclusions. The ratio between the number of beneficial and deleterious mutations experienced by a population of RNA sequences increases with the value of the mutation rate µ at which evolution proceeds. In contrast, the selective value of mutations remains almost constant, independent of µ, indicating that adaptation occurs through an increase in the amount of beneficial mutations, with little variations in the average effect they have on fitness. Statistical analyses of the distribution of fitness effects reveal that small effects, either beneficial or deleterious, are well described by a Pareto distribution. 
These results are robust under changes in the fitness landscape, notably when, in addition to selection for a target secondary structure, specific subsequences or low-energy folds are required. A population perturbed by bottlenecks behaves similarly to an adapting population, struggling to return to the optimized state. Whether it survives in the long run or goes extinct depends critically on the length of the time interval between bottlenecks. © 2010 Stich et al; licensee BioMed Central Ltd.
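The Pareto claim about small fitness effects can be illustrated with synthetic data: draw "mutation effect" magnitudes from a Pareto law and recover the shape parameter with the maximum-likelihood (Hill) estimator. The numbers below are synthetic, not the paper's data:

```python
import math, random

def pareto_sample(alpha, x_min, n, seed=1):
    """Inverse-CDF sampling: X = x_min * U^(-1/alpha), U uniform on (0, 1]."""
    rng = random.Random(seed)
    return [x_min * (1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

def pareto_mle_shape(xs, x_min):
    """Hill/MLE estimator: alpha_hat = n / sum(log(x_i / x_min))."""
    return len(xs) / sum(math.log(x / x_min) for x in xs)

xs = pareto_sample(alpha=1.5, x_min=0.01, n=20000)
print(round(pareto_mle_shape(xs, 0.01), 2))  # close to the true alpha 1.5
```

A fit of this kind (estimated shape plus a goodness-of-fit check on the tail) is the standard way to back a statement that small-effect mutations follow a Pareto distribution.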
Abstract:
Even simple hybrid automata like the classic bouncing ball can exhibit Zeno behavior. The existence of this type of behavior has so far forced a large class of simulators to either ignore some events or risk looping indefinitely. This in turn forces modelers either to insert ad hoc restrictions to circumvent Zeno behavior or to abandon hybrid automata. To address this problem, we take a fresh look at event detection and localization. A key insight that emerges from this investigation is that an enclosure for a given time interval can be valid independently of the occurrence of a given event. Such an event can then even occur an unbounded number of times. This insight makes it possible to handle some types of Zeno behavior. If the post-Zeno state is defined explicitly in the given model of the hybrid automaton, the computed enclosure covers the corresponding trajectory that starts from the Zeno point through a restarted evolution.
Abstract:
I examine the predictability of dividend cuts based on the time interval between dividend announcement dates using a large dataset of US firms from 1971 to 2014. The longer the time interval between dividend announcements, the larger the probability of a cut in the dividend per share, consistent with the view that firms delay the release of bad news.
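The reported relation (a longer gap between announcements implies a higher cut probability) is naturally modeled with a logistic specification. A minimal sketch with made-up coefficients, not the paper's estimates:

```python
import math

# Toy logit: the log-odds of a dividend cut rise linearly with the gap,
# in days, since the previous announcement. Coefficients are hypothetical.

def p_cut(gap_days, beta0=-4.0, beta1=0.02):
    """Probability of a cut under a logistic model of the announcement gap."""
    z = beta0 + beta1 * gap_days
    return 1.0 / (1.0 + math.exp(-z))

# A quarterly payer announcing on time (~91 days) vs. one delayed a month.
print(round(p_cut(91), 3), round(p_cut(121), 3))  # → 0.102 0.171
```

Under any positive slope, the delayed announcer carries the higher cut probability, matching the delay-of-bad-news interpretation.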