959 results for Packet Inter-arrival Time
Abstract:
The aerosol mass spectrometer SPLAT (Single Particle Laser Ablation Time-of-Flight Mass Spectrometer) is capable of determining the size of individual aerosol particles in the range of 0.3 µm to 3 µm while simultaneously analyzing them chemically. Particle sizing is performed by scattered-light measurement and by determining the time of flight of the particles between two continuous laser beams. Calibration measurements allow the aerodynamic diameter of the particles to be inferred. Shortly after scattered-light detection, the particles are vaporized and ionized by a high-energy pulsed UV laser. The particle time of flight between the continuous laser beams is used to calculate the arrival time of the particles in the ion source and to trigger the UV laser pulse. The resulting ions are detected in a bipolar time-of-flight mass spectrometer. Owing to laser ablation/ionization, SPLAT is also able to detect refractory components of the atmospheric aerosol, such as minerals or metals. SPLAT was completely redesigned and rebuilt during this work, including the vacuum and inlet system, the particle detection, the ion source, and the mass spectrometer. The design of SPLAT placed particular emphasis on later field deployment, which imposed special requirements on the mechanics and electronics. The individual components as well as the complete instrument were characterized under laboratory conditions. Among other things, the size-dependent detection efficiencies of the instrument were determined: for spherical particles with a diameter of 600 nm, about 2% of the particles entering the instrument were detected and chemically analyzed. SPLAT demonstrated its field capability in February/March 2006 during an international measurement campaign at the Jungfraujoch in Switzerland. At this high-alpine research station, at an altitude of about 3580 m, SPLAT found mineral and metallic components in the aerosol particles. SPLAT is a versatile instrument and, especially in combination with aerosol mass spectrometers that employ thermal vaporization and electron impact ionization, enables new insights in the analysis of atmospheric aerosol particles.
Abstract:
We have realized a data acquisition chain for the use and characterization of APSEL4D, a 32 x 128 Monolithic Active Pixel Sensor developed as a prototype for frontier experiments in high-energy particle physics. In particular, a transition board was realized for the conversion between the chip and FPGA voltage levels and for signal-quality enhancement. A Xilinx Spartan-3 FPGA was used for real-time data processing, chip control, and communication with a personal computer through a USB 2.0 port; for this purpose, firmware was developed in VHDL. Finally, a graphical user interface for online system monitoring, hit display, and chip control, based on windows and widgets, was implemented in C++ using the dedicated Qt and Qwt libraries. APSEL4D and the full acquisition chain were characterized for the first time with the electron beam of a transmission electron microscope and with 55Fe and 90Sr radioactive sources. In addition, a beam test was performed at the T9 station of the CERN PS, where hadrons with a momentum of 12 GeV/c are available. The very high time resolution of APSEL4D (up to 2.5 Mfps, although used at 6 kfps) was fundamental in realizing a single-electron Young experiment using nanometric double slits obtained by a FIB technique. On high-statistics samples, it was possible to observe the interference and diffraction of single isolated electrons traveling inside a transmission electron microscope. For the first time, information on the distribution of the arrival times of the single electrons has been extracted.
Abstract:
Pulse-wave velocity (PWV) is considered the gold-standard method to assess arterial stiffness, an independent predictor of cardiovascular morbidity and mortality. Currently available devices that measure PWV need to be operated by skilled medical staff, thus reducing the potential use of PWV in the ambulatory setting. In this paper, we present a new technique allowing continuous, unsupervised measurements of pulse transit time (PTT) in central arteries by means of a chest sensor. This technique relies on measuring the propagation time of pressure pulses from their genesis in the left ventricle to their later arrival at the cutaneous vasculature on the sternum. Combined thoracic impedance cardiography and phonocardiography are used to detect the opening of the aortic valve, from which a pre-ejection period (PEP) value is estimated. Multichannel reflective photoplethysmography at the sternum is used to detect the distal pulse-arrival time (PAT). A PTT value is then calculated as PTT = PAT - PEP. After optimizing the parameters of the chest PTT calculation algorithm on a nine-subject cohort, a prospective validation study involving 31 normo- and hypertensive subjects was performed. 1/chest PTT correlated very well with the COMPLIOR carotid-to-femoral PWV (r = 0.88, p < 10^-9). Finally, an empirical method to map chest PTT values onto chest PWV values is explored.
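The PTT = PAT - PEP computation described in this abstract can be sketched in a few lines. The code below is a hypothetical illustration with made-up timestamps: the signal-processing steps that produce PAT and PEP (ICG/PCG valve detection, PPG foot detection) are omitted, and the distance-based PWV helper is a naive stand-in for the empirical PTT-to-PWV mapping the paper explores.

```python
# Hypothetical illustration of the chest PTT computation (PTT = PAT - PEP).
# Timestamps (seconds) are assumed inputs; upstream signal processing omitted.

def pulse_transit_time(pat_s: float, pep_s: float) -> float:
    """Pulse transit time: distal pulse-arrival time minus pre-ejection period."""
    ptt = pat_s - pep_s
    if ptt <= 0:
        raise ValueError("PAT must exceed PEP")
    return ptt

def pulse_wave_velocity(path_length_m: float, ptt_s: float) -> float:
    """Naive PWV over an assumed path length; the paper fits an empirical map."""
    return path_length_m / ptt_s

# Illustrative values: PAT = 180 ms, PEP = 60 ms, assumed 0.15 m chest path
ptt = pulse_transit_time(0.180, 0.060)
pwv = pulse_wave_velocity(0.15, ptt)
```

The reported correlation of 1/PTT with PWV is consistent with this reciprocal relationship: for a fixed path length, velocity is inversely proportional to transit time.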
Abstract:
The newly developed storage assignment strategy is based on predicted dwell times and inter-arrival times of the unit loads of the product range. For each unit load arriving at the warehouse, it is calculated how many unit loads are expected to arrive during the dwell time of this current unit load and to leave the warehouse again within that dwell-time period. Depending on the current warehouse occupancy, storage slots are reserved for the unit loads arriving within that period, and only then is the incoming unit load stored in the free, non-reserved slot with the shortest travel time. The energy demand for storage and retrieval can additionally be taken into account. The forecast-based reservation method was implemented in a parameterizable simulation model alongside the common storage assignment strategies. The strategies were tested and compared in various scenarios. An additionally developed benchmark provides information on the quality of the simulation results.
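A minimal sketch of the forecast-based reservation idea, under the simplifying assumption that the expected number of arrivals during a dwell time can be estimated as dwell time divided by mean inter-arrival time. Names and data structures are illustrative, not the dissertation's implementation.

```python
# Hypothetical sketch: reserve the fastest free slots for forecast arrivals,
# then store the incoming unit load in the fastest free, non-reserved slot.

def expected_arrivals(dwell_time: float, mean_interarrival: float) -> int:
    """Unit loads expected to arrive (and leave again) during the dwell time."""
    return int(dwell_time / mean_interarrival)

def assign_slot(slots, dwell_time, mean_interarrival):
    """slots: list of (travel_time, state), state in {'free','reserved','occupied'},
    sorted by travel time. Mutates slots; returns the index chosen for storage."""
    n_reserve = expected_arrivals(dwell_time, mean_interarrival)
    free = [i for i, (_, s) in enumerate(slots) if s == "free"]
    for i in free[:n_reserve]:               # reserve the fastest free slots
        slots[i] = (slots[i][0], "reserved")
    remaining = free[n_reserve:]
    if not remaining:
        return None                          # no free slot left under this strategy
    i = remaining[0]                         # fastest free, non-reserved slot
    slots[i] = (slots[i][0], "occupied")
    return i

slots = [(t, "free") for t in (10, 12, 15, 20)]
chosen = assign_slot(slots, dwell_time=60.0, mean_interarrival=30.0)
```

With a 60-time-unit dwell and a 30-unit mean inter-arrival time, two slots are reserved for forecast arrivals and the incoming load goes to the third-fastest slot.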
Abstract:
In spring 2012 CERN provided two weeks of a short-bunch proton beam dedicated to the neutrino velocity measurement over a distance of 730 km. The OPERA neutrino experiment at the underground Gran Sasso Laboratory used an upgraded setup compared to the 2011 measurements, improving the timing accuracy of the measurement. An independent timing system based on the Resistive Plate Chambers was exploited, providing a time accuracy of ~1 ns. Neutrino and anti-neutrino contributions were separated using the information provided by the OPERA magnetic spectrometers. The new analysis profited from the precision geodesy measurements of the neutrino baseline and of the CNGS/LNGS clock synchronization. The neutrino arrival time with respect to the one computed assuming the speed of light in vacuum is found to be δt_ν ≡ TOF_c − TOF_ν = (0.6 ± 0.4 (stat.) ± 3.0 (syst.)) ns and δt_ν̄ ≡ TOF_c − TOF_ν̄ = (1.7 ± 1.4 (stat.) ± 3.1 (syst.)) ns for ν_μ and ν̄_μ, respectively. This corresponds to a limit on the muon neutrino velocity with respect to the speed of light of −1.8 × 10^−6 < (v_ν − c)/c < 2.3 × 10^−6 at 90% C.L. This new measurement confirms with higher accuracy the revised OPERA result.
Abstract:
Aberrations of the acoustic wavefront, caused by spatial variations of the speed of sound, are a main limiting factor for the diagnostic power of medical ultrasound imaging. If not accounted for, aberrations result in low resolution and an increased side-lobe level, overall reducing contrast in deep-tissue imaging. Various techniques have been proposed for quantifying aberrations by analysing the arrival time of coherent echoes from so-called guide stars or beacons. In situations where a guide star is missing, aperture-based techniques may give ambiguous results. Moreover, they are conceptually focused on aberrators that can be approximated as a phase screen in front of the probe. We propose a novel technique where the effect of aberration is detected in the reconstructed image as opposed to the aperture data. The variation of the local echo phase when changing the transmit beam steering angle directly reflects the varying arrival time of the transmit wavefront. This allows sensing the angle-dependent aberration delay in a spatially resolved way, and thus aberration correction for a spatially distributed volume aberrator. In phantoms containing a cylindrical aberrator, we achieved location-independent diffraction-limited resolution as well as accurate display of echo location, based on a spatially resolved reconstruction of the speed of sound. First successful volunteer results confirm the clinical potential of the proposed technique.
Abstract:
This paper presents an operational concept for Air Traffic Management, and in particular arrival management, in which aircraft are permitted to operate in a manner consistent with current optimal aircraft operating techniques. The proposed concept allows aircraft to descend in the fuel efficient path managed mode and with arrival time not actively controlled. It will be demonstrated how the associated uncertainty in the time dimension of the trajectory can be managed through the application of multiple metering points strategically chosen along the trajectory. The proposed concept does not make assumptions on aircraft equipage (e.g. time of arrival control), but aims at handling mixed-equipage scenarios that most likely will remain far into the next decade and arguably beyond.
Abstract:
New concepts in air navigation have been introduced recently. Among them are trajectory optimization, 4D trajectories, RBT (Reference Business Trajectory), TBO (Trajectory-Based Operations), CDA (Continuous Descent Approach) and ACDA (Advanced CDA), conflict resolution, arrival management (AMAN), and the introduction of new aircraft (UAVs, UASs) into the airspace. Although some of these concepts are new, future Air Traffic Management will maintain the four ATM key performance areas: Safety, Capacity, Efficiency, and Environmental impact. Moreover, the performance of the ATM system is directly related to the accuracy with which the future evolution of the traffic can be predicted. In this sense, future air traffic management will require a variety of support tools to provide suitable help to users and engineers involved in airspace management. Most of these tools have an appropriate trajectory prediction module as their main component, and their purpose is to test and evaluate air navigation concepts before they become fully operative. The aim of this paper is to provide an overview of the design of a software tool for estimating aircraft trajectories adapted to these air navigation concepts. Other uses of the tool, such as controller design, vertical navigation assessment, procedure validation, and hardware- and software-in-the-loop simulation, are also available. The paper shows the process followed to design the tool, the software modules needed to perform accurately, and the process followed to validate the output data.
Abstract:
Over the past few years, the common practice within air traffic management has been that commercial aircraft fly by following a set of predefined routes to reach their destination. Currently, aircraft operators are requesting more flexibility to fly according to their preferences, in order to achieve their business objectives. For this reason, much research effort is being invested in developing techniques to evaluate optimal aircraft trajectories and traffic synchronisation. A further motivation is the inefficient use of the airspace caused by relying on barometric altitude, above all in the landing and takeoff phases and in Continuous Descent Approach (CDA) trajectories, where it is currently necessary to introduce a reference setting (QNH or QFE). The interest of this research arises from solving this problem and permitting better airspace management; its main goals are to evaluate the impact, weaknesses, and strengths of using geometric altitude instead of barometric altitude. Moreover, this dissertation proposes the design of a simplified trajectory simulator able to predict aircraft trajectories. The model is based on a three-degrees-of-freedom aircraft point-mass model that can use aircraft performance data from the Base of Aircraft Data together with meteorological information. A feature of this trajectory simulator is to support the improvement of strategic and pre-tactical trajectory planning in future Air Traffic Management. To this end, the error of the tool (the aircraft Trajectory Simulator) is measured by comparing its performance variables with actual flown trajectories obtained from Flight Data Recorder information. The trajectory simulator is validated by analysing the performance of different types of aircraft on different routes. A fuel consumption estimation error was identified, and a correction is proposed for each aircraft model type.
In the future Air Traffic Management (ATM) system, the trajectory becomes the fundamental element of a new set of operating procedures collectively referred to as Trajectory-Based Operations (TBO). Thus, governmental institutions, academia, and industry have shown a renewed interest in the application of trajectory optimisation techniques in commercial aviation. The trajectory optimisation problem can be solved using optimal control methods. In this research we present and discuss the existing methods for solving optimal control problems, focusing on direct collocation, which has received recent attention from the scientific community. In particular, two families of collocation methods are analysed: Hermite-Legendre-Gauss-Lobatto collocation and pseudospectral collocation. They are first compared on a benchmark case study: the minimum-fuel trajectory problem with fixed arrival time. To assess scalability to more realistic problems, the methods are also tested on a real Airbus 319 El Cairo-Madrid flight. Results show that pseudospectral collocation, which proved numerically more accurate and computationally much faster, is suitable for the type of problems arising in trajectory optimisation with application to ATM; fast and accurate optimal trajectories can contribute to meeting the new challenges of future ATM. As atmospheric uncertainties are among the most important issues in trajectory planning, the final objective of this dissertation is to obtain an order of magnitude of how much fuel consumption differs under different atmospheric conditions. It is important to note that in the strategic planning phase the optimal trajectories are determined from meteorological predictions, which differ from the conditions at the moment of the flight.
The optimal trajectories have shown savings of at least 500 kg in the majority of atmospheric conditions (different pressure and temperature at Mean Sea Level, and different temperature lapse rates) with respect to the conventional procedure simulated under the same atmospheric conditions. These results show that the implementation of optimal profiles is beneficial under the current Air Traffic Management (ATM).
Abstract:
Fuel consumption of cars is a feature that is continuously being improved due to fuel prices and an increasing environmental awareness. This doctoral dissertation describes an optimization algorithm that reduces fuel consumption by taking into account the technical specifications of the vehicle, the terrain profile of the road, and the traffic conditions of the trip. The algorithm computes the optimal speed profile that completes a trip within a specified travel time. The calculation considers the road slope as well as the traffic conditions expected during the time window of the trip. The optimization algorithm also reacts to changing traffic conditions and continuously adapts the optimal speed profile so that the vehicle reaches its destination within the established arrival time. The optimization is applied to conventional vehicles with an internal combustion engine and to series hybrid electric vehicles (SHEV). The consumption data used by the optimization algorithm are obtained by simulating quasi-static models of the vehicles. The minimization technique employed by the algorithm is Dynamic Programming. The algorithm divides the consumption optimization into two clearly differentiated parts and applies Dynamic Programming to each of them. The first part optimizes consumption according to the traffic conditions: it calculates an average speed profile that avoids, when possible, traffic jams on the road. Travel time lost in a traffic jam must be recovered through a later increase of the average speed, which would increase the vehicle's consumption. The second part computes the optimal speed according to the terrain and the remaining travel time. Since fuel consumption increases as the time available to finish a trip decreases, this optimization uses weighting (penalty) factors to modulate the influence of each of these two variables in the minimization process. Although the weighting factors and the road terrain condition the achievable savings, the computed optimal speed profiles save fuel with respect to a constant speed profile that yields the same travel time. Simulations indicate fuel savings of up to 8.9% for the conventional vehicle and electric energy savings of up to 2.8% for the series hybrid vehicle. The algorithm merges the traffic-based optimization and the terrain-based optimization during the real-time computation of the optimal speed profile: the average speed profile resulting from the traffic-based optimization defines the values of the weighting factors of the terrain-based optimization. Although the savings of the joint optimization depend on the traffic conditions, the terrain, the travel time, and the vehicle's characteristics, simulations indicate consumption savings above 6% for both vehicle classes compared with optimizations that do not avoid traffic jams on the road.
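The fuel-versus-time trade-off weighted by penalty factors can be sketched with a toy example. The fuel model, speed grid, and numbers below are illustrative assumptions, not the dissertation's vehicle models; with independent road segments, the dynamic program collapses to a per-segment minimization of a weighted cost.

```python
# Toy sketch: choose a speed per segment minimizing fuel + alpha * time,
# where alpha is a weighting (penalty) factor trading fuel against time.

def optimize_profile(segments_km, speeds_kmh, fuel_per_km, alpha):
    """Returns (speed profile, total fuel, total time) for independent segments,
    minimizing cost = fuel + alpha * time on each segment separately."""
    profile, fuel_tot, time_tot = [], 0.0, 0.0
    for length in segments_km:
        best = min(speeds_kmh,
                   key=lambda v: length * (fuel_per_km(v) + alpha / v))
        profile.append(best)
        fuel_tot += length * fuel_per_km(best)
        time_tot += length / best
    return profile, fuel_tot, time_tot

# Illustrative convex fuel model with a consumption sweet spot near 70 km/h
fuel = lambda v: 0.04 + ((v - 70) / 200) ** 2

# alpha = 0 ignores time and picks the thriftiest speed; a larger alpha
# buys travel time at the cost of extra fuel
slow = optimize_profile([10, 5], [50, 70, 90, 110], fuel, alpha=0.0)
fast = optimize_profile([10, 5], [50, 70, 90, 110], fuel, alpha=5.0)
```

In this sketch the low-alpha profile stays at the fuel-optimal speed, while the high-alpha profile speeds up, spending more fuel to arrive earlier, which mirrors the role of the weighting factors described in the abstract.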
Abstract:
A novel pedestrian motion prediction technique is presented in this paper. Its main contribution is that no previous observation, no knowledge of pedestrian trajectories, and no set of possible destinations are required, which makes it useful for autonomous surveillance applications. Prediction only requires the initial position of the pedestrian and a 2D representation of the scenario as an occupancy grid. First, the Fast Marching Method (FMM) is used to calculate the pedestrian arrival time for each position in the map; then, the likelihood that the pedestrian reaches those positions is estimated. The technique has been tested with synthetic and real scenarios. In all cases, accurate probability maps as well as their representative graphs were obtained at low computational cost.
Abstract:
Owls and other animals, including humans, use the difference in arrival time of sounds between the ears to determine the direction of a sound source in the horizontal plane. When an interaural time difference (ITD) is conveyed by a narrowband signal such as a tone, human beings may fail to derive the direction represented by that ITD. This is because they cannot distinguish the true ITD contained in the signal from its phase equivalents that are ITD ± nT, where T is the period of the stimulus tone and n is an integer. This uncertainty is called phase-ambiguity. All ITD-sensitive neurons in birds and mammals respond to an ITD and its phase equivalents when the ITD is contained in narrowband signals. It is not known, however, if these animals show phase-ambiguity in the localization of narrowband signals. The present work shows that barn owls (Tyto alba) experience phase-ambiguity in the localization of tones delivered by earphones. We used sound-induced head-turning responses to measure the sound-source directions perceived by two owls. In both owls, head-turning angles varied as a sinusoidal function of ITD. One owl always pointed to the direction represented by the smaller of the two ITDs, whereas a second owl always chose the direction represented by the larger ITD (i.e., ITD − T).
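The ITD ± nT ambiguity described above can be illustrated with a short sketch (the stimulus values are illustrative, not the study's): for a tonal stimulus of period T, any ITD within the plausible physiological range that differs from the true ITD by a whole number of periods is indistinguishable from it.

```python
# Enumerate the phase-equivalent ITDs (ITD + n*T) of a narrowband stimulus
# that fall within a plausible range of interaural time differences.

def phase_equivalents(itd_us: float, freq_hz: float, max_itd_us: float):
    """All ITDs of the form itd + n*T inside [-max_itd, +max_itd], in µs."""
    period_us = 1e6 / freq_hz
    # n range generous enough to cover every candidate in the window
    n_min = int(-(max_itd_us + itd_us) // period_us) - 1
    n_max = int((max_itd_us - itd_us) // period_us) + 1
    candidates = [itd_us + n * period_us for n in range(n_min, n_max + 1)]
    return sorted(c for c in candidates if -max_itd_us <= c <= max_itd_us)

# A 5 kHz tone has T = 200 µs; with a true ITD of +100 µs and a ±250 µs
# range, +100 µs and -100 µs (= 100 - 200) are phase-equivalent, matching
# the two directions the owls in the study could choose between.
equivalents = phase_equivalents(100.0, 5000.0, 250.0)
```
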
Abstract:
A fast marching level set method is presented for monotonically advancing fronts, which leads to an extremely fast scheme for solving the Eikonal equation. Level set methods are numerical techniques for computing the position of propagating fronts. They rely on an initial value partial differential equation for a propagating level set function and use techniques borrowed from hyperbolic conservation laws. Topological changes, corner and cusp development, and accurate determination of geometric properties such as curvature and normal direction are naturally obtained in this setting. This paper describes a particular case of such methods for interfaces whose speed depends only on local position. The technique works by coupling work on entropy conditions for interface motion, the theory of viscosity solutions for Hamilton-Jacobi equations, and fast adaptive narrow band level set methods. The technique is applicable to a variety of problems, including shape-from-shading problems, lithographic development calculations in microchip manufacturing, and arrival time problems in control theory.
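The monotone front propagation described above can be sketched compactly: a Dijkstra-like ordering of grid cells combined with an upwind quadratic update solves the Eikonal equation |∇T| = 1/F on a uniform grid. This is a minimal first-order illustration of the fast marching idea, not the paper's optimized narrow-band implementation.

```python
import heapq

# Minimal first-order fast marching for |grad T| = 1/F on a uniform grid,
# with speed F > 0 depending only on position (spacing h).

def fast_marching(speed, sources, h=1.0):
    """speed: 2D list of positive speeds; sources: seed cells with T = 0.
    Returns the arrival-time field T."""
    ny, nx = len(speed), len(speed[0])
    INF = float("inf")
    T = [[INF] * nx for _ in range(ny)]
    frozen = [[False] * nx for _ in range(ny)]
    heap = []
    for i, j in sources:
        T[i][j] = 0.0
        heapq.heappush(heap, (0.0, i, j))

    def update(i, j):
        # smallest frozen-or-tentative neighbour value in each axis (upwind)
        tx = min(T[i][j - 1] if j > 0 else INF, T[i][j + 1] if j < nx - 1 else INF)
        ty = min(T[i - 1][j] if i > 0 else INF, T[i + 1][j] if i < ny - 1 else INF)
        a, b = sorted((tx, ty))
        f = h / speed[i][j]
        # solve the one-sided quadratic (T-a)^2 + (T-b)^2 = f^2
        if b - a >= f:
            return a + f                 # causality: only one upwind direction
        return 0.5 * (a + b + (2 * f * f - (b - a) ** 2) ** 0.5)

    while heap:
        t, i, j = heapq.heappop(heap)
        if frozen[i][j]:
            continue
        frozen[i][j] = True              # accept: the front never revisits
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and not frozen[ni][nj]:
                t_new = update(ni, nj)
                if t_new < T[ni][nj]:
                    T[ni][nj] = t_new
                    heapq.heappush(heap, (t_new, ni, nj))
    return T

# Unit speed from a corner source: arrival times approximate Euclidean distance
T = fast_marching([[1.0] * 5 for _ in range(5)], [(0, 0)])
```

The monotone-advance property is what makes the single-pass ordering valid: once a cell is frozen its value can never be improved, exactly as in Dijkstra's algorithm.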
Abstract:
We examine the event statistics obtained from two differing simplified models for earthquake faults. The first model is a reproduction of the Block-Slider model of Carlson et al. (1991), a model often employed in seismicity studies. The second model is an elastodynamic fault model based upon the Lattice Solid Model (LSM) of Mora and Place (1994). We performed simulations in which the fault length was varied in each model and generated synthetic catalogs of event sizes and times. From these catalogs, we constructed interval event-size distributions and inter-event time distributions. The larger, localised events in the Block-Slider model displayed the same scaling behaviour as events in the LSM; however, the distribution of inter-event times was markedly different. The analysis of both event-size and inter-event time statistics is an effective method for comparative studies of differing simplified models for earthquake faults.
Abstract:
This thesis considers two basic aspects of impact damage in composite materials, namely damage severity discrimination and impact damage location, using Acoustic Emission (AE) and Artificial Neural Networks (ANNs). The experimental work embodies a study of factors such as the application of AE as non-destructive damage testing (NDT) and the evaluation of ANN modelling; ANNs played an important role throughout the modelling work. In the first part of the study, different impact energies were used to produce different levels of damage in two composite materials (T300/914 and T800/5245). The impacts were detected by their acoustic emissions (AE). The AE waveform signals were analysed and modelled using a Back Propagation (BP) neural network model. The Mean Square Error (MSE) of the output was then used as a damage indicator in the damage severity discrimination study. To evaluate the ANN model, a comparison was made of the correlation coefficients of different parameters, such as MSE, AE energy, and AE counts; MSE produced an outstanding result, with the best correlation performance. In the second part, a new artificial neural network model was developed to provide impact damage location on a quasi-isotropic composite panel. It was successfully trained to locate impact sites by correlating the arrival time differences of AE signals at transducers located on the panel with the impact site coordinates. The performance of the ANN model, evaluated by calculating the distance deviation between the model output and the real location coordinates, supports the application of ANNs as impact damage location identifiers. In the study, the accuracy of location prediction decreased when approaching the central area of the panel. Further investigation indicated that this is due to the small arrival time differences there, which degrade the performance of the ANN prediction. This research suggested increasing the number of processing neurons in the ANNs as a practical solution.