4 results for Low intensity level lasertherapy (LILT)
in CaltechTHESIS
Abstract:
Forced vibration field tests and finite element studies have been conducted on Morrow Point (arch) Dam to investigate dynamic dam-water interaction and water compressibility. The design of the data acquisition system incorporates several special features to retrieve both the amplitude and phase of the response in a low signal-to-noise environment. These features contributed to the success of the experimental program, which, for the first time, produced field evidence of water compressibility; this effect appears to play a significant role only in the symmetric response of Morrow Point Dam in the frequency range examined. In the accompanying analysis, frequency response curves for measured accelerations and water pressures, as well as their resonating shapes, are compared to predictions from the current state-of-the-art finite element model with water compressibility both included and neglected. Calibration of the numerical model employs the antisymmetric response data, since they are only slightly affected by water compressibility; after calibration, good agreement with the data is obtained whether or not water compressibility is included. In the effort to reproduce the symmetric response data, on which water compressibility has a significant influence, the calibrated model shows better correlation when water compressibility is included, but the agreement is still inadequate. Similar results occur with data obtained previously by others at a low water level. A successful isolation of the fundamental water resonance from the experimental data shows features significantly different from those of the numerical water model, indicating possible inaccuracy in the assumed geometry and/or boundary conditions of the reservoir. However, the investigation does suggest possible directions in which the numerical model can be improved.
Abstract:
Earthquake early warning (EEW) systems have developed rapidly over the past decade. The Japan Meteorological Agency (JMA) operated an EEW system during the 2011 M9 Tohoku earthquake in Japan, which increased awareness of EEW systems around the world. While longer-term earthquake prediction still faces many challenges to become practical, the availability of short-term EEW opens a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system uses the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time, and expected shaking intensity around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit human intervention to activate mitigation actions, and they must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach, along with machine learning techniques and decision theory from economics, to improve different aspects of EEW operation, including extending it to engineering applications.
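The core Bayesian idea, updating an uncertain magnitude estimate from the first seconds of waveform data, can be sketched as follows. This is an illustrative toy, not the thesis's algorithm: the linear log-amplitude model, its coefficients, and the noise level are assumptions chosen only to show the prior-times-likelihood update.

```python
import numpy as np

def magnitude_posterior(log_amp_obs, mags=np.arange(4.0, 9.01, 0.01),
                        b=1.0, sigma=0.4):
    # Gutenberg-Richter prior on magnitude: p(M) proportional to 10^(-b*M)
    prior = 10.0 ** (-b * mags)
    prior /= prior.sum()
    # Assumed (hypothetical) attenuation model: log-amplitude ~ N(0.5*M - 4, sigma)
    mu = 0.5 * mags - 4.0
    likelihood = np.exp(-0.5 * ((log_amp_obs - mu) / sigma) ** 2)
    post = prior * likelihood
    return mags, post / post.sum()

# One early observation; under the assumed model it points to roughly M = 7,
# but the Gutenberg-Richter prior pulls the posterior toward smaller events.
mags, post = magnitude_posterior(log_amp_obs=-0.5)
m_hat = float((mags * post).sum())
```

As more stations report, the posterior from one update becomes the prior for the next, which is how the uncertainty can shrink over the warning window.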
Existing EEW systems are often based on a deterministic approach and commonly assume that only a single event occurs within a short period of time, an assumption that led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm, built on an existing deterministic model, that extends the EEW system to concurrent events, which are often observed during the aftershock sequence following a large earthquake.
To overcome the challenges of uncertain information and the short lead time of EEW, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions in EEW mitigation applications. A cost-benefit model that captures the uncertainties in both the EEW information and the decision process is used. This approach, called Performance-Based Earthquake Early Warning, is based on the PEER Performance-Based Earthquake Engineering method. Surrogate models are suggested to improve computational efficiency, and new models are proposed to incorporate the influence of lead time into the cost-benefit analysis. For example, a value-of-information model is used to quantify the potential value of delaying the activation of a mitigation action in case the uncertainty of the EEW information is reduced in the next update. Two practical examples, evacuation alert and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as multiple-action decisions and the synergy of EEW with structural health monitoring systems, are also discussed.
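The cost-benefit decision rule behind such a framework can be sketched in a few lines: activate the mitigation action only if its expected cost, taken over the uncertain predicted shaking intensity, is lower than the expected cost of doing nothing. All numbers below (costs, the intensity distribution, the damage threshold, the mitigation factor) are made up for illustration and are not values from the thesis.

```python
import numpy as np

def should_activate(intensity_samples, action_cost, damage_cost,
                    damage_threshold, mitigation_factor=0.3):
    # Probability that predicted shaking exceeds the damaging level
    p_damage = np.mean(intensity_samples >= damage_threshold)
    # Expected cost of inaction vs. acting (action reduces damage by the factor)
    cost_no_action = p_damage * damage_cost
    cost_action = action_cost + p_damage * mitigation_factor * damage_cost
    return bool(cost_action < cost_no_action)

# Samples standing in for the EEW system's uncertain intensity prediction
rng = np.random.default_rng(0)
intensities = rng.normal(loc=0.4, scale=0.1, size=10_000)
cheap = should_activate(intensities, action_cost=1.0,
                        damage_cost=100.0, damage_threshold=0.45)
costly = should_activate(intensities, action_cost=50.0,
                         damage_cost=100.0, damage_threshold=0.45)
```

A cheap action (e.g., stopping an elevator at the nearest floor) is triggered even at moderate exceedance probability, while a costly one (e.g., a full evacuation) is not; the lead-time and value-of-information models refine exactly this comparison.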
Abstract:
The surface resistance and the critical magnetic field of lead electroplated on copper were studied at 205 MHz in a half-wave coaxial resonator. The surface resistance observed at low field levels below 4.2 K was well described by the BCS surface resistance plus a temperature-independent residual resistance. The available experimental data suggest that the major fraction of the residual resistance in the present experiment was due to an oxide layer on the surface. At higher magnetic field levels the surface resistance was enhanced by surface imperfections.
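The low-field behavior described here is commonly summarized by the standard approximate form below (a textbook expression for T well below T_c, not a formula quoted from the thesis):

```latex
R_s(T) \;\approx\; \frac{A\,\omega^2}{T}\,
\exp\!\left(-\frac{\Delta(0)}{k_B T}\right) \;+\; R_{\mathrm{res}}
```

where \(\omega\) is the angular rf frequency, \(\Delta(0)\) the superconducting energy gap, \(A\) a material-dependent constant, and \(R_{\mathrm{res}}\) the temperature-independent residual term. The exponential BCS term vanishes as \(T \to 0\), leaving \(R_{\mathrm{res}}\), here attributed largely to the surface oxide layer, as the floor on the attainable surface resistance.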
The attainable rf critical magnetic field between 2.2 K and the T_c of lead was found to be limited not by the thermodynamic critical field but by the superheating field predicted by the one-dimensional Ginzburg-Landau theory. The observed rf critical field was very close to the expected superheating field, particularly at higher reduced temperatures, but showed a somewhat stronger temperature dependence than the expected superheating field at lower reduced temperatures.
The rf critical magnetic field was also studied at 90 MHz for pure tin and indium and for a series of SnIn and InBi alloys spanning both type I and type II superconductivity. The samples were spheres with typical diameters of 1-2 mm, and a helical resonator was used to generate the rf magnetic field in the measurement. The results for pure tin and indium showed that a vortex-like nucleation of the normal phase was responsible for the superconducting-to-normal phase transition in the rf field at temperatures up to about 0.98-0.99 T_c, where the ideal superheating limit was reached. The results for the alloy samples showed that the attainable rf critical fields near T_c were well described by the superheating field predicted by the one-dimensional GL theory in both the type I and type II regimes. Measurements at 300 MHz showed no significant change in the rf critical field, so it was inferred that the nucleation time of the normal phase, once the critical field was reached, was small compared with the rf period in this frequency range.
Abstract:
This thesis addresses a series of topics related to the question of how people find foreground objects in complex scenes. Using both computer vision modeling and psychophysical analyses, we explore the computational principles of low- and mid-level vision.
We first explore computational methods for generating saliency maps from images and image sequences. We propose an extremely fast algorithm, the Image Signature, that detects the locations in an image that attract human gaze. Through a series of experimental validations based on human behavioral data collected from various psychophysical experiments, we conclude that the Image Signature and its spatiotemporal extension, the Phase Discrepancy, are among the most accurate saliency detection algorithms under various conditions.
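The published Image Signature descriptor is simply the sign of an image's DCT; a saliency map is obtained by inverting that sign pattern, squaring, and smoothing. A minimal sketch of that pipeline (the smoothing width and the toy test image are illustrative choices, not the thesis's settings):

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def image_signature_saliency(gray, sigma=3.0):
    # Image signature: sign of the 2-D DCT coefficients
    signature = np.sign(dctn(gray, norm='ortho'))
    # Reconstruct from signs only; energy concentrates on sparse foreground
    recon = idctn(signature, norm='ortho')
    # Saliency map: smoothed squared reconstruction
    return gaussian_filter(recon * recon, sigma=sigma)

# Toy input: a small bright "object" on a flat background
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
sal = image_signature_saliency(img)
```

The speed claim follows directly from the construction: one forward DCT, one inverse DCT, and a blur, with no learned parameters.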
In the second part, we bridge the gap between fixation prediction and salient object segmentation with two efforts. First, we propose a new dataset that contains both fixation and object segmentation information. By presenting the two types of human data in the same dataset, we can analyze their intrinsic connection and understand the drawbacks of today’s “standard” but inappropriately labeled salient object segmentation dataset. Second, we propose a salient object segmentation algorithm. Built on our discoveries about the connection between fixation data and salient object segmentation data, our model significantly outperforms all existing models on all three datasets by large margins.
In the third part of the thesis, we discuss the human factors of boundary analysis. Closely related to salient object segmentation, boundary analysis focuses on delimiting the local contours of an object. We identify potential pitfalls in algorithm evaluation for boundary detection: our analysis indicates that today’s popular boundary detection datasets contain a significant level of noise, which may severely influence benchmarking results. To give further insight into the labeling process, we propose a model that characterizes the human factors at work during labeling.
The analyses reported in this thesis offer new perspectives on a series of interrelated issues in low- and mid-level vision. They raise warning signs about some of today’s “standard” procedures while proposing new directions to encourage future research.