978 results for CLOUDS


Relevance: 10.00%

Abstract:

What motivates students to perform and pursue engineering design tasks? This study examines this question by way of three Learning Through Service (LTS) programs: 1) an on-going longitudinal study examining the impacts of service on engineering students, 2) an on-going analysis of an international senior design capstone program, and 3) an on-going evaluation of an international graduate-level research program. The evaluation of these programs incorporates both qualitative and quantitative methods, utilizing surveys, questionnaires, and interviews that provide insight into what motivates students to do engineering design work. The quantitative methods were used to analyze several instruments: a readiness assessment inventory, the Intercultural Development Inventory, the Sustainable Engineering through Service Learning survey, the Impacts of Service on Engineering Students survey, and motivational narratives, along with some analysis of interview text. The results of these instruments provide much-needed insight into how prepared students are to participate in engineering programs. Additional qualitative methods include word clouds, motivational narratives, and interview analysis; these aim to collect more in-depth information than the quantitative instruments allow. This thesis focuses on how these instruments help determine what motivates engineering students to pursue engineering design tasks. Preliminary results from the 120 interviews analyzed suggest that interest/enjoyment, application of knowledge and skills, and gaining knowledge are key motivating factors regardless of gender or academic level. Together these findings begin to shed light on what motivates students to perform engineering design tasks, which can be applied to improve recruitment and retention in university programs.
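
As an illustration of the word-cloud step mentioned above, the short sketch below builds a frequency-based word cloud from pooled interview text. This is a minimal sketch, not the study's actual tooling; the wordcloud package, file name, and parameters are assumptions.

    # Minimal word-cloud sketch (illustrative; not the study's actual tooling).
    from wordcloud import WordCloud  # third-party: pip install wordcloud

    # Hypothetical input: pooled interview transcripts as one text file.
    with open("interviews.txt", encoding="utf-8") as f:
        text = f.read()

    # Word size encodes frequency; common English stop words are dropped by default.
    cloud = WordCloud(width=800, height=400, background_color="white").generate(text)
    cloud.to_file("motivation_wordcloud.png")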

Relevance: 10.00%

Abstract:

Routine bridge inspections require labor-intensive and highly subjective visual interpretation to determine bridge deck surface condition. Light Detection and Ranging (LiDAR), a relatively new class of survey instrument, has become a popular and increasingly used technology for providing as-built and inventory data in civil applications. While an increasing number of private and governmental agencies possess terrestrial and mobile LiDAR systems, an understanding of the technology's capabilities and potential applications continues to evolve. LiDAR is a line-of-sight instrument, and as such, care must be taken when establishing scan locations and resolution to allow the capture of data at an adequate resolution for defining features that contribute to the analysis of bridge deck surface condition. Information such as the location, area, and volume of spalling on deck surfaces, undersides, and support columns can be derived from properly collected LiDAR point clouds. These point clouds contain quantitative surface condition information, enabling more accurate structural health monitoring. LiDAR scans were collected at three study bridges, each displaying a varying degree of degradation. A variety of commercially available analysis tools and an independently developed algorithm written in ArcGIS Python (ArcPy) were used to locate surface defects and quantify the location, volume, and area of spalls. The results were displayed visually and numerically in a user-friendly web-based decision support tool integrating prior bridge condition metrics for comparison. LiDAR data processing procedures, along with the strengths and limitations of point clouds for defining features useful for assessing bridge deck condition, are discussed. Point cloud density and incidence angle are two attributes that must be managed carefully to ensure the data collected are of high quality and useful for bridge condition evaluation. When collected properly, LiDAR data can be analyzed to provide a useful data set from which to derive bridge deck condition information.
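
To make the spall-quantification idea concrete, here is a minimal sketch, assuming the point cloud has already been gridded into an elevation raster; it is not the thesis's ArcPy implementation, and the filter size and depth threshold are illustrative.

    # Illustrative spall detection on a gridded deck elevation model:
    # flag cells well below a local median surface, then report area/volume.
    import numpy as np
    from scipy.ndimage import label, median_filter

    def spall_metrics(z, cell_size=0.01, depth_thresh=0.006):
        """z: 2D deck elevation grid (m); cell_size: grid spacing (m);
        depth_thresh: minimum depression depth (m) counted as a spall (assumed)."""
        surface = median_filter(z, size=25)   # smooth reference deck surface
        depth = surface - z                   # local depression depth
        labels, n = label(depth > depth_thresh)
        results = []
        for i in range(1, n + 1):
            cells = depth[labels == i]
            results.append({"area_m2": cells.size * cell_size**2,
                            "volume_m3": cells.sum() * cell_size**2})
        return results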

Relevance: 10.00%

Abstract:

Satellite measurement validations, climate models, atmospheric radiative transfer models, and cloud models all depend on accurate measurements of cloud particle size distributions, number densities, spatial distributions, and other parameters relevant to cloud microphysical processes. Yet many airborne instruments designed to measure size distributions and concentrations of cloud particles have large uncertainties in measuring number densities and size distributions of small ice crystals. HOLODEC (Holographic Detector for Clouds) is a new instrument that avoids many of these uncertainties and makes possible measurements that other probes have never made. These advantages are inherent to the holographic method. In this dissertation, I describe HOLODEC, its in-situ measurements of cloud particles, and the results of its test flights. I present a hologram reconstruction algorithm whose sample spacing does not vary with reconstruction distance. This algorithm accurately reconstructs the field at all distances inside a typical holographic measurement volume, as proven by comparison with analytical solutions to the Huygens-Fresnel diffraction integral. It is fast to compute and has diffraction-limited resolution. Further, I describe an algorithm that can find the position along the optical axis of small particles as well as large complex-shaped particles. I explain an implementation of these algorithms as an efficient, robust, automated program that allows holograms to be processed on a computer cluster in a reasonable time. I show size distributions and number densities of cloud particles and show that they agree, within uncertainty, with independent measurements made using another method. This demonstrates the feasibility of a cloud particle instrument with advantages over standard instruments, including a unique ability to detect shattered particles using three-dimensional positions and a sample volume size that does not vary with particle size or airspeed. The instrument also yields two-dimensional particle profiles from the same measurements.
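
For context, the angular spectrum method is one reconstruction approach whose sample spacing stays fixed (equal to the detector pixel pitch) at every reconstruction distance, the property highlighted above. The sketch below is a generic numpy implementation of that propagation step, not necessarily the dissertation's exact algorithm.

    # Angular-spectrum propagation: sample spacing remains dx at any distance z.
    import numpy as np

    def propagate(field, wavelength, dx, z):
        """field: 2D complex hologram field; dx: pixel pitch (m); z: distance (m)."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=dx)         # spatial frequencies (1/m)
        fy = np.fft.fftfreq(ny, d=dx)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2
        # Free-space transfer function; evanescent components are zeroed.
        H = np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0)))
        return np.fft.ifft2(np.fft.fft2(field) * np.where(arg > 0, H, 0.0))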

Relevance: 10.00%

Abstract:

Understanding clouds and their role in climate depends in part on our ability to understand how individual cloud particles respond to environmental conditions. With this objective in mind, a quadrupole trap with thermodynamic control has been designed and constructed to create an environment conducive to studying clouds in the laboratory. The quadrupole trap allows a single cloud particle to be suspended for long times. The temperature and water vapor saturation ratio near the trapped particle are controlled by the flow of saturated air through a tube with a discontinuous wall temperature. The design has the unique aspect that the quadrupole electrodes are submerged in heat transfer fluid, completely isolated from the cylindrical levitation volume. This fluid is used in the thermodynamic system to cool the chamber to realistic cloud temperatures, and a heated section of the tube provides the temperature discontinuity. Thus far, charged water droplets ranging from about 30 to 70 microns in diameter have been levitated. In addition, the thermodynamic system has been shown to establish the thermal conditions needed to produce supersaturation in subsequent experiments. These advances will help lead to the next generation of ice nucleation experiments, moving from hemispherical droplets on a substrate to a spherical droplet that is not in contact with any surface.
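
A back-of-the-envelope calculation shows why a wall temperature discontinuity produces supersaturation: heat and vapor both relax between the wall values, but the saturation vapor pressure is nearly exponential in temperature, so the mixed vapor pressure exceeds saturation at intermediate temperatures. The numbers below are illustrative (Magnus approximation, assumed wall temperatures), not the chamber's design values.

    # Why a temperature discontinuity supersaturates the flow (illustrative).
    import numpy as np

    def e_s(T):
        """Saturation vapor pressure over water (hPa), Magnus form; T in deg C."""
        return 6.112 * np.exp(17.62 * T / (243.12 + T))

    T_warm, T_cold = 10.0, -10.0    # assumed wall temperatures (deg C)
    # Heat and vapor relax roughly linearly between the wall values:
    T_mid = 0.5 * (T_warm + T_cold)
    e_mid = 0.5 * (e_s(T_warm) + e_s(T_cold))
    print(f"saturation ratio: {e_mid / e_s(T_mid):.2f}")   # ~1.24, supersaturated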

Relevance: 10.00%

Abstract:

It has been proposed that inertial clustering may lead to an increased collision rate of water droplets in clouds. Atmospheric clouds and electrosprays contain electrically charged particles embedded in turbulent flows, often under the influence of an externally imposed, approximately uniform gravitational or electric force. In this thesis, we investigate charged inertial particles embedded in turbulence. We have developed a theoretical description for the dynamics of such systems of charged, sedimenting particles in turbulence, allowing radial distribution functions to be predicted for both monodisperse and bidisperse particle size distributions. The governing parameters are the particle Stokes number (the particle inertial time scale relative to the turbulence dissipation time scale), the Coulomb-turbulence parameter (the ratio of the Coulomb terminal speed to the turbulence dissipation velocity scale), and the settling parameter (the ratio of the gravitational terminal speed to the turbulence dissipation velocity scale). For monodisperse particles, the peak in the radial distribution function is well predicted by the balance between the particle terminal velocity under Coulomb repulsion and a time-averaged 'drift' velocity obtained from the nonuniform sampling of fluid strain and rotation due to finite particle inertia. The theory is compared to measured radial distribution functions for water particles in homogeneous, isotropic air turbulence. The radial distribution functions are obtained from particle positions measured in three dimensions using digital holography. The measurements support the general theoretical expression, consisting of a power-law increase in particle clustering due to particle response to dissipative turbulent eddies, modulated by an exponential electrostatic interaction term. Both terms are modified as a result of the gravitational diffusion-like term, and the role of 'gravity' is explored by imposing a macroscopic uniform electric field to create an enhanced, effective gravity. The relation between the radial distribution functions and the inward mean radial relative velocity is established for charged particles.
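
The qualitative form of the predicted radial distribution function can be sketched as a power law damped by an exponential Coulomb term. The coefficients below are illustrative placeholders, not the thesis's fitted values, and the exact functional form in the thesis may differ in detail.

    # Qualitative RDF for like-charged inertial particles (illustrative values).
    import numpy as np

    def rdf(r, c0=1.0, c1=0.5, r_c=5e-5):
        """g(r) ~ c0 * r**(-c1) * exp(-r_c / r): power-law clustering from
        inertia (c1 grows with Stokes number) cut off at small r by Coulomb
        repulsion (r_c grows with the Coulomb-turbulence parameter)."""
        return c0 * r**(-c1) * np.exp(-r_c / r)

    r = np.logspace(-5, -2, 200)          # separations (m)
    g = rdf(r)
    print(f"RDF peaks near r = {r[np.argmax(g)]:.1e} m")  # at r = r_c / c1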

Relevance: 10.00%

Abstract:

STUDY DESIGN: This is an experimental study on an artificial vertebra model and a human cadaveric spine. OBJECTIVE: Characterization of polymethylmethacrylate (PMMA) bone cement distribution in the vertebral body as a function of cement viscosity, bone porosity, and injection speed; identification of relevant parameters for improved cement flow predictability and leak prevention in vertebroplasty. SUMMARY OF BACKGROUND DATA: Vertebroplasty is an efficient procedure to treat vertebral fractures and stabilize osteoporotic bone in the spine. Severe complications result from bone cement leakage into the spinal canal or the vascular system. Cement viscosity has been identified as an important parameter for leak prevention, but the influence of bone structure and injection speed remains obscure. METHODS: An artificial vertebra model based on open porous aluminum foam was used to simulate bone of known porosity. Fifty-six vertebroplasties with 4 different starting viscosity levels and 2 different injection speeds were performed on artificial vertebrae of 3 different porosities. A validation on a human cadaveric spine was performed. The experiments were radiographically monitored, and the shape of the cement clouds was quantitatively described with 2 indicators: circularity and mean cement spreading distance. RESULTS: An increase in circularity and a decrease in mean cement spreading distance were observed with increasing viscosity, with the most striking change occurring between 50 and 100 Pa·s. Larger pores resulted in significantly reduced circularity and increased mean cement spreading distance, whereas the effect of injection speed on the 2 indicators was not significant. CONCLUSION: Viscosity is the key factor for reducing the risk of PMMA cement leakage, and it should be adapted to the degree of osteoporosis encountered in each patient. It may be advisable to opt for a higher starting viscosity but to inject the material at a faster rate.
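
For reference, the two indicators can be computed from a binary segmentation of the radiographic cement cloud; a minimal sketch follows, with the segmentation, pixel size, and injection point assumed rather than taken from the study.

    # Circularity and mean cement spreading distance from a binary mask.
    import numpy as np
    from skimage.measure import label, regionprops

    def cement_indicators(mask, injection_rc, px_mm=0.1):
        """mask: 2D boolean cement mask; injection_rc: (row, col) of cannula tip."""
        region = max(regionprops(label(mask)), key=lambda r: r.area)
        # Circularity 4*pi*A/P**2 equals 1 for a perfect circle.
        circularity = 4 * np.pi * region.area / region.perimeter ** 2
        rr, cc = np.nonzero(mask)
        spread = np.hypot(rr - injection_rc[0], cc - injection_rc[1]).mean() * px_mm
        return circularity, spread   # (dimensionless, mm)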

Relevance: 10.00%

Abstract:

While nucleation of solids in supercooled liquids is ubiquitous [15, 65, 66], surface crystallization, the tendency for freezing to begin preferentially at the liquid-gas interface, has remained puzzling [74, 18, 68, 69, 51, 64, 72, 16]. Here we employ high-speed imaging of supercooled water drops to study the phenomenon of heterogeneous surface crystallization. Our geometry avoids the "point-like contact" of prior experiments by providing a simple, symmetric contact line (the triple line defined by the substrate-liquid-air interface) for a drop resting on a homogeneous silicon substrate. We examine three possible mechanisms that might explain these laboratory observations: (i) line tension at the triple line, (ii) thermal gradients within the droplets, and (iii) surface texture. In our first study, we record nearly perfect spatial uniformity of nucleation in the immersed (liquid-substrate) region and thereby no preference for nucleation at the triple line. In our second study, no influence of thermal gradients on the preference for freezing at the triple line was observed. Motivated by the conjectured importance of line tension (τ) [1, 66] for heterogeneous nucleation, we also searched for evidence of a transition to surface crystallization at length scales on the order of δ ∼ τ/σ, where σ is the surface tension [14]; poorly constrained values of τ [49] lead to δ ranging from microns to nanometers. We demonstrate that nano-scale texture shifts nucleation to the three-phase contact line, while micro-scale texture does not. The possibility of a critical length scale has implications for the effectiveness of nucleation catalysts, including the formation of ice in atmospheric clouds [7].
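
The quoted micron-to-nanometer range for δ follows directly from δ ∼ τ/σ; a quick numerical check, with the bounds on τ assumed purely for illustration:

    # delta ~ tau / sigma for a few assumed line-tension values.
    sigma = 0.076                      # N/m, surface tension of supercooled water
    for tau in (1e-11, 1e-9, 1e-6):    # N, spanning poorly constrained estimates
        print(f"tau = {tau:.0e} N  ->  delta = {tau / sigma:.1e} m")
    # spans ~1e-10 m to ~1e-5 m, i.e. from sub-nanometer up to ~10 microns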

Relevance: 10.00%

Abstract:

In this report, we attempt to define the capabilities of the infrared satellite remote sensor Multifunctional Transport Satellite-2 (MTSAT-2) (a geosynchronous instrument) in characterizing volcanic eruptive behavior in the highly active region of Indonesia. Sulfur dioxide data from NASA's Ozone Monitoring Instrument (OMI) (a polar-orbiting instrument) are presented here for validation of the processes interpreted using the thermal infrared datasets. Data from two case studies are analyzed specifically for eruptive products producing large thermal anomalies (i.e. lava flows, lava domes, etc.), volcanic ash, and SO2 clouds: three distinctly characteristic and abundant volcanic emissions. Two primary methods for detecting heat signatures are used and compared in this report: single-channel thermal radiance (4 µm) and the normalized thermal index (NTI) algorithm. For automated purposes, fixed thresholds must be determined for these methods. A base minimum detection limit (MDL) of 2.30 × 10^5 W m^-2 sr^-1 m^-1 for single-channel thermal radiance and -0.925 for NTI generates false alarm rates of 35.78% and 34.16%, respectively. A spatial comparison method, developed here specifically for use in Indonesia and applied as a second detection parameter, is implemented to address the high false alarm rate. For the single-channel thermal radiance method, the spatial comparison method eliminated 100% of the false alarms while retaining every true anomaly. The NTI algorithm showed similar results, with only 2 false alarms remaining. No definitive difference is observed between the two thermal detection methods for automated use; however, the single-channel thermal radiance method coupled with the SO2 mass abundance data can be used to interpret volcanic processes, including the identification of lava dome activity at Sinabung as well as the mechanism of dome emplacement (i.e. endogenous or exogenous). Only one technique, the brightness temperature difference (BTD) method, is used for the detection of ash. Trends in ash area, water/ice area, and their respective concentrations yield interpretations of increased ice formation, aggregation, and sedimentation processes that only a high-temporal-resolution instrument like MTSAT-2 can resolve. A conceptual model of a secondary zone of aggregation occurring in the migrating Kelut ash cloud, which decreases the distal fine-ash component and the hazards to flight paths, is presented in this report. Unfortunately, the SO2 data could not definitively reinforce the concept of a secondary zone of aggregation because of insufficient temporal resolution. However, a detailed study of the Kelut SO2 cloud shows that the eruption had no climatic impact, given the atmospheric residence time and e-folding rate of ~14 days for the SO2. This report applies the complementary assets of a high-temporal-resolution and a high-spatial-resolution satellite, and it demonstrates that these two instruments can provide unparalleled observations of dynamic volcanic processes.
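
The NTI used above is conventionally defined from the 4-µm and 12-µm radiances; the sketch below implements that definition together with the quoted fixed thresholds and the BTD ash test. The mapping to specific MTSAT-2 channels is assumed here.

    # Per-pixel detection tests (thresholds as quoted in the text).
    import numpy as np

    def nti(rad_4um, rad_12um):
        """Normalized thermal index from 4-um and 12-um spectral radiances."""
        return (rad_4um - rad_12um) / (rad_4um + rad_12um)

    def thermal_anomaly(rad_4um, rad_12um, nti_thresh=-0.925, rad_thresh=2.30e5):
        """Flag hot pixels by either fixed threshold (before the spatial test)."""
        return (nti(rad_4um, rad_12um) > nti_thresh) | (rad_4um > rad_thresh)

    def ash_flag(bt_11um, bt_12um):
        """Brightness temperature difference (BTD) test: negative BTD suggests ash."""
        return (bt_11um - bt_12um) < 0.0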

Relevance: 10.00%

Abstract:

Mt Etna's activity has increased during the last decade, with a tendency towards more explosive eruptions that produce paroxysmal lava fountains. From January 2011 to April 2012, 25 lava fountaining episodes took place at Etna's New South-East Crater (NSEC). Improved understanding of the mechanism driving these explosive basaltic eruptions is needed to reduce volcanic hazards. This type of activity produces high sulfur dioxide (SO2) emissions, associated with lava flows and ash fall-out, but to date the SO2 emissions associated with Etna's lava fountains have been poorly constrained. The ultraviolet (UV) Ozone Monitoring Instrument (OMI) on NASA's Aura satellite and the Atmospheric Infrared Sounder (AIRS) on Aqua were used to measure the SO2 loadings. Ground-based data from the Observatoire de Physique du Globe de Clermont-Ferrand (OPGC) L-band Doppler radar, VOLDORAD 2B, used in collaboration with the Italian National Institute of Geophysics and Volcanology in Catania (INGV-CT), also detected the associated ash plumes, giving precise timing and duration for the lava fountains. This study resulted in the first detailed analysis of the OMI and AIRS SO2 data for Etna's lava fountains during the 2011-2012 eruptive cycle. The HYSPLIT trajectory model is used to constrain the altitude of the observed SO2 clouds, and results show that the SO2 emission usually coincided with the lava fountain peak intensity as detected by VOLDORAD. The UV OMI and IR AIRS SO2 retrievals permit quantification of the SO2 loss rate in the volcanic SO2 clouds, many of which were tracked for several days after emission. A first attempt to quantitatively validate AIRS SO2 retrievals with OMI data revealed a good correlation for high-altitude SO2 clouds. Using estimates of the SO2 emitted at the time of each paroxysm, we observe a correlation with the inter-paroxysm repose time. We therefore suggest that our data set supports the collapsing foam (CF) model [1] as the driving mechanism for the paroxysmal events at the NSEC. Using VOLDORAD-based estimates of the erupted magma mass, we observe a large excess of SO2 in the eruption clouds. Satellite measurements indicate that SO2 emissions from Etnean lava fountains can reach the lower stratosphere and hence could pose a hazard to aviation. [1] Parfitt, E.A. (2004). A discussion of the mechanisms of explosive basaltic eruptions. J. Volcanol. Geotherm. Res. 134, 77-107.
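
The SO2 loss rate quantified above corresponds to fitting M(t) = M0 * exp(-t / tau) to the retrieved SO2 masses; a minimal sketch with hypothetical mass values (not the study's data) follows.

    # Fit an e-folding time to an SO2 mass time series (hypothetical data).
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, m0, tau):
        return m0 * np.exp(-t / tau)

    t_days = np.array([0.0, 2.0, 4.0, 7.0, 10.0, 14.0])     # days since emission
    mass_kt = np.array([9.8, 8.6, 7.4, 6.0, 4.9, 3.6])      # hypothetical masses (kt)
    (m0, tau), _ = curve_fit(decay, t_days, mass_kt, p0=(10.0, 10.0))
    print(f"e-folding time: {tau:.0f} days")                # ~14 for these numbers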

Relevance: 10.00%

Abstract:

Cloud computing is a new development that is based on the premise that data and applications are stored centrally and can be accessed through the Internet. This article sets up a broad analysis of how the emergence of clouds relates to European competition law, network regulation and electronic commerce regulation, which we relate to challenges for the further development of cloud services in Europe: interoperability and data portability between clouds; issues relating to vertical integration between clouds and Internet Service Providers; and potential problems for clouds to operate on the European Internal Market. We find that these issues are not adequately addressed across the legal frameworks that we analyse, and argue for further research into how to better facilitate innovative convergent services such as cloud computing through European policy, especially in light of the ambitious digital agenda that the European Commission has set out.

Relevance: 10.00%

Abstract:

We developed a small version of the Caltech active strand cloud water collector (CASCC) for biogeochemical investigations in ecological applications. The device is battery powered and thus allows operation at locations where mains power is not available. The collector is designed for sampling periods of up to one week, depending on fog frequency. Our new device is equipped with standard sensors for air temperature, relative humidity, and wind, and with a low-cost optical sensor for horizontal visibility used for fog detection. In mountain areas and during times when clouds are thin, the installation of the visibility sensor became a key issue, which limits the potential to estimate the liquid water content of the sampled fog. Field tests with 5 devices at three different sites in the Swiss Alps (Niesen) and the Jura Mountains (Lägeren, Switzerland) during two extended summer seasons in 2006 and 2007 showed that in almost all cases it was possible to obtain sample volumes large enough for the examination of the basic inorganic chemistry of the collected cloud water. Collection rates typically varied from 12 to 30 mL h^-1. The fog droplet cutoff diameter is ≈6 µm, which is low enough to include all droplet sizes relevant for the liquid water content of typical fog types in the collected samples. From theoretical assumptions about the collection efficiency and theoretical droplet spectra, it is possible to estimate the liquid water content of the sampled fog or cloud. Our new fog collector can be constructed and operated at relatively low cost. In combination with chemical and isotopic analyses of the sampled water, this makes it possible to quantify nutrient and pollutant fluxes as typically needed in ecosystem biogeochemistry studies.
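
The liquid water content estimate mentioned above amounts to a simple mass balance between the collected water and the volume of air drawn through the collector; the flow rate and collection efficiency below are assumed values, purely for illustration.

    # LWC from collection rate via mass balance (assumed flow and efficiency).
    collection_rate_ml_h = 20.0   # measured sample rate, mid-range of 12-30 mL/h
    air_flow_m3_h = 300.0         # assumed fan flow rate through the collector
    efficiency = 0.85             # assumed overall droplet collection efficiency

    # 1 mL of water ~ 1 g, so LWC in g per m^3 of sampled air:
    lwc = collection_rate_ml_h / (efficiency * air_flow_m3_h)
    print(f"LWC ~ {lwc:.2f} g/m^3")   # ~0.08 g/m^3 for these assumptions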

Relevance: 10.00%

Abstract:

Several strategies relying on kriging have recently been proposed for adaptively estimating contour lines and excursion sets of functions under a severely limited evaluation budget. The recently released R package KrigInv 3 is presented, offering a sound implementation of various sampling criteria for these kinds of inverse problems. KrigInv is based on the DiceKriging package and thus benefits from a number of options concerning the underlying kriging models. Six implemented sampling criteria are detailed in a tutorial and illustrated with graphical examples, and the different functionalities of KrigInv are explained step by step. Additionally, two recently proposed criteria for batch-sequential inversion are presented, enabling advanced users to distribute function evaluations in parallel on clusters or clouds of machines. Finally, auxiliary problems are discussed, including the fine tuning of the numerical integration and optimization procedures used within the computation and optimization of the considered criteria.

Relevance: 10.00%

Abstract:

Derivation of probability estimates complementary to geophysical data sets has gained special attention in recent years. Information about the confidence level of provided physical quantities is required to construct an error budget for higher-level products and to correctly interpret the final results of a particular analysis. For the generation of products based on satellite data, a common input is a cloud mask, which allows discrimination between surface and cloud signals; the surface information is further divided into snow and snow-free components. At any step of this discrimination process, a misclassification in a cloud/snow mask propagates to higher-level products and may reduce their usability. Within this scope, a novel probabilistic cloud mask (PCM) algorithm suited for 1 km × 1 km Advanced Very High Resolution Radiometer (AVHRR) data is proposed, which provides three types of probability estimates: between cloudy and clear-sky, cloudy and snow, and clear-sky and snow conditions. As opposed to the majority of available techniques, which are usually based on a decision-tree approach, the PCM algorithm uses all spectral, angular, and ancillary information in a single step to retrieve probability estimates from precomputed look-up tables (LUTs). Moreover, the issue of deriving a single threshold value for a spectral test was overcome by the concept of a multidimensional information space, which is divided into small bins by an extensive set of intervals. The discrimination between snow and ice clouds and the detection of broken, thin clouds were enhanced by means of the invariant coordinate system (ICS) transformation. The study area covers a wide range of environmental conditions, spanning from Iceland through central Europe to the northern parts of Africa, which pose diverse difficulties for cloud/snow masking algorithms. The retrieved PCM cloud classification was compared to the Polar Platform System (PPS) version 2012 and Moderate Resolution Imaging Spectroradiometer (MODIS) collection 6 cloud masks, SYNOP (surface synoptic observations) weather reports, the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) vertical feature mask version 3, and the MODIS collection 5 snow mask. The outcomes of the conducted analyses demonstrate the good detection skill of the PCM method, with results comparable to or better than the reference PPS algorithm.
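
The single-step retrieval can be pictured as binning each pixel's features and using the bin indices to address a precomputed probability table. The sketch below is schematic: the choice of features, bin edges, and table contents are placeholders, not the PCM algorithm's actual configuration.

    # Schematic LUT retrieval: bin features, index a precomputed table.
    import numpy as np

    edges = [np.linspace(0.0, 1.0, 21),    # e.g. channel-1 reflectance bins
             np.linspace(0.0, 1.0, 21),    # e.g. channel-2 reflectance bins
             np.linspace(-10.0, 10.0, 41)] # e.g. 11-12 um BT difference bins (K)
    lut = np.zeros((20, 20, 40))           # stand-in for precomputed probabilities

    def cloud_probability(features):
        """features: one value per LUT axis for a single pixel."""
        idx = tuple(int(np.clip(np.digitize(f, e) - 1, 0, len(e) - 2))
                    for f, e in zip(features, edges))
        return lut[idx]

    p = cloud_probability([0.42, 0.31, 2.5])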

Relevance: 10.00%

Abstract:

We report evidence of a large proglacial lake (Glacial Lake Wright) that existed in Wright Valley in the McMurdo Dry Valleys region of Antarctica at the last glacial maximum (LGM) and in the early Holocene. At its highstands, Glacial Lake Wright would have stretched 50 km and covered c. 210 km². Chronology for lake-level changes comes from 30 AMS radiocarbon dates of lacustrine algae preserved in deltas, shorelines, and glaciolacustrine deposits that extend up to 480 m above present-day lakes. Emerging evidence suggests that Glacial Lake Wright was only one of a series of large lakes to occupy the McMurdo Dry Valleys and the valleys fronting the Royal Society Range at the LGM. Although the cause of such high lake levels is not well understood, it is believed to relate to cool, dry conditions which produced fewer clouds, less snowfall, and greater amounts of absorbed radiation, leading to increased meltwater production.

Relevance: 10.00%

Abstract:

The near-real-time retrieval of low stratiform cloud (LSC) coverage is of vital interest for disciplines such as meteorology, transport safety, economics, and air quality. Within this scope, a novel methodology is proposed which provides LSC occurrence probability estimates for a satellite scene. The algorithm is suited for 1 km × 1 km Advanced Very High Resolution Radiometer (AVHRR) data and was trained and validated against collocated SYNOP observations. Utilisation of these two combined data sources requires the formulation of constraints in order to discriminate cases where the LSC is overlaid by higher clouds. The LSC classification process is based on six features, which are first converted to integer form by step functions and then combined by means of bitwise operations. Consequently, a set of values reflecting a unique combination of those features is derived, which is further employed to extract the LSC occurrence probability estimates from precomputed look-up vectors (LUV). Although the validation analyses confirmed good performance of the algorithm, some inevitable misclassifications of other optically thick clouds were reported. Moreover, the comparison against the Polar Platform System (PPS) cloud-type product revealed superior classification accuracy. From the temporal perspective, the results revealed diurnal and annual cycles in LSC probability over Europe.
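
The feature encoding described above can be sketched as follows: each feature is discretized by a step function, and the resulting integers are packed into one key with shifts and ORs, which then indexes the look-up vectors. Bit widths and thresholds here are illustrative, not the algorithm's actual settings.

    # Schematic feature encoding: step functions + bitwise packing -> LUV key.
    import numpy as np

    BITS_PER_FEATURE = 4   # up to 16 levels per feature; 6 features -> 24 bits

    def encode(features, thresholds):
        """features: six per-pixel values; thresholds: six arrays of step edges.
        Returns one integer key addressing the look-up vectors (LUV)."""
        key = 0
        for value, edges in zip(features, thresholds):
            level = int(np.digitize(value, edges))   # step function -> integer
            key = (key << BITS_PER_FEATURE) | level  # pack with bitwise ops
        return key

    # p_lsc = luv[encode(pixel_features, thresholds)]  # LUV lookup (schematic)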