956 results for Direct digital detector
Abstract:
Satellite measurement validations, climate models, atmospheric radiative transfer models, and cloud models all depend on accurate measurements of cloud particle size distributions, number densities, spatial distributions, and other parameters relevant to cloud microphysical processes. However, many airborne instruments designed to measure size distributions and concentrations of cloud particles have large uncertainties in measuring number densities and size distributions of small ice crystals. HOLODEC (Holographic Detector for Clouds) is a new instrument that avoids many of these uncertainties and makes possible measurements that other probes have never made; these advantages are inherent to the holographic method. In this dissertation, I describe HOLODEC, its in-situ measurements of cloud particles, and the results of its test flights. I present a hologram reconstruction algorithm whose sample spacing does not vary with reconstruction distance. This algorithm accurately reconstructs the field at all distances inside a typical holographic measurement volume, as verified by comparison with analytical solutions to the Huygens-Fresnel diffraction integral; it is fast to compute and has diffraction-limited resolution. Further, I describe an algorithm that can find the position along the optical axis of small particles as well as of large, complex-shaped particles. I explain an implementation of these algorithms as an efficient, robust, automated program that allows holograms to be processed on a computer cluster in a reasonable time. I show size distributions and number densities of cloud particles and show that they are within the uncertainty of independent measurements made with another method. The feasibility of a cloud particle instrument with advantages over current standard instruments is thus demonstrated. These advantages include a unique ability to detect shattered particles using three-dimensional positions, a sample volume that does not vary with particle size or airspeed, and the ability to obtain two-dimensional particle profiles from the same measurements.
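One property named in this abstract, a reconstruction sample spacing that does not change with reconstruction distance, is characteristic of angular-spectrum propagation. The sketch below is an illustrative angular-spectrum reconstruction in Python, not the dissertation's actual implementation; the function name and parameters are placeholders chosen for this example.

```python
import numpy as np

def reconstruct_plane(hologram, wavelength, dx, z):
    """Angular-spectrum reconstruction of an in-line hologram at distance z.
    The output grid keeps the detector pixel pitch dx for any z (illustrative sketch)."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=dx)          # spatial frequencies along x
    fy = np.fft.fftfreq(ny, d=dx)          # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    k = 2.0 * np.pi / wavelength
    # Free-space propagation kernel; evanescent components (negative argument) are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kernel = np.exp(1j * k * z * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(hologram) * kernel)
```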
Abstract:
This report is a PhD dissertation proposal to study the in-cylinder temperature and heat flux distributions within a gasoline turbocharged direct injection (GTDI) engine. Recent regulations requiring automotive manufacturers to increase the fuel efficiency of their vehicles have led to great technological achievements in internal combustion engines. These achievements have increased the power density of gasoline engines dramatically in the last two decades. Engine technologies such as variable valve timing (VVT), direct injection (DI), and turbocharging have significantly improved engine power-to-weight and power-to-displacement ratios. A popular trend for increasing vehicle fuel economy in recent years has been to downsize the engine and add VVT, DI, and turbocharging so that a lighter, more efficient engine can replace a larger, heavier one. With the added power density, thermal management of the engine becomes a more important issue, and engine components are being pushed to their temperature limits. Therefore, it has become increasingly important to understand the parameters that affect in-cylinder temperatures and heat transfer. The proposed research will analyze the effects of engine speed, load, relative air-fuel ratio (AFR), and exhaust gas recirculation (EGR) on both in-cylinder and global temperature and heat transfer distributions. Additionally, the effects of knocking combustion and fuel spray impingement will be investigated. The proposed research will be conducted on a 3.5 L six-cylinder GTDI engine. The research engine will be instrumented with a large number of sensors to measure in-cylinder temperatures and pressures, as well as the temperature, pressure, and flow rates of energy streams into and out of the engine. One goal of this research is to create a model that predicts the energy distribution to the crankshaft, exhaust, and cooling system based on normalized values for engine speed, load, AFR, and EGR. The results could be used to aid in the engine design phase for turbocharger and cooling system sizing. Additionally, the data collected can be used for validation of engine simulation models, since in-cylinder temperature and heat flux data are not readily available in the literature.
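The energy-distribution model described in this proposal amounts to closing a first-law balance over the engine. The following is a generic, textbook-style statement of that balance, included here only for orientation; the symbols and grouping are mine and are not taken from the proposal.

```latex
\dot{m}_{\mathrm{fuel}}\,\mathrm{LHV}
  \;=\; \dot{W}_{\mathrm{brake}}
  \;+\; \dot{H}_{\mathrm{exhaust}}
  \;+\; \dot{Q}_{\mathrm{coolant}}
  \;+\; \dot{Q}_{\mathrm{misc}}
```

Here the fuel chemical energy rate (mass flow times lower heating value) is split among crankshaft power, exhaust enthalpy flow, heat rejected to the cooling system, and a remainder term for unaccounted losses such as friction heat and convection or radiation to ambient.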
Abstract:
Engine manufacturers need computationally efficient and accurate predictive combustion modeling tools that can be integrated into engine simulation software for assessing combustion system hardware designs and for early development of engine calibrations. This thesis discusses the process of developing and validating, from experimental data, a combustion modeling tool for a gasoline direct-injected spark-ignited engine with variable valve timing, lift, and duration valvetrain hardware. Data were correlated and regressed using accepted methods for calculating the turbulent flow and flame propagation characteristics of an internal combustion engine. A non-linear regression method was used to develop a combustion model that determines the fuel mass burn rate at multiple points during the combustion process. The computational fluid dynamics software Converge© was used to simulate and correlate the 3-D combustion system, port, and piston geometry to the turbulent flow development within the cylinder, in order to properly predict the experimentally derived turbulent flow parameters through the intake, compression, and expansion processes. The engine simulation software GT-Power© was then used to determine the 1-D flow characteristics of the engine hardware being tested and to correlate the regressed combustion modeling tool to experimental data to determine its accuracy. The results show that the combustion modeling tool accurately captures the combustion sensitivities to turbulent flow, thermodynamic, and internal residual effects with changes in intake and exhaust valve timing, lift, and duration.
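The abstract does not specify the regression's functional form; as an illustration of the general approach of regressing a burn-rate model against measured data, the sketch below fits a Wiebe-type cumulative mass-fraction-burned curve with a non-linear least-squares routine. The function names, synthetic data, and parameter values are placeholders, not the thesis's model.

```python
import numpy as np
from scipy.optimize import curve_fit

def wiebe_mfb(theta, theta_soc, duration, a, m):
    """Cumulative mass fraction burned as a function of crank angle theta (degrees)."""
    x = np.clip((theta - theta_soc) / duration, 0.0, None)
    return 1.0 - np.exp(-a * x ** (m + 1.0))

# Stand-in for mass-fraction-burned points derived from a measured pressure trace.
theta = np.linspace(-20.0, 60.0, 81)
mfb_measured = wiebe_mfb(theta, -5.0, 45.0, 5.0, 2.0) + 0.01 * np.random.randn(theta.size)

# Non-linear regression of the burn-rate parameters.
popt, _ = curve_fit(wiebe_mfb, theta, mfb_measured, p0=(-5.0, 40.0, 5.0, 2.0))
theta_soc_fit, duration_fit, a_fit, m_fit = popt
```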
Abstract:
The push for improved fuel economy and reduced emissions has led to great achievements in engine performance and control. These achievements have increased the efficiency and power density of gasoline engines dramatically in the last two decades. With the added power density, thermal management of the engine has become increasingly important, so it is critical to have accurate temperature and heat transfer models as well as data to validate them. With the recent adoption of the 2025 Corporate Average Fuel Economy (CAFE) standard, there has been a push to improve the thermal efficiency of internal combustion engines even further. Lean and dilute combustion regimes, along with waste heat recovery systems, are being explored as options for improving efficiency. To understand how these technologies will impact engine performance and each other, this research analyzed the engine both from a first-law energy balance perspective and from a second-law exergy perspective. This research also provided insight into the effects of various parameters on in-cylinder temperatures and heat transfer, and it provides data for validating other models. It was found that engine load was the dominant factor in the energy distribution, with higher loads resulting in lower coolant heat transfer and higher brake work and exhaust energy. From an exergy perspective, the exhaust system provided the best waste heat recovery potential because of its significantly higher temperatures compared to the cooling circuit. EGR and lean combustion both lowered combustion chamber and exhaust temperatures; however, in most cases the increased flow rates produced a net increase in exhaust energy. The exhaust exergy, on the other hand, either increased or decreased depending on the location in the exhaust system and the other operating conditions. The effects of dilution from lean operation and from EGR were compared using a dilution ratio, and the results showed that lean operation produced a larger increase in efficiency than the same amount of dilution with EGR. Finally, a method for identifying fuel spray impingement from piston surface temperature measurements was found. Note: The material contained in this section is planned for submission as part of a journal article and/or conference paper in the future.
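The second-law comparison above rests on the flow exergy of each energy stream. For reference, a standard textbook expression for the exergy rate carried by an exhaust or coolant stream is shown below; the abstract does not quote this form, and the notation is mine.

```latex
\dot{X} \;=\; \dot{m}\,\bigl[(h - h_{0}) \;-\; T_{0}\,(s - s_{0})\bigr]
```

Here the mass flow rate multiplies the stream's specific enthalpy and entropy measured against the ambient (dead-state) values h_0, s_0 at temperature T_0. Because exergy discounts energy transferred near ambient temperature, the hot exhaust retains far more recoverable work potential than the same quantity of energy rejected to the coolant, which is the comparison drawn in the abstract.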
Abstract:
Experimental work and analysis were performed to investigate engine startup robustness and emissions of a flex-fuel spark ignition (SI) direct injection (DI) engine. The vaporization and other characteristics of ethanol fuel blends present a challenge at engine startup. Strategies were investigated to reduce the enrichment requirements for the first engine startup cycle and the emissions for the second and third fired cycles at 25°C ± 1°C engine and intake air temperature. Research was conducted on a single-cylinder SIDI engine with gasoline and E85 fuels to study the effect on the first fired cycle of engine startup. Piston configurations that included a compression ratio change (11 vs. 15.5) and a piston geometry change (flat-top vs. bowl) were tested, along with changes in intake cam timing (95, 110, 125) and fuel pressure (0.4 MPa vs. 3 MPa). The goal was to replicate the engine speed, manifold pressure, fuel pressure, and testing temperature from an engine startup trace in order to investigate the first fired cycle. Results showed that the bowl piston enabled lower-equivalence-ratio engine starts with gasoline fuel, while also showing lower IMEP at the same equivalence ratio compared to the flat-top piston. With E85, the bowl piston showed reduced IMEP as compression ratio increased at the same equivalence ratio. A preference for constant intake valve timing across fuels seemed to indicate that the flat-top piston might be a good flex-fuel piston. Significant improvements were seen with the higher-CR bowl piston for high fuel pressure starts, but no improvement was seen with low fuel pressures. Simulation work was conducted in GT-POWER to analyze the initial three cycles of engine startup for the same set of hardware used in the experiments. A steady-state validated model was modified for startup conditions, and the results allowed an understanding of the relative residual levels and IMEP at the test points in the cam phasing space. This allowed selection of additional test points that enable use of higher residual levels, eliminating those with a trapped mass too small to produce the IMEP required for proper engine turnover. The second phase of experimental testing, for the 2nd and 3rd startup cycles, revealed that both E10 and E85 prefer the same SOI of 240° bTDC at the second and third startup cycles for the flat-top piston and high injection pressures. The optimal cam timing for startup with E85 showed that it tolerates more residuals than E10. Higher internal residuals drive down the equivalence ratio requirement for both fuels up to their combustion stability limit; this is thought to be a direct benefit to vaporization from the increased cycle start temperature. Benefits are shown for an advanced IMOP and retarded EMOP strategy at engine startup. Overall, the amount of residuals preferred by the engine with E10 fuel at startup is thought to be constant across engine speed, which could enable easier selection of optimized cam positions across the startup speeds.
Abstract:
Clouds are one of the most influential elements of weather in the earth system, yet they are also one of the least understood. Understanding their composition and behavior at small scales is critical to understanding and predicting larger-scale feedbacks. Currently, the best way to study clouds at the microscale is through airborne in situ measurements using optical instruments capable of resolving clouds at the individual-particle level. However, current instruments are unable to sufficiently resolve the scales important to cloud evolution and behavior. HOLODEC is a new generation of optical cloud instrument that uses digital in-line holography to overcome many of the limitations of conventional instruments. However, its performance and reliability were limited by several deficiencies in its original design. These deficiencies were addressed and corrected to advance the instrument from the prototype stage to an operational instrument. In addition, the processing software used to reconstruct and analyze digitally recorded holograms was improved to increase robustness and ease of use.
Abstract:
Tissue engineering and regenerative medicine have emerged in an effort to generate replacement tissues capable of restoring native tissue structure and function, but because of the complexity of biological systems, this has proven to be much harder than originally anticipated. Silica-based bioactive glasses are popular biomaterials because of their ability to enhance osteogenesis and angiogenesis. Sol-gel processing methods are popular for generating these materials because they offer: 1) mild processing conditions; 2) easily controlled structure and composition; 3) the ability to incorporate biological molecules; and 4) inherent biocompatibility. The goal of this work was to develop a bioactive vaporization system for the deposition of silica sol-gel particles as a means of modifying the material properties of a substrate at the nano- and microscale to better mimic the instructive conditions of native bone tissue, promoting appropriate osteoblast attachment, proliferation, and differentiation in support of bone tissue regeneration. The size distribution, morphology, and degradation behavior of the vapor-deposited sol-gel particles developed here were found to depend on formulation (H2O:TMOS, pH, Ca/P incorporation) and manufacturing (substrate surface character, deposition time). Additionally, deposition of these particles onto substrates can be used to modify overall substrate properties, including hydrophobicity, roughness, and topography. Deposition of Ca/P sol particles induced apatite-like mineral formation on both two- and three-dimensional materials when exposed to body fluids. Gene expression analysis suggests that Ca/P sol particles induce upregulation of osteoblast gene expression (Runx2, OPN, OCN) in preosteoblasts at early culture time points. With further modification, specifically increased particle stability, these Ca/P sol particles could serve as a simple and unique means of modifying biomaterial surface properties to direct osteoblast differentiation.
Abstract:
Background: Monitoring alcohol use is important in numerous situations. Direct ethanol metabolites, such as ethyl glucuronide (EtG), have been shown to be useful tools for detecting alcohol use and documenting abstinence, but for very frequent or continuous control of abstinence they lack practicability. Therefore, devices measuring ethanol itself might be of interest. This pilot study aims at elucidating the usability and accuracy of the cellular photo digital breathalyzer (CPDB) compared to self-reports in a naturalistic setting. Method: 12 social drinkers were included. Subjects used a CPDB 4 times daily, kept diaries of alcohol use, and submitted urine for EtG testing over a period of 5 weeks. Results: In total, the 12 subjects reported 84 drinking episodes, 1,609 breath tests were performed, and 55 urine EtG tests were collected. Of the 84 drinking episodes, the CPDB detected 98.8%. The compliance rate for breath testing was 96%. Of the 55 EtG tests submitted, 1 (1.8%) was positive. Conclusions: The data suggest that the CPDB device holds promise for detecting high, moderate, and low alcohol intake, and it appears to have advantages compared to biomarkers and other monitoring devices. The participants' preference for the CPDB might explain the high compliance. Further studies including comparison with biomarkers and transdermal devices are needed.
Abstract:
Shaped by factors such as global outreach and immediacy, the internet in particular embodies the multi-layered nature of contemporary globalization (cf. Held et al. 2002). How have digital newspapers, social media, and other internet platforms altered the situation of smaller music microcultures, especially in regions that have been on the fringes of global networks? This paper analyses the situation of the Latvian postfolklore band Ilgi between 2001 and 2008. Focusing on the group's label UPE, the paper highlights how the internet became a significant means of existence during this specific period. Having established a local niche with a sound studio and CD shops, UPE combined this physical basis with outreach strategies, such as marketing and direct internet sales, which guaranteed the survival of the independent label. This strategy was also taken up by the band itself, which began to develop a strong presence on social media such as MySpace. At the same time, Ilgi has been using the internet as a central means of communicating with diasporic communities in the U.S. and Canada, thereby creating structures that Slobin (1993) described as "intercultures". This indicates that the local-global dichotomy can no longer be sufficiently addressed by a horizontal or vertical two-dimensional perception. Drawing also on fieldwork experiences gained in Latvia, the paper finally addresses the question of how internet representation relates to the actual local situation, and how this has been altering the perception of fieldwork. With regard to this situation, how useful are the approaches developed within "Media Anthropology", which investigates mass media items as multi-layered, densified symbolic objects?
Abstract:
A search for the direct production of charginos and neutralinos in final states with three electrons or muons and missing transverse momentum is presented. The analysis is based on 4.7 fb⁻¹ of √s = 7 TeV proton-proton collision data delivered by the Large Hadron Collider and recorded with the ATLAS detector. Observations are consistent with Standard Model expectations in three signal regions that are either depleted or enriched in Z-boson decays. Upper limits at 95% confidence level are set in R-parity conserving phenomenological minimal supersymmetric models and in simplified models, significantly extending previous results.
Abstract:
Measurements of inclusive jet suppression in heavy-ion collisions at the LHC provide direct sensitivity to the physics of jet quenching. In a sample of lead-lead collisions at √s_NN = 2.76 TeV corresponding to an integrated luminosity of approximately 7 μb⁻¹, ATLAS has measured jets with a calorimeter system over the pseudorapidity interval |η| < 2.1 and over the transverse momentum range 38 < pT < 210 GeV. Jets were reconstructed using the anti-kt algorithm with values of the distance parameter, which determines the nominal jet radius, of R = 0.2, 0.3, 0.4, and 0.5. The centrality dependence of the jet yield is characterized by the jet "central-to-peripheral ratio", R_CP. Jet production is found to be suppressed by approximately a factor of two in the 10% most central collisions relative to peripheral collisions. R_CP varies smoothly with centrality as characterized by the number of participating nucleons. The observed suppression is only weakly dependent on jet radius and transverse momentum. These results provide the first direct measurement of inclusive jet suppression in heavy-ion collisions and complement previous measurements of dijet transverse energy imbalance at the LHC.
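For reference, the central-to-peripheral ratio mentioned above is conventionally built from per-event jet yields normalized by the mean number of binary nucleon-nucleon collisions in each centrality class; the form below follows that standard convention and is not quoted from this paper.

```latex
R_{CP} \;=\;
  \frac{\dfrac{1}{\langle N_{\mathrm{coll}}^{\mathrm{cent}} \rangle}\,
        \dfrac{1}{N_{\mathrm{evt}}^{\mathrm{cent}}}\,
        \dfrac{\mathrm{d}N_{\mathrm{jet}}^{\mathrm{cent}}}{\mathrm{d}p_{T}}}
       {\dfrac{1}{\langle N_{\mathrm{coll}}^{\mathrm{periph}} \rangle}\,
        \dfrac{1}{N_{\mathrm{evt}}^{\mathrm{periph}}}\,
        \dfrac{\mathrm{d}N_{\mathrm{jet}}^{\mathrm{periph}}}{\mathrm{d}p_{T}}}
```

A value of R_CP = 1 would indicate no suppression of jet production in central collisions relative to peripheral ones, so the factor-of-two suppression quoted above corresponds to R_CP near 0.5 in the most central bin.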
Abstract:
A search is presented for direct chargino production based on a disappearing-track signature, using 20.3 fb⁻¹ of proton-proton collisions at √s = 8 TeV collected with the ATLAS experiment at the LHC. In anomaly-mediated supersymmetry breaking (AMSB) models, the lightest chargino is nearly mass-degenerate with the lightest neutralino, and its lifetime is long enough for it to be detected in the tracking detectors by identifying decays that result in tracks with no associated hits in the outer region of the tracking system. Some models with supersymmetry also predict charginos with a significant lifetime. This analysis attains sensitivity for charginos with a lifetime between 0.1 and 10 ns, and significantly surpasses the reach of the LEP experiments. No significant excess above the background expectation is observed for candidate tracks with large transverse momentum, and constraints on chargino properties are obtained. In the AMSB scenarios, a chargino mass below 270 GeV is excluded at 95% confidence level.
Abstract:
Results of a search for supersymmetry via direct production of third-generation squarks are reported, using 20.3 fb⁻¹ of proton-proton collision data at √s = 8 TeV recorded by the ATLAS experiment at the LHC in 2012. Two different analysis strategies based on monojet-like and c-tagged event selections are carried out to optimize the sensitivity for direct top squark-pair production in the decay channel to a charm quark and the lightest neutralino (t̃₁ → c + χ̃₁⁰) across the top squark-neutralino mass parameter space. No excess above the Standard Model background expectation is observed. The results are interpreted in the context of direct pair production of top squarks and presented in terms of exclusion limits in the (m(t̃₁), m(χ̃₁⁰)) parameter space. A top squark of mass up to about 240 GeV is excluded at 95% confidence level for arbitrary neutralino masses, within the kinematic boundaries. Top squark masses up to 270 GeV are excluded for a neutralino mass of 200 GeV. In a scenario where the top squark and the lightest neutralino are nearly degenerate in mass, top squark masses up to 260 GeV are excluded. The results from the monojet-like analysis are also interpreted in terms of compressed scenarios for top squark-pair production in the decay channel t̃₁ → b + ff′ + χ̃₁⁰ and sbottom-pair production with b̃₁ → b + χ̃₁⁰, leading to a similar exclusion for nearly mass-degenerate third-generation squarks and the lightest neutralino. The results in this paper significantly extend previous results at colliders.
Abstract:
The T2K experiment has reported the first observation of the appearance of electron neutrinos in a muon neutrino beam. The main and irreducible background to the appearance signal comes from a small intrinsic component of electron neutrinos in the beam, originating from muon and kaon decays. In T2K, this component is expected to represent 1.2% of the total neutrino flux. A measurement of this component using the near detector (ND280), located 280 m from the target, is presented. Charged-current interactions of electron neutrinos are selected by combining the particle identification capabilities of both the time projection chambers and the electromagnetic calorimeters of ND280. The measured ratio between the observed electron neutrino beam component and the prediction is 1.01 ± 0.10, providing a direct confirmation of the neutrino fluxes and neutrino cross-section modeling used for T2K neutrino oscillation analyses. Electron neutrinos from muon decays and from kaon decays are also measured separately, yielding ratios with respect to the prediction of 0.68 ± 0.30 and 1.10 ± 0.14, respectively.
Abstract:
OBJECTIVES: The aim of this prospective cohort trial was to perform a cost/time analysis for implant-supported single-unit reconstructions in the digital workflow compared to the conventional pathway. MATERIALS AND METHODS: A total of 20 patients were included for rehabilitation with 2 × 20 implant crowns in a crossover study design and treated consecutively, each with customized titanium abutments plus CAD/CAM zirconia suprastructures (test: digital) and with standardized titanium abutments plus PFM crowns (control: conventional). Starting with the prosthetic treatment, the analysis covered clinical and laboratory work steps, including measurement of costs in Swiss Francs (CHF), productivity rates, and cost minimization for first-line therapy. Statistical calculations were performed with the Wilcoxon signed-rank test. RESULTS: Both protocols worked successfully for all test and control reconstructions. Direct treatment costs were significantly lower for the digital workflow (1,815.35 CHF) than for the conventional pathway (2,119.65 CHF) [P = 0.0004]. In the subprocess evaluation, total laboratory costs were 941.95 CHF for the test group and 1,245.65 CHF for the control group [P = 0.003]. The clinical dental productivity rate amounted to 29.64 CHF/min (digital) and 24.37 CHF/min (conventional) [P = 0.002]. Overall, the cost minimization analysis exhibited an 18% cost reduction within the digital process. CONCLUSION: The digital workflow was more efficient than the established conventional pathway for implant-supported crowns in this investigation.