950 results for approximate analytical optical transfer function


Relevance: 100.00%

Abstract:

We analyze five high-resolution time series spanning the last 1.65 m.y.: benthic foraminiferal delta18O and delta13C, percent CaCO3, and estimated sea surface temperature (SST) at North Atlantic Deep Sea Drilling Project site 607 and percent CaCO3 at site 609. Each record is a multicore composite verified for continuity by splicing among multiple holes. These climatic indices portray changes in northern hemisphere ice sheet size and in North Atlantic surface and deep circulation. By tuning obliquity and precession components in the delta18O record to orbital variations, we have devised a time scale (TP607) for the entire Pleistocene that agrees in age with all K/Ar-dated magnetic reversals to within 1.5%. The Brunhes time scale is taken from Imbrie et al. [1984], except for differences near the stage 17/16 transition (0.70 to 0.64 Ma). All indicators show a similar evolution from the Matuyama to the Brunhes chrons: orbital eccentricity and precession responses increased in amplitude; those at orbital obliquity decreased. The change in dominance from obliquity to eccentricity occurred over several hundred thousand years, with the fastest changes around 0.7 to 0.6 Ma. The coherent, in-phase responses of delta18O, delta13C, CaCO3, and SST at these rhythms indicate that northern hemisphere ice volume changes have controlled most of the North Atlantic surface-ocean and deep-ocean responses for the last 1.6 m.y. The delta13C, percent CaCO3, and SST records at site 607 also show prominent changes at low frequencies, including a prominent long-wavelength oscillation toward glacial conditions that is centered between 0.9 and 0.6 Ma. These changes appear to be associated neither with orbital forcing nor with changes in ice volume.
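The orbital tuning described above depends on isolating the obliquity (~41 kyr) and precession (~23/19 kyr) components of the delta18O record before matching them to orbital variations. A minimal sketch of that band-isolation step, using a synthetic series rather than the site 607 data (the series, sampling interval, and band edges here are illustrative assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(series, dt_kyr, short_kyr, long_kyr, order=4):
    """Zero-phase Butterworth band-pass between two periods (kyr).

    Periods are converted to frequencies in cycles/kyr and normalised
    by the Nyquist frequency 1/(2*dt_kyr), as scipy expects.
    """
    nyq = 0.5 / dt_kyr
    low = (1.0 / long_kyr) / nyq    # long period -> low band edge
    high = (1.0 / short_kyr) / nyq  # short period -> high band edge
    b, a = butter(order, [low, high], btype="band")
    return filtfilt(b, a, series)   # filtfilt -> no phase distortion

# Synthetic stand-in for a delta18O record: a 100-kyr eccentricity-like
# and a 41-kyr obliquity-like component, sampled every 1 kyr for 1650 kyr
t = np.arange(0.0, 1650.0, 1.0)
signal = np.sin(2 * np.pi * t / 100.0) + 0.5 * np.sin(2 * np.pi * t / 41.0)
obliquity_band = bandpass(signal, dt_kyr=1.0, short_kyr=35.0, long_kyr=50.0)
```

In practice the filtered component would then be phase-aligned to the orbital obliquity curve rather than to a pure sinusoid.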

Relevance: 100.00%

Abstract:

A submillennial resolution, radiolarian-based record of summer sea surface temperature (SST) documents the last five glacial to interglacial transitions at the subtropical front, southern Atlantic Ocean. Rapid fluctuations occur during both glacial and interglacial intervals, and sudden cooling episodes at glacial terminations are recurrent. Surface hydrography and global ice volume proxies from the same core suggest that summer SST increases prior to terminations lead global ice-volume decreases by 4.7 ± 3.7 ka (in the eccentricity band), 6.9 ± 2.5 ka (obliquity), and 2.7 ± 0.9 ka (precession). A comparison between SST and benthic delta13C suggests a decoupling in the response of northern subantarctic surface, intermediate, and deep water masses to cold events in the North Atlantic. The matching features between our SST record and the one from core MD97-2120 (southwest Pacific) suggest that the super-regional expression of climatic events is substantially affected by a single climatic agent: the Subtropical Front, amplifier and vehicle for the transfer of climatic change. The direct correlation between warmer DeltaTsite at Vostok and warmer SST at ODP Site 1089 suggests that warmer oceanic/atmospheric conditions imply a more southward-placed frontal system, weaker gradients, and therefore stronger Agulhas input to the Atlantic Ocean.

Relevance: 100.00%

Abstract:

The late Neogene was a time of cryosphere development in the northern hemisphere. The present study was carried out to estimate the sea surface temperature (SST) change during this period based on quantitative planktonic foraminiferal data from 8 DSDP sites in the western Pacific. Target factor analysis has been applied to the conventional transfer function approach to overcome the no-analog conditions caused by evolutionary faunal changes. By applying this technique through a combination of time-slice and time-series studies, the SST history of the last 5.3 Ma has been reconstructed for the low latitude western Pacific. Although the present data set is close to the statistical limits of factor analysis, the clear presence of sensible variations in individual SST time series suggests the feasibility and reliability of this method in paleoceanographic studies. The estimated SST curves display the general trend of the temperature fluctuations and reveal three major cool periods in the late Neogene, i.e. the early Pliocene (4.7-3.5 Ma), the late Pliocene (3.1-2.7 Ma), and the latest Pliocene to early Pleistocene (2.2-1.0 Ma). Cool events are reflected in the increase of seasonality and meridional SST gradient in the subtropical area. The latest Pliocene to early Pleistocene cooling is the most important in the late Neogene climatic evolution. It differs from the previous cool events in its irreversible, steplike change in SST, which established the glacial climate characteristic of the late Pleistocene. The winter and summer SST decreased by 3.3-5.4°C and 1.0-2.1°C in the subtropics, by 0.9°C and 0.6°C in the equatorial region, and showed little or no cooling in the tropics. Moreover, this cooling event occurred as a gradual SST decrease during 2.2-1.0 Ma at the warmer subtropical sites, while at the cooler subtropical site it was an abrupt SST drop at 2.2 Ma. In contrast, the equatorial and tropical western Pacific experienced only minor SST change over the entire late Neogene. 
In general, the subtropics were much more sensitive to climatic forcing than the tropics, and the cooling events were most extensive in the cooler subtropics. The early Pliocene cool periods can be correlated to Antarctic ice volume fluctuations, and the latest Pliocene to early Pleistocene cooling reflects the climatic evolution during the cryosphere development of the northern hemisphere.

Relevance: 100.00%

Abstract:

This work presents a computational code, called MOMENTS, developed for use in process control to determine a characteristic transfer function for industrial units when radiotracer techniques are applied to study the unit's performance. The methodology is based on measuring the residence time distribution (RTD) function and calculating the first and second temporal moments of the tracer data obtained by two NaI scintillation detectors positioned to register the complete tracer movement inside the unit. A non-linear regression technique was used to fit various mathematical models, and a statistical test was used to select the best result for the transfer function. Using the MOMENTS code, twelve different models can be fitted to a curve to calculate technical parameters of the unit.
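The moment step of the methodology reduces each detector curve to its first moment (mean residence time) and second central moment (variance); differencing the inlet and outlet moments characterizes the unit. A minimal sketch, assuming idealized Gaussian detector curves rather than real scintillator data:

```python
import numpy as np

def rtd_moments(t, c):
    """First moment (mean residence time) and second central moment
    (variance) of a tracer curve c(t), normalised to unit area so it
    behaves as an RTD density E(t).  Uniform sampling is assumed."""
    dt = t[1] - t[0]
    e = c / (np.sum(c) * dt)
    mean = np.sum(t * e) * dt
    var = np.sum((t - mean) ** 2 * e) * dt
    return mean, var

# Hypothetical inlet/outlet detector curves: Gaussian tracer pulses
t = np.linspace(0.0, 60.0, 2001)                      # seconds
inlet = np.exp(-0.5 * ((t - 5.0) / 1.0) ** 2)
outlet = np.exp(-0.5 * ((t - 20.0) / 3.0) ** 2)

tau_in, var_in = rtd_moments(t, inlet)
tau_out, var_out = rtd_moments(t, outlet)
residence_time = tau_out - tau_in   # mean residence time of the unit
spread = var_out - var_in           # variance added by the unit
```

For a linear system, these moment differences are what a candidate transfer-function model must reproduce when fitted between the two detector signals.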

Relevance: 100.00%

Abstract:

Mammography equipment must be evaluated to ensure that images will be of acceptable diagnostic quality with the lowest radiation dose. Quality Assurance (QA) aims to provide systematic and constant improvement through a feedback mechanism to address the technical, clinical and training aspects. Quality Control (QC), in relation to mammography equipment, comprises a series of tests to determine equipment performance characteristics. The introduction of digital technologies promoted changes in QC tests and protocols, and some tests are specific to each manufacturer. Within each country, specific QC tests should be compliant with regulatory requirements and guidance. Ideally, one mammography practitioner should take overarching responsibility for QC within a service, with all practitioners having responsibility for actual QC testing. All QC results must be documented to facilitate troubleshooting, internal audit and external assessment. Generally speaking, the practitioner's role includes performing, interpreting and recording the QC tests as well as reporting any results outside action limits to their service lead. They must undertake additional continuous professional development to maintain their QC competencies. They are usually supported by technicians and medical physicists; in some countries the latter are mandatory. Technicians and/or medical physicists often perform many of the tests indicated within this chapter. It is important to recognise that this chapter is an attempt to encompass the main tests performed within European countries. Practitioners must familiarise themselves with, and adhere to, the specific tests related to the service within which they work.

Relevance: 100.00%

Abstract:

As the semiconductor industry struggles to maintain its momentum along the path of Moore's Law, three-dimensional integrated circuit (3D IC) technology has emerged as a promising solution to achieve higher integration density, better performance, and lower power consumption. However, despite its significant improvement in electrical performance, 3D IC presents several serious physical design challenges. In this dissertation, we investigate physical design methodologies for 3D ICs with a primary focus on two areas: low power 3D clock tree design, and reliability degradation modeling and management. Clock trees are essential parts of digital systems and dissipate a large amount of power due to high capacitive loads. The majority of existing 3D clock tree designs focus on minimizing the total wire length, which produces sub-optimal results for power optimization. In this dissertation, we formulate a 3D clock tree design flow which directly optimizes for clock power. We also investigate the design methodology for clock gating a 3D clock tree, which uses shutdown gates to selectively turn off unnecessary clock activities. Different from the common assumption in 2D ICs that shutdown gates are cheap and thus can be applied at every clock node, shutdown gates in 3D ICs introduce additional control TSVs, which compete with clock TSVs for placement resources. We explore design methodologies to produce the optimal allocation and placement for clock and control TSVs so that the clock power is minimized. We show that the proposed synthesis flow saves significant clock power while accounting for available TSV placement area. Vertical integration also brings new reliability challenges, including TSV electromigration (EM) and several other reliability loss mechanisms caused by TSV-induced stress. These reliability loss models involve complex inter-dependencies between electrical and thermal conditions, which have not been investigated in the past. 
In this dissertation we set up an electrical/thermal/reliability co-simulation framework to capture the transient behavior of reliability loss in 3D ICs. We further derive and validate an analytical reliability objective function that can be integrated into the 3D placement design flow. The reliability-aware placement scheme enables co-design and co-optimization of both the electrical and reliability properties, thus improving both the circuit's performance and its lifetime. Our electrical/reliability co-design scheme avoids unnecessary design cycles or application of ad-hoc fixes that lead to sub-optimal performance. Vertical integration also enables stacking DRAM on top of CPUs, providing high bandwidth and short latency. However, non-uniform voltage fluctuation and local thermal hotspots in CPU layers are coupled into DRAM layers, causing a non-uniform bit-cell leakage (and thereby bit flip) distribution. We propose a performance-power-resilience simulation framework to capture DRAM soft errors in 3D multi-core CPU systems. In addition, a dynamic resilience management (DRM) scheme is investigated, which adaptively tunes the CPU's operating points to adjust the DRAM's voltage noise and thermal condition during runtime. The DRM uses dynamic frequency scaling to achieve a resilience borrow-in strategy, which effectively enhances DRAM's resilience without sacrificing performance. The proposed physical design methodologies should act as important building blocks for 3D ICs and push 3D ICs toward mainstream acceptance in the near future.
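The dynamic resilience management idea can be caricatured as a feedback loop on the CPU's operating point. The sketch below is purely hypothetical — the frequency ladder, the FIT-rate metric, and the thresholds are invented for illustration and are not the dissertation's actual DRM policy:

```python
# Hypothetical frequency ladder (GHz); not the dissertation's values.
FREQ_STEPS_GHZ = [3.0, 2.5, 2.0, 1.5]

def next_operating_point(freq_idx, dram_fit_rate, fit_budget):
    """One step of a toy DRM policy: throttle the CPU while the
    estimated DRAM soft-error rate exceeds the budget (less voltage
    noise and heat coupled into the DRAM layers), and raise the
    frequency again when there is ample resilience slack."""
    if dram_fit_rate > fit_budget and freq_idx < len(FREQ_STEPS_GHZ) - 1:
        return freq_idx + 1   # step down in frequency
    if dram_fit_rate < 0.5 * fit_budget and freq_idx > 0:
        return freq_idx - 1   # borrow resilience back as performance
    return freq_idx
```

The "borrow-in" behavior is the second branch: when the DRAM error rate is comfortably inside the budget, the loop trades the surplus resilience back for CPU performance.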

Relevance: 100.00%

Abstract:

Recent advances in mobile phone cameras have poised them to take over compact hand-held cameras as the consumer’s preferred camera option. Along with advances in the number of pixels, motion blur removal, face-tracking, and noise reduction algorithms have significant roles in the internal processing of the devices. An undesired effect of severe noise reduction is the loss of texture (i.e. low-contrast fine details) of the original scene. Current established methods for resolution measurement fail to accurately portray the texture loss incurred in a camera system. The development of an accurate objective method to identify the texture preservation or texture reproduction capability of a camera device is important in this regard. The ‘Dead Leaves’ target has been used extensively as a method to measure the modulation transfer function (MTF) of cameras that employ highly non-linear noise-reduction methods. This stochastic model consists of a series of overlapping circles with radii r distributed as r^-3, and having uniformly distributed gray level, which gives an accurate model of occlusion in a natural setting and hence mimics a natural scene. This target can be used to model the texture transfer through a camera system when a natural scene is captured. In the first part of our study we identify various factors that affect the MTF measured using the ‘Dead Leaves’ chart. These include variations in illumination, distance, exposure time and ISO sensitivity among others. We discuss the main differences of this method with the existing resolution measurement techniques and identify the advantages. In the second part of this study, we propose an improvement to the current texture MTF measurement algorithm. High frequency residual noise in the processed image contains the same frequency content as fine texture detail, and is sometimes reported as such, thereby leading to inaccurate results. 
A wavelet thresholding based denoising technique is utilized for modeling the noise present in the final captured image. This updated noise model is then used for calculating an accurate texture MTF. We present comparative results for both algorithms under various image capture conditions.
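A dead-leaves chart of the kind described — occluding disks with radii distributed as r^-3 and uniformly distributed gray levels — can be rendered by inverse-CDF sampling of the radius law and painting the disks in order, so that later disks occlude earlier ones. A small sketch (image size, disk count, and radius bounds are arbitrary choices):

```python
import numpy as np

def dead_leaves(size=128, n_disks=800, rmin=2.0, rmax=40.0, seed=0):
    """Render a dead-leaves chart: disks with radii drawn from a r^-3
    power law and uniform gray levels, painted in order so that later
    disks occlude earlier ones."""
    rng = np.random.default_rng(seed)
    u = rng.random(n_disks)
    # Inverse-CDF sample of pdf(r) ~ r^-3 on [rmin, rmax]
    r = (rmin**-2 - u * (rmin**-2 - rmax**-2)) ** -0.5
    cx, cy = rng.random((2, n_disks)) * size
    gray = rng.random(n_disks)
    img = np.full((size, size), 0.5)          # mid-gray background
    yy, xx = np.mgrid[0:size, 0:size]
    for i in range(n_disks):
        mask = (xx - cx[i]) ** 2 + (yy - cy[i]) ** 2 <= r[i] ** 2
        img[mask] = gray[i]
    return img

chart = dead_leaves()
```

The r^-3 radius law is what makes the statistics approximately scale-invariant, so the chart exercises the camera's texture rendering across spatial frequencies.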

Relevance: 100.00%

Abstract:

This work considered the micro-mechanical behavior of a long fiber embedded in an infinite matrix. Using the theory of elasticity, the idea of a boundary layer, and some simplifying assumptions, an approximate analytical solution was obtained for the normal and shear stresses along the fiber. The analytical solution was found for the case when the length of the embedded fiber is much greater than its radius and the Young's modulus of the matrix is much less than that of the fiber. The analytical solution was then compared with a numerical solution based on Finite Element Analysis (FEA) using ANSYS. The numerical results showed the same qualitative behavior as the analytical solution, serving as a validation tool in the absence of experimental results. In general, this work provides a simple method to determine the thermal stresses along a fiber embedded in a matrix, which is the foundation for a better understanding of the interaction between the fiber and matrix in the classical problem of thermal stresses.

Relevance: 100.00%

Abstract:

With the progress of computer technology, computers are expected to be more intelligent in the interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may encounter difficulties on the perception of icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of the computer users with ocular aberrations. The precompensation was applied on the graphical targets before presenting them on the screen, aiming to counteract the visual blurring caused by the ocular aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation of the computer user was presented, taking advantage of modeling tools, such as Zernike polynomials, wavefront aberration, Point Spread Function and Modulation Transfer Function. The ocular aberration of the computer user was originally measured by a wavefront aberrometer, as a reference for the precompensation model. The dynamic precompensation was generated based on the resized aberration, with the real-time pupil diameter monitored. The potential visual benefit of the dynamic precompensation method was explored through software simulation, with the aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing objective evaluation to the image quality after precompensation. In addition, an empirical evaluation with 20 human participants was also designed and implemented, involving image recognition tests performed under a more realistic viewing environment of computer use. 
The statistical analysis results of the empirical experiment confirmed the effectiveness of the dynamic precompensation method, by showing significant improvement on the recognition accuracy. The merit and necessity of the dynamic precompensation were also substantiated by comparing it with the static precompensation. The visual benefit of the dynamic precompensation was further confirmed by the subjective assessments collected from the evaluation participants.
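The modeling chain from wavefront aberration to retinal image quality runs through the Point Spread Function: the PSF is the squared magnitude of the Fourier transform of the generalized pupil function. A minimal sketch with a single Zernike defocus term standing in for a measured aberration map (the pupil sampling and the 0.5-wave defocus coefficient are illustrative assumptions, not aberrometer data):

```python
import numpy as np

def psf_from_wavefront(wavefront_waves, pupil_mask):
    """Incoherent PSF from a pupil wavefront map (in waves):
    the squared magnitude of the FFT of the generalised pupil
    function, normalised to unit volume."""
    pupil = pupil_mask * np.exp(2j * np.pi * wavefront_waves)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

n = 128
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
rho2 = x**2 + y**2
mask = (rho2 <= 1.0).astype(float)            # circular pupil

# Zernike defocus term Z(2,0) with a 0.5-wave coefficient (illustrative)
defocus = 0.5 * np.sqrt(3.0) * (2.0 * rho2 - 1.0)
psf_aberrated = psf_from_wavefront(defocus * mask, mask)
psf_ideal = psf_from_wavefront(np.zeros_like(mask), mask)
```

The aberrated PSF's lower peak relative to the diffraction-limited one is exactly the blurring that the precompensation method aims to counteract on screen.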

Relevance: 100.00%

Abstract:

Driving simulators are highly technological tools that enable research in various fields such as psychology, medicine and engineering. However, for the data obtained through simulations to be comparable to their real-world counterpart, the fidelity of the driving simulator's components must be high. This work concerns the improvement of the motion cueing system of the two-degree-of-freedom (2DOF) SIMU-LACET Driving Simulator, built and developed at the LEPSIS laboratory of IFSTTAR (the French Institute of Science and Technology for Transport, Development and Networks), in particular at its Paris – Marne-la-Vallée site. It was decided to redesign the software part of the motion restitution system (motion cueing), working on two main elements: the scale factor applied to the dynamic inputs coming from the vehicle model, and the Motion Cueing Algorithms (MCA), for both degrees of freedom. We therefore intervened on the existing model implemented in MATLAB-Simulink, specifically in the motion cueing block for surge (longitudinal translation) and yaw. Regarding the scale factor, a methodology was introduced to create a non-linear scale factor in exponential form, so as to improve the restitution of smaller impulses while respecting the physical limits of the motion platform. As for the MCA, several transfer functions of the classical algorithm were examined. The final choice of MCA, and the validation of the motion cueing in general, was carried out through two experiments and the judgment of the subjects who took part in them. Moreover, in light of the results of the first experiment, we investigated the influence of the gear-shifting strategy on the driver's perception of motion.
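The exponential scale factor idea — boosting the restitution of small impulses while saturating at the platform's physical limit — can be sketched as a saturating map from model acceleration to platform command. The functional form, limit, and gain below are illustrative assumptions, not the values used on SIMU-LACET:

```python
import numpy as np

def exp_scale(a, a_max=2.0, gain=1.5):
    """Saturating exponential scale factor: for small |a| the output
    slope is `gain` (> 1, so weak impulses are emphasised), while the
    output magnitude never exceeds the platform limit a_max.
    a_max and gain here are illustrative, not SIMU-LACET settings."""
    return np.sign(a) * a_max * (1.0 - np.exp(-gain * np.abs(a) / a_max))
```

Unlike a constant linear scale factor, this map renders weak cues above unity gain yet can never command the platform beyond its stroke-equivalent limit.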

Relevance: 50.00%

Abstract:

Objectives. To evaluate the influence of different tertiary amines on degree of conversion (DC), shrinkage-strain, shrinkage-strain rate, Knoop microhardness, and color and transmittance stabilities of experimental resins containing BisGMA/TEGDMA (3:1 wt), 0.25 wt% camphorquinone, and 1 wt% amine (DMAEMA, CEMA, DMPT, DEPT or DABE). Different light-curing protocols were also evaluated. Methods. DC was evaluated with FTIR-ATR and shrinkage-strain with the bonded-disk method. Shrinkage-strain-rate data were obtained from numerical differentiation of shrinkage-strain data with respect to time. Color stability and transmittance were evaluated after different periods of artificial aging, according to ISO 7491:2000. Results were evaluated with ANOVA, Tukey, and Dunnett's T3 tests (alpha = 0.05). Results. The studied properties were influenced by the amines. DC and shrinkage-strain increased in the sequence: CQ < DEPT < DMPT <= CEMA ≈ DABE < DMAEMA. Both DC and shrinkage were also influenced by the curing protocol, with positive correlations between DC and shrinkage-strain and between DC and shrinkage-strain rate. Materials generally decreased in L* and increased in b*. The strong exception was the resin containing DMAEMA, which did not show dark and yellow shifts. Color varied in the sequence: DMAEMA < DEPT < DMPT < CEMA < DABE. Transmittance varied in the sequence: DEPT ≈ DABE < DABE ≈ DMPT ≈ CEMA < DMPT ≈ CEMA ≈ DMAEMA, being more evident at the wavelength of 400 nm. No correlations between DC and optical properties were observed. Significance. The resin containing DMAEMA showed higher DC, shrinkage-strain, shrinkage-strain rate, and microhardness, in addition to better optical properties. (C) 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
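The shrinkage-strain-rate step — numerical differentiation of the bonded-disk shrinkage-strain curve with respect to time — can be sketched as follows, using a hypothetical exponential-approach curve rather than measured data:

```python
import numpy as np

# Hypothetical bonded-disk shrinkage-strain curve: exponential approach
# to a final strain s_inf with time constant tau (not measured data)
t = np.linspace(0.0, 300.0, 601)          # seconds
s_inf, tau = 3.0, 20.0                    # % strain, seconds
strain = s_inf * (1.0 - np.exp(-t / tau))

# Central-difference differentiation (one-sided at the endpoints)
rate = np.gradient(strain, t)             # % strain per second
```

The peak of the differentiated curve (here near t = 0, analytically s_inf/tau) is the shrinkage-strain-rate maximum that is compared across amines and curing protocols.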

Relevance: 50.00%

Abstract:

We present a comprehensive analytical study of radiative transfer using the method of moments and include the effects of non-isotropic scattering in the coherent limit. Within this unified formalism, we derive the governing equations and solutions describing two-stream radiative transfer (which approximates the passage of radiation as a pair of outgoing and incoming fluxes), flux-limited diffusion (which describes radiative transfer in the deep interior) and solutions for the temperature-pressure profiles. Generally, the problem is mathematically under-determined unless a set of closures (Eddington coefficients) is specified. We demonstrate that the hemispheric (or hemi-isotropic) closure naturally derives from the radiative transfer equation if energy conservation is obeyed, while the Eddington closure produces spurious enhancements of both reflected light and thermal emission. We concoct recipes for implementing two-stream radiative transfer in stand-alone numerical calculations and general circulation models. We use our two-stream solutions to construct toy models of the runaway greenhouse effect. We present a new solution for temperature-pressure profiles with a non-constant optical opacity and elucidate the effects of non-isotropic scattering in the optical and infrared. We derive generalized expressions for the spherical and Bond albedos and the photon deposition depth. We demonstrate that the value of the optical depth corresponding to the photosphere is not always 2/3 (Milne's solution) and depends on a combination of stellar irradiation, internal heat and the properties of scattering both in optical and infrared. Finally, we derive generalized expressions for the total, net, outgoing and incoming fluxes in the convective regime.
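As a point of reference for the photosphere discussion: in the classical gray, Eddington-closure (Milne) solution the temperature equals the effective temperature exactly at optical depth 2/3 — the baseline that the generalized expressions in this work depart from. A minimal numerical sketch (the effective temperature value is an arbitrary choice):

```python
import numpy as np

def gray_temperature(tau, t_eff):
    """Milne's gray-atmosphere profile under the Eddington closure:
    T(tau)^4 = (3/4) * T_eff^4 * (tau + 2/3), so T = T_eff exactly
    at tau = 2/3 (the classical photosphere)."""
    return (0.75 * t_eff**4 * (tau + 2.0 / 3.0)) ** 0.25

tau = np.linspace(0.0, 5.0, 501)
T = gray_temperature(tau, t_eff=1500.0)   # t_eff in K, illustrative
```

The text's point is that with stellar irradiation, internal heat, and non-isotropic scattering in both bands, the photospheric optical depth generally shifts away from this 2/3 value.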

Relevance: 50.00%

Abstract:

The analytical solution to the one-dimensional absorption–conduction heat transfer problem inside a single glass pane is presented, which correctly takes into account all the relevant physical phenomena: the appearance of multiple reflections, the spectral distribution of solar radiation, the spectral dependence of optical properties, the presence of possible coatings, the non-uniform nature of radiation absorption, and the diffusion of heat by conduction across the glass pane. In addition to the well-established direct absorptance αe, the derived solution introduces a new spectral quantity called the direct absorptance moment βe, which indicates where in the glass pane the absorption of radiation actually takes place. The theoretical and numerical comparison of the derived solution with existing approximate thermal models for the absorption–conduction problem reveals that the latter work best for low-absorbing uncoated single glass panes, something not necessarily fulfilled by modern glazings.
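The flavor of the two quantities can be conveyed with a single-pass Beer–Lambert caricature (ignoring the multiple reflections, spectral weighting, and coatings the full solution handles): the total direct absorptance of the pane, and the normalized centroid of where that absorption occurs across the thickness:

```python
import numpy as np

def absorptance_and_moment(kappa, d, n=10001):
    """Single-pass Beer-Lambert caricature for a pane of thickness d
    and absorption coefficient kappa: returns the total direct
    absorptance and the normalised centroid of the absorption
    (0 = irradiated surface, 1 = back surface)."""
    x = np.linspace(0.0, d, n)
    a = kappa * np.exp(-kappa * x)   # local absorption density
    dx = x[1] - x[0]
    alpha_e = np.sum(a) * dx         # ~ 1 - exp(-kappa * d)
    beta_e = np.sum(x * a) * dx / (alpha_e * d)
    return alpha_e, beta_e
```

In the weakly absorbing limit the centroid tends to 0.5 (absorption spread evenly through the pane), while strong absorption pulls it toward the irradiated surface — the kind of information βe encodes.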

Relevance: 50.00%

Abstract:

In this paper we examine the effect of contact angle (or surface wettability) on the convective heat transfer coefficient in microchannels. Slip flow, where the fluid velocity at the wall is non-zero, is most likely to occur in microchannels due to its dependence on shear rate or wall shear stress. We show analytically that, for a constant pressure drop, the presence of slip increases the Nusselt number. In a microchannel heat exchanger we modified the surface wettability from a contact angle of 20 degrees to 120 degrees using thin film coating technology. Apparent slip flow is implied from pressure and flow rate measurements, with a departure from classical laminar friction coefficients above a critical shear rate of approximately 10,000 s(-1). The magnitude of this departure is dependent on the contact angle, with higher contact angle surfaces exhibiting larger pressure drop decreases. Similarly, the non-dimensional heat flux is found to decrease relative to laminar non-slip theory, and this decrease is also a function of the contact angle. Depending on the contact angle and the wall shear rate, variations in the heat transfer rate exceeding 10% can be expected. Thus the contact angle is an important consideration in the design of micro-scale, and even more so, nano-scale heat exchangers. (c) 2006 Elsevier Ltd. All rights reserved.
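The effect of slip at a fixed pressure gradient can be illustrated with the textbook plane-Poiseuille profile augmented by a Navier slip length at both walls (the gap, slip length, and pressure gradient below are arbitrary illustrative values, not the experimental conditions):

```python
import numpy as np

def slip_channel_flow(H, Ls, G_over_mu=1.0, n=20001):
    """Pressure-driven flow between parallel plates of gap H with a
    Navier slip length Ls at both walls, at fixed pressure gradient:
    u(y) = (G/2mu) * (y*(H - y) + Ls*H).  Returns the profile and the
    trapezoidally integrated flow rate per unit width."""
    y = np.linspace(0.0, H, n)
    u = 0.5 * G_over_mu * (y * (H - y) + Ls * H)
    dy = y[1] - y[0]
    Q = np.sum(0.5 * (u[1:] + u[:-1])) * dy
    return u, Q

u_ns, Q_ns = slip_channel_flow(H=1.0, Ls=0.0)    # no-slip reference
u_s, Q_s = slip_channel_flow(H=1.0, Ls=0.1)      # slip length = H/10
enhancement = Q_s / Q_ns                          # analytically 1 + 6*Ls/H
```

The numerically integrated flow rate reproduces the analytical enhancement factor 1 + 6·Ls/H for this geometry, consistent with the measured pressure-drop decrease on high contact angle surfaces at fixed flow conditions.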

Relevance: 50.00%

Abstract:

Thermal characterization of high power light emitting diodes (LEDs) and laser diodes (LDs) is one of the most critical issues in achieving optimal performance such as center wavelength, spectrum, power efficiency, and reliability. Unique electrical/optical/thermal characterizations are proposed to analyze the complex thermal issues of high power LEDs and LDs. First, an advanced inverse approach, based on the transient junction temperature behavior, is proposed and implemented to quantify the resistance of the die-attach thermal interface (DTI) in high power LEDs. A hybrid analytical/numerical model is utilized to determine an approximate transient junction temperature behavior, which is governed predominantly by the resistance of the DTI. Then, an accurate value of the resistance of the DTI is determined inversely from the experimental data over the predetermined transient time domain using numerical modeling. Secondly, the effect of junction temperature on heat dissipation of high power LEDs is investigated. The theoretical aspect of the junction temperature dependency of two major parameters – the forward voltage and the radiant flux – on heat dissipation is reviewed. Actual measurements of the heat dissipation over a wide range of junction temperatures follow, to quantify the effect of the parameters using commercially available LEDs. An empirical model of heat dissipation is proposed for applications in practice. Finally, a hybrid experimental/numerical method is proposed to predict the junction temperature distribution of a high power LD bar. A commercial water-cooled LD bar is used to present the proposed method. A unique experimental setup is developed and implemented to measure the average junction temperatures of the LD bar. After measuring the heat dissipation of the LD bar, the effective heat transfer coefficient of the cooling system is determined inversely. 
The characterized properties are used to predict the junction temperature distribution over the LD bar under high operating currents. The results are presented in conjunction with the wall-plug efficiency and the center wavelength shift.
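The junction-temperature dependence of heat dissipation described above amounts to P_heat = I·V_f(T_j) − Φ_e(T_j). A toy version of such an empirical model, with linear temperature coefficients that are invented for illustration rather than taken from the measurements:

```python
def led_heat_dissipation(current_a, tj_c, vf0=3.0, k_v=-0.003,
                         phi0=1.2, k_phi=-0.002, tj_ref=25.0):
    """Toy empirical model: dissipated heat = electrical power minus
    radiant flux, with the forward voltage and the radiant flux both
    falling linearly with junction temperature.  All coefficients are
    invented for illustration, not fitted to measurements."""
    vf = vf0 + k_v * (tj_c - tj_ref)        # forward voltage, V
    phi = phi0 + k_phi * (tj_c - tj_ref)    # radiant flux, W
    return current_a * vf - phi             # heat dissipation, W

p25 = led_heat_dissipation(1.0, 25.0)       # 1 A at Tj = 25 C
p100 = led_heat_dissipation(1.0, 100.0)     # 1 A at Tj = 100 C
```

Because both V_f and Φ_e drift with T_j, assuming a fixed heat load across a wide junction temperature range — as a naive model would — misstates the power that the cooling path must actually remove.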