977 results for exponential sums
Abstract:
Systematic experiments have been carried out on the thermal and rheological behaviour of the ionic liquid 1-butyl-3-methylimidazolium bis{(trifluoromethyl)sulfonyl}imide, [C(4)mim][NTf2], and, for the first time, on the forced convective heat transfer of an ionic liquid under laminar flow conditions. The results show that the thermal conductivity of the ionic liquid is ~0.13 W m^-1 K^-1 and is almost independent of temperature between 25 and 40 degrees C. Rheological measurements show that [C(4)mim][NTf2] is a Newtonian fluid whose shear viscosity decreases with increasing temperature according to an exponential law over the range 20-90 degrees C. The convective heat transfer experiments demonstrate that the thermal entrance length of the ionic liquid is very large owing to its high viscosity and low thermal conductivity. The convective heat transfer coefficient is observed to be much lower than that of distilled water under the same conditions. The convective heat transfer data are also found to fit well to the conventional Shah equation under the conditions of this work.
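The "exponential law" for the shear viscosity is not spelled out above; a minimal sketch, assuming the common Arrhenius-type form (the prefactor eta_0 and activation energy E_eta are hypothetical fit parameters, not values from the paper):

\eta(T) = \eta_0 \exp\left(\frac{E_\eta}{R T}\right)

where R is the gas constant and T the absolute temperature; eta_0 and E_eta would be obtained by fitting the viscosity data over the 20-90 degrees C range.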
Abstract:
The construction industry is renowned for spending vast sums on the resolution of disputes, but never on their prevention. The purpose of this paper is to analyse the New Engineering Contract (NEC) to determine whether or not adjudication has become misaligned with the contract's objective of promoting effective management. In doing so, the paper examines dispute review boards in order to ascertain whether they could be a viable alternative to adjudication. A sequential mixed methodology is adopted, comprising a detailed literature review and eight semi-structured interviews, culminating in the circulation and analysis of a questionnaire to record the significance of the factors identified. The research concludes that the majority of individuals agree that dispute review boards would be more aligned with the NEC. The familiarity of members, the potential to curb rogue behaviour of parties and the proactive nature of the board are flagged as positive features; however, the cost aspect requires further investigation. The reservations raised in the study about adjudication, such as the priority given to speed over accuracy and the adversarial nature of the process, suggest that a preventative step prior to proceeding to adjudication would align more closely with the three core themes of the NEC contract and would therefore be a positive addition.
Abstract:
We present the one-year-long observing campaign of SN 2012A, which exploded in the nearby (9.8 Mpc) irregular galaxy NGC 3239. The photometric evolution is that of a normal type IIP supernova, with an absolute maximum magnitude of MB = -16.23 ± 0.16 mag. SN 2012A reached a peak luminosity of about 2 × 10^42 erg/s, which is brighter than those of other SNe with a similar 56Ni mass. The latter was estimated from the luminosity in the exponential tail of the light curve and found to be M(56Ni) = 0.011 ± 0.004 Msun. The spectral evolution of SN 2012A is also typical of SNe IIP, from the early spectra dominated by a blue continuum and very broad (~10^4 km/s) Balmer lines, to the late-photospheric spectra characterized by prominent P-Cygni features of metal lines (Fe II, Sc II, Ba II, Ti II, Ca II, Na I D). The photospheric velocity is moderately low, ~3 × 10^3 km/s at 50 days, for the low optical depth metal lines. The nebular spectrum obtained 394 days after the shock breakout shows the typical features of SNe IIP, and the strength of the [O I] doublet suggests a progenitor of intermediate mass, similar to SN 2004et (~15 Msun). A candidate progenitor for SN 2012A has been identified in deep, pre-explosion K'-band Gemini North (NIRI) images and found to be consistent with a star of bolometric magnitude -7.08 ± 0.36 (log L/Lsun = 4.73 ± 0.14 dex). The magnitude of the recovered progenitor in archival images points toward a moderate-mass 10.5 (-2/+4.5) Msun star as the precursor of SN 2012A. The explosion parameters and progenitor mass were also estimated by means of a hydrodynamical model, fitting the bolometric light curve, the velocity and the temperature evolution. We found a best fit for a kinetic energy of 0.48 foe, an initial radius of 1.8 × 10^13 cm and an ejecta mass of 12.5 Msun.
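The 56Ni mass quoted above is inferred from the exponential radioactive tail of the light curve; as a point of reference, a minimal sketch of the commonly used relation (assuming full trapping of the 56Co decay energy; the coefficient is the standard Nadyozhin 1994 value, not taken from this paper):

L_{\rm tail}(t) \approx 1.4 \times 10^{43} \left(\frac{M_{\rm Ni}}{M_\odot}\right) e^{-t/111.3\,{\rm d}}\ {\rm erg\,s^{-1}}

so a measured tail luminosity at epoch t (days since explosion) gives M(56Ni) directly, with 111.3 d the e-folding time of 56Co decay.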
Abstract:
We investigate the violation of local realism in Bell tests involving homodyne measurements performed on multimode continuous-variable states. By binning the measurement outcomes in an appropriate way, we prove that the Mermin-Klyshko inequality can be violated by an amount that grows exponentially with the number of modes. Furthermore, the maximum violation allowed by quantum mechanics can be attained for any number of modes, albeit requiring a quantum state whose generation is hardly practicable. Interestingly, this exponential increase of the violation holds true even for simpler states, such as multipartite GHZ states. The resulting benefit of using more modes is shown to be significant in practical multipartite Bell tests by analyzing the increase of the robustness to noise with the number of modes. In view of the high efficiency achievable with homodyne detection, our results thus open a possible way to feasible loophole-free Bell tests that are robust to experimental imperfections. We provide an explicit example of a three-mode state (a superposition of coherent states) which yields a sizeable violation of the Mermin-Klyshko inequality (around 10%) with homodyne measurements.
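For context, a brief sketch of the Mermin-Klyshko construction behind the exponential scaling mentioned above (standard form, not specific to this paper): with two dichotomic observables a_k, a_k' per mode, the operator is built recursively as

B_1 = a_1, \qquad B_n = \tfrac{1}{2} B_{n-1}(a_n + a_n') + \tfrac{1}{2} B_{n-1}'(a_n - a_n'),

where B_{n-1}' is obtained from B_{n-1} by exchanging primed and unprimed observables. With this normalization, local realism bounds |\langle B_n \rangle| \le 1, whereas quantum mechanics allows up to 2^{(n-1)/2}, so the maximal violation grows exponentially with the number of modes n; binning the homodyne outcomes is what supplies the dichotomic observables a_k, a_k'.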
Abstract:
The exponential growth in user and application data entails new means for providing fault tolerance and protection against data loss. High Performance Computing (HPC) storage systems, which are at the forefront of handling the data deluge, typically employ hardware RAID at the backend. However, such solutions are costly, do not ensure end-to-end data integrity, and can become a bottleneck during data reconstruction. In this paper, we design an innovative solution to achieve a flexible, fault-tolerant, and high-performance RAID-6 solution for a parallel file system (PFS). Our system utilizes low-cost, strategically placed GPUs on both the client and server sides to accelerate parity computation. In contrast to hardware-based approaches, we provide full control over the size, length and location of a RAID array on a per-file basis, end-to-end data integrity checking, and parallelization of RAID array reconstruction. We have deployed our system in conjunction with the widely used Lustre PFS, and show that our approach is feasible and imposes acceptable overhead.
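The paper's GPU implementation is not reproduced here; purely as an illustration of the arithmetic being accelerated, a minimal CPU-side Python sketch of RAID-6 dual-parity (P, Q) computation over GF(2^8) (function names and the choice of generator g = 2 are illustrative assumptions):

def gf_mul(a, b):
    # Multiply two bytes in GF(2^8) using the reduction polynomial 0x11d.
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return result

def raid6_parity(data_blocks):
    # P is the bytewise XOR of the data blocks; Q is the Reed-Solomon syndrome
    # sum of g^i * d_i over GF(2^8), with generator g = 2.
    length = len(data_blocks[0])
    p, q = bytearray(length), bytearray(length)
    for i, block in enumerate(data_blocks):
        g_i = 1
        for _ in range(i):
            g_i = gf_mul(g_i, 2)
        for j in range(length):
            p[j] ^= block[j]
            q[j] ^= gf_mul(g_i, block[j])
    return bytes(p), bytes(q)

# Example with three small data blocks
P, Q = raid6_parity([b"\x01\x02\x03\x04", b"\x05\x06\x07\x08", b"\x09\x0a\x0b\x0c"])

Any two lost blocks (data or parity) can then be recovered by solving the corresponding pair of equations over GF(2^8); the system described above offloads this arithmetic to client- and server-side GPUs and manages the array layout per file within Lustre.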
Abstract:
The relationship between retention loss in single-crystal PbTiO3 ferroelectric thin films and leakage currents is demonstrated by piezoresponse and conductive atomic force microscopy measurements. It was found that the polarization reversal in the absence of an electric field followed a stretched exponential behavior, 1 - exp[-(t/k)^d], with exponent d > 1, which is distinct from a dispersive random walk process with d < 1. The latter has been observed in polycrystalline films, for which retention loss was associated with grain boundaries. The leakage current indicates power-law scaling at short length scales, which strongly depends on the applied electric field. Additional information on the microstructure, which contributes to an explanation of the presence of leakage currents, is provided by high-resolution transmission electron microscopy analysis.
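A minimal sketch (not the authors' analysis code) of how the stretched-exponential retention law above could be fitted to polarization-decay data; the variable names and synthetic data are illustrative:

import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, k, d):
    # Fraction of reversed polarization: 1 - exp(-(t/k)^d)
    return 1.0 - np.exp(-(t / k) ** d)

# Synthetic stand-in for measured piezoresponse data (arbitrary parameters)
t_data = np.logspace(0, 4, 20)
frac = stretched_exp(t_data, 200.0, 1.4)
frac += 0.02 * np.random.default_rng(1).standard_normal(frac.size)

(k_fit, d_fit), _ = curve_fit(stretched_exp, t_data, frac, p0=(100.0, 1.0))
# d_fit > 1 corresponds to the non-dispersive behavior reported above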
Abstract:
Mathematical modelling has become an essential tool in the design of modern catalytic systems. Emissions legislation is becoming increasingly stringent, and so mathematical models of aftertreatment systems must become more accurate in order to provide confidence that a catalyst will convert pollutants over the required range of conditions.
Automotive catalytic converter models contain several sub-models that represent processes such as mass and heat transfer, and the rates at which the reactions proceed on the surface of the precious metal. Of these sub-models, the prediction of the surface reaction rates is by far the most challenging due to the complexity of the reaction system and the large number of gas species involved. The reaction rate sub-model uses global reaction kinetics to describe the surface reaction rate of the gas species and is based on the Langmuir-Hinshelwood equation as further developed by Voltz et al. [1]. The reactions can be modelled using the pre-exponential factors and activation energies of the Arrhenius equations, together with the inhibition terms.
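As an illustration of the structure just described (a simplified form, not the exact Voltz et al. rate expression), a global CO oxidation rate can be written as an Arrhenius term divided by an inhibition term; A, Ea and K1 are the kinds of tunable kinetic parameters discussed below:

import math

def co_oxidation_rate(T, c_co, c_o2, A, Ea, K1):
    # Arrhenius kinetics with a simple Langmuir-Hinshelwood-type inhibition term.
    # T in K; c_co, c_o2 as mole fractions; A, Ea (J/mol) and K1 are fitted parameters.
    R = 8.314
    k = A * math.exp(-Ea / (R * T))
    G = (1.0 + K1 * c_co) ** 2  # illustrative inhibition term
    return k * c_co * c_o2 / G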
The reaction kinetic parameters of aftertreatment models are found from experimental data, where a measured light-off curve is compared against a predicted curve produced by a mathematical model. The kinetic parameters are usually manually tuned to minimize the error between the measured and predicted data. This process is typically long, laborious and prone to misinterpretation, owing to the large number of parameters and the risk of multiple sets of parameters giving acceptable fits. Moreover, the number of coefficients increases greatly with the number of reactions. Therefore, with the growing number of reactions, the task of manually tuning the coefficients is becoming increasingly challenging.
In the presented work, the authors have developed and implemented a multi-objective genetic algorithm to automatically optimize reaction parameters in AxiSuite® [2], a commercial aftertreatment model. The genetic algorithm was developed and expanded from the code presented by Michalewicz et al. [3] and was linked to AxiSuite using the Simulink add-on for Matlab.
The default kinetic values stored within the AxiSuite model were used to generate a series of light-off curves under rich conditions for a number of gas species, including CO, NO, C3H8 and C3H6. These light-off curves were used to generate an objective function.
This objective function was used to generate a measure of fit for the kinetic parameters. The multi-objective genetic algorithm was subsequently used to search between specified limits to attempt to match the objective function. In total the pre-exponential factors and activation energies of ten reactions were simultaneously optimized.
The results reported here demonstrate that, given accurate experimental data, the optimization algorithm is successful and robust in defining the correct kinetic parameters of a global kinetic model describing aftertreatment processes.
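A minimal sketch of the kind of per-species objective such an optimization evaluates for each candidate parameter set (illustrative only; the actual work couples the genetic algorithm to AxiSuite through the Simulink add-on for Matlab, as described above):

import numpy as np

def light_off_error(predicted, measured):
    # Sum of squared differences between predicted and measured
    # conversion-versus-temperature (light-off) curves for one species.
    return float(np.sum((np.asarray(predicted) - np.asarray(measured)) ** 2))

def objectives(candidate_params, simulate, measured_curves):
    # simulate(candidate_params) is a stand-in for a call to the aftertreatment
    # model; it returns a dict mapping species (e.g. "CO", "NO") to predicted curves.
    predicted_curves = simulate(candidate_params)
    return {s: light_off_error(predicted_curves[s], measured_curves[s])
            for s in measured_curves}

The genetic algorithm then searches within the specified parameter limits to minimize these objectives simultaneously.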
Abstract:
A prism coupling arrangement is used to excite surface plasmons at the surface of a thin silver film, and a photon scanning tunnelling microscope is used to detect the evanescent field above the silver surface. Excitation of the silver/air mode of interest is performed at λ1 = 632.8 nm using a tightly focused beam, while control of the tip is effected by exciting a counter-propagating surface plasmon field at a different wavelength, λ2 = 543.5 nm, using an unfocused beam covering a macroscopic area. Propagation of the red surface plasmon is evidenced by an exponential tail extending away from the launch site, but this feature is abruptly truncated if the surface plasmon encounters the edge of the silver film; there is no specularly reflected 'beam'. Importantly, the radiative decay of the surface mode at the film edge is observable only at larger tip-sample separations, emphasizing the importance of accessing the mesoscopic regime.
Abstract:
The spectroscopic capability of the photon scanning tunnelling microscope is exploited to study directly the launch and propagation of surface plasmons on thin silver films. Two input beams, of different wavelength, are incident through the prism in a prism-Ag film-air-fibre tip system. Both excite surface plasmons at the Ag-air interface, and light of both wavelengths is coupled into the fibre probe via the respective surface plasmon evanescent fields. One laser beam is used for instrument control. The second, or probe, beam is tightly focused on the sample, within the area of the unfocused control beam, giving a well-defined, symmetrical and confined surface plasmon launch site. However, the image at the probe wavelength is highly asymmetrical in section, with an exponential tail extending beyond one side of the launch site. This demonstrates in a very direct fashion the propagation of surface plasmons; a propagation length of ~11.7 μm is measured at a probe wavelength of 543.5 nm. On rough Ag films the excitation of localised scattering centres is also observed, in addition to the launch of delocalised surface plasmons.
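For reference, the propagation length quoted above can be read as the 1/e decay length of the surface plasmon intensity along the surface; assuming a simple exponential decay from the launch site,

I(x) = I(0)\, e^{-x/L_{\rm SP}}, \qquad L_{\rm SP} \approx 11.7\ \mu{\rm m}\ {\rm at}\ 543.5\ {\rm nm},

so the exponential tail in the near-field image maps directly onto L_SP.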
Abstract:
We undertake a detailed study of the sets of multiplicity in a second countable locally compact group G and their operator versions. We establish a symbolic calculus for normal completely bounded maps from the space B(L^2(G)) of bounded linear operators on L^2(G) into the von Neumann algebra VN(G) of G, and use it to show that a closed subset E ⊆ G is a set of multiplicity if and only if the set E* = {(s,t) ∈ G × G : ts^-1 ∈ E} is a set of operator multiplicity. Analogous results are established for M1-sets and M0-sets. We show that the property of being a set of multiplicity is preserved under various operations, including taking direct products, and establish an Inverse Image Theorem for such sets. We characterise the sets of finite width that are also sets of operator multiplicity, and show that every compact operator supported on a set of finite width can be approximated by sums of rank one operators supported on the same set. We show that, if G satisfies a mild approximation condition, pointwise multiplication by a given measurable function ψ : G → C defines a closable multiplier on the reduced C*-algebra C*_r(G) of G if and only if Schur multiplication by the function N(ψ) : G × G → C, given by N(ψ)(s,t) = ψ(ts^-1), is a closable operator when viewed as a densely defined linear map on the space of compact operators on L^2(G). Similar results are obtained for multipliers on VN(G).
Abstract:
We use ground-based images of high spatial and temporal resolution to search for evidence of nanoflare activity in the solar chromosphere. Through close examination of more than 1 × 10^9 pixels in the immediate vicinity of an active region, we show that the distributions of observed intensity fluctuations have subtle asymmetries. A negative excess in the intensity fluctuations indicates that more pixels have fainter-than-average intensities compared with those that appear brighter than average. By employing Monte Carlo simulations, we reveal how the negative excess can be explained by a series of impulsive events, coupled with exponential decays, that are fractionally below the current resolving limits of low-noise equipment on high-resolution ground-based observatories. Importantly, our Monte Carlo simulations provide clear evidence that the intensity asymmetries cannot be explained by photon-counting statistics alone. A comparison to the coronal work of Terzo et al. suggests that nanoflare activity in the chromosphere occurs more readily, with an impulsive event occurring every ~360 s in a 10,000 km^2 area of the chromosphere, some 50 times more events than in a comparably sized region of the corona. As a result, nanoflare activity in the chromosphere is likely to play an important role in providing heat energy to this layer of the solar atmosphere.
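A minimal Monte Carlo sketch of the argument above (not the authors' code; the cadence, event amplitude, decay time and count level are arbitrary illustrative values): sub-noise impulsive brightenings with exponential decays pull the per-pixel mean above the typical intensity, so most samples fall below the mean and the fluctuation distribution acquires a negative excess.

import numpy as np

rng = np.random.default_rng(0)
n_pix, n_steps = 2000, 1000              # pixels and time steps (1 s cadence assumed)
quiet = 500.0                            # assumed mean photon counts per pixel
amp, tau, rate = 5.0, 50.0, 1.0 / 360.0  # event amplitude, decay time, event rate (assumed)

t = np.arange(n_steps)
signal = np.full((n_pix, n_steps), quiet)
for pix in range(n_pix):
    for t0 in rng.uniform(0, n_steps, rng.poisson(rate * n_steps)):
        pulse = amp * np.exp(-(t - t0) / tau)
        pulse[t < t0] = 0.0
        signal[pix] += pulse

counts = rng.poisson(signal)                         # photon-counting noise
fluct = counts - counts.mean(axis=1, keepdims=True)
print(np.median(fluct))                              # slightly negative: a faint-pixel excess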
Abstract:
We show that Kraus' property $S_{\sigma}$ is preserved under taking weak* closed sums with masa-bimodules of finite width, and establish an intersection formula for weak* closed spans of tensor products, one of whose terms is a masa-bimodule of finite width. We initiate the study of the question of when operator synthesis is preserved under the formation of products, and prove that the union of finitely many sets of the form $\kappa \times \lambda$, where $\kappa$ is a set of finite width while $\lambda$ is operator synthetic, is, under a necessary restriction on the sets $\lambda$, again operator synthetic. We show that property $S_{\sigma}$ is preserved under spatial Morita subordinance.
Abstract:
We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model, required to fit underlying light-curve variations in the data which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of 'quiet' Kepler stars. As an example, we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares, after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N.
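A minimal sketch of the flare profile described above, a half-Gaussian rise followed by an exponential decay (parameter names are illustrative, not the authors'); in the full method this profile would sit on top of the polynomial background model and be compared against the data via the Bayesian odds ratio:

import numpy as np

def flare_profile(t, t_peak, amplitude, sigma_rise, tau_decay):
    # Half-Gaussian rise for t < t_peak, exponential decay for t >= t_peak.
    t = np.asarray(t, dtype=float)
    flux = np.empty_like(t)
    rising = t < t_peak
    flux[rising] = amplitude * np.exp(-0.5 * ((t[rising] - t_peak) / sigma_rise) ** 2)
    flux[~rising] = amplitude * np.exp(-(t[~rising] - t_peak) / tau_decay)
    return flux

# Example: a flare peaking at t = 10 with a fast rise and a slower decay
t = np.linspace(0, 60, 600)
flux = flare_profile(t, t_peak=10.0, amplitude=1.0, sigma_rise=0.5, tau_decay=8.0)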
Abstract:
We consider in this paper the family of exponential Lie groups G_{n,μ}, whose Lie algebra is an extension of the Heisenberg Lie algebra by the reals and whose quotient group by the centre of the Heisenberg group is an ax + b-like group. The C*-algebras of the groups G_{n,μ} give new examples of almost C_0(K)-C*-algebras.
Abstract:
A parametric regression model for right-censored data, with a log-linear median regression function and a transformation in both response and regression parts, named the parametric Transform-Both-Sides (TBS) model, is presented. The TBS model has a parameter that handles data asymmetry while allowing various distributions for the error, as long as they are unimodal symmetric distributions centered at zero. The discussion focuses on the estimation procedure with five important error distributions (normal, double-exponential, Student's t, Cauchy and logistic) and presents properties, associated functions (that is, survival and hazard functions) and estimation methods based on maximum likelihood and on the Bayesian paradigm. These procedures are implemented in TBSSurvival, an open-source, fully documented R package. The use of the package is illustrated and the performance of the model is analyzed using both simulated and real data sets.