904 results for Explosive regimes


Relevance:

10.00%

Publisher:

Abstract:

This presentation outlines results from over twenty countries, describing and analysing for the first time, and in some depth, the many different fundraising environments around the world that are shaped by different historical, cultural, social, religious, political and economic conditions. The data are organized into a new typology of fundraising regimes, which we argue strengthens understanding of the connection between asking and giving. In light of the giving-centric nature of much research, we suggest our focus on ‘askers’ is a useful counterbalance, as giving and asking are so intimately related.

Relevance:

10.00%

Publisher:

Abstract:

This work investigates the effects of contact pressure and geometry in rolling-contact wear tests by using discs with different radii of curvature to simulate the varying contact conditions that may be typically found in the field. The tests were conducted without any significant amount of traction, but micro slip was still observed due to contact deformation. Moreover, variation of contact pressure was observed due to contact patch elongation and diameter reduction. Rolling contact fatigue, adhesive and sliding wear were observed on the curved contact interface. The development of different wear regimes and material removal phenomena were analyzed using microscopic images in order to broaden the understanding of the wear mechanisms occurring in the rail-wheel contact.

Relevance:

10.00%

Publisher:

Abstract:

A miniaturized flow-through system, consisting of a gold-coated silicon substrate and based on enhanced Raman spectroscopy, has been used to study the detection of vapour from model explosive compounds. The measurements show that the detectability of the vapour molecules at room temperature depends sensitively on the interaction between the molecule and the substrate. The results highlight the capability of a flow system combined with Raman spectroscopy for detecting low-vapour-pressure compounds, with a limit of detection of 0.2 ppb demonstrated by the detection of bis(2-ethylhexyl)phthalate, a common polymer additive emitted from commercial polyvinyl chloride (PVC) tubing at room temperature.

Relevance:

10.00%

Publisher:

Abstract:

Here, we describe a metal-insulator-insulator nanofocusing structure formed by a high-permittivity dielectric wedge on a metal substrate. The structure is shown to produce nanofocusing of surface plasmon polaritons (SPPs) in the direction opposite to the taper of the wedge, exhibiting a range of nanoplasmonic effects, including nanofocusing of SPPs with negative refraction, formation of plasmonic caustics within a nanoscale distance from the wedge tip, mutual transformation of SPP modes, and significant local field enhancements in the adiabatic and strongly nonadiabatic regimes. A combination of approximate analytical and rigorous numerical approaches is used to analyze the strength and position of the caustics in the structure. In particular, it is demonstrated that strong SPP localization within spatial regions as small as a few tens of nanometers near the caustic is achievable in the considered structures. Contrary to other nanofocusing configurations, efficient nanofocusing is shown to occur in the strongly nonadiabatic regime, with taper angles of the dielectric wedge as large as ∼40° and within remarkably short distances (as small as a few tens of nanometers) from the tip of the wedge. Physical interpretations of the obtained results are also presented and discussed.
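
As background (the criterion itself is not quoted in the abstract), the adiabatic and strongly nonadiabatic regimes mentioned above are conventionally separated by the WKB applicability parameter

$$ \delta(x) \;=\; \left|\frac{\mathrm{d}}{\mathrm{d}x}\,\frac{1}{\mathrm{Re}\,k_{\mathrm{SPP}}(x)}\right| \;\ll\; 1, $$

where $k_{\mathrm{SPP}}(x)$ is the local SPP wavenumber along the taper; wedge angles as large as ∼40° violate this condition, which is why the efficient focusing reported here is described as strongly nonadiabatic.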

Relevance:

10.00%

Publisher:

Abstract:

We present an algorithm for multiarmed bandits that achieves almost optimal performance in both stochastic and adversarial regimes without prior knowledge about the nature of the environment. Our algorithm is based on augmentation of the EXP3 algorithm with a new control lever in the form of exploration parameters that are tailored individually for each arm. The algorithm simultaneously applies the “old” control lever, the learning rate, to control the regret in the adversarial regime and the new control lever to detect and exploit gaps between the arm losses. This secures problem-dependent “logarithmic” regret when gaps are present without compromising on the worst-case performance guarantee in the adversarial regime. We show that the algorithm can exploit both the usual expected gaps between the arm losses in the stochastic regime and deterministic gaps between the arm losses in the adversarial regime. The algorithm retains a “logarithmic” regret guarantee in the stochastic regime even when some observations are contaminated by an adversary, as long as on average the contamination does not reduce the gap by more than a half. Our results for the stochastic regime are supported by experimental validation.
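
For readers unfamiliar with EXP3, a minimal sketch of an EXP3-style bandit with a per-arm exploration term is given below; the constant learning rate and the simple 1/t exploration schedule are placeholder assumptions for illustration, not the gap-tailored schedule described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_bandit(loss_fn, K, T, eta=0.05):
    """EXP3-style bandit with a per-arm exploration term (illustrative sketch).

    loss_fn(t, arm) must return a loss in [0, 1]. The per-arm rates eps play
    the role of the 'new control lever'; here they simply decay as 1/t for
    every arm rather than being tuned to estimated gaps as in the paper.
    """
    cum_loss_est = np.zeros(K)                      # importance-weighted cumulative loss estimates
    for t in range(1, T + 1):
        eps = np.full(K, min(0.5 / K, 1.0 / t))     # per-arm exploration rates (assumed schedule)
        shifted = cum_loss_est - cum_loss_est.min() # shift for numerical stability
        w = np.exp(-eta * shifted)                  # exponential weights ("old" control lever: eta)
        p = (1.0 - eps.sum()) * w / w.sum() + eps   # mix exponential weights with per-arm exploration
        arm = rng.choice(K, p=p)
        loss = loss_fn(t, arm)
        cum_loss_est[arm] += loss / p[arm]          # unbiased importance-weighted update
    return cum_loss_est

# Toy usage: three arms with mean losses 0.2, 0.5 and 0.5.
estimates = run_bandit(lambda t, a: float(rng.random() < (0.2, 0.5, 0.5)[a]), K=3, T=5000)
```

With the gap-tailored schedule of the abstract, eps would shrink faster for arms whose estimated losses are clearly larger, which is what yields the problem-dependent “logarithmic” regret.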

Relevance:

10.00%

Publisher:

Abstract:

A new physically based classical continuous potential distribution model, particularly considering the channel center, is proposed for a short-channel undoped body symmetrical double-gate transistor. It involves a novel technique for solving the 2-D nonlinear Poisson's equation in a rectangular coordinate system, which makes the model valid from weak to strong inversion regimes and from the channel center to the surface. We demonstrated, using the proposed model, that the channel potential versus gate voltage characteristics for the devices having equal channel lengths but different thicknesses pass through a single common point (termed “crossover point”). Based on the potential model, a new compact model for the subthreshold swing is formulated. It is shown that for the devices having very high short-channel effects (SCE), the effective subthreshold slope factor is mainly dictated by the potential close to the channel center rather than the surface. SCEs and drain-induced barrier lowering are also assessed using the proposed model and validated against a professional numerical device simulator.
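
For reference (the abstract does not reproduce the formula), the subthreshold swing targeted by the compact model is conventionally defined as

$$ S \;=\; \frac{\partial V_{GS}}{\partial \log_{10} I_{DS}} \;=\; \ln(10)\,\frac{kT}{q}\,n, $$

where $n$ is the subthreshold slope factor; the abstract's point is that, for devices with strong short-channel effects, this effective $n$ is governed by the potential near the channel centre rather than at the surface.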

Relevance:

10.00%

Publisher:

Abstract:

Australian Media Law details and explains the complex case law, legislation and regulations governing media practice in areas as diverse as journalism, advertising, multimedia and broadcasting. It examines, in a clear and accessible format, the issues affecting traditional forms of media such as television, radio, film and newspapers, as well as more recent forms such as the internet, online forums and digital technology. New additions to the fifth edition include:
- the implications of new anti-terrorism legislation for journalists;
- developments in privacy law, including Law Reform recommendations for a statutory cause of action to protect personal privacy in Australia and the expanding privacy jurisprudence in the United Kingdom and New Zealand;
- liability of internet search engines and service providers for defamation;
- the High Court decision in Roadshow v iiNet and the position of internet service providers in relation to copyright infringement via their services;
- new suppression order regimes;
- statutory reforms providing journalists with a rebuttable presumption of non-disclosure when called upon to reveal their sources in a court of law;
- recent developments regarding whether journalists can use electronic devices to collect and disseminate information about court proceedings;
- contempt committed by jurors via social media; and
- an examination of recent decisions on defamation, confidentiality, vilification, copyright and contempt.

Relevance:

10.00%

Publisher:

Abstract:

The development of a microstructure in 304L stainless steel during industrial hot-forming operations, including press forging (mean strain rate of 0.15 s⁻¹), rolling/extrusion (2-5 s⁻¹), and hammer forging (100 s⁻¹) at different temperatures in the range 600-1200 °C, was studied with a view to validating the predictions of the processing map. The results have shown that excellent correlation exists between the regimes exhibited by the map and the product microstructures. 304L stainless steel exhibits instability bands when hammer forged at temperatures below 1100 °C, rolled/extruded below 1000 °C, or press forged below 800 °C. All of these conditions must be avoided in mechanical processing of the material. On the other hand, ideally, the material may be rolled, extruded, or press forged at 1200 °C to obtain a defect-free microstructure.

Relevance:

10.00%

Publisher:

Abstract:

The explosive growth in the development of Traditional Chinese Medicine (TCM) has resulted in a continued increase in clinical and research data. The lack of standardised terminology and flaws in the data quality planning and management of TCM informatics are hampering clinical decision-making, drug discovery and education. This paper argues that the introduction of data warehousing technologies to enhance effectiveness and durability in TCM is paramount. To showcase the role of data warehousing in the improvement of TCM, this paper presents a practical model for data warehousing, with detailed explanation, based on structured electronic records, for TCM clinical research and medical knowledge discovery.

Relevance:

10.00%

Publisher:

Abstract:

Power dissipation maps have been generated in the temperature range of 900 °C to 1150 °C and strain rate range of 10⁻³ to 10 s⁻¹ for a cast aluminide alloy Ti-24Al-20Nb using the dynamic material model. The results define two distinct regimes of temperature and strain rate in which efficiency of power dissipation is maximum. The first region, centered around 975 °C/0.1 s⁻¹, is shown to correspond to dynamic recrystallization of the α₂ phase, and the second, centered around 1150 °C/0.001 s⁻¹, corresponds to dynamic recovery and superplastic deformation of the β phase. Thermal activation analysis using the power-law creep equation yielded apparent activation energies of 854 and 627 kJ/mol for the first and second regimes, respectively. Reanalyzing the data by alternate methods yielded activation energies in the range of 170 to 220 kJ/mol and 220 to 270 kJ/mol for the first and second regimes, respectively. Cross slip was shown to constitute the activation barrier in both cases. Two distinct regimes of processing instability, one at high strain rates and the other at low strain rates in the lower temperature region, have been identified, within which shear bands are formed.
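
The apparent activation energies quoted above follow from the standard power-law creep relation (the alloy-specific constants are not given in the abstract):

$$ \dot{\varepsilon} \;=\; A\,\sigma^{n}\exp\!\left(-\frac{Q}{RT}\right) \quad\Longrightarrow\quad Q \;=\; -R\left[\frac{\partial \ln \dot{\varepsilon}}{\partial\,(1/T)}\right]_{\sigma}, $$

so $Q$ is obtained from the slope of $\ln\dot{\varepsilon}$ versus $1/T$ at constant flow stress.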

Relevance:

10.00%

Publisher:

Abstract:

A computational study for the convergence acceleration of Euler and Navier-Stokes computations with upwind schemes has been conducted in a unified framework. It involves the flux-vector splitting algorithms due to Steger-Warming and Van Leer, the flux-difference splitting algorithms due to Roe and Osher, and the hybrid algorithms AUSM (Advection Upstream Splitting Method) and HUS (Hybrid Upwind Splitting). Implicit time integration with line Gauss-Seidel relaxation and multigrid are among the procedures which have been systematically investigated on an individual as well as cumulative basis. The upwind schemes have been tested in various implicit-explicit operator combinations so that the optimal among them can be determined, based on extensive computations for two-dimensional flows in subsonic, transonic, supersonic and hypersonic flow regimes. In this study, the performance of these implicit time-integration procedures has been systematically compared with that of a multigrid-accelerated explicit Runge-Kutta method. It has been demonstrated that a multigrid method employed in conjunction with an implicit time-integration scheme yields distinctly superior convergence compared with either acceleration procedure used alone, provided that the effective smoothers identified in this investigation are prescribed in the implicit operator.
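
As a generic illustration of the multigrid-plus-smoother idea discussed above (not the upwind Euler/Navier-Stokes solver of the study), the sketch below applies Gauss-Seidel relaxation as the smoother within a V-cycle on a toy 1-D Poisson problem; the grid size, sweep counts and model problem are all illustrative assumptions.

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps=2):
    """A few Gauss-Seidel sweeps for -u'' = f on a uniform grid (the 'smoother')."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def v_cycle(u, f, h):
    """One multigrid V-cycle for the toy 1-D Poisson problem (illustrative only)."""
    u = gauss_seidel(u, f, h)                                 # pre-smoothing
    if len(u) <= 3:
        return gauss_seidel(u, f, h, sweeps=50)               # coarsest grid: relax to convergence
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)        # residual of -u'' = f
    e_coarse = v_cycle(np.zeros_like(r[::2]), r[::2].copy(), 2.0 * h)     # coarse-grid correction
    u += np.interp(np.arange(len(u)), np.arange(0, len(u), 2), e_coarse)  # prolongation
    return gauss_seidel(u, f, h)                              # post-smoothing

# Toy usage on 129 points: -u'' = pi^2 sin(pi x) with u(0) = u(1) = 0.
n = 129
x = np.linspace(0.0, 1.0, n)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, np.pi ** 2 * np.sin(np.pi * x), 1.0 / (n - 1))
```

In the flow-solver setting of the abstract, the role of the smoother is played by the implicit relaxation scheme, which is why the choice of an effective smoother is reported to be decisive.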

Relevance:

10.00%

Publisher:

Abstract:

Data generated via user activity on social media platforms is routinely used for research across a wide range of social sciences and humanities disciplines. The availability of data through the Twitter APIs in particular has afforded new modes of research, including in media and communication studies; however, there are practical and political issues with gaining access to such data, and with the consequences of how that access is controlled. In their paper ‘Easy Data, Hard Data’, Burgess and Bruns (2015) discuss both the practical and political aspects of Twitter data as they relate to academic research, describing how communication research has been enabled, shaped and constrained by Twitter’s “regimes of access” to data, the politics of data use, and emerging economies of data exchange. This conceptual model, including the ‘easy data, hard data’ formulation, can also be applied to Sina Weibo. In this paper, we build on this model to explore the practical and political challenges and opportunities associated with the ‘regimes of access’ to Weibo data, and their consequences for digital media and communication studies. We argue that in the Chinese context, the politics of data access can be even more complicated than in the case of Twitter, which makes scientific research relying on large social data from this platform more challenging in some ways, but potentially richer and more rewarding in others.

Relevance:

10.00%

Publisher:

Abstract:

We consider estimating the total load from frequent flow data but less frequent concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes it impossible to assess trends or determine optimal sampling regimes. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. This method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One dataset is from the Burdekin River and consists of total suspended sediment (TSS), nitrogen oxide (NOx) concentrations and gauged flow for 1997. The other dataset is from the Tully River, for the period of July 2000 to June 2008. For NOx in the Burdekin, the new estimates are very similar to the ratio estimates even when there is no relationship between the concentration and the flow. However, for the Tully dataset, by incorporating the additional predictive variables, namely the discounted flow and flow phases (rising or recessing), we substantially improved the model fit, and thus the certainty with which the load is estimated.
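
A minimal sketch of this kind of predictive concentration model is given below; the log-log functional form, the discount factor and all variable names are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def discounted_flow(q, alpha=0.95):
    """Cumulative discounted flow, a rough proxy for constituent exhaustion
    during events (alpha is an assumed discount factor)."""
    d = np.zeros_like(q, dtype=float)
    for t in range(1, len(q)):
        d[t] = alpha * d[t - 1] + q[t - 1]
    return d

def fit_rating_curve(q, conc, rising):
    """Least-squares fit of log-concentration on log-flow plus extra predictors:
    a rising/falling-limb indicator and the log of the discounted flow."""
    X = np.column_stack([np.ones_like(q), np.log(q), rising.astype(float),
                         np.log1p(discounted_flow(q))])
    beta, *_ = np.linalg.lstsq(X, np.log(conc), rcond=None)
    return beta

# Toy usage with synthetic data.
rng = np.random.default_rng(1)
q = np.exp(rng.normal(3.0, 0.5, size=200))                            # synthetic flows
rising = np.r_[np.diff(q) > 0, False]                                  # crude rising-limb indicator
conc = np.exp(-1.0 + 0.6 * np.log(q) + rng.normal(0.0, 0.2, size=200))
beta = fit_rating_curve(q, conc, rising)
```

In practice the fit would use only the times at which concentration was actually sampled, with the fitted curve then applied across the full flow record.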

Relevance:

10.00%

Publisher:

Abstract:

There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes it impossible to assess trends or determine optimal sampling regimes. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps:
- (i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
- (ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates are not collected;
- (iii) establish a predictive model for the concentration data, which incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
- (iv) obtain the sum of all the products of the predicted flow and the predicted concentration over the regular time intervals to represent an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in model errors, which is the result of intensive sampling during floods. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. This method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using the concentrations of total suspended sediment (TSS) and nitrogen oxide (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2 to 10 times, indicating severe bias. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when sampling bias is taken into account.
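
A compact sketch of steps (iii)-(iv) is given below, assuming steps (i)-(ii) have already produced a regular-interval flow series and that a concentration model has been fitted; the toy rating curve and its coefficients are purely illustrative.

```python
import numpy as np

def load_estimate(q_regular, conc_model, dt_seconds):
    """Steps (iii)-(iv): predict concentration on the regular flow grid and
    sum flow x concentration x time step. Units follow the inputs
    (e.g. flow in m^3/s times concentration in g/m^3 times seconds gives grams)."""
    c_pred = conc_model(q_regular)                             # step (iii): predicted concentrations
    return float(np.sum(q_regular * c_pred) * dt_seconds)      # step (iv): load estimate

# Steps (i)-(ii) are assumed done: a 10-minute flow series over one day,
# generated synthetically here in place of the time-series model output.
t = np.arange(0, 24 * 3600, 600.0)
q = 50.0 + 30.0 * np.sin(2.0 * np.pi * t / (24 * 3600))
toy_rating_curve = lambda q: np.exp(-2.0 + 0.8 * np.log(q))    # illustrative coefficients only
total_load = load_estimate(q, toy_rating_curve, dt_seconds=600.0)
```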

Relevance:

10.00%

Publisher:

Abstract:

Miniaturization of analytical instrumentation is attracting growing interest in response to the explosive demand for rapid yet sensitive analytical methods and low-cost, highly automated instruments for pharmaceutical and bioanalyses and environmental monitoring. Microfabrication technology, in particular, has enabled fabrication of low-cost microdevices with a high degree of integrated functions, such as sample preparation, chemical reaction, separation, and detection, on a single microchip. These miniaturized total chemical analysis systems (microTAS or lab-on-a-chip) can also be arrayed for parallel analyses in order to accelerate sample throughput. Other motivations include reduced sample consumption and waste production as well as increased speed of analysis. One of the most promising hyphenated techniques in analytical chemistry is the combination of a microfluidic separation chip and a mass spectrometer (MS). In this work, emerging polymer microfabrication techniques, ultraviolet lithography in particular, were exploited to develop a capillary electrophoresis (CE) separation chip which incorporates a monolithically integrated electrospray ionization (ESI) emitter for efficient coupling with MS. The epoxy photoresist SU-8 was adopted as the structural material and characterized with respect to its physicochemical properties relevant to chip-based CE and ESI/MS, namely surface charge, surface interactions, heat transfer, and solvent compatibility. As a result, SU-8 was found to be a favorable material to substitute for the more commonly used glass and silicon in microfluidic applications. In addition, infrared (IR) thermography was introduced as a direct, non-intrusive method to examine heat transfer and thermal gradients during microchip CE. The IR data were validated through numerical modeling. The analytical performance of the SU-8-based microchips was established for qualitative and quantitative CE-ESI/MS analysis of small drug compounds, peptides, and proteins. The CE separation efficiency was found to be similar to that of commercial glass microchips and conventional CE systems. Typical analysis times were only 30-90 s per sample, indicating feasibility for high-throughput analysis. Moreover, a mass detection limit at the low-attomole level, as low as 10E+5 molecules, was achieved utilizing MS detection. The SU-8 microchips developed in this work could also be mass-produced at low cost and with nearly identical performance from chip to chip. Until this work, attempts to combine CE separation with ESI in a chip-based system amenable to batch fabrication and capable of high, reproducible analytical performance had not been successful. Thus, the CE-ESI chip developed in this work is a substantial step toward lab-on-a-chip technology.
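
For scale, the quoted mass detection limit can be checked with a one-line conversion:

$$ 1\ \mathrm{amol} \;=\; 10^{-18}\ \mathrm{mol}\times 6.022\times 10^{23}\ \mathrm{mol^{-1}} \;\approx\; 6\times 10^{5}\ \mathrm{molecules}, $$

so a limit quoted as 10E+5 molecules does indeed sit at the single-attomole scale.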