997 results for integrity verification technique
Abstract:
The applicability of the BET model for calculating the surface area of activated carbons is checked using molecular simulations. By computing geometric surface areas for a simple model carbon slit-like pore of increasing width, and comparing the obtained values with those for the same systems from the VEGA ZZ package (adsorbate-accessible molecular surface), it is shown that the latter method provides correct values. For systems where a monolayer is created inside a pore, the ASA approach (GCMC, Ar, T = 87 K) underestimates the surface area of micropores (especially where only one or two layers of adsorbed Ar are formed). We therefore propose a modification of this method based on the relationship between the pore diameter and the number of layers in a pore. Finally, BET, original and modified ASA, and A-, B- and C-point surface areas are calculated for a series of virtual porous carbons using simulated Ar adsorption isotherms (GCMC, T = 87 K). The comparison of results shows that the BET method underestimates, rather than overestimates as usually postulated, the surface areas of microporous carbons.
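As a rough illustration of the BET step referred to above, the sketch below (Python) fits the linearised BET equation over an assumed relative-pressure range and converts the resulting monolayer capacity to a surface area using an assumed Ar cross-section of 0.142 nm²; the isotherm, fitting window and constants are placeholders rather than the simulation data of the paper.

    import numpy as np

    def bet_surface_area(p_rel, n_ads, sigma=0.142e-18, fit_range=(0.05, 0.30)):
        """Estimate a BET surface area (m^2/g) from an adsorption isotherm.

        p_rel : relative pressures p/p0
        n_ads : adsorbed amount (mol/g) at each p/p0
        sigma : molecular cross-sectional area in m^2 (0.142 nm^2 assumed for Ar at 87 K)
        """
        N_A = 6.022e23
        mask = (p_rel >= fit_range[0]) & (p_rel <= fit_range[1])
        x = p_rel[mask]
        # BET linearisation: p/(n(p0-p)) = 1/(n_m C) + (C-1)/(n_m C) * p/p0
        y = x / (n_ads[mask] * (1.0 - x))
        slope, intercept = np.polyfit(x, y, 1)
        n_m = 1.0 / (slope + intercept)   # monolayer capacity, mol/g
        return n_m * N_A * sigma          # surface area, m^2/g

    # Illustrative use with a synthetic Langmuir-like isotherm (not simulation output)
    p = np.linspace(0.01, 0.35, 50)
    n = 4e-3 * 20 * p / (1 + 20 * p)
    print(bet_surface_area(p, n))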
Abstract:
The Self-Organizing Map (SOM) is a popular unsupervised neural network able to provide effective clustering and data visualization for multidimensional input datasets. In this paper, we present an application of the simulated annealing procedure to the SOM learning algorithm with the aim of obtaining faster learning and better performance in terms of quantization error. The proposed learning algorithm, called the Fast Learning Self-Organized Map, does not affect the simplicity of the basic learning algorithm of the standard SOM. It also improves the quality of the resulting maps by providing better clustering quality and topology preservation of the input multidimensional data. Several experiments are used to compare the proposed approach with the original algorithm and some of its modifications and speed-up techniques.
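The abstract does not give the internals of the Fast Learning Self-Organized Map, so the following is only a generic, hypothetical sketch of attaching an annealing-style cooling schedule to the learning rate and neighbourhood radius of a standard SOM, with the quantization error used as a quality measure; all names and parameter values are invented for illustration.

    import numpy as np

    def train_som(data, grid=(10, 10), epochs=50, T0=1.0, cooling=0.9, seed=0):
        """Generic SOM training loop with an annealing-style temperature schedule.

        The temperature T scales both the learning rate and the neighbourhood
        radius, loosely mimicking a simulated-annealing cooling scheme; this is
        a hypothetical sketch, not the algorithm proposed in the paper.
        """
        rng = np.random.default_rng(seed)
        rows, cols = grid
        weights = rng.random((rows, cols, data.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

        T = T0
        for _ in range(epochs):
            for x in data[rng.permutation(len(data))]:
                # best-matching unit (BMU) for this sample
                dists = np.linalg.norm(weights - x, axis=-1)
                bmu = np.unravel_index(np.argmin(dists), dists.shape)
                # neighbourhood shrinks and learning rate drops as T cools
                grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                sigma2 = max(1e-3, (max(rows, cols) * T) ** 2)
                h = np.exp(-grid_d2 / (2.0 * sigma2))
                weights += 0.5 * T * h[..., None] * (x - weights)
            T *= cooling  # cooling schedule

        # quantization error: mean distance of each sample to its BMU
        qe = np.mean([np.min(np.linalg.norm(weights - x, axis=-1)) for x in data])
        return weights, qe

    W, qe = train_som(np.random.default_rng(1).random((200, 3)))
    print(qe)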
Abstract:
We present a new methodology that couples neutron diffraction experiments over a wide Q range with single-chain modelling in order to explore, in a quantitative manner, the intrachain organization of non-crystalline polymers. The technique is based on the assignment of parameters describing the chemical, geometric and conformational characteristics of the polymeric chain, and on the variation of these parameters to minimize the difference between the predicted and experimental diffraction patterns. The method is successfully applied to the study of molten poly(tetrafluoroethylene) at two different temperatures, and provides unambiguous information on the configuration of the chain and its degree of flexibility. From analysis of the experimental data a model is derived with C-C and C-F bond lengths of 1.58 and 1.36 Å, respectively, a backbone valence angle of 110° and a torsional angle distribution characterized by four isomeric states, namely a split trans state at ±18°, giving rise to a helical chain conformation, and two gauche states at ±112°. The probability of trans conformers is 0.86 at T = 350 °C, which decreases slightly to 0.84 at T = 400 °C. Correspondingly, the chain segments are characterized by long all-trans sequences with random changes of sign, anisotropic in nature, which give rise to a rather stiff chain. We compare the results of this quantitative analysis of the experimental scattering data with the theoretical predictions of both force-field and molecular-orbital conformational energy calculations.
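The core of the methodology is a parameter refinement: chain parameters are varied until the predicted diffraction pattern matches the measured one. A minimal sketch of that idea is given below, with a toy two-parameter model and synthetic data standing in for the real single-chain structure-factor calculation and the neutron measurements.

    import numpy as np
    from scipy.optimize import minimize

    # Placeholder 'experimental' data: Q grid and a measured structure factor S_exp(Q)
    Q = np.linspace(1.0, 30.0, 200)
    S_exp = np.sin(1.58 * Q) / (1.58 * Q) + 0.05 * np.cos(0.5 * Q)  # synthetic stand-in

    def predicted_structure_factor(params, Q):
        """Toy single-chain model; the real method would compute this from the
        chain geometry (bond lengths, valence angle, torsional-state populations)."""
        r_cc, amp = params
        return amp * np.sin(r_cc * Q) / (r_cc * Q)

    def misfit(params):
        # chi-square-like measure between predicted and 'experimental' patterns
        return np.sum((predicted_structure_factor(params, Q) - S_exp) ** 2)

    result = minimize(misfit, x0=[1.5, 1.0], method="Nelder-Mead")
    print(result.x)  # refined parameters (here: an effective C-C distance and an amplitude)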
Abstract:
This paper introduces the Hilbert Analysis (HA), a novel digital signal processing technique for the investigation of tremor. The HA comprises two complementary tools, the Empirical Mode Decomposition (EMD) and the Hilbert Spectrum (HS). In this work we show that the EMD can automatically detect and isolate tremulous and voluntary movements from experimental signals collected from 31 patients with different conditions. Our results also suggest that tremor may be described by a new class of mathematical functions defined in the HA framework. In a further study, the HS was employed to visualize the energy activities of the signals. This tool introduces the concept of instantaneous frequency to the field of tremor. In addition, it can provide, in a time-frequency-energy plot, a clear visualization of the local activity of tremor energy over time. The HA proved to be very useful for objective measurements of any kind of tremor and can therefore be used for functional assessment.
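The Hilbert Spectrum side of the analysis rests on the analytic signal. A minimal sketch of extracting instantaneous amplitude and frequency from a single oscillatory component with scipy.signal.hilbert is shown below; the EMD step that would first isolate such components from raw tremor recordings is omitted, and the sampling rate and synthetic signal are assumptions.

    import numpy as np
    from scipy.signal import hilbert

    fs = 200.0                          # sampling rate in Hz (assumed)
    t = np.arange(0, 10, 1 / fs)
    # Synthetic 'tremor-like' component: a 5 Hz oscillation with slow amplitude drift
    x = (1.0 + 0.3 * np.sin(0.2 * 2 * np.pi * t)) * np.sin(2 * np.pi * 5.0 * t)

    analytic = hilbert(x)               # analytic signal x + i*H[x]
    amplitude = np.abs(analytic)        # instantaneous amplitude (energy envelope)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) / (2 * np.pi) * fs   # instantaneous frequency, Hz

    print(inst_freq.mean())             # close to 5 Hz for this synthetic component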
Abstract:
Active learning plays a strong role in mathematics and statistics, and formative problems are vital for developing key problem-solving skills. To keep students engaged and help them master the fundamentals before challenging themselves further, we have developed a system for delivering problems tailored to a student's current level of understanding. Specifically, by adapting simple methodology from clinical trials, a framework for delivering existing problems and other illustrative material has been developed, making use of macros in Excel. The problems are assigned a level of difficulty (a 'dose'), and problems are presented to the student in an order depending on their ability, i.e. based on their performance so far on other problems. We demonstrate and discuss the application of the approach with formative examples developed for a first-year course on plane coordinate geometry, and also for problems centred on the topic of chi-square tests.
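A minimal sketch of the kind of up-and-down rule borrowed from dose-finding designs: the difficulty 'dose' is stepped up after a correct answer and down after an incorrect one, within a fixed range. The rule and the number of levels are illustrative assumptions; the Excel-macro implementation described in the paper is not reproduced.

    def next_difficulty(current, correct, n_levels=5):
        """Simple up-and-down rule: raise the difficulty 'dose' after a correct
        answer and lower it after an incorrect one, staying within 1..n_levels."""
        step = 1 if correct else -1
        return min(n_levels, max(1, current + step))

    # Illustrative session: True = correct answer, False = incorrect
    level = 1
    for outcome in [True, True, False, True, True, True, False]:
        level = next_difficulty(level, outcome)
        print(level)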
Abstract:
Dhaka cheese is a semihard artisanal variety made mainly from bovine milk, using very simple pressing methods. Experimental cheeses were pressed at gauge pressures of up to 31 kPa for 12 h at 24 °C and 70% RH. These cheeses were subsequently examined for their compositional, textural and rheological properties, and their microstructures were investigated by confocal laser microscopy. The cheese pressed at 15.6 kPa was found to have the best compositional and structural properties.
Abstract:
It has been postulated that the R- and S-equol enantiomers have different biological properties given their different binding affinities for the estrogen receptor. S-(-)equol is produced via the bacterial conversion of the soy isoflavone daidzein in the gut. We have compared the biological effects of purified S-equol to those of racemic (R and S) equol on breast and prostate cancer cells of varying receptor status in vitro. Both racemic and S-equol inhibited the growth of the breast cancer cell line MDA-MB-231 (≥ 10 µM) and the prostate cancer cell lines LNCaP (≥ 5 µM) and LAPC-4 (≥ 2.5 µM). The compounds also showed equipotent effects in inhibiting the invasion of MDA-MB-231 and PC-3 cancer cells through Matrigel. S-equol (1, 10, 30 µM) was unable to prevent DNA damage in MCF-7 or MCF-10A breast cells following exposure to 2-hydroxy-4-nonenal, menadione, or benzo(a)pyrene-7,8-dihydrodiol-9,10-epoxide. In contrast, racemic equol (10, 30 µM) prevented DNA damage in MCF-10A cells following exposure to 2-hydroxy-4-nonenal or menadione. These findings suggest that racemic equol has strong antigenotoxic activity in contrast to the purified S-equol enantiomer, implicating the R- rather than the S-enantiomer as responsible for the antioxidant effects of equol, a finding that may have implications for the in vivo chemoprotective properties of equol.
Abstract:
Flow along rivers, an integral part of many cities, might provide a key mechanism for ventilation, which is important for air quality and heat stress. Since the flow varies in space and time around rivers, there is limited utility in point measurements. Ground-based remote sensing offers the opportunity to study 3D flow in locations which are hard to observe. For three months in the winter and spring of 2011, the atmospheric flow above the River Thames in central London was observed using a scanning Doppler lidar, a dual-beam scintillometer and sonic anemometry. First, an inter-comparison showed that lidar-derived mean wind-speed estimates compare almost as well to sonic anemometers (root-mean-square error (rmse) 0.65–0.68 m s⁻¹) as comparisons between sonic anemometers (0.35–0.73 m s⁻¹). Second, the lidar dual-beam scanning strategy provided horizontal transects of wind vectors (comparison with scintillometer rmse 1.12–1.63 m s⁻¹) which revealed the mean and turbulent flow across the river and its surrounds; in particular, channelling of flow along the river and changes in turbulence consistent with the roughness change between the built and river environments. The results have important consequences for air quality and dispersion around urban rivers, especially given that many cities have high traffic rates on bankside roads.
Abstract:
Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill. Nor is there any agreed protocol for estimating their skill. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts. These questions are: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to un-initialized climate change projections? and (2) Is the prediction model’s ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric applied to the initialized hindcasts and comparing different ways to ascribe forecast uncertainty. Verification is advocated at smoothed regional scales that can illuminate broad areas of predictability, as well as at the grid scale, since many users of the decadal prediction experiments who feed the climate data into applications or decision models will use the data at grid scale, or downscale it to even higher resolution. An overall statement on skill of CMIP5 decadal hindcasts is not the aim of this paper. The results presented are only illustrative of the framework, which would enable such studies. However, broad conclusions that are beginning to emerge from the CMIP5 results include (1) Most predictability at the interannual-to-decadal scale, relative to climatological averages, comes from external forcing, particularly for temperature; (2) though moderate, additional skill is added by the initial conditions over what is imparted by external forcing alone; however, the impact of initialization may result in overall worse predictions in some regions than provided by uninitialized climate change projections; (3) limited hindcast records and the dearth of climate-quality observational data impede our ability to quantify expected skill as well as model biases; and (4) as is common to seasonal-to-interannual model predictions, the spread of the ensemble members is not necessarily a good representation of forecast uncertainty. The authors recommend that this framework be adopted to serve as a starting point to compare prediction quality across prediction systems. The framework can provide a baseline against which future improvements can be quantified. The framework also provides guidance on the use of these model predictions, which differ in fundamental ways from the climate change projections that much of the community has become familiar with, including adjustment of mean and conditional biases, and consideration of how to best approach forecast uncertainty.
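One deterministic metric of the kind discussed is a mean-squared-error skill score of the initialized ensemble mean measured against the uninitialized projections. The sketch below is a generic illustration with synthetic data and is not the exact metric set, bias adjustment or data handling of the framework.

    import numpy as np

    def msss(initialized, uninitialized, observations):
        """Mean-squared-error skill score of initialized hindcasts relative to
        uninitialized projections (> 0 means the initialized set is closer to obs).

        initialized, uninitialized : arrays of shape (n_years, n_members)
        observations               : array of shape (n_years,)
        """
        err_init = np.mean((initialized.mean(axis=1) - observations) ** 2)
        err_unin = np.mean((uninitialized.mean(axis=1) - observations) ** 2)
        return 1.0 - err_init / err_unin

    # Synthetic example: a warming trend observed with noise, two hindcast sets
    rng = np.random.default_rng(0)
    obs = np.linspace(0.0, 0.6, 30) + 0.1 * rng.standard_normal(30)
    init = obs[:, None] + 0.15 * rng.standard_normal((30, 10))
    unin = np.linspace(0.0, 0.6, 30)[:, None] + 0.25 * rng.standard_normal((30, 10))
    print(msss(init, unin, obs))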
Abstract:
The application of forecast ensembles to probabilistic weather prediction has spurred considerable interest in their evaluation. Such ensembles are commonly interpreted as Monte Carlo ensembles meaning that the ensemble members are perceived as random draws from a distribution. Under this interpretation, a reasonable property to ask for is statistical consistency, which demands that the ensemble members and the verification behave like draws from the same distribution. A widely used technique to assess statistical consistency of a historical dataset is the rank histogram, which uses as a criterion the number of times that the verification falls between pairs of members of the ordered ensemble. Ensemble evaluation is rendered more specific by stratification, which means that ensembles that satisfy a certain condition (e.g., a certain meteorological regime) are evaluated separately. Fundamental relationships between Monte Carlo ensembles, their rank histograms, and random sampling from the probability simplex according to the Dirichlet distribution are pointed out. Furthermore, the possible benefits and complications of ensemble stratification are discussed. The main conclusion is that a stratified Monte Carlo ensemble might appear inconsistent with the verification even though the original (unstratified) ensemble is consistent. The apparent inconsistency is merely a result of stratification. Stratified rank histograms are thus not necessarily flat. This result is demonstrated by perfect ensemble simulations and supplemented by mathematical arguments. Possible methods to avoid or remove artifacts that stratification induces in the rank histogram are suggested.
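A minimal sketch of the rank histogram, plus a stratified version in which only cases satisfying a condition (here, an arbitrary choice: a positive ensemble mean) are evaluated; for a consistent ensemble the unstratified histogram is roughly flat, while the stratified one need not be, as the paper argues.

    import numpy as np

    def rank_histogram(ensemble, verification):
        """Counts of the rank of each verification within its ordered ensemble.

        ensemble     : array of shape (n_cases, n_members)
        verification : array of shape (n_cases,)
        Returns counts over the n_members + 1 possible ranks.
        """
        n_cases, n_members = ensemble.shape
        # rank = number of ensemble members strictly below the verification
        ranks = np.sum(ensemble < verification[:, None], axis=1)
        return np.bincount(ranks, minlength=n_members + 1)

    rng = np.random.default_rng(0)
    ens = rng.standard_normal((5000, 9))
    ver = rng.standard_normal(5000)             # drawn from the same distribution
    print(rank_histogram(ens, ver))             # roughly flat

    # Stratification example: keep only cases with a positive ensemble mean
    mask = ens.mean(axis=1) > 0
    print(rank_histogram(ens[mask], ver[mask])) # need not be flat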
Abstract:
We present a new technique for correcting errors in radar estimates of rainfall due to attenuation, which is based on the fact that any attenuating target will itself emit, and that this emission can be detected as an increased noise level in the radar receiver. The technique is being installed on the UK operational network and, for the first time, allows radome attenuation to be monitored using the increased noise at the higher beam elevations. This attenuation has a large azimuthal dependence, but for an old radome it can be up to 4 dB for rainfall rates of just 2–4 mm/h. This effect has been neglected in the past, but may be responsible for significant errors in rainfall estimates and in radar calibrations using gauges. The extra noise at low radar elevations provides an estimate of the total path-integrated attenuation of nearby storms; this total attenuation can then be used as a constraint for gate-by-gate or polarimetric correction algorithms.
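The emission-based estimate can be sketched with elementary radiative transfer: an isothermal absorbing layer at physical temperature T_phys with transmissivity t emits a brightness temperature T_phys*(1 - t), so a measured rise in receiver noise temperature yields a one-way path-integrated attenuation. The assumed layer temperature and the idealisation that the noise rise equals the emitted brightness temperature are simplifications for illustration, not the operational correction scheme.

    import numpy as np

    def one_way_attenuation_db(delta_T, T_phys=280.0):
        """One-way path-integrated attenuation (dB) from a rise delta_T (K) in
        receiver noise temperature, assuming an isothermal absorbing layer at
        physical temperature T_phys whose emission alone causes the rise."""
        transmissivity = 1.0 - np.asarray(delta_T, dtype=float) / T_phys
        return -10.0 * np.log10(transmissivity)

    # Example: a 30 K rise in noise temperature gives roughly 0.5 dB one-way
    print(one_way_attenuation_db(30.0))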