967 results for Average method


Relevance: 30.00%

Abstract:

Food safety has become an issue of increasing interest to consumers and the media. It has also become a source of concern, as the amount of information on the risks related to food safety continues to expand. Today, risk and safety are permanent elements within the concept of food quality. Safety, in particular, is an attribute that consumers find very difficult to assess. The literature in this study consists of three main themes: traceability; consumer behaviour related to quality, safety and the perception of risk; and valuation methods. The empirical scope of the study was restricted to beef, because the beef labelling system enables reliable tracing of the origin of beef, as well as of attributes related to safety, environmental friendliness and animal welfare. The purpose of this study was to examine what kind of information flows are required to ensure quality and safety in the food chain for beef, and who should produce that information. Studying consumers' willingness to pay makes it possible to determine whether they consider the quantity of information available on the safety and quality of beef sufficient. One of the main findings of this study was that the majority of Finnish consumers (73%) regard increased quality information as beneficial. These benefits were assessed using the contingent valuation method. The results showed that those who were willing to pay for increased information on the quality and safety of beef would accept an average price increase of 24% per kilogram. The results also showed that certain risk factors affect consumer willingness to pay: respondents who considered genetic modification of food or foodborne zoonotic diseases to be harmful or extremely harmful risk factors were more likely to be willing to pay for quality information. The results produced by the models thus confirmed the premise that certain food-related risks affect willingness to pay for beef quality information. The results also showed that safety-related quality cues are significant to consumers. Above all, consumers would like to receive information on the control of zoonotic diseases that are contagious to humans; other process-control-related information also ranked high among the top responses. Information on any potential genetic modification was likewise considered important, even though genetic modification was not regarded as a high risk factor.
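For readers unfamiliar with the contingent valuation mechanics behind estimates like the one above, the sketch below shows the standard dichotomous-choice approach: respondents accept or reject a proposed price premium ("bid"), a logit model of the yes/no responses is fitted, and mean WTP is computed as -a/b (Hanemann's formula for the linear logit). The survey data here are synthetic placeholders, generated so the toy estimate lands near the reported 24%; this illustrates the method class, not the study's actual model.

```python
import numpy as np

# Dichotomous-choice contingent valuation sketch: fit a logit
# Pr(yes) = 1 / (1 + exp(-(a + b*bid))) and report mean WTP = -a/b.
# All data below are synthetic placeholders, not the study's survey.

rng = np.random.default_rng(42)
bids = rng.choice([5, 10, 20, 30, 40], size=500)     # proposed % premium
true_a, true_b = 2.4, -0.10                          # hypothetical truth
p_yes = 1 / (1 + np.exp(-(true_a + true_b * bids)))
yes = rng.random(500) < p_yes                        # simulated answers

# Fit the logit by Newton-Raphson on the log-likelihood.
X = np.column_stack([np.ones_like(bids, dtype=float), bids.astype(float)])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (yes - p)                           # score vector
    hess = -(X * (p * (1 - p))[:, None]).T @ X       # observed Hessian
    beta -= np.linalg.solve(hess, grad)

a_hat, b_hat = beta
print(f"mean WTP ~ {-a_hat / b_hat:.1f}% price premium")
```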

Relevance: 30.00%

Abstract:

New Zealand's Greenhouse Gas Inventory (the NZ Inventory) currently estimates methane (CH4) emissions from anaerobic dairy effluent ponds by: (1) determining the total pond volume across New Zealand; (2) dividing this volume by depth to obtain the total pond surface area; and (3) multiplying this area by an observational average CH4 flux. Unfortunately, a mathematically erroneous determination of pond volume has led to an imbalanced equation, and a geometry error was made when scaling up the observational CH4 flux. Furthermore, even if these errors are corrected, the nationwide estimate still hinges on field data from a study that used a debatable method to measure pond CH4 emissions at a single site, as well as on a potentially inaccurate estimate of the amount of organic waste treated anaerobically. The development of a new methodology is therefore critically needed.
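As a minimal sketch, the three-step estimate described above can be written out as follows; all input values here are illustrative placeholders, not inventory figures.

```python
# Minimal sketch of the NZ Inventory's three-step estimate of CH4
# emissions from anaerobic dairy effluent ponds, as described above.
# All numbers are illustrative placeholders, not inventory values.

def pond_ch4_estimate(total_volume_m3: float,
                      mean_depth_m: float,
                      mean_flux_g_ch4_per_m2_day: float) -> float:
    """Return a nationwide annual CH4 estimate in tonnes."""
    total_area_m2 = total_volume_m3 / mean_depth_m          # step 2
    g_per_day = total_area_m2 * mean_flux_g_ch4_per_m2_day  # step 3
    return g_per_day * 365 / 1e6                            # g/day -> t/yr

# Hypothetical inputs:
print(pond_ch4_estimate(total_volume_m3=5e6,
                        mean_depth_m=2.0,
                        mean_flux_g_ch4_per_m2_day=50.0))
```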

Relevance: 30.00%

Abstract:

Masonry under compression is affected by the properties of its constituents and their interfaces. In spite of extensive investigation of the behaviour of masonry under compression, the information in the literature cannot be regarded as comprehensive, because new-generation products continue to emerge – for example, polymer-modified thin-layer mortared masonry and drystack masonry. As comprehensive experimental studies are very expensive, an analytical model inspired by damage mechanics is developed and applied to the prediction of the compressive behaviour of masonry in this paper. The model incorporates a parabolic, progressively softening stress-strain curve for the units and a progressively stiffening stress-strain curve, up to a threshold strain, for the combined mortar and unit-mortar interfaces. The model simulates the mutual constraints imposed by each of these constituents through their respective tensile and compressive behaviour and volumetric changes. The advantage of the model is that it requires only the properties of the constituents, treats masonry as a continuum and computes the average properties of composite masonry prisms/wallettes; unlike finite element methods, it does not require discretisation of the prism or wallette. The capability of the model in capturing the phenomenological behaviour of masonry, with appropriate elastic response, stiffness degradation and post-peak softening, is presented through numerical examples. The fitting of experimental data to the model parameters is demonstrated through calibration against selected test data on units and mortar from the literature; the calibrated model is shown to predict quite well the experimentally determined responses of masonry built using the corresponding units and mortar. Through a series of sensitivity studies, the model is also shown to predict the masonry strength appropriately for changes to the properties of the units and mortar, the mortar joint thickness and the ratio of the height of the unit to the mortar joint thickness. The unit strength is shown to affect the masonry strength significantly. Although the mortar strength has only a marginal effect, a reduction in mortar joint thickness is shown to have a profound effect on the masonry strength. The results obtained from the model are compared with the various provisions in the Australian Masonry Structures Standard AS3700 (2011) and Eurocode 6.
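The sketch below illustrates the two constituent stress-strain assumptions named above: a parabolic softening law for the unit and a progressively stiffening law, capped at a threshold strain, for the mortar, combined here only in a simple iso-stress (series) stack. The functional forms and all parameter values are assumptions for illustration; the paper's coupled volumetric-constraint and damage computations are not reproduced.

```python
import numpy as np

# Toy constituent laws: parabolic softening unit, stiffening mortar.
# Forms and parameters are illustrative assumptions, not the paper's
# calibrated model.

F_CU, EPS0 = 20.0, 0.003          # unit peak stress (MPa), peak strain
E0, EPS_T, N = 1000.0, 0.01, 2.0  # mortar law parameters (assumed)

def unit_strain(sigma):
    """Ascending-branch inverse of the parabola sigma = f*(2x - x^2)."""
    return EPS0 * (1.0 - np.sqrt(1.0 - sigma / F_CU))

def mortar_strain(sigma):
    """Numerical inverse of the stiffening law (monotone up to eps_t)."""
    grid = np.linspace(0.0, EPS_T, 2000)
    stress = E0 * grid * (1.0 + (grid / EPS_T) ** N)
    return np.interp(sigma, stress, grid)

# Iso-stress (series) stack: height-weighted average prism strain for a
# hypothetical 76 mm unit and 10 mm joint, on the ascending branch.
h_u, h_m = 76.0, 10.0
for sigma in (5.0, 10.0, 15.0):
    eps = (h_u * unit_strain(sigma) + h_m * mortar_strain(sigma)) / (h_u + h_m)
    print(f"sigma = {sigma:4.1f} MPa -> average strain = {eps:.5f}")
```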

Relevance: 30.00%

Abstract:

Background: It is recognised that patients with chronic disease are unable to remember correctly the information provided by health care professionals. The teach-back method is acknowledged as a technique for improving patients' understanding, yet it is not used in nursing practice in Vietnam. Objectives: This study sought to examine cardiac nurses' background knowledge of heart failure and to introduce education on heart failure self-management, together with the teach-back method, to assist in teaching patients self-care. The study also explored whether a short educational session could improve nurses' knowledge sufficiently to qualify them to deliver education to patients. Methods: A pre/post-test design was employed. Cardiac nurses from three hospitals (Vietnam National Heart Institute, E Hospital, Huu Nghi Hospital) were invited to attend a six-hour educational session covering both the teach-back method and heart failure self-management. Role-play with scenarios was used to reinforce the educational content. The Dutch Heart Failure Knowledge Scale was used to assess nurses' knowledge of heart failure at baseline and after the educational session. Results: Twenty nurses from the three selected hospitals participated. The average age was 34.5±7.9 years and the average nursing experience was 11.6±8.3 years. The heart failure knowledge score was 12.7±1.2 at baseline and 13.8±1.0 after the education. Nurses' knowledge was deficient regarding fluid restriction in people with heart failure and the causes of worsening heart failure. Heart failure knowledge improved significantly following the workshop (p < 0.001), and all nurses achieved an overall adequate knowledge score (≥11 of the maximum 15) at the end. All of the nurses agreed that the teach-back method was effective and could be used to educate patients about heart failure self-management. Conclusions: The results of this study demonstrate the effectiveness of the pilot education in increasing nurses' knowledge of heart failure. The teach-back method is acceptable to Vietnamese nurses for use in routine cardiac practice.

Relevance: 30.00%

Abstract:

The following problem is considered. Given the locations of the Central Processing Unit (CPU) and the terminals which have to communicate with it, determine the number and locations of the concentrators, and assign the terminals to the concentrators, in such a way that the total cost is minimized. There is also a fixed cost associated with each concentrator, and there is an upper limit to the number of terminals which can be connected to a concentrator. The terminals can also be connected directly to the CPU. In this paper it is assumed that the concentrators can be located anywhere in the area A containing the CPU and the terminals. The problem then becomes one of multimodal optimization. In the proposed algorithm a stochastic automaton is used as a search device to locate the minimum of the multimodal cost function. The proposed algorithm involves the following. The area A containing the CPU and the terminals is divided into an arbitrary number of regions (say K). An approximate value for the number of concentrators is assumed (say m); the optimum number is determined later by iteration. The m concentrators can be assigned to the K regions in m^K ways (m > K) or K^m ways (K > m). (All possible assignments are feasible, i.e. a region can contain 0, 1, …, m concentrators.) Each possible assignment is taken to represent a state of a stochastic variable-structure automaton. To start with, all the states are assigned equal probabilities. At each stage of the search the automaton visits a state according to the current probability distribution, and at each visit it selects a 'point' inside that state with uniform probability. The cost associated with that point is calculated and the average cost of that state is updated; the probabilities of all the states are then updated, being taken to be inversely proportional to the average costs of the states. After a certain number of searches the search probabilities become stationary and the automaton visits a particular state again and again; the automaton is then said to have converged to that state. The exact locations of the concentrators are then determined by conducting a local gradient search within that state. This algorithm was applied to a set of test problems, the results were compared with those given by Cooper's (1964, 1967) EAC algorithm, and on average the proposed algorithm was found to perform better.
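A minimal sketch of the probability-updating search is given below, reduced to one dimension for clarity: the search area is divided into K cells (states), each visit samples a uniform point inside the chosen state, and the state probabilities are kept inversely proportional to the running average costs. The toy cost function stands in for the concentrator-location cost; a local gradient search inside the winning cell would then refine the solution.

```python
import math
import random

# Toy stochastic-automaton search over K cells of [0, 1]; the cost
# function below is an illustrative multimodal stand-in, not the
# concentrator-location cost.

def cost(x):
    return math.sin(5 * x) + 0.5 * (x - 0.7) ** 2

K = 10                       # number of states (cells of [0, 1])
avg = [0.0] * K              # running average cost per state
n = [0] * K                  # visits per state

random.seed(1)
for step in range(5000):
    # Visit probability inversely proportional to average cost,
    # shifted by +2 to keep all weights positive (cost can be < 0).
    w = [1.0 / (avg[i] + 2.0) for i in range(K)]
    s = random.choices(range(K), weights=w)[0]
    x = (s + random.random()) / K      # uniform point inside the state
    n[s] += 1
    avg[s] += (cost(x) - avg[s]) / n[s]  # incremental average update

best = min(range(K), key=lambda i: avg[i])
print(f"lowest average cost in cell [{best / K:.1f}, {(best + 1) / K:.1f})")
```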

Relevance: 30.00%

Abstract:

We evaluated trained listener-based acoustic sampling as a reliable and non-invasive method for rapid assessment of ensiferan species diversity in tropical evergreen forests. This was done by evaluating the reliability of identification of species and numbers of calling individuals using psychoacoustic experiments in the laboratory and by comparing psychoacoustic sampling in the field with ambient noise recordings made at the same time. The reliability of correct species identification by the trained listener was 100% for 16 out of 20 species tested in the laboratory. The reliability of identifying the numbers of individuals correctly was 100% for 13 out of 20 species. The human listener performed slightly better than the instrument in detecting low frequency and broadband calls in the field, whereas the recorder detected high frequency calls with greater probability. To address the problem of pseudoreplication during spot sampling in the field, we monitored the movement of calling individuals using focal animal sampling. The average distance moved by calling individuals for 17 out of 20 species was less than 1.5 m in half an hour. We suggest that trained listener-based sampling is preferable for crickets and low frequency katydids, whereas broadband recorders are preferable for katydid species with high frequency calls for accurate estimation of ensiferan species richness and relative abundance in an area.

Relevance: 30.00%

Abstract:

When authors of scholarly articles decide where to submit their manuscripts for peer review and eventual publication, they often base their choice of journals on very incomplete information about how well the journals serve the authors' purposes of informing about their research and advancing their academic careers. The purpose of this study was to develop and test a new method for benchmarking scientific journals that provides more information to prospective authors. The method estimates a number of journal parameters, including readership, scientific prestige, time from submission to publication, acceptance rate and the service provided by the journal during the review and publication process. The method uses data directly obtainable from the web, data that can be calculated from such data, data obtained from publishers and editors, and data obtained from author surveys; it has been tested on three different sets of journals, each from a different discipline. We found a number of problems with the different data acquisition methods, which limit the extent to which the method can be used. Publishers and editors are reluctant to disclose important information they have at hand (e.g. journal circulation, web downloads, acceptance rate). The calculation of some important parameters (for instance, average time from submission to publication, or the regional spread of authorship) is possible but requires quite a lot of work, and it can be difficult to obtain reasonable response rates to author surveys. All in all, we believe that the method we propose, taking a “service to authors” perspective as a basis for benchmarking scientific journals, is useful and can provide information that is valuable to prospective authors in selected scientific disciplines.
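As an illustration of the kind of calculation mentioned above, the sketch below computes the average submission-to-publication time and a simple regional-spread measure from article metadata; the column names and the tiny inline dataset are hypothetical, not data from the study.

```python
import pandas as pd

# Sketch of two journal benchmark parameters computed from article
# metadata. The columns and records below are hypothetical placeholders.

articles = pd.DataFrame({
    "submitted": pd.to_datetime(["2021-01-10", "2021-03-02", "2021-04-20"]),
    "published": pd.to_datetime(["2021-07-01", "2021-11-15", "2021-10-05"]),
    "first_author_country": ["FI", "DE", "FI"],
})

# Average time from submission to publication, in days.
delay_days = (articles["published"] - articles["submitted"]).dt.days
print("mean submission-to-publication delay:", delay_days.mean(), "days")

# Regional spread of authorship as the share of distinct countries.
spread = articles["first_author_country"].nunique() / len(articles)
print("regional spread of authorship:", round(spread, 2))
```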

Relevance: 30.00%

Abstract:

In this paper a new parallel algorithm for nonlinear transient dynamic analysis of large structures is presented. An unconditionally stable Newmark-beta method (the constant average acceleration technique) has been employed for time integration. The proposed parallel algorithm has been devised within the broad framework of domain decomposition techniques. However, unlike most of the existing parallel algorithms devised for structural dynamic applications, which are basically derived using non-overlapped domains, the proposed algorithm uses overlapped domains. The parallel overlapped domain decomposition algorithm proposed in this paper has been formulated by splitting the mass, damping and stiffness matrices arising out of the finite element discretisation of a given structure. A predictor-corrector scheme has been formulated for iteratively improving the solution in each step. A computer program based on the proposed algorithm has been developed and implemented using the Message Passing Interface (MPI), and the PARAM-10000 MIMD parallel computer has been used to evaluate its performance. Numerical experiments have been conducted to validate the algorithm as well as to evaluate its performance, and comparisons have been made with conventional non-overlapped domain decomposition algorithms. The numerical studies indicate that the proposed algorithm is superior in performance to the conventional domain decomposition algorithms.
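For reference, a serial sketch of the time integrator named above (Newmark-beta with the constant average acceleration parameters beta = 1/4, gamma = 1/2) is given below for a small linear system; the paper's actual contribution, the overlapped domain decomposition with predictor-corrector parallel iteration, is not reproduced here.

```python
import numpy as np

# Serial Newmark constant-average-acceleration stepping for a small
# linear system M u'' + C u' + K u = F(t). The two-DOF demo properties
# are hypothetical.

def newmark_caa(M, C, K, F, u0, v0, dt, steps):
    beta, gamma = 0.25, 0.5
    a0, a1 = 1 / (beta * dt * dt), gamma / (beta * dt)
    a2, a3 = 1 / (beta * dt), 1 / (2 * beta) - 1
    a4, a5 = gamma / beta - 1, dt * (gamma / (2 * beta) - 1)
    Keff = K + a0 * M + a1 * C                      # effective stiffness
    u, v = u0.copy(), v0.copy()
    a = np.linalg.solve(M, F(0.0) - C @ v - K @ u)  # initial acceleration
    for i in range(1, steps + 1):
        rhs = (F(i * dt) + M @ (a0 * u + a2 * v + a3 * a)
                         + C @ (a1 * u + a4 * v + a5 * a))
        u_new = np.linalg.solve(Keff, rhs)
        a_new = a0 * (u_new - u) - a2 * v - a3 * a
        v = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
    return u, v

M = np.diag([1.0, 1.0]); C = 0.02 * np.eye(2)
K = np.array([[40.0, -20.0], [-20.0, 20.0]])
u, v = newmark_caa(M, C, K, lambda t: np.array([0.0, 1.0]),
                   np.zeros(2), np.zeros(2), dt=0.01, steps=1000)
print("displacement after 10 s:", u)
```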

Relevance: 30.00%

Abstract:

A number of geophysical methods have been proposed for near-surface site characterization and measurement of shear wave velocity, using a great variety of testing configurations, processing techniques and inversion algorithms. Two widely used techniques in particular are SASW (Spectral Analysis of Surface Waves) and MASW (Multichannel Analysis of Surface Waves). MASW is increasingly being applied in earthquake geotechnical engineering for local site characterization, microzonation and site response studies. MASW is a geophysical method that generates a shear-wave velocity (Vs) profile (i.e., Vs versus depth) by analyzing Rayleigh-type surface waves on a multichannel record. A MASW system consisting of a 24-channel Geode seismograph with 24 geophones of 4.5 Hz frequency has been used in this investigation. For the site characterization program, MASW field experiments consisting of 58 one-dimensional shear wave velocity tests and 20 two-dimensional shear wave tests have been carried out. The survey points have been selected in such a way that the results represent the whole of metropolitan Bangalore, which has an area of 220 km². The average shear wave velocity of Bangalore soils has been evaluated for depths of 5 m, 10 m, 15 m, 20 m, 25 m and 30 m. The subsoil site classification for the evaluation of seismic local site effects has been made on the basis of the average shear wave velocity over 30 m depth (Vs30), using the National Earthquake Hazards Reduction Program (NEHRP) and International Building Code (IBC) classifications. The average shear wave velocity of the soil estimated from the overburden thickness in the borehole information is also presented. Mapping clearly indicates that the depth of soil obtained from MASW closely matches the soil layers in the bore logs. Of the 55 MASW survey locations, 34 were very close to SPT borehole locations, and these are used to generate a correlation between Vs and corrected “N” values. The SPT field “N” values are corrected by applying the NEHRP-recommended corrections.
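The Vs30 computation and NEHRP site classification used in such studies are standard: Vs30 is the time-averaged velocity over the top 30 m, i.e. 30 m divided by the vertical travel time through the layers. A minimal sketch follows, with a hypothetical layer profile.

```python
# Standard Vs30 computation, Vs30 = 30 / sum(h_i / Vs_i) over the top
# 30 m, plus NEHRP site classification. The layer profile is hypothetical.

def vs30(layers):
    """layers: list of (thickness_m, vs_m_per_s) reaching >= 30 m depth."""
    depth, travel_time = 0.0, 0.0
    for h, vs in layers:
        h = min(h, 30.0 - depth)        # clip to the top 30 m
        travel_time += h / vs           # vertical travel time in layer
        depth += h
        if depth >= 30.0:
            break
    return 30.0 / travel_time

def nehrp_class(v):
    if v > 1500: return "A"
    if v > 760:  return "B"
    if v > 360:  return "C"
    if v > 180:  return "D"
    return "E"                          # (class F requires special study)

profile = [(3.0, 180.0), (7.0, 250.0), (20.0, 400.0)]  # hypothetical
v = vs30(profile)
print(f"Vs30 = {v:.0f} m/s -> NEHRP site class {nehrp_class(v)}")
```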

Relevance: 30.00%

Abstract:

The synthesis of cobalt-doped ZnO nanowires is achieved using a simple, metal salt decomposition growth technique. A sequence of drop casting on a quartz substrate held at 100 °C and annealing results in the growth of nanowires of average (modal) length ~200 nm and diameter of 15 ± 4 nm, and consequently an aspect ratio of ~13. A variation in the synthesis process, where the solution of mixed salts is deposited on the substrate at 25 °C, yields a grainy film structure which constitutes a useful comparator case. X-ray diffraction shows a preferred [0001] growth direction for the nanowires, while a small unit cell volume contraction for Co-doped samples and data from Raman spectroscopy indicate incorporation of the Co dopant into the lattice; neither technique shows explicit evidence of cobalt oxides. The nanowire samples also display excellent optical transmission across the entire visible range, as well as strong photoluminescence (exciton emission) in the near UV, centered at 3.25 eV.

Relevance: 30.00%

Abstract:

Y3Fe5O12 (YIG) nanopowders were synthesised at different pH values using the co-precipitation method. The effect of pH on the phase formation of YIG is characterised using XRD, TEM, FTIR and TG/DTA. From the Scherrer formula, the particle sizes of the powders were found to be 13, 19 and 28 nm for pH = 10, 11 and 12, respectively; as the pH of the solution increases, the particle size also increases. It is also clear from the TG/DTA curves that the weight losses become smaller as the pH increases. The nanopowders were sintered at 600, 700, 800 and 900 °C for 5 h using the conventional sintering method. The phase formation is completed at 800 °C/5 h, which correlates with the TG/DTA results. The average grain size of the sintered samples is found to be ~161 nm. High values of Ms = 23 emu g⁻¹ and Hc = 22 Oe were recorded for the sample sintered at 900 °C.
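For reference, the Scherrer estimate referred to above is D = Kλ/(β cos θ); a minimal sketch follows, assuming Cu Kα radiation, a shape factor of 0.9, and an illustrative peak position and width (not values from the study).

```python
import math

# Scherrer crystallite-size estimate: D = K * lambda / (beta * cos(theta)).
# Wavelength (Cu K-alpha), shape factor and the sample peak values are
# typical/illustrative assumptions.

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    beta = math.radians(fwhm_deg)              # peak FWHM in radians
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical reflection near 2-theta = 32.3 deg with 0.45 deg FWHM:
print(f"D = {scherrer_size_nm(fwhm_deg=0.45, two_theta_deg=32.3):.1f} nm")
```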

Relevance: 30.00%

Abstract:

A controllable synthesis of phase-pure wurtzite (WZ) ZnS nanostructures is reported in this work at a low temperature of ~220 °C, using ethylenediamine as a soft template and varying the molar concentration of the zinc and sulphur precursors as well as using different precursors. A significant reduction in the formation temperature required for the synthesis of phase-pure WZ ZnS has been observed, and a strong correlation is observed between the morphology of the synthesized ZnS nanostructures and the precursors used during synthesis. Scanning Electron Microscope (SEM) and Transmission Electron Microscope (TEM) image analyses show that the morphology of the ZnS nanocrystals changes from a block-like to a belt-like structure with an average length of ~450 nm when the molar ratio of the zinc to sulphur source is increased from 1:1 to 1:3. An oriented attachment (OA) growth mechanism is used to explain the observed shape evolution of the synthesized nanostructures. The synthesized nanostructures have been characterized by the X-ray diffraction technique as well as by UV-Vis absorption and photoluminescence (PL) emission spectroscopy. The as-synthesized nanobelts exhibit defect-related visible PL emission. On isochronal annealing of the nanobelts in air in the temperature range of 100-600 °C, it is found that white light emission with a Commission Internationale de l'Eclairage 1931 (CIE) chromaticity coordinate of (0.30, 0.34), close to that of white light (0.33, 0.33), can be obtained from the ZnO nanostructures formed at an annealing temperature of 600 °C. UV-light-driven degradation of an aqueous methylene blue (MB) dye solution has also been demonstrated using the as-synthesized nanobelts, and ~98% dye degradation has been observed within only 40 min of light irradiation. The synthesized nanobelts, with their visible light emission and dye degradation activity, can be used effectively in future optoelectronic devices and in water purification for the removal of dyes.
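Degradation figures like the one quoted above are conventionally computed from the absorbance of the MB peak (~664 nm), taking C/C0 ≈ A/A0 under Beer-Lambert linearity; a minimal sketch follows, with a made-up absorbance series chosen to reproduce a ~98% endpoint.

```python
# Photocatalytic degradation efficiency from absorbance readings:
# efficiency(t) = (1 - A_t / A_0) * 100, assuming Beer-Lambert
# linearity at the MB peak. The absorbance series below is made up.

a0 = 1.25                                  # absorbance before irradiation
readings = [(10, 0.62), (20, 0.27), (30, 0.09), (40, 0.025)]  # (min, A)

for minute, a in readings:
    efficiency = (1.0 - a / a0) * 100.0
    print(f"{minute:2d} min: degradation = {efficiency:.1f}%")
```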

Relevance: 30.00%

Abstract:

It is essential to accurately estimate the working set size (WSS) of an application for various optimizations, such as partitioning cache among virtual machines or reducing the leakage power dissipated in an over-allocated cache by switching it OFF. However, state-of-the-art heuristics such as average memory access latency (AMAL) or cache miss ratio (CMR) are poorly correlated with the WSS of an application due to 1) over-sized caches and 2) their dispersed nature. Past studies focus on estimating the WSS of an application executing on a uniprocessor platform. Estimating the same for a chip multiprocessor (CMP) with a large dispersed cache is challenging due to the presence of concurrently executing threads/processes. Hence, we propose a scalable, highly accurate method to estimate the WSS of an application, which we call the "tagged WSS (TWSS)" estimation method. We demonstrate the use of TWSS to switch OFF the over-allocated cache ways in Static and Dynamic NonUniform Cache Architectures (SNUCA, DNUCA) on a tiled CMP. In our implementation of adaptable-way SNUCA and DNUCA caches, the decision to alter associativity is taken by each L2 controller; hence, this approach scales better with the number of cores present on a CMP. It gives overall (geometric mean) 26% and 19% higher energy-delay product savings compared to the AMAL and CMR heuristics on SNUCA, respectively.
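As background to what is being estimated, the toy sketch below applies Denning's window definition of the working set (the distinct cache lines touched within a trailing window of references) to a synthetic address trace; it illustrates the concept only and is not the paper's TWSS hardware mechanism.

```python
# Denning-style working-set estimate over a software address trace:
# count distinct cache lines in a trailing window of references.
# A concept illustration only, not the paper's TWSS method.

def working_set_size(addresses, window, line_bytes=64):
    lines = [a // line_bytes for a in addresses]
    sizes = []
    for i in range(len(lines)):
        w = lines[max(0, i - window + 1): i + 1]   # trailing window
        sizes.append(len(set(w)) * line_bytes)
    return max(sizes)                # peak working set over the trace

# Hypothetical trace: a loop sweeping an 8 KB array twice.
trace = [0x1000 + (i % 8192) for i in range(0, 16384, 64)]
print("peak WSS:", working_set_size(trace, window=128), "bytes")
```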

Relevance: 30.00%

Abstract:

The fluctuations exhibited by the cross sections generated in a compound-nucleus reaction or, more generally, in a quantum-chaotic scattering process, when varying the excitation energy or another external parameter, are characterized by the width Γ_corr of the cross-section correlation function. Brink and Stephen [Phys. Lett. 5, 77 (1963)] proposed a method for its determination by simply counting the number of maxima featured by the cross sections as a function of the parameter under consideration. They stated that the product of the average number of maxima per unit energy range and Γ_corr is constant in the Ericson region of strongly overlapping resonances. We use the analogy between the scattering formalism for compound-nucleus reactions and for microwave resonators to test this method experimentally with unprecedented accuracy using large data sets, and we propose an analytical description for the regions of isolated and overlapping resonances.
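The counting method can be illustrated numerically: synthesize an Ericson-regime cross section as the squared modulus of a sum of many strongly overlapping resonances of common width Γ, count the maxima per unit energy, and form the product with Γ. The sketch below does this with illustrative parameters; in the Ericson region the product should come out roughly constant, of order one half.

```python
import numpy as np

# Numerical illustration of the Brink-Stephen maxima-counting method:
# an Ericson-regime cross section from many overlapping resonances of
# common width gamma. All parameters are illustrative.

rng = np.random.default_rng(0)
n_res, gamma = 2000, 1.0            # mean spacing 0.1 << gamma: Ericson regime
e_k = rng.uniform(0.0, 200.0, n_res)                 # resonance energies
amp = rng.normal(size=n_res) + 1j * rng.normal(size=n_res)

energy = np.linspace(50.0, 150.0, 20001)             # interior window
s = np.zeros_like(energy, dtype=complex)
for ek, a in zip(e_k, amp):                          # coherent resonance sum
    s += a / (energy - ek + 1j * gamma / 2.0)
sigma = np.abs(s) ** 2                               # fluctuating cross section

# Count strict local maxima, normalize per unit energy, multiply by gamma.
peaks = (sigma[1:-1] > sigma[:-2]) & (sigma[1:-1] > sigma[2:])
n_max = peaks.sum() / (energy[-1] - energy[0])
print(f"n_max * Gamma = {n_max * gamma:.2f}")
```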

Relevance: 30.00%

Abstract:

Undoped and Cr-doped (3% and 5%) CdS nanoparticles were synthesized by the chemical co-precipitation method. The synthesized nanocrystalline particles are characterized by energy dispersive X-ray analysis (EDAX), scanning electron microscopy (SEM), X-ray diffraction (XRD), transmission electron microscopy (TEM), diffuse reflectance spectroscopy (DRS), photoluminescence (PL), electron paramagnetic resonance (EPR), vibrating sample magnetometry (VSM) and Raman spectroscopy. XRD studies indicate that Cr doping in the host CdS results in a structural change from the cubic phase to a mixed (cubic + hexagonal) phase. Due to the quantum confinement effect, a widening of the band gap is observed for undoped and Cr-doped CdS nanoparticles compared to bulk CdS. The average particle size calculated from the band gap values is in good agreement with the TEM estimate, at around 4-5 nm. A strong violet emission band consisting of two emission peaks is observed for undoped CdS nanoparticles, whereas for CdS:Cr nanoparticles a broad emission band ranging from 420 nm to 730 nm with a maximum at ~587 nm is observed; this broad band is due to overlapping emissions from a variety of defects. EPR spectra of the CdS:Cr samples reveal a resonance signal at g = 2.143 corresponding to interacting Cr3+ ions. VSM studies indicate that the diamagnetic CdS nanoparticles become ferromagnetic at 3% Cr3+ doping and that the ferromagnetic character diminishes when the doping concentration is increased to 5%.
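The particle-size-from-band-gap step mentioned above is commonly done with the effective-mass (Brus) approximation; the sketch below solves the Brus expression for the radius, using commonly quoted bulk CdS parameters that should be treated as assumptions here, together with a hypothetical measured gap.

```python
import math

# Brus effective-mass estimate of particle size from the optical band
# gap: E(R) = E_g + (hbar^2 pi^2 / 2R^2)(1/m_e + 1/m_h) - 1.786 e^2/(eps R).
# The bulk CdS parameters are commonly quoted literature values, used
# here as assumptions.

E_G   = 2.42          # bulk CdS band gap, eV
M_E   = 0.19          # electron effective mass, in units of m0
M_H   = 0.80          # hole effective mass, in units of m0
EPS_R = 8.9           # relative dielectric constant

HBAR2_2M0  = 3.81     # hbar^2 / (2 m0), in eV * Angstrom^2
E2_4PIEPS0 = 14.4     # e^2 / (4 pi eps0), in eV * Angstrom

def brus_gap(radius_angstrom):
    r = radius_angstrom
    confinement = (math.pi ** 2) * HBAR2_2M0 / r ** 2 * (1 / M_E + 1 / M_H)
    coulomb = 1.786 * E2_4PIEPS0 / (EPS_R * r)
    return E_G + confinement - coulomb

def radius_from_gap(e_gap, lo=5.0, hi=200.0):
    """Bisection for the radius (Angstrom) matching the measured gap."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if brus_gap(mid) > e_gap:
            lo = mid          # gap too large -> particle must be bigger
        else:
            hi = mid
    return mid

# E.g. a hypothetical measured gap of 2.75 eV from DRS:
r = radius_from_gap(2.75)
print(f"estimated diameter ~ {2 * r / 10:.1f} nm")
```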