Abstract:
The current state of the practice in Blackspot Identification (BSI) uses safety performance functions based on total crash counts to identify transport system sites with potentially high crash risk. This paper postulates that total crash count variation over a transport network is the result of multiple distinct crash generating processes, including geometric characteristics of the road, spatial features of the surrounding environment, and driver behaviour factors. However, these multiple sources are ignored in current modelling methodologies, both when explaining and when predicting crash frequencies across sites. Instead, current practice employs models that imply a single underlying crash generating process. This mis-specification may lead to attributing crashes to the wrong contributing factors (e.g. concluding a crash is predominantly caused by a geometric feature when it is a behavioural issue), which may ultimately lead to inefficient use of public funds and misidentification of true blackspots. This study proposes a latent class model consistent with a multiple crash process theory and investigates the influence this model has on correctly identifying crash blackspots. We first present the theoretical and corresponding methodological approach, in which a Bayesian Latent Class (BLC) model is estimated under the assumption that crashes arise from two distinct risk generating processes: engineering factors and unobserved spatial factors. The Bayesian model incorporates prior information about the contribution of each underlying process to the total crash count. The methodology is applied to the state-controlled roads in Queensland, Australia, and the results are compared to an Empirical Bayes Negative Binomial (EB-NB) model. A comparison of goodness-of-fit measures shows significantly improved performance of the proposed model over the EB-NB model, and the detection of blackspots was likewise improved. In addition, modelling crashes as the result of two fundamentally separate underlying processes reveals more detailed information about unobserved crash causes.
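As a rough illustration of the two-process idea, the sketch below fits a two-component Poisson mixture by expectation-maximization. This is a simplified, non-Bayesian stand-in for the paper's BLC model: the counts are hypothetical and the EM routine is illustrative only.

```python
import numpy as np
from scipy.stats import poisson

def em_two_poisson(counts, iters=200):
    """EM for a two-component Poisson mixture: a crude stand-in for a
    latent class crash model with two crash generating processes."""
    lam = np.array([counts.mean() * 0.5, counts.mean() * 1.5])  # component rates
    w = np.array([0.5, 0.5])                                    # mixing weights
    for _ in range(iters):
        # E-step: posterior probability that each site belongs to each process
        like = np.stack([w[k] * poisson.pmf(counts, lam[k]) for k in range(2)])
        resp = like / like.sum(axis=0)
        # M-step: re-estimate weights and per-process crash rates
        w = resp.mean(axis=1)
        lam = (resp * counts).sum(axis=1) / resp.sum(axis=1)
    return w, lam, resp

# Hypothetical crash counts across road sites
counts = np.array([0, 1, 0, 2, 9, 1, 12, 0, 3, 15, 1, 0])
w, lam, resp = em_two_poisson(counts)
print(w, lam)  # mixing weights and the two underlying crash rates
```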
Abstract:
The Ajax approach has outgrown its origin as shorthand for "Asynchronous JavaScript + XML". Three years after its naming, Ajax has been widely adopted by web applications, and there is growing interest in using those applications on mobile devices. This thesis evaluates the presentational capability and measures the performance of five mobile browsers on the Apple iPhone and the Nokia N95 and N800. Performance is benchmarked through user-experienced response times measured with a stopwatch. 12 Ajax toolkit examples and 8 production-quality applications are targeted, all except one in their real environments. In total, over 1750 observations are analyzed and included in the appendix. Communication delays are not considered; the network connection type is WLAN. Results indicate that the initial loading time of an Ajax application can often exceed 20 seconds. Content reordering may partially overcome this limitation. Proper testing is the key to success: the selected browsers are capable of presenting Ajax applications if their differing implementations are accounted for, for example by using a suitable toolkit.
Abstract:
Difficulties in the performance of activities of daily living (ADL) are a key feature of developmental coordination disorder (DCD). The DCDDaily-Q was developed to address children's motor performance in a comprehensive range of ADL. The aim of this study was to investigate the psychometric properties of this parental questionnaire. Parents of 218 five- to eight-year-old children (DCD group: N=25; reference group: N=193) completed the research version of the new DCDDaily-Q, the Movement Assessment Battery for Children-2 (MABC2) Checklist, and the Developmental Coordination Disorder Questionnaire (DCDQ). Children were assessed with the MABC2 and DCDDaily. Item reduction analyses were performed, and the reliability (internal consistency and factor structure) and concurrent, discriminant, and incremental validity of the DCDDaily-Q were investigated. The final version of the DCDDaily-Q comprises 23 items that cover three underlying factors and shows good internal consistency (Cronbach's α>.80). Moderate correlations were found between the DCDDaily-Q and the other instruments used (p<.001 for the reference group; p>.05 for the DCD group). Discriminant validity of the DCDDaily-Q was good for total scores (p<.001) and all 23 item scores (p<.01), indicating poorer performance in the DCD group. Sensitivity (88%) and specificity (92%) were good. The DCDDaily-Q better predicted DCD than currently used questionnaires (R2=.88). In conclusion, the DCDDaily-Q is a valid and reliable questionnaire for addressing children's ADL performance.
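For readers unfamiliar with the reported statistics, the sketch below shows how Cronbach's α and sensitivity/specificity are typically computed. The score matrices are randomly generated placeholders, not the study's data, and the functions are illustrative rather than the study's analysis code.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

def sensitivity_specificity(pred, truth):
    """Sensitivity and specificity of a binary screen vs. true diagnosis."""
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical questionnaire data: 10 respondents x 5 items on a 1-5 scale
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(10, 5)).astype(float)
print(cronbach_alpha(scores))

# Hypothetical screening outcomes vs. diagnoses for 50 children
pred, truth = rng.integers(0, 2, 50), rng.integers(0, 2, 50)
print(sensitivity_specificity(pred, truth))
```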
Abstract:
A randomised, population-based screening design with new technologies has been applied to the organised cervical cancer screening programme in Finland. In this experiment, women invited to routine five-yearly screening are individually randomised to be screened with automation-assisted cytology, a human papillomavirus (HPV) test, or conventional cytology. By using the randomised design, the ultimate aim is to assess and compare the long-term outcomes of the different screening regimens. The primary aim of the current study was to evaluate, based on the material collected during the implementation phase of the Finnish randomised screening experiment, the cross-sectional performance and validity of automation-assisted cytology (Papnet system) and primary HPV DNA testing (Hybrid Capture II assay for 13 oncogenic HPV types) within service screening, in comparison to conventional cytology. The parameters of interest were test positivity rate, histological detection rate, relative sensitivity, relative specificity, and positive predictive value. The effect of variation in performance by screening laboratory on age-adjusted cervical cancer incidence was also assessed. Based on the cross-sectional results, almost no differences were observed in the performance of conventional and automation-assisted screening. In contrast, primary HPV screening found 58% (95% confidence interval 19-109%) more cervical lesions than conventional screening. However, this was mainly due to overrepresentation of mild- and moderate-grade lesions and is thus likely to result in overtreatment, since many of these lesions would never progress to invasive cancer. Primary screening with an HPV DNA test alone caused a substantial loss in specificity in comparison to cytological screening. With the use of a cytology triage test, the specificity of HPV screening improved to close to the level of conventional cytology. The specificity of primary HPV screening was also increased by raising the test positivity cutoff above the level recommended for clinical use, but the increase was smaller than that gained with cytology triage. The performance of the cervical cancer screening programme varied widely between the screening laboratories, but the variation in overall programme effectiveness between the respective populations was more marginal from the very beginning of the organised screening activity. Thus, conclusive interpretations of the quality or success of screening should not be based on performance parameters only. In the evaluation of cervical cancer screening, the outcome should be selected as closely as possible to the true measure of programme effectiveness, which is the number of invasive cervical cancers and subsequent deaths prevented in the target population. The benefits and adverse effects of each newly suggested screening technology should be evaluated before the technology becomes an accepted routine in the existing screening programme. Ideally, the evaluation is performed in a randomised design within the population and screening programme in question, which makes the results directly applicable to routine use.
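A minimal sketch of the cross-sectional performance parameters named above (test positivity rate, detection rate, positive predictive value). The screening counts are invented, chosen only so that the detection-rate ratio echoes the reported 58% excess detection in the HPV arm.

```python
def screening_performance(n_screened, n_test_positive, n_lesions_detected):
    """Cross-sectional screening parameters (illustrative sketch)."""
    positivity_rate = n_test_positive / n_screened
    detection_rate = n_lesions_detected / n_screened
    ppv = n_lesions_detected / n_test_positive
    return positivity_rate, detection_rate, ppv

# Hypothetical arms: conventional cytology vs. primary HPV screening
conventional = screening_performance(10_000, 450, 60)
hpv = screening_performance(10_000, 800, 95)

# Relative detection rate of HPV vs. conventional screening (~1.58 here)
relative_detection = hpv[1] / conventional[1]
print(conventional, hpv, relative_detection)
```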
Abstract:
Although BaZr0.8Y0.2O3-δ (BZY) possesses large bulk proton conductivity and excellent chemical stability, its poor sinterability and blocking grain boundaries limit proton conduction. In this work, the effect of Ca as a co-dopant and as a sintering aid (as CaO) on the sinterability, proton conductivity, and fuel cell performance of BZY was investigated. The addition of 4 mol% CaO significantly improved BZY sinterability: BZY pellets with densities of 92.7% and 97.5% of the theoretical density were obtained after sintering at 1500°C and 1600°C, respectively. The improved sinterability from CaO addition also resulted in a large proton conductivity: at 600°C, the total conductivity of BZY-CaO was 2.14 × 10⁻³ S/cm in wet Ar. Anode-supported fuel cells with 25 μm-thick BZY-CaO electrolyte membranes were fabricated by a dual-layer co-firing technique. The peak power density of the fuel cell with a BZY-Ni/BZY-4CaO/BZY-LSCF (La0.6Sr0.4Fe0.8Co0.2O3-δ) configuration was 141 mW/cm² at 700°C, several times larger than the reported values for BZY electrolyte membrane fuel cells sintered with CuO or ZnO additions, demonstrating promising features for practical fuel cell applications.
Abstract:
At present, many countries have either embraced ISO9001 or used it as the basis of their national quality certification systems. However, few studies have examined the benefits companies gain from achieving and implementing ISO9001 standards (Chikuku et al. 2012; Psomas et al. 2013; Sampaio et al. 2011a,b), and analyses have produced mixed and uneven results across countries. Turning to the experience of Malaysia, this country has witnessed spectacular growth, at an average rate of 9.89% per annum, in ISO certificates issued to companies operating within its borders (ISO Survey 2012). While many companies rush to be ISO9001 certified, whether certification brings about real benefits (both financial and non-financial) is still an open question. In this study, the research problems were first formulated from the literature, and a questionnaire survey was then conducted to test the hypotheses. The survey was administered to chief executive officers and managers across manufacturing and service organizations in Malaysia. Multivariate analysis and the SPSS macro developed by Preacher and Hayes were used as statistical techniques to assess the financial and non-financial benefits of ISO9001 certification. The survey instrument was a two-page questionnaire comprising three sections. The first section covered the company's profile. The second section consisted of 25 items on internal benefits and the third of 7 items on external benefits, measured on a 1-5 Likert scale, to assess the benefits of ISO9001 certification. A total of 201 valid responses was received. Results indicate that there was no significant direct relationship between ISO9001 certification and organizational financial performance, while strong statistical evidence was found for a direct relationship between ISO9001 certification and non-financial performance. The findings further show that financial performance is directly related to two non-financial measures, namely quality performance and local and international business performance, which are directly and significantly influenced by ISO9001 certification. Non-financial performance measures are therefore involved in the mediational process. The findings will assist practitioners in taking the right courses of action to make the implementation of this standard more effective. For example, the findings suggest that companies should place emphasis on non-financial factors to improve their financial performance.
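The mediation logic described above can be sketched as follows, assuming a simple OLS-based, bootstrapped indirect-effect test in the spirit of the Preacher and Hayes macro. All data, effect sizes, and variable names are simulated placeholders, not the survey's.

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b indirect effect from two OLS fits (Preacher-Hayes style sketch)."""
    a = np.polyfit(x, m, 1)[0]                     # path a: X -> M
    X = np.column_stack([np.ones_like(x), x, m])   # regress Y on X and M
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]    # path b: M -> Y given X
    return a * b

# Simulated scores: certification (x), a non-financial benefit such as
# quality performance (m), financial performance (y) -- all hypothetical.
rng = np.random.default_rng(1)
n = 201                                            # matches the sample size
x = rng.integers(0, 2, n).astype(float)            # certified or not
m = 3 + 0.8 * x + rng.normal(0, 1, n)
y = 2 + 0.6 * m + rng.normal(0, 1, n)

# Percentile bootstrap CI for the indirect effect, as in the SPSS macro
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
print(indirect_effect(x, m, y), np.percentile(boot, [2.5, 97.5]))
```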
Abstract:
AIM AND BACKGROUND: While the importance of morale is well researched in the nursing literature, strategies and interventions are far less well documented. The complexities of interpersonal relationships within the clinical domain, and the critical issues faced by nurses on a daily basis, indicate that morale, job satisfaction and motivation are essential components in improving workplace efficiency, output and communication amongst staff. Drawing on educational, organizational and psychological literature, this paper argues that the ability to inspire morale in staff is a fundamental indicator of sound leadership and managerial characteristics. EVALUATION AND KEY ISSUES: Four practical concepts that could be implemented in the clinical setting are proposed. These include: role preparation for managers, understanding internal and external motivation, fostering internal motivation in nursing staff, and the importance of attitude when investing in relationships.
Abstract:
This study considers the scheduling problem observed in the burn-in operation of semiconductor final testing, where jobs are associated with release times, due dates, processing times, sizes, and non-agreeable release times and due dates. The burn-in oven is modeled as a batch-processing machine that can process a batch of several jobs as long as the total size of the jobs does not exceed the machine capacity; the processing time of a batch equals the longest processing time among the jobs in the batch. Because of the importance of on-time delivery in semiconductor manufacturing, the objective of this problem is to minimize total weighted tardiness. We formulate the scheduling problem as an integer linear programming model and empirically show its computational intractability. We therefore propose a few simple greedy heuristic algorithms and a meta-heuristic, simulated annealing (SA). A series of computational experiments evaluates the performance of the proposed heuristics against exact solutions on various small-size problem instances and against estimated optimal solutions on various real-life large-size problem instances. The computational results show that the SA algorithm, initialized with the solution obtained from our proposed greedy heuristic, consistently finds a robust solution in a reasonable amount of computation time.
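A minimal sketch of the batch-machine model and an SA heuristic of the kind described above. The job data, oven capacity, cooling schedule, and the single move-one-job neighbourhood are all illustrative assumptions, not the paper's implementation.

```python
import math
import random

# Hypothetical job data: (release, due, proc_time, size, weight)
JOBS = [(0, 10, 4, 2, 1), (2, 9, 3, 3, 2), (1, 15, 6, 4, 1),
        (5, 12, 2, 2, 3), (0, 20, 5, 5, 1), (3, 14, 4, 1, 2)]
CAP = 6  # batch-machine (oven) capacity

def cost(batches):
    """Total weighted tardiness when batches run in the listed order."""
    t, total = 0, 0
    for b in batches:
        if not b:
            continue
        start = max(t, max(JOBS[j][0] for j in b))    # wait for all releases
        t = start + max(JOBS[j][2] for j in b)        # batch time = longest job
        total += sum(JOBS[j][4] * max(0, t - JOBS[j][1]) for j in b)
    return total

def anneal(batches, T0=50.0, alpha=0.995, steps=5000):
    """Simulated annealing over 'move one job to another batch' moves."""
    cur = [list(b) for b in batches]
    cur_c = best_c = cost(cur)
    best = [list(b) for b in cur]
    for step in range(steps):
        T = T0 * alpha ** step                         # geometric cooling
        cand = [list(b) for b in cur]
        src = random.randrange(len(cand))
        if not cand[src]:
            continue
        j = cand[src].pop(random.randrange(len(cand[src])))
        dst = random.randrange(len(cand))
        cand[dst].append(j)
        if sum(JOBS[k][3] for k in cand[dst]) > CAP:   # capacity check
            continue
        c = cost(cand)
        if c < cur_c or random.random() < math.exp((cur_c - c) / T):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = [list(b) for b in cand], c
    return best, best_c

print(anneal([[i] for i in range(len(JOBS))]))  # trivial seed: one job per batch
```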
Abstract:
The H.264 video standard achieves high-quality video along with high data compression compared to other existing video standards. H.264 uses context-based adaptive variable length coding (CAVLC) to code residual data in the Baseline profile. In this paper we describe a novel architecture for a CAVLC decoder comprising coeff-token, level, total-zeros, and run-before decoders. A UMC library in 0.13 μm CMOS technology is used to synthesize the proposed design. The proposed design reduces chip area and improves critical path performance of the CAVLC decoder in comparison with [1]. Macroblock-level (including luma and chroma) pipeline processing for CAVLC is implemented with an average of 141 cycles (including pipeline buffering) per macroblock at a 250 MHz clock frequency. To compare our results with [1], the clock frequency is constrained to 125 MHz. The area required for the proposed architecture is 17586 gates, a 22.1% improvement over [1]. We obtain a throughput of 1.73 × 10⁶ macroblocks/second, which is 28% higher than that reported in [1]. The proposed design meets the processing requirement of 1080HD [5] video at 30 frames/second.
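A quick back-of-the-envelope check of the throughput figures quoted above; the cycle count and clock are from the abstract, and the 1080HD frame geometry is the standard 16 × 16-pixel macroblock grid.

```python
# Throughput implied by the reported pipeline figures
clock_hz = 250e6                       # pipeline clock from the abstract
cycles_per_mb = 141                    # average cycles per macroblock
throughput = clock_hz / cycles_per_mb  # ~1.77e6 MB/s, near the reported 1.73e6

# 1080HD at 30 fps: 120 x 68 macroblocks of 16x16 pixels per frame
mbs_per_frame = (1920 // 16) * (-(-1080 // 16))   # ceil(1080/16) = 68 rows
required = mbs_per_frame * 30                      # 244,800 MB/s needed
print(f"{throughput:.3g} MB/s vs required {required} MB/s:",
      throughput > required)                       # met with ample margin
```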
Abstract:
Stroke is a major cause of death and disability, incurs significant costs to healthcare systems, and inflicts a severe burden on society as a whole. Stroke care in Finland was described in several population-based studies between 1967 and 1998, but not since. In the PERFECT Stroke study presented here, a system for monitoring the Performance, Effectiveness, and Costs of Treatment episodes in Stroke was developed in Finland. Existing nationwide administrative registries were linked at the individual patient level with personal identification numbers to depict whole episodes of care, from acute stroke, through rehabilitation, until the patients went home, were admitted to permanent institutional care, or died. For comparisons over time and between providers, patient case-mix was adjusted for. The PERFECT Stroke database includes 104 899 first-ever stroke patients over the years 1999 to 2008, of whom 79% had ischemic stroke (IS), 14% intracerebral hemorrhage (ICH), and 7% subarachnoid hemorrhage (SAH). An 18% decrease in the age- and sex-adjusted incidence of stroke was observed over the study period, a 1.8% improvement annually. The all-cause 1-year case-fatality rate improved from 28.6% to 24.6%, or 0.5% annually. The expected median lifetime after stroke increased by 2 years for IS patients, to 7 years and 7 months, and by 1 year for ICH patients, to 4 years and 5 months. No change was seen in median SAH patient survival, >10 years. Stroke prevalence was 82 000, or 1.5% of the total population of Finland, in 2008. Modern stroke center care was shown to be associated with a decrease in both death and the risk of institutional care of stroke patients. The number needed to treat to prevent these poor outcomes at one year from stroke was 32 (95% confidence interval 26 to 42). Despite improvements over the study period, more than a third of Finnish stroke patients did not have access to stroke center care. The mean first-year healthcare cost of a stroke patient was ~€20 000, and among survivors ~€10 000 annually thereafter. Only part of this cost was incurred by stroke itself, as the same patients cost ~€5000 over the year prior to stroke. Total lifetime costs after first-ever stroke were ~€85 000. A total of €1.1 billion, 7% of all healthcare expenditure, is used in the treatment of stroke patients annually. Despite a rapidly aging population, the number of new stroke patients is decreasing, and patients are more likely to survive. This is explained in part by stroke center care, which is effective and should be made available to all stroke patients. It is possible, in a suitable setting with high-quality administrative registries and a common identifier, to avoid the huge workload and associated costs of setting up a conventional stroke registry and still acquire a fairly comprehensive dataset on stroke care and outcomes.
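The number-needed-to-treat figure follows from a simple risk-difference calculation. The one-year risks below are hypothetical, chosen only to reproduce the reported NNT of 32; the study's actual risk estimates are not given in the abstract.

```python
# NNT = 1 / absolute risk reduction (risks here are illustrative only)
risk_without_stroke_center = 0.350    # death or institutional care at 1 year
risk_with_stroke_center = 0.3187
arr = risk_without_stroke_center - risk_with_stroke_center
nnt = 1 / arr
print(round(nnt))  # ~32, matching the reported value
```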
Abstract:
The main objectives of this thesis were to isolate and identify the phenolic compounds in wild (Sorbus aucuparia) and cultivated rowanberries, European cranberries (Vaccinium microcarpon), lingonberries (Vaccinium vitis-idaea), and cloudberries (Rubus chamaemorus), and to investigate the antioxidant activity of these berry phenolics in food oxidation models. In addition, the storage stability of a cloudberry ellagitannin isolate was studied. In wild and cultivated rowanberries, the main phenolic compounds were chlorogenic and neochlorogenic acids, with anthocyanin content increasing depending on the crossing partners. The proanthocyanidin contents of cranberries and lingonberries were investigated, revealing that lingonberries contained more of the rare A-type dimers than European cranberries. Liquid chromatography-mass spectrometry (LC-MS) analysis of cloudberry ellagitannins showed that trimeric lambertianin C and sanguiin H-10 were the main ellagitannins. The berries, rich in different types of phenolic compounds including hydroxycinnamic acids, proanthocyanidins, and ellagitannins, showed antioxidant activity toward lipid oxidation in liposome and emulsion oxidation models. All the rowanberry cultivars prevented lipid oxidation similarly, in spite of the differences in their phenolic composition. In liposomes, rowanberry phenolics were slightly more effective antioxidants than cranberry and lingonberry phenolics. Greater differences were found when comparing proanthocyanidin fractions: the proanthocyanidin dimers and trimers of both cranberries and lingonberries were the most potent in inhibiting lipid oxidation. Antioxidant activities and antiradical capacities were also studied for hydroxycinnamic acid glycosides. The sinapic acid derivatives of the hydroxycinnamic acid glycosides were the most effective at preventing lipid oxidation in emulsions and liposomes and at scavenging radicals in the DPPH assay. In liposomes and emulsions, hydroxycinnamic acid derivatives inhibited the formation of the secondary oxidation product, hexanal, more than that of the primary oxidation product, conjugated diene hydroperoxides. This indicates that they are principally chain-breaking antioxidants rather than metal chelators, although they possess chelating activity as well. The storage stability of cloudberry ellagitannins was tested by storing the ellagitannin isolate, and ellagitannins encapsulated with maltodextrin, at different relative vapor pressures. Storage stability was enhanced by encapsulation when a higher molecular weight maltodextrin was used. The best preservation was achieved when the capsules were stored at 0 or 33% relative vapor pressure. In addition, the antioxidant activities of the encapsulated cloudberry extracts were followed during the storage period. Different storage conditions did not alter the antioxidant activity, even though changes in the ellagitannin contents were seen. These results may be of use in improving the oxidative stability of food products by using berries as natural antioxidants.
Abstract:
A performance prediction procedure is presented for low specific speed submersible pumps, with a review of the loss models given in the literature. Most of the loss theories discussed in this paper are one-dimensional, and improvements are made with well-grounded empiricism so that the prediction covers the entire range of operation of low specific speed pumps. Loss correlations, particularly in the low flow range, are discussed. The predicted shapes of the efficiency-capacity and total head-capacity curves agree well with experimental results over almost the full range of operating conditions. The approach adopted in the present analysis, of estimating the losses in the individual components of a pump, provides a means for improving performance and identifying problem areas in existing pump designs. The investigation also provides a basis for selecting parameters for the optimal design of pumps in which maximum efficiency is an important design parameter. The scope for improving the prediction procedure in light of the flow phenomena in the low flow region is discussed in detail.
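To illustrate the component-loss approach in the abstract (not the paper's actual correlations), the sketch below subtracts invented friction, incidence, and leakage losses from an Euler head to produce a head-capacity curve. Every coefficient and the design point are hypothetical.

```python
import numpy as np

def predicted_head(Q, H_euler=40.0, Q_design=0.05):
    """One-dimensional loss build-up: actual head = Euler head minus
    component losses. All loss coefficients below are invented."""
    friction = 8.0 * (Q / Q_design) ** 2           # grows with flow
    incidence = 25.0 * (1 - Q / Q_design) ** 2     # shock loss off-design
    leakage = 1.5                                  # roughly constant
    return H_euler - friction - incidence - leakage

Q = np.linspace(0.01, 0.08, 8)                     # flow rate, m^3/s
print(np.round(predicted_head(Q), 1))              # head-capacity curve shape
```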
Abstract:
Recently, we reported a low-complexity likelihood ascent search (LAS) detection algorithm for large MIMO systems with several tens of antennas that can achieve high spectral efficiencies of the order of tens to hundreds of bps/Hz. Through simulations, we showed that this algorithm achieves increasingly near-SISO AWGN performance for an increasing number of antennas in i.i.d. Rayleigh fading. However, no bit error performance analysis of the algorithm was reported. In this paper, we extend our work on this low-complexity large-MIMO detector in two directions: i) we report an asymptotic bit error probability analysis of the LAS algorithm in the large system limit, where N_t, N_r → ∞ with N_t = N_r, where N_t and N_r are the numbers of transmit and receive antennas, respectively. Specifically, we prove that the error performance of the LAS detector for V-BLAST with 4-QAM in i.i.d. Rayleigh fading converges to that of the maximum-likelihood (ML) detector as N_t, N_r → ∞ with N_t = N_r. ii) We present simulated BER and nearness-to-capacity results for V-BLAST as well as high-rate non-orthogonal STBC from Division Algebras (DA) in a more realistic spatially correlated MIMO channel model. Our simulation results show that a) at an uncoded BER of 10⁻³, the performance of the LAS detector in decoding 16 × 16 STBC from DA with N_t = N_r = 16 and 16-QAM degrades in spatially correlated fading by about 7 dB compared to that in i.i.d. fading, and b) with a rate-3/4 outer turbo code and 48 bps/Hz spectral efficiency, the performance degrades by about 6 dB at a coded BER of 10⁻⁴. Our results further show that by providing asymmetry in the number of antennas such that N_r > N_t, keeping the total receiver array length the same as for N_r = N_t, the detector picks up the extra receive diversity, thereby significantly improving BER performance.
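A minimal sketch of a one-symbol-update likelihood ascent search for a real-valued (BPSK-like) model, assuming a zero-forcing initial vector. This illustrates the search principle only, not the paper's exact algorithm, QAM alphabet, or channel model; the dimensions and noise level are arbitrary.

```python
import numpy as np

def las_detect(H, y, x0):
    """Likelihood ascent search with single-symbol flips (illustrative).
    Greedily flips symbols while the ML cost ||y - Hx||^2 decreases."""
    x = x0.copy()
    cost = float(np.sum((y - H @ x) ** 2))
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            x[i] = -x[i]                          # trial flip of symbol i
            c = float(np.sum((y - H @ x) ** 2))
            if c < cost:
                cost, improved = c, True          # keep the flip
            else:
                x[i] = -x[i]                      # revert
    return x

# Hypothetical 8x8 real MIMO system with +/-1 (BPSK) symbols
rng = np.random.default_rng(0)
Nt = Nr = 8
H = rng.normal(size=(Nr, Nt))
x_true = rng.choice([-1.0, 1.0], size=Nt)
y = H @ x_true + 0.1 * rng.normal(size=Nr)
x0 = np.sign(np.linalg.pinv(H) @ y)               # zero-forcing initial vector
print(las_detect(H, y, x0), x_true)
```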
Abstract:
Measurements of the three-dimensional flow field entering and leaving a mixed flow pump of non-dimensional specific speed k = 1.89 [N_s = 100 r/min (metric)] are discussed as a function of flowrate. Flow reversal at inlet at reduced flows is seen to result in abnormally high total pressures in the casing region, but causes no noticeable discontinuities in the head-flow characteristic. Inlet prerotation is associated with the transport of angular momentum by the reversal eddy and begins with the initiation of flow reversal.
Abstract:
Wave pipelining is a design technique for increasing the throughput of a digital circuit or system without introducing pipelining registers between adjacent combinational logic blocks. However, it requires balancing the delays along all paths from input to output, which stands in the way of its implementation. Static CMOS is inherently susceptible to delay variation with input data and hence receives a low priority for wave-pipelined digital design. On the other hand, ECL and CML, which are amenable to wave pipelining, lack the compactness and low-power attributes of CMOS. In this paper we attempt to exploit wave pipelining in CMOS technology. We use a single generic building block in Normal Process Complementary Pass Transistor Logic (NPCPL), modeled after CPL, to achieve equal delay along all propagation paths in the logic structure. An 8 × 8-bit multiplier is designed using this logic in a 0.8 μm technology. The carry-save multiplier architecture is modified to support wave pipelining, viz., the logic depth of all paths is made identical. The 1 mm × 0.6 mm multiplier core supports a throughput of 400 MHz and dissipates a total power of 0.6 W. We develop simple enhancements to the NPCPL building blocks that allow the multiplier to sustain throughputs in excess of 600 MHz. The methodology can be extended to introduce wave pipelining in other circuits as well.