981 results for Sequential error ratio


Relevance: 20.00%

Abstract:

This study reports, for the first time, the synthesis of organic semiconductor nanowires of Ag–tetracyanoquinodimethane (AgTCNQ) with extremely high aspect ratios (>3000) on the surface of a flexible Ag fabric. These one-dimensional (1D) hybrid Ag/AgTCNQ nanostructures are attained by a facile, solution-based spontaneous reaction involving immersion of Ag fabrics in an acetonitrile solution of TCNQ. Further, it is discovered that these AgTCNQ nanowires show outstanding antibacterial performance against both Gram-negative and Gram-positive bacteria, outperforming pristine Ag. The outcomes of this study also highlight a fundamentally important point: the antimicrobial performance of Ag-based nanomaterials may not be due solely to the amount of Ag+ ions leached from these nanomaterials; the nanomaterial itself may also play a direct role in the antimicrobial action. Notably, applications of metal-organic semiconducting charge transfer complexes of metal-7,7,8,8-tetracyanoquinodimethane (TCNQ) have been predominantly restricted to electronics, apart from our recent reports on their (photo)catalytic potential and the current case for antimicrobial prospects. This report on the growth of metal-TCNQ complexes on a fabric not only widens the scope of these interesting materials for new biological applications but also opens up the possibility of developing large-area flexible electronic devices by growing a range of metal-organic semiconducting materials directly on a fabric surface.

Relevance: 20.00%

Abstract:

Bangkok Metropolitan Region (BMR) is the centre of major activities in Thailand, including politics, industry, agriculture, and commerce. Consequently, the BMR is the most densely populated area in Thailand, and demand for housing there is the largest in the country, especially in subdivision developments. For these reasons, subdivision development in the BMR has increased substantially over the past 20 years, generating large numbers of subdivision developments (AREA, 2009; Kridakorn Na Ayutthaya & Tochaiwat, 2010). However, this dramatic growth has caused several problems, including unsustainable development, especially of subdivision neighbourhoods. Rating tools exist that encourage sustainable neighbourhood design in subdivision development, but they still have practical problems: they do not cover the scale of the development entirely, and they concentrate on social and environmental conservation aspects, which have not been fully accepted by developers (Boonprakub, 2011; Tongcumpou & Harvey, 1994). These factors strongly confirm the need for an appropriate rating tool for sustainable subdivision neighbourhood design in the BMR. To improve acceptance by all stakeholders in the subdivision development industry, a new rating tool should be based on an approach that unites social, environmental, and economic considerations, such as the eco-efficiency principle. Eco-efficiency is a sustainability indicator introduced by the World Business Council for Sustainable Development (WBCSD) in 1992, defined as the ratio of product or service value to its environmental impact (Lehni & Pepper, 2000; Sorvari et al., 2009). The eco-efficiency indicator thus addresses business value while simultaneously accounting for social and environmental impacts.
This study aims to develop a new rating tool named "Rating for Sustainable Subdivision Neighbourhood Design" (RSSND). The RSSND methodology combines literature reviews, field surveys, eco-efficiency model development, a trial-and-error technique, and a tool validation process. All required data were collected by field surveys from July to November 2010. The eco-efficiency model combines three mathematical models attributable to subdivision neighbourhood design: the neighbourhood property price (NPP) model, the neighbourhood development cost (NDC) model, and the neighbourhood occupancy cost (NOC) model. The NPP model is formulated using the hedonic price model approach, while the NDC and NOC models are formulated using multiple regression analysis. The trial-and-error technique is adopted to simplify the complex mathematical eco-efficiency model into a user-friendly rating tool format. The credibility of the RSSND has been validated using eight rated and non-rated subdivisions. The tool is expected to meet the requirements of all stakeholders: supporting the social activities of residents, maintaining the environmental condition of the development and surrounding areas, and meeting the economic requirements of developers.

Relevance: 20.00%

Abstract:

Purpose: To examine between-eye differences in corneal higher-order aberrations and topographical characteristics across a range of refractive error groups. Methods: One hundred and seventy subjects were recruited, including 50 emmetropic isometropes, 48 myopic isometropes (spherical equivalent anisometropia ≤ 0.75 D), 50 myopic anisometropes (spherical equivalent anisometropia ≥ 1.00 D), and 22 keratoconics. The corneal topography of each eye was captured using the E300 videokeratoscope (Medmont, Victoria, Australia) and analysed using custom-written software. All left eye data were rotated about the vertical midline to account for enantiomorphism. Corneal height data were used to calculate the corneal wavefront error using a ray tracing procedure and fit with Zernike polynomials (up to and including the eighth radial order). The wavefront was centred on the line of sight using the pupil offset value from the pupil detection function of the videokeratoscope. Refractive power maps were analysed to assess corneal sphero-cylindrical power vectors. Differences between the more myopic (or, for keratoconics, more advanced) eye and the less myopic (less advanced) eye were examined. Results: Over a 6 mm diameter, the cornea of the more myopic eye was significantly steeper (refractive power vector M) than that of the fellow eye in both anisometropes (0.10 ± 0.27 D steeper, p = 0.01) and keratoconics (2.54 ± 2.32 D steeper, p < 0.001), while no significant interocular difference was observed for isometropic emmetropes (-0.03 ± 0.32 D) or isometropic myopes (0.02 ± 0.30 D) (both p > 0.05). In keratoconic eyes, the between-eye difference in corneal refractive power was greatest inferiorly (associated with cone location). Similarly, in myopic anisometropes, the more myopic eye displayed a central region of significant inferior corneal steepening (0.15 ± 0.42 D steeper) relative to the fellow eye (p = 0.01).
Significant interocular differences in higher-order aberrations were observed only in the keratoconic group, for vertical trefoil C(3,-3), horizontal coma C(3,1), secondary astigmatism along 45° C(4,-2) (p < 0.05), and vertical coma C(3,-1) (p < 0.001). The interocular difference in vertical pupil decentration (relative to the corneal vertex normal) increased with between-eye asymmetry in refraction (isometropia 0.00 ± 0.09, anisometropia 0.03 ± 0.15, and keratoconus 0.08 ± 0.16 mm), as did the interocular difference in corneal vertical coma C(3,-1) (isometropia -0.006 ± 0.142, anisometropia -0.037 ± 0.195, and keratoconus -1.243 ± 0.936 μm), but reached statistical significance only for pair-wise comparisons between the isometropic and keratoconic groups. Conclusions: There is a high degree of corneal symmetry between the fellow eyes of myopic and emmetropic isometropes. Interocular differences in corneal topography and higher-order aberrations are more apparent in myopic anisometropes and keratoconics due to regional (primarily inferior) differences in topography and between-eye differences in vertical pupil decentration relative to the corneal vertex normal. Interocular asymmetries in corneal optics appear to be associated with anisometropic refractive development.

Relevance: 20.00%

Abstract:

iTRAQ (isobaric tags for relative or absolute quantitation) is a mass spectrometry technology that allows quantitative comparison of protein abundance by measuring peak intensities of reporter ions released from iTRAQ-tagged peptides by fragmentation during MS/MS. However, current data analysis techniques for iTRAQ struggle to report reliable relative protein abundance estimates and suffer from problems of precision and accuracy. The precision of the data is affected by variance heterogeneity: low-signal data have higher relative variability; however, low-abundance peptides dominate data sets. Accuracy is compromised because ratios are compressed toward 1, leading to underestimation of the ratio. This study investigated both issues and proposed a methodology that combines the peptide measurements to give a robust protein estimate even when the data for the protein are sparse or at low intensity. Our data indicated that ratio compression arises from contamination during precursor ion selection, which occurs at a consistent proportion within an experiment and thus results in a linear relationship between expected and observed ratios. We proposed that a correction factor can be calculated from spiked proteins at known ratios. We then demonstrated that variance heterogeneity is present in iTRAQ data sets irrespective of the analytical packages, LC-MS/MS instrumentation, and iTRAQ labeling kit (4-plex or 8-plex) used. We proposed an additive-multiplicative error model for peak intensities in MS/MS quantitation and demonstrated that a variance-stabilizing normalization is able to address the error structure and stabilize the variance across the entire intensity range. The resulting uniform variance structure simplifies the downstream analysis. Heterogeneity of variance consistent with an additive-multiplicative model has been reported in other MS-based quantitation, including in fields outside proteomics; consequently, the variance-stabilizing normalization methodology has the potential to increase the capabilities of MS quantitation across diverse areas of biology and chemistry.
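As a minimal numerical sketch of why a variance-stabilising transform helps under an additive-multiplicative error model: intensities behave roughly as mu*exp(noise) + noise, so the raw standard deviation grows with the mean, while a generalised-log (arcsinh-type) transform makes it roughly constant. All intensities and noise scales below are invented, and the generalised log is one standard choice rather than the paper's exact procedure.

```python
import numpy as np

def glog(x, c):
    """Generalised log: variance-stabilising for additive-multiplicative
    noise (behaves like log(x) for large x, stays finite near zero)."""
    return np.log((x + np.sqrt(x**2 + c**2)) / 2.0)

rng = np.random.default_rng(0)
mu = np.array([50.0, 5000.0])  # a low- and a high-intensity peptide (invented)
a, b = 0.1, 10.0               # multiplicative and additive noise scales (invented)
intensities = (mu[:, None] * np.exp(a * rng.standard_normal((2, 20000)))
               + b * rng.standard_normal((2, 20000)))

raw_sd = intensities.std(axis=1)
glog_sd = glog(intensities, c=b / a).std(axis=1)
# raw SDs differ by well over an order of magnitude; glog SDs are comparable
```

On the raw scale the high-intensity peptide's spread dwarfs the low-intensity one's; after the transform the two are on a common footing, which is what makes a uniform downstream analysis possible.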

Relevance: 20.00%

Abstract:

Background Postoperative chemotherapy is currently not recommended for resected non-small cell lung cancer in many countries and centers. Recently, results of several large randomized clinical trials were reported with conflicting evidence. Accordingly, we sought to determine whether postoperative chemotherapy is associated with improved survival compared with surgical intervention alone. Methods Randomized clinical trials with cisplatin- or uracil-plus-ftorafur-containing regimens were included and evaluated separately. A systematic review that included randomized clinical trials performed before 1995 was identified and found to be of adequate quality. Further randomized controlled trials were identified by searching MEDLINE, EMBASE, and the Cochrane Controlled Trials Register from 1995 through 2004. In addition, the reference lists of articles and conference abstracts were searched. The logarithm of the hazard ratio and its standard error were calculated, and a fixed-effect model was used to combine the estimates. Results There were 7200 patients enrolled in the 19 trials included in the analyses. An overall estimate of 13% relative reduction in mortality (95% confidence interval, 7%-19%) was found. There was an 11% relative reduction in mortality associated with postoperative cisplatin (95% confidence interval, 4%-18%; P = .004) and a 17% reduction associated with uracil plus ftorafur (95% confidence interval, 5%-27%; P = .006) compared with surgical intervention alone. This means that there would be one additional survivor at 5 years for every 25 patients treated with cisplatin or every 30 patients treated with uracil plus ftorafur. Conclusions Postoperative chemotherapy is associated with improved survival compared with surgical intervention alone. Selected patients with completely resected non-small cell lung cancer should be offered chemotherapy. Copyright © 2004 by The American Association for Thoracic Surgery.
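The pooling step described above (log hazard ratios combined under a fixed-effect model) is standard inverse-variance weighting; a sketch with hypothetical per-trial numbers, not the meta-analysis's data:

```python
import math

def fixed_effect_pool(log_hrs, ses):
    """Inverse-variance (fixed-effect) pooling of per-trial log hazard ratios."""
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * lhr for w, lhr in zip(weights, log_hrs)) / sum(weights)
    return pooled, math.sqrt(1.0 / sum(weights))

# hypothetical trial results (log HR and its standard error)
log_hrs = [math.log(0.85), math.log(0.92), math.log(0.80)]
ses = [0.08, 0.10, 0.12]
pooled, pooled_se = fixed_effect_pool(log_hrs, ses)
hr = math.exp(pooled)  # pooled hazard ratio
ci95 = (math.exp(pooled - 1.96 * pooled_se),
        math.exp(pooled + 1.96 * pooled_se))
```

The pooled standard error is always smaller than any individual trial's, which is the sense in which combining trials sharpens the mortality-reduction estimate.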

Relevance: 20.00%

Abstract:

Objective: To document change in prevalence of obesity, diabetes and other cardiovascular disease (CVD) risk factors, and trends in dietary macronutrient intake, over an eight-year period in a rural Aboriginal community in central Australia. Design: Sequential cross-sectional community surveys in 1987, 1991 and 1995. Subjects: All adults (15 years and over) in the community were invited to participate. In 1987, 1991 and 1995, 335 (87% of eligible adults), 331 (76%) and 304 (68%), respectively, were surveyed. Main outcome measures: Body mass index and waist-to-hip ratio; blood glucose level and glucose tolerance; fasting total and high density lipoprotein (HDL) cholesterol and triglyceride levels; and apparent dietary intake (estimated by the store turnover method). Intervention: A community-based nutrition awareness and healthy lifestyle program, 1988-1990. Results: At the eight-year follow-up, the odds ratios (95% CIs) for CVD risk factors relative to baseline were: obesity, 1.84 (1.28-2.66); diabetes, 1.83 (1.11-3.03); hypercholesterolaemia, 0.29 (0.20-0.42); and dyslipidaemia (high triglyceride plus low HDL cholesterol level), 4.54 (2.84-7.29). In younger women (15-24 years), there was a trebling in obesity prevalence and a four- to fivefold increase in diabetes prevalence. Store turnover data suggested a relative reduction in the consumption of refined carbohydrates and saturated fats. Conclusion: Interventions targeting nutritional factors alone are unlikely to greatly alter trends towards increasing prevalences of obesity and diabetes. In communities where healthy food choices are limited, the role of regular physical activity in improving metabolic fitness may also need to be emphasised.

Relevance: 20.00%

Abstract:

Black et al. (2004) identified a systematic difference between LA–ICP–MS and TIMS measurements of 206Pb/238U in zircons, which they correlated with the incompatible trace element content of the zircon. We show that the offset between the LA–ICP–MS and TIMS measured 206Pb/238U correlates more strongly with the total radiogenic Pb than with any incompatible trace element. This suggests that the cause of the 206Pb/238U offset is related to differences in radiation damage (alpha dose) between the reference and unknowns. We test this hypothesis in two ways. First, we show that there is a strong correlation between the difference in the LA–ICP–MS and TIMS measured 206Pb/238U and the difference in the alpha dose received by the unknown and reference zircons. The LA–ICP–MS ages for the zircons we have dated can range from 5.1% younger than their TIMS age to 2.1% older, depending on whether the unknown or reference received the higher alpha dose. Second, we show that by annealing both reference and unknown zircons at 850 °C for 48 h in air we can eliminate the alpha-dose-induced differences in measured 206Pb/238U. This was achieved by analyzing six reference zircons a minimum of 16 times in two round robin experiments: the first consisting of unannealed zircons and the second of annealed grains. The maximum offset between the LA–ICP–MS and TIMS measured 206Pb/238U for the unannealed zircons was 2.3%, which reduced to 0.5% for the annealed grains, as predicted by within-session precision based on counting statistics. Annealing unknown zircons and references to the same state prior to analysis holds the promise of reducing the 3% external error for the measurement of 206Pb/238U of zircon by LA–ICP–MS, indicated by Klötzli et al. (2009), to better than 1%, but more analyses of annealed zircons by other laboratories are required to evaluate the true potential of the annealing method.
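The reported correlation between the 206Pb/238U offset and the alpha-dose difference amounts to a linear bias that can be fitted and removed; a sketch with invented calibration numbers (not the paper's data) of what such a correction could look like:

```python
import numpy as np

# Hypothetical calibration: per-grain difference in alpha dose (unknown minus
# reference, arbitrary units) against the fractional offset (%) of the
# LA-ICP-MS 206Pb/238U from the TIMS value.
dose_diff = np.array([-3.0, -1.5, 0.0, 1.0, 2.5])
offset_pct = np.array([-5.0, -2.4, 0.1, 1.0, 2.1])

# least-squares line through the calibration points
slope, intercept = np.polyfit(dose_diff, offset_pct, 1)

def corrected_ratio(measured, dose_difference):
    """Remove the dose-dependent bias predicted by the linear fit."""
    predicted_offset = slope * dose_difference + intercept
    return measured / (1.0 + predicted_offset / 100.0)
```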

Relevance: 20.00%

Abstract:

Development of design guides to estimate the difference in speech interference level due to road traffic noise between a reference position and a balcony or façade position is explored. A previously established and validated theoretical model incorporating direct, specular and diffuse reflection paths is used to create a database of results across a large number of scenarios. Nine balcony types with variable acoustic treatments are assessed to provide acoustic design guidance on optimised selection of balcony acoustic treatments based on location and street type. In total, the results database contains 9720 scenarios, on which multivariate linear regression is conducted in order to derive an appropriate design guide equation. The best-fit regression derived is a multivariable linear equation including modified exponential terms in each of nine deciding variables: (1) diffraction path difference; (2) ratio of total specular energy to direct energy; (3) distance loss between reference position and receiver position; (4) distance from source to balcony façade; (5) height of balcony floor above street; (6) balcony depth; (7) height of opposite buildings; (8) diffusion coefficient of buildings; and (9) balcony average absorption. Overall, the regression correlation coefficient R² is 0.89, with a 95% confidence standard error of ±3.4 dB.
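The regression procedure can be illustrated on synthetic data using just two of the nine variables; the ranges, coefficients, and noise level below are invented, and the modified-exponential transforms stand in for the paper's fitted terms:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# two of the nine deciding variables (invented ranges)
path_diff = rng.uniform(0.0, 2.0, n)      # diffraction path difference (m)
balcony_depth = rng.uniform(0.5, 3.0, n)  # balcony depth (m)

# synthetic level difference built from modified-exponential terms plus noise
level_diff = (5.0 * (1.0 - np.exp(-1.5 * path_diff))
              - 2.0 * np.exp(-0.8 * balcony_depth)
              + rng.normal(0.0, 0.5, n))

# linear regression on the modified-exponential transforms of each variable
X = np.column_stack([np.ones(n),
                     1.0 - np.exp(-1.5 * path_diff),
                     np.exp(-0.8 * balcony_depth)])
coef, *_ = np.linalg.lstsq(X, level_diff, rcond=None)
r2 = 1.0 - (((level_diff - X @ coef) ** 2).sum()
            / ((level_diff - level_diff.mean()) ** 2).sum())
```

The equation stays linear in its coefficients even though each predictor enters through a nonlinear transform, which is what lets ordinary least squares fit it.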

Relevance: 20.00%

Abstract:

Fusion techniques can be used in biometrics to achieve higher accuracy. When biometric systems are in operation and the threat level changes, controlling the trade-off between detection error rates can reduce the impact of an attack. In a fused system, varying a single threshold does not allow this to be achieved, but systematic adjustment of a set of parameters does. In this paper, fused decisions from a multi-part, multi-sample sequential architecture are investigated for that purpose in an iris recognition system. A specific implementation of the multi-part architecture is proposed, and the effect of the number of parts and samples on the resultant detection error rate is analysed. The effectiveness of the proposed architecture is then evaluated under two specific cases of obfuscation attack: miosis and mydriasis. Results show that robustness to such obfuscation attacks is achieved, since lower error rates are obtained than with the non-fused base system.
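One idealised way to see why adjusting a set of parameters, rather than a single threshold, controls the error trade-off is a k-of-n decision-fusion rule over independent part-level decisions. This is a textbook sketch under an independence assumption, not the paper's architecture:

```python
from math import comb

def fused_error_rates(n, k, p_fa, p_fr):
    """Exact FAR/FRR of a k-of-n decision fusion rule, assuming n independent
    part-level decisions with per-part false-accept rate p_fa and
    false-reject rate p_fr."""
    # impostor accepted if at least k parts falsely accept
    far = sum(comb(n, i) * p_fa**i * (1 - p_fa)**(n - i)
              for i in range(k, n + 1))
    # genuine user rejected if fewer than k parts accept
    frr = sum(comb(n, i) * (1 - p_fr)**i * p_fr**(n - i)
              for i in range(k))
    return far, frr

# raising k tightens false accepts at the cost of more false rejects
strict = fused_error_rates(4, 4, p_fa=0.01, p_fr=0.05)   # AND rule
lenient = fused_error_rates(4, 1, p_fa=0.01, p_fr=0.05)  # OR rule
```

Sweeping (n, k) traces out a family of operating points that a single fused threshold cannot reach, which is the lever the paper exploits when the threat level changes.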

Relevance: 20.00%

Abstract:

Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but rigorous analysis and application to field robotics seem to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data. © 2009 Wiley Periodicals, Inc.
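The sensor-to-navigation transformation described above chains the extrinsic calibration (sensor frame to body frame) with the vehicle pose (body frame to navigation frame); a minimal sketch with invented frames and numbers:

```python
import numpy as np

def rot_z(yaw):
    """Rotation about the z-axis by a yaw angle (radians)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def to_navigation_frame(p_sensor, R_bs, t_bs, R_nb, t_nb):
    """Map a range point from the sensor frame to the navigation frame:
    extrinsic calibration (sensor -> body), then vehicle pose (body -> nav)."""
    p_body = R_bs @ p_sensor + t_bs
    return R_nb @ p_body + t_nb

# toy example: sensor mounted 1 m forward on the body, vehicle at (10, 5)
# yawed 90 degrees; a return 2 m ahead of the sensor
p_nav = to_navigation_frame(np.array([2.0, 0.0, 0.0]),
                            np.eye(3), np.array([1.0, 0.0, 0.0]),
                            rot_z(np.pi / 2), np.array([10.0, 5.0, 0.0]))
# p_nav = [10, 8, 0]
```

Errors in any of the four calibration/pose quantities propagate through this chain, which is exactly what the paper's spatial error model quantifies at run time.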

Relevance: 20.00%

Abstract:

Body composition of 292 males aged between 18 and 65 years was measured using the deuterium oxide dilution technique. Participants were divided into development (n=146) and cross-validation (n=146) groups. Stature, body weight, skinfold thickness at eight sites, girth at five sites, and bone breadth at four sites were measured, and body mass index (BMI), waist-to-hip ratio (WHR), and waist-to-stature ratio (WSR) were calculated. Equations were developed using multiple regression analyses with skinfolds, breadth and girth measures, BMI, and other indices as independent variables and percentage body fat (%BF) determined from the deuterium dilution technique as the reference. All equations were then tested in the cross-validation group. Results from the reference method were also compared with existing prediction equations by Durnin and Womersley (1974), Davidson et al (2011), and Gurrici et al (1998). The proposed prediction equations were valid in our cross-validation samples, with r = 0.77-0.86, bias 0.2-0.5%, and pure error 2.8-3.6%. The strongest equation was generated from skinfolds, with r = 0.83, SEE 3.7%, and AIC 377.2. The Durnin and Womersley (1974) and Davidson et al (2011) equations significantly (p<0.001) underestimated %BF by 1.0% and 6.9% respectively, whereas the Gurrici et al (1998) equation significantly (p<0.001) overestimated %BF by 3.3% in our cross-validation samples compared to the reference. Results suggest that the proposed prediction equations are useful in the estimation of %BF in Indonesian men.
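The cross-validation statistics quoted above (bias and pure error) are simply the mean and RMS of the prediction-minus-reference differences; a sketch with invented %BF values, not the study's data:

```python
import numpy as np

def cross_validation_stats(predicted, observed):
    """Bias (mean difference) and 'pure error' (RMS difference) of a
    prediction equation against the reference method."""
    diff = np.asarray(predicted) - np.asarray(observed)
    return diff.mean(), np.sqrt((diff**2).mean())

# toy cross-validation sample (invented %BF values)
reference = np.array([18.2, 24.5, 31.0, 15.7])
predicted = np.array([18.6, 24.1, 31.9, 16.2])
bias, pure_error = cross_validation_stats(predicted, reference)
```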

Relevance: 20.00%

Abstract:

This thesis presents a sequential pattern based model (PMM) to detect news topics from a popular microblogging platform, Twitter. PMM captures key topics and measures their importance using pattern properties and Twitter characteristics. This study shows that PMM outperforms traditional term-based models, and can potentially be implemented as a decision support system. The research contributes to news detection and addresses the challenging issue of extracting information from short and noisy text.

Relevance: 20.00%

Abstract:

In this paper we present a unified sequential Monte Carlo (SMC) framework for performing sequential experimental design for discriminating between a set of models. The model discrimination utility that we advocate is fully Bayesian and based upon the mutual information. SMC provides a convenient way to estimate the mutual information. Our experience suggests that the approach works well on sets of either discrete or continuous models and outperforms other model discrimination approaches.
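The mutual-information utility can be estimated by plain Monte Carlo once model parameters are held fixed; the sketch below illustrates only the utility (two invented Gaussian models), not the full SMC machinery over parameter particles:

```python
import numpy as np

rng = np.random.default_rng(2)
SIGMA = 0.3  # assumed observation noise

def norm_logpdf(y, mean, sd):
    """Log density of a normal distribution."""
    return -0.5 * np.log(2.0 * np.pi * sd**2) - (y - mean) ** 2 / (2.0 * sd**2)

def mi_utility(design, mean_fns, priors, n=20000):
    """Monte Carlo estimate of the mutual information between the model
    indicator and an observation y ~ N(f_m(design), SIGMA)."""
    total = 0.0
    for f, prior in zip(mean_fns, priors):
        y = f(design) + SIGMA * rng.standard_normal(n)  # simulate from model m
        log_marg = np.log(sum(p * np.exp(norm_logpdf(y, g(design), SIGMA))
                              for p, g in zip(priors, mean_fns)))
        total += prior * np.mean(norm_logpdf(y, f(design), SIGMA) - log_marg)
    return total

# two competing mean functions (invented for illustration)
mean_fns = [lambda d: d, lambda d: d**2]
u_separating = mi_utility(2.0, mean_fns, [0.5, 0.5])  # means 2 vs 4: informative
u_confounded = mi_utility(1.0, mean_fns, [0.5, 0.5])  # means 1 vs 1: useless
```

A design where the models predict the same observation carries no discriminating information (utility near zero), while a separating design approaches the upper bound of log 2 nats for two equi-probable models; maximising this utility over designs is the selection step the paper performs within SMC.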

Relevance: 20.00%

Abstract:

Many large-scale GNSS CORS networks have been deployed around the world to support various commercial and scientific applications. To use these networks for real-time kinematic positioning services, one of the major challenges is ambiguity resolution (AR) over long inter-station baselines in the presence of considerable atmospheric biases. Usually, the widelane ambiguities are fixed first, followed by determination of the narrowlane ambiguity integers based on the ionosphere-free model, in which the widelane integers are introduced as known quantities. This paper seeks to improve AR performance over long baselines through efficient procedures for improved float solutions and ambiguity fixing. The contribution is threefold: (1) instead of using ionosphere-free measurements, absolute and/or relative ionospheric constraints are introduced in an ionosphere-constrained model to enhance the model strength, resulting in better float solutions; (2) realistic widelane ambiguity precision is estimated by capturing the multipath effects due to observation complexity, improving the reliability of widelane AR; (3) for narrowlane AR, partial AR is applied to a subset of ambiguities selected according to successively increasing elevation. For fixing a scalar ambiguity, a rounding method with controllable error probability is proposed. The established ionosphere-constrained model can be efficiently solved with a sequential Kalman filter. It can either be reduced to special cases simply by adjusting the variances of the ionospheric constraints, or extended with more parameters and constraints. The presented methodology is tested over seven baselines of around 100 km from the USA CORS network.
The results show that the new widelane AR scheme achieves a 99.4% success rate with a 0.6% failure rate, while the new rounding method for narrowlane AR achieves a fixing rate of 89% with a failure rate of 0.8%. In summary, AR reliability can be efficiently improved with rigorously controllable probability of incorrectly fixed ambiguities.
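The idea of rounding a scalar ambiguity with a controllable error probability can be sketched using the normal CDF: if the float estimate has standard deviation sigma, the a priori probability that rounding to the nearest integer is correct is 2*Phi(0.5/sigma) - 1, and fixing is attempted only when the implied failure probability is below the tolerated rate. This is a textbook formulation under a Gaussian assumption, not necessarily the paper's exact method:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def try_fix(float_ambiguity, sigma, max_failure=0.008):
    """Round a scalar float ambiguity to the nearest integer only if the
    a priori failure probability 1 - (2*phi(0.5/sigma) - 1) stays below
    the tolerated rate; otherwise keep the float solution."""
    p_success = 2.0 * phi(0.5 / sigma) - 1.0
    if 1.0 - p_success <= max_failure:
        return round(float_ambiguity), p_success
    return None, p_success  # too risky: leave the ambiguity unfixed

fixed, p_hi = try_fix(5.08, sigma=0.12)    # precise estimate: fix to 5
unfixed, p_lo = try_fix(3.40, sigma=0.35)  # noisy estimate: leave unfixed
```

Tightening max_failure trades fixing rate for reliability, which mirrors the reported balance of an 89% fixing rate against a 0.8% failure rate.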