204 results for Residual variance


Relevance:

10.00%

Publisher:

Abstract:

Passive air samplers (PAS) consisting of polyurethane foam (PUF) disks were deployed at 6 outdoor air monitoring stations in different land-use categories (commercial, industrial, residential and semi-rural) to assess the spatial distribution of polybrominated diphenyl ethers (PBDEs) in the Brisbane airshed. Air monitoring sites covered an area of 1143 km² and PAS were allowed to accumulate PBDEs in the city's airshed over three consecutive seasons commencing in the winter of 2008. The average sum of five PBDEs (∑5: BDEs 28, 47, 99, 100 and 209) was highest at the commercial and industrial sites (12.7 ± 5.2 ng PUF⁻¹), which were relatively close to the city centre, and was a factor of 8 higher than at residential and semi-rural sites located in outer Brisbane. To estimate the magnitude of the urban ‘plume’, an empirical exponential decay model was fitted to the PAS data vs. distance from the CBD, with the best correlation observed when the particulate-bound BDE-209 was excluded (∑5-209; r² = 0.99) rather than included (∑5; r² = 0.84). At the 95% confidence level the model predicts that, regardless of site characterization, the ∑5-209 concentration in a PAS sample taken 4–10 km from the city centre would be half that of a sample taken at the city centre, and would reach a baseline or plateau (0.6 to 1.3 ng PUF⁻¹) approximately 30 km from the CBD. The observed exponential decay in ∑5-209 levels over distance corresponded with Brisbane's decreasing population density (persons/km²) from the city centre. The residual error associated with the model increased significantly when BDE-209 levels were included, primarily because the highest level (11.4 ± 1.8 ng PUF⁻¹) was consistently detected at the industrial site, indicating a potential primary source at this site. Active air samples collected alongside the PAS at the industrial air monitoring site (B) indicated that BDE-209 dominated the congener composition and was associated entirely with the particulate phase. This study demonstrates that PAS are effective tools for monitoring citywide regional differences; however, interpretation of spatial trends for POPs that are predominantly associated with the particulate phase, such as BDE-209, may be restricted to identifying ‘hotspots’ rather than broad spatial trends.
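As a rough illustration of the empirical model described above, the sketch below fits an exponential decay with a baseline plateau to hypothetical PAS levels versus distance from the CBD; the data values, initial guesses, and parameter names are illustrative assumptions, not the study's numbers.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (Sigma5-209) PAS levels (ng PUF^-1) vs. distance from the CBD (km).
distance = np.array([1.0, 3.0, 6.0, 10.0, 18.0, 30.0])
levels = np.array([10.5, 7.8, 4.6, 2.4, 1.3, 0.9])

def decay(d, c0, k, plateau):
    """Exponential decay towards a baseline plateau."""
    return (c0 - plateau) * np.exp(-k * d) + plateau

popt, _ = curve_fit(decay, distance, levels, p0=(10.0, 0.1, 1.0))
c0, k, plateau = popt

# Distance over which the above-baseline component halves.
d_half = np.log(2.0) / k
print(f"half-distance = {d_half:.1f} km, plateau = {plateau:.2f} ng PUF^-1")
```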

Relevance:

10.00%

Publisher:

Abstract:

The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is to model condition indicators, operating environment indicators, and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of the existing covariate-based hazard models are based on the principle of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have to some extent been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully utilise the three types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) in a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; therefore they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators arise from the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators can be nil in EHM, condition indicators are always present, because they are observed and measured for as long as an asset is operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications failure event data are sparse, and the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimated results with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including the new parameter estimation method in the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
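The abstract specifies only that the baseline hazard depends on both time and condition indicators, while operating environment indicators enter through a covariate function. Below is a minimal sketch of a hazard model with that structure, assuming a Weibull-type baseline modulated by a condition indicator and an exponential link for the environment covariates; the thesis's exact functional forms may differ.

```python
import numpy as np

def baseline_hazard(t, z, beta, eta, alpha):
    """Weibull-type baseline hazard updated by a condition indicator z(t).
    Hypothetical form: the abstract states only that the baseline depends
    on both time and condition indicators."""
    return (beta / eta) * (t / eta) ** (beta - 1.0) * np.exp(alpha * z)

def ehm_hazard(t, z, w, beta, eta, alpha, gamma):
    """EHM-style hazard: baseline (time + condition) scaled by an
    exponential function of operating environment covariates w(t)."""
    return baseline_hazard(t, z, beta, eta, alpha) * np.exp(np.dot(gamma, w))

def reliability(t_grid, z_path, w_path, beta, eta, alpha, gamma):
    """Reliability over [0, t] by numerically integrating the hazard."""
    h = np.array([ehm_hazard(t, z, w, beta, eta, alpha, gamma)
                  for t, z, w in zip(t_grid, z_path, w_path)])
    H = np.trapz(h, t_grid)   # cumulative hazard
    return np.exp(-H)         # R(t) = exp(-H(t))

# Illustrative paths: condition indicator drifting up, constant environment.
t_grid = np.linspace(0.01, 5.0, 200)
z_path = 0.1 * t_grid
w_path = np.tile([1.0], (200, 1))
print(reliability(t_grid, z_path, w_path,
                  beta=2.0, eta=4.0, alpha=0.5, gamma=np.array([0.3])))
```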

Relevance:

10.00%

Publisher:

Abstract:

In recent years, sparse representation based classification (SRC) has received much attention in face recognition with multiple training samples of each subject. However, it cannot be easily applied to a recognition task with insufficient training samples under uncontrolled environments. On the other hand, cohort normalization, as a way of measuring the degradation effect under challenging environments in relation to a pool of cohort samples, has been widely used in the area of biometric authentication. In this paper, for the first time, we introduce cohort normalization to SRC-based face recognition with insufficient training samples. Specifically, a user-specific cohort set is selected to normalize the raw residual, which is obtained from comparing the test sample with its sparse representations corresponding to the gallery subject, using polynomial regression. Experimental results on the AR and FERET databases show that cohort normalization can bring SRC much robustness against various forms of degradation factors for undersampled face recognition.
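The abstract does not detail the normalization beyond a polynomial regression over a user-specific cohort set; the sketch below shows one plausible reading, with scikit-learn's orthogonal matching pursuit standing in for the sparse coding step. The shapes, the regression-on-ranks scheme, and all parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_residual(y, D):
    """Raw SRC residual: reconstruction error of probe y over dictionary D."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5,
                                    fit_intercept=False).fit(D, y)
    return np.linalg.norm(y - D @ omp.coef_)

def cohort_normalize(raw, cohort_residuals, degree=2):
    """Fit a polynomial to sorted cohort residuals vs. rank and normalize
    the raw residual by the fitted value at its rank position (one plausible
    polynomial-regression cohort scheme, not the paper's exact method)."""
    sorted_res = np.sort(cohort_residuals)
    ranks = np.arange(len(sorted_res))
    coeffs = np.polyfit(ranks, sorted_res, degree)
    predicted = np.polyval(coeffs, np.searchsorted(sorted_res, raw))
    return raw / (predicted + 1e-12)

# Toy demo with random data in place of face images.
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 40))        # gallery dictionary (64-dim, 40 atoms)
y = rng.normal(size=64)              # probe sample
cohort = rng.normal(size=(20, 64))   # user-specific cohort samples
cohort_res = np.array([src_residual(c, D) for c in cohort])
print(cohort_normalize(src_residual(y, D), cohort_res))
```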

Relevance:

10.00%

Publisher:

Abstract:

In practical cases of active noise control (ANC), the secondary path usually has time-varying behavior. For these cases, an online secondary path modeling method that uses white noise as a training signal is required to ensure convergence of the system. The modeling accuracy and the convergence rate increase when white noise with a larger variance is used. However, the larger variance increases the residual noise, which degrades the performance of the system and additionally causes instability problems in feedback structures. A sudden change in the secondary path leads to divergence of the online secondary path modeling filter. To overcome these problems, this paper proposes a new approach for online secondary path modeling in feedback ANC systems. The proposed algorithm exploits the advantages of white noise with a larger variance to model the secondary path, but the injection is stopped at the optimum point to increase the performance of the algorithm and to prevent the destabilizing effect of the white noise. In this approach, instead of continuous injection of the white noise, a sudden change in the secondary path during operation causes the algorithm to reactivate injection of the white noise to correct the secondary path estimate. In addition, the proposed method models the secondary path without needing an off-line estimate of the secondary path. Together, these features increase the convergence rate and modeling accuracy, resulting in high system performance. Computer simulation results shown in this paper indicate the effectiveness of the proposed method.
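A minimal sketch of the gated-injection idea follows: an LMS filter models the secondary path from injected white noise, injection stops once the smoothed modelling error settles, and a jump in that error (a sudden path change) reactivates it. The filter length, step size, variance, and thresholds are illustrative choices, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 32                       # modelling filter length
mu = 0.01                    # LMS step size
sigma2 = 0.1                 # injection variance: larger -> faster modelling
s_hat = np.zeros(L)          # secondary path estimate
x_buf = np.zeros(L)          # buffer of injected white noise
inject, p_avg = True, 1.0

def modelling_step(d_residual):
    """One sample of the gated modelling loop; d_residual is the error-mic
    signal containing the injected noise filtered by the true secondary path."""
    global s_hat, x_buf, inject, p_avg
    v = rng.normal(0.0, np.sqrt(sigma2)) if inject else 0.0
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = v
    e = d_residual - s_hat @ x_buf       # modelling error
    if inject:
        s_hat = s_hat + mu * e * x_buf   # LMS update driven by injected noise
    p_avg = 0.99 * p_avg + 0.01 * e * e  # smoothed error power
    if inject and p_avg < 1e-4:
        inject = False                   # converged: stop injecting
    elif not inject and p_avg > 1e-2:
        inject = True                    # sudden path change: reactivate
    return e
```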

Relevance:

10.00%

Publisher:

Abstract:

Objective: Menopause is the consequence of exhaustion of the ovarian follicular pool. AMH, an indirect hormonal marker of ovarian reserve, has recently been proposed as a predictor of age at menopause. Since BMI and smoking status are relevant independent factors associated with age at menopause, we evaluated whether a model including all three of these variables could improve AMH-based prediction of age at menopause. Methods: In the present cohort study, participants were 375 eumenorrheic women aged 19–44 years and a sample of 2,635 Italian menopausal women. AMH values were obtained from the eumenorrheic women. Results: Regression analysis of the AMH data showed that a quadratic function of age provided a good description of these data plotted on a logarithmic scale, with a distribution of residual deviates that was not normal but showed significant left-skewness. Under the hypothesis that menopause can be predicted by AMH dropping below a critical threshold, a model predicting menopausal age was constructed from the AMH regression model and applied to the data on menopause. With the AMH threshold dependent on the covariates BMI and smoking status, the effects of these covariates were shown to be highly significant. Conclusions: In the present study we confirmed the good level of conformity between the distributions of observed and AMH-predicted ages at menopause, and showed that using BMI and smoking status as additional variables improves AMH-based prediction of age at menopause.
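The prediction scheme lends itself to a short sketch: fit log AMH as a quadratic in age, then find the age at which the fitted curve drops below a critical threshold. The data points and threshold below are hypothetical, and the published model's covariate-dependent threshold is omitted.

```python
import numpy as np

# Hypothetical AMH measurements (ng/mL) from eumenorrheic women.
age = np.array([22, 26, 30, 34, 38, 42], dtype=float)
amh = np.array([4.8, 4.1, 3.0, 2.1, 1.1, 0.4])

# Quadratic fit of log10(AMH) vs. age (coefficients: highest degree first).
a, b, c = np.polyfit(age, np.log10(amh), 2)

def predicted_menopause_age(threshold_ng_ml):
    """Smallest age beyond the data at which the fitted log10(AMH) curve
    crosses the given threshold."""
    roots = np.roots([a, b, c - np.log10(threshold_ng_ml)])
    real = roots[np.isreal(roots)].real
    return real[real > age.max()].min()

print(predicted_menopause_age(0.05))   # hypothetical critical threshold
```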

Relevance:

10.00%

Publisher:

Abstract:

An online secondary path modelling method using white noise as a training signal is required in many applications of active noise control (ANC) to ensure convergence of the system. Avoiding continual injection of white noise during system operation makes the system more desirable. The purpose of the proposed method is twofold: to control the white noise by preventing continual injection, and to benefit from white noise with a larger variance. The modelling accuracy and the convergence rate increase when white noise with a larger variance is used; however, the larger variance increases the residual noise, which decreases the performance of the system. This paper proposes a new approach for online secondary path modelling in feedforward ANC systems. The proposed algorithm exploits the advantages of white noise with a larger variance to model the secondary path, but the injection is stopped at the optimum point to increase the performance of the system. Comparative simulation results shown in this paper indicate the effectiveness of the proposed approach in controlling active noise.

Relevance:

10.00%

Publisher:

Abstract:

In many applications of active noise control (ANC), an online secondary path modelling method using white noise as a training signal is required to ensure convergence of the system. The modelling accuracy and the convergence rate increase when white noise with a larger variance is used; however, the larger variance increases the residual noise, which decreases the performance of the system. The proposed algorithm exploits the advantages of white noise with a larger variance to model the secondary path, but the injection is stopped at the optimum point to increase the performance of the system. In this approach, instead of continuous injection of the white noise, a sudden change in the secondary path during operation causes the algorithm to reactivate injection of the white noise to adjust the secondary path estimate. Comparative simulation results shown in this paper indicate the effectiveness of the proposed method.

Relevance:

10.00%

Publisher:

Abstract:

Biological validation of new radiotherapy modalities is essential to understanding their therapeutic potential. Antiprotons have been proposed for cancer therapy due to the enhanced dose deposition provided by antiproton-nucleon annihilation. We assessed cellular DNA damage and the relative biological effectiveness (RBE) of a clinically relevant antiproton beam. Despite a modest LET (~19 keV/μm), antiproton spread-out Bragg peak (SOBP) irradiation caused significantly more residual γ-H2AX foci than X-ray, proton and antiproton plateau irradiation. RBEs of ~1.48 in the SOBP and ~1 in the plateau were measured and used for a qualitative effective dose curve comparison with protons and carbon ions. Foci in the antiproton SOBP were larger and more structured than those from X-rays, protons and carbon ions. This is likely due to overlapping particle tracks near the annihilation vertex, creating spatially correlated DNA lesions. No biological effects were observed 28–42 mm away from the primary beam, suggesting minimal risk from long-range secondary particles.
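For readers unfamiliar with the RBE, it is the ratio of reference dose to test-beam dose producing the same biological effect. The sketch below computes an RBE at an iso-survival level using the linear-quadratic model; the LQ parameters are illustrative, not the paper's measured values.

```python
import numpy as np
from scipy.optimize import brentq

def survival(d, alpha, beta):
    """Linear-quadratic cell survival fraction at dose d (Gy)."""
    return np.exp(-(alpha * d + beta * d * d))

def iso_effect_dose(level, alpha, beta):
    """Dose producing the given survival level."""
    return brentq(lambda d: survival(d, alpha, beta) - level, 1e-6, 50.0)

# Hypothetical LQ parameters for reference (X-ray) and test beams.
d_xray = iso_effect_dose(0.1, alpha=0.2, beta=0.05)
d_test = iso_effect_dose(0.1, alpha=0.35, beta=0.05)
print(f"RBE at 10% survival = {d_xray / d_test:.2f}")
```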

Relevance:

10.00%

Publisher:

Abstract:

Purpose: Urothelial carcinoma (UC) is a disease of the entire urothelium, characterized by multiplicity and multifocality. The clonal relationship among multiple UCs has implications for adjuvant chemotherapy. It has been investigated in studies of chromosomal alteration and single gene mutation. However, these genetic changes can occur in unrelated tumors under similar carcinogenic selection pressures. Tumors with high microsatellite instability (MSI) have numerous DNA mutations, of which many provide no selection benefit. While these tumors represent an ideal model for studying UC clonality, their low frequency has prevented their previous investigation. Materials and Methods: We investigated 32 upper and lower urinary tract UCs with high MSI and 4 non-UC primary cancers in 9 patients. We used the high frequency and specificity of individual DNA mutations in these tumors (MSI at 17 loci) and the early timing of epigenetic events (methylation of 7 gene promoters) to investigate tumor clonality. Results: Molecular alterations varied among tumors from different primary organs, but they appeared related in the UCs of all 9 patients. While 7 patients had a high degree of concordance among UCs, in 2 the UCs shared only a few similar alterations. Genetic and epigenetic abnormalities were frequently found in normal urothelial samples. Conclusions: Multiple UCs in each patient appeared to arise from a single clone. The molecular order of tumor development differed from the timing of clinical presentation and suggested that residual malignant cells persist in the urinary tract despite apparently curative surgery. These cells lead to subsequent tumor relapse, and new methods are required to detect and eradicate them.

Relevance:

10.00%

Publisher:

Abstract:

The objective of this PhD research program is to investigate numerical methods for simulating variably-saturated flow and sea water intrusion in coastal aquifers in a high-performance computing environment. The work is divided into three overlapping tasks: to develop an accurate and stable finite volume discretisation and numerical solution strategy for the variably-saturated flow and salt transport equations; to implement the chosen approach in a high-performance computing environment that may have multiple GPUs or CPU cores; and to verify and test the implementation. The geological description of aquifers is often complex, with porous materials possessing highly variable properties that are best described using unstructured meshes. The finite volume method is a popular method for the solution of the conservation laws that describe sea water intrusion, and is well suited to unstructured meshes. In this work we apply a control volume-finite element (CV-FE) method to an extension of a recently proposed formulation (Kees and Miller, 2002) for variably saturated groundwater flow. The CV-FE method evaluates fluxes at points where material properties and gradients in pressure and concentration are consistently defined, making it both suitable for heterogeneous media and mass conservative. Using the method of lines, the CV-FE discretisation gives a set of differential algebraic equations (DAEs) amenable to solution using higher-order implicit solvers. Heterogeneous computer systems, which use a combination of computational hardware such as CPUs and GPUs, are attractive for scientific computing due to the potential advantages offered by GPUs for accelerating data-parallel operations. We present a C++ library that implements data-parallel methods on both CPUs and GPUs. The finite volume discretisation is expressed in terms of these data-parallel operations, which gives an efficient implementation of the nonlinear residual function. This makes the implicit solution of the DAE system possible on the GPU, because the inexact Newton-Krylov method used by the implicit time stepping scheme can approximate the action of a matrix on a vector using residual evaluations. We also propose preconditioning strategies that are amenable to GPU implementation, so that all computationally intensive aspects of the implicit time stepping scheme are implemented on the GPU. Results are presented that demonstrate the efficiency and accuracy of the proposed numerical methods and formulation. The formulation offers excellent conservation of mass, and higher-order temporal integration increases both the numerical efficiency and the accuracy of the solutions. Flux limiting produces accurate, oscillation-free solutions on coarse meshes, where much finer meshes would be required to obtain solutions of equivalent accuracy using upstream weighting. The computational efficiency of the software is investigated using CPUs and GPUs on a high-performance workstation. The GPU version offers considerable speedup over the CPU version, with one GPU giving a speedup factor of 3 over the eight-core CPU implementation.
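The matrix-free step mentioned above, approximating a Jacobian-vector product from residual evaluations alone, is the core of a Jacobian-free Newton-Krylov solve. A minimal sketch, with a toy residual standing in for the DAE residual, is:

```python
import numpy as np

def jacobian_vector(residual, u, v, f_u=None):
    """Approximate J(u) @ v = (F(u + eps*v) - F(u)) / eps, so the Krylov
    solver never needs an assembled Jacobian, only residual evaluations."""
    if f_u is None:
        f_u = residual(u)
    norm_v = np.linalg.norm(v)
    if norm_v == 0.0:
        return np.zeros_like(u)
    # Standard perturbation scaling to balance truncation and round-off.
    eps = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u)) / norm_v
    return (residual(u + eps * v) - f_u) / eps

# Toy residual F(u) = u**2 - 1 (component-wise); exact Jacobian is diag(2u).
F = lambda u: u * u - 1.0
u0 = np.array([1.0, 2.0, 3.0])
print(jacobian_vector(F, u0, np.ones(3)))   # ~ [2, 4, 6]
```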

Relevance:

10.00%

Publisher:

Abstract:

Our task is to consider the evolving perspectives around curriculum documented in the Theory Into Practice (TIP) corpus to date. The 50 years in question, 1962–2012, account for approximately half the history of mass institutionalized schooling. Over this time, the upper age of compulsory schooling has crept up, stretching the school curriculum's reach, purpose, and clientele. These years also span remarkable changes in the social fabric, challenging deep senses of the nature and shelf-life of knowledge, whose knowledge counts, what science can and cannot deliver, and the very purpose of education. The school curriculum is a key social site where these challenges have to be addressed in a very practical sense, through a design on the future implemented within the resources and politics of the present. The task's metaphor of ‘evolution’ may invoke a sense of gradual cumulative improvement, but equally connotes mutation, hybridization, extinction, survival of the fittest, and environmental pressures. Viewed in this way, curriculum theory and practice cannot be isolated and studied in laboratory conditions—there is nothing natural, neutral, or self-evident about what knowledge gets selected into the curriculum. Rather, the process of selection unfolds as a series of messy, politically contaminated, lived experiments; thus curriculum studies require field work in dynamic open systems. We subscribe to Raymond Williams' approach to social change, which he argues is not absolute and abrupt, one set of ideas neatly replacing the other. For Williams, newly emergent ideas have to compete against the dominant mindset and residual ideas “still active in the cultural process” (Williams, 1977, p. 122). This means ongoing debates. For these reasons, we join Schubert (1992) in advocating “continuous reconceptualising of the flow of experience” (p. 238) by both researchers and practitioners.

Relevance:

10.00%

Publisher:

Abstract:

STUDY DESIGN: Reliability and case-control injury study. OBJECTIVES: 1) To determine whether a novel device, designed to measure eccentric knee flexor strength via the Nordic hamstring exercise (NHE), displays acceptable test-retest reliability; 2) to determine normative values for eccentric knee flexor strength derived from the device in individuals without a history of hamstring strain injury (HSI); and 3) to determine whether the device could detect weakness in elite athletes with a previous history of unilateral HSI. BACKGROUND: HSIs and reinjuries are the most common cause of lost playing time in a number of sports. Eccentric knee flexor weakness is a major modifiable risk factor for future HSIs; however, there is a lack of easily accessible equipment to assess this strength quality. METHODS: Thirty recreationally active males without a history of HSI completed NHEs on the device on 2 separate occasions. Intraclass correlation coefficients (ICCs), typical error (TE), typical error as a coefficient of variation (%TE), and minimum detectable change at a 95% confidence interval (MDC95) were calculated. Normative strength data were determined using the most reliable measurement. An additional 20 elite athletes with a history of unilateral HSI within the previous 12 months performed NHEs on the device to determine whether residual eccentric muscle weakness existed in the previously injured limb. RESULTS: The device displayed high to moderate reliability (ICC = 0.83 to 0.90; TE = 21.7 N to 27.5 N; %TE = 5.8 to 8.5; MDC95 = 60.1 to 76.2 N). Mean ± SD normative eccentric knee flexor strength, based on the uninjured group, was 344.7 ± 61.1 N for the left side and 361.2 ± 65.1 N for the right side. The previously injured limbs were 15% weaker than the contralateral uninjured limbs (mean difference = 50.3 N; 95% CI = 25.7 to 74.9 N; P < .01), 15% weaker than the normative left limb data (mean difference = 50.0 N; 95% CI = 1.4 to 98.5 N; P = .04), and 18% weaker than the normative right limb data (mean difference = 66.5 N; 95% CI = 18.0 to 115.1 N; P < .01). CONCLUSIONS: The experimental device offers a reliable method to determine eccentric knee flexor strength and strength asymmetry, and it revealed residual weakness in previously injured elite athletes.
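The reliability statistics named in the abstract can be reproduced from paired test-retest data with standard formulas (TE as the SD of between-session differences divided by √2, and MDC95 as 1.96·√2·TE). The sketch below uses hypothetical strength values:

```python
import numpy as np

# Hypothetical paired Nordic-exercise strength measurements (N).
session1 = np.array([310.0, 355.0, 402.0, 288.0, 341.0])
session2 = np.array([322.0, 348.0, 410.0, 301.0, 335.0])

diff = session2 - session1
te = diff.std(ddof=1) / np.sqrt(2)               # typical error (N)
pct_te = 100.0 * te / np.concatenate([session1, session2]).mean()  # %TE
mdc95 = 1.96 * np.sqrt(2) * te                   # minimum detectable change

print(f"TE = {te:.1f} N, %TE = {pct_te:.1f}, MDC95 = {mdc95:.1f} N")
```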

Relevance:

10.00%

Publisher:

Abstract:

Application of "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures permits direct and accurate determination of ultimate system strengths, without resort to simplified elastic methods of analysis and semi-empirical specification equations. However, the application of advanced analysis methods has previously been restricted to steel frames comprising only compact sections that are not influenced by the effects of local buckling. A refined plastic hinge method suitable for practical advanced analysis of steel frame structures comprising non-compact sections is presented in a companion paper. The method implicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. The accuracy and precision of the method for the analysis of steel frames comprising non-compact sections is established in this paper by comparison with a comprehensive range of analytical benchmark frame solutions. The refined plastic hinge method is shown to be more accurate and precise than the conventional individual member design methods based on elastic analysis and specification equations.

Relevance:

10.00%

Publisher:

Abstract:

Application of "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures permits direct and accurate determination of ultimate system strengths, without resort to simplified elastic methods of analysis and semi-empirical specification equations. However, the application of advanced analysis methods has previously been restricted to steel frames comprising only compact sections that are not influenced by the effects of local buckling. A research project has been conducted with the aim of developing concentrated plasticity methods suitable for practical advanced analysis of steel frame structures comprising non-compact sections. This paper contains a comprehensive set of analytical benchmark solutions for steel frames comprising non-compact sections, which can be used to verify the accuracy of simplified concentrated plasticity methods of advanced analysis. The analytical benchmark solutions were obtained using a distributed plasticity shell finite element model that explicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. A brief description and verification of the shell finite element model is provided in this paper.

Relevance:

10.00%

Publisher:

Abstract:

Awareness of the need to avoid losses and casualties due to rain-induced landslides is increasing in regions that routinely experience heavy rainfall. Improvements in early warning systems against rain-induced landslides, such as prediction modelling using rainfall records, are urgently needed in vulnerable regions. Existing warning systems have been applied using stability chart development and real-time displacement measurement on slope surfaces. However, these still have drawbacks, such as ignoring the rain-induced instability mechanism, misleading predictions due to their probabilistic nature, and short evacuation lead times. In this research, a real-time predictive method was proposed to alleviate the drawbacks mentioned above. A case-study soil slope in Indonesia that failed in 2010 during rainfall was used to verify the proposed predictive method. Using the results from the field and laboratory characterizations, numerical analyses were applied to develop a model of an unsaturated residual soil slope with deep cracks subject to rainwater infiltration. Real-time rainfall measurement on the slope and the prediction of future rainfall are needed. By coupling transient seepage and stability analysis, the variation of the factor of safety of the slope with time was provided as a basis for developing a method for the real-time prediction of the rain-induced instability of slopes. This study shows that the proposed prediction method has the potential to be used in an early warning system against landslide hazard, since the factor of safety (FOS) value and its timing can be provided before the actual failure of the case-study slope.
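The warning logic implied above can be sketched by tracking a factor of safety as pore pressures rise during infiltration. The infinite-slope formula used here is a standard simplification standing in for the thesis's coupled seepage-stability analysis, and all soil parameters are hypothetical.

```python
import numpy as np

def infinite_slope_fos(c_eff, phi_eff, gamma, depth, slope_deg, pore_pressure):
    """Infinite-slope FOS with effective cohesion c' (kPa), friction angle
    phi' (deg), unit weight gamma (kN/m^3), slip depth (m), and pore
    pressure u (kPa) on the slip surface."""
    beta = np.radians(slope_deg)
    sigma_n = gamma * depth * np.cos(beta) ** 2          # normal stress
    tau = gamma * depth * np.sin(beta) * np.cos(beta)    # shear stress
    strength = c_eff + (sigma_n - pore_pressure) * np.tan(np.radians(phi_eff))
    return strength / tau

# Pore pressure rising as rainfall infiltrates (hypothetical time series).
for hour, u in enumerate([0, 5, 10, 16, 22, 28]):
    fos = infinite_slope_fos(c_eff=8.0, phi_eff=30.0, gamma=18.0,
                             depth=2.0, slope_deg=35.0, pore_pressure=u)
    if fos < 1.2:                                        # warning threshold
        print(f"hour {hour}: FOS = {fos:.2f} -> issue early warning")
```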