165 results for density-dependent model
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure in future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is to model condition indicators and operating environment indicators, and their failure-generating mechanisms, using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed on the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully integrate three types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) into a single model for more effective hazard and reliability prediction. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into the covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators.
Condition indicators provide information about the health condition of an asset; therefore, they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators are caused by the environment in which an asset operates and capture effects that have not been explicitly identified by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may be nought in EHM, condition indicators are always present because they are observed and measured for as long as an asset is operational and has survived. EHM has several advantages over the existing covariate-based hazard models. One is that this model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, failure event data for assets are sparse, and the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories made by the semi-parametric EHM, the non-parametric EHM, which is a distribution-free model, has been developed. The development of EHM into two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimates with those of the other existing covariate-based hazard models. The comparison results demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including the new parameter estimation method in the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
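The abstract describes the structure of EHM only verbally. As an illustration of the distinction it draws, and not the thesis's exact formulation, the hazard of a conventional PHM and an EHM-style hazard can be contrasted as follows, with x(t) denoting condition indicators and z(t) operating environment indicators:

```latex
% Conventional PHM: all covariates enter only through the covariate (link) function
h_{\mathrm{PHM}}(t \mid z) = h_0(t)\,\exp\!\big(\boldsymbol{\gamma}^{\top} z(t)\big)

% EHM-style form (illustrative): condition indicators x(t) reshape the baseline hazard,
% while operating environment indicators z(t) accelerate or decelerate it
h_{\mathrm{EHM}}(t \mid x, z) = h_0\big(t, x(t)\big)\,\exp\!\big(\boldsymbol{\gamma}^{\top} z(t)\big),
\qquad
h_0\big(t, x(t)\big) = \frac{\beta}{\eta}\Big(\frac{t}{\eta}\Big)^{\beta-1}\,\psi\big(x(t)\big)
\quad \text{(semi-parametric, Weibull-based case)}
```

When the operating environment effects are nil (γ = 0), the hazard reduces to the condition-driven baseline, matching the behaviour described above; the non-parametric form would replace the Weibull term with a distribution-free estimate.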
Abstract:
This work has led to the development of empirical mathematical models to quantitatively predict the changes of morphology in an osteocyte-like cell line (MLO-Y4) in culture. MLO-Y4 cells were cultured at low density and the changes in morphology were recorded over 11 hours. Cell area and three dimensionless shape features, including aspect ratio, circularity and solidity, were then determined using widely accepted image analysis software (ImageJ™). Based on the data obtained from the image analysis, mathematical models were developed using the non-linear regression method. The developed mathematical models accurately predict the morphology of MLO-Y4 cells for different culture times and can, therefore, be used as a reference model for analyzing MLO-Y4 cell morphology changes within various biological/mechanical studies, as necessary.
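The abstract does not specify the regression form. As a hedged illustration only, assuming a hypothetical time course of one shape feature and an exponential-relaxation model (neither taken from the study), a non-linear fit of this kind could be set up as:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical measurements: mean circularity of cells at several culture times (hours).
# Illustrative values only, not data from the study.
t = np.array([0, 1, 2, 4, 6, 8, 11], dtype=float)
circularity = np.array([0.85, 0.72, 0.63, 0.52, 0.46, 0.43, 0.41])

def saturating_decay(t, y_inf, y0, k):
    """Assumed model: exponential relaxation from an initial value y0 towards a plateau y_inf."""
    return y_inf + (y0 - y_inf) * np.exp(-k * t)

params, cov = curve_fit(saturating_decay, t, circularity, p0=[0.4, 0.85, 0.5])
y_inf, y0, k = params
print(f"plateau={y_inf:.3f}, initial={y0:.3f}, rate={k:.3f} per hour")
```

Separate fits of this type (one per morphology feature) would give reference curves of morphology versus culture time.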
Abstract:
Heteroatom doping on the edge of graphene may serve as an effective way to tune the chemical activity of carbon-based electrodes with respect to charge carrier transfer in an aqueous environment. In a step towards developing a mechanistic understanding of this phenomenon, we explore herein mechanisms of proton transfer from aqueous solution to pristine and doped graphene edges utilizing density functional theory. Atomic B-, N-, and O-doped edges as well as the native graphene are examined, displaying varying proton affinities and effective interaction ranges with the H3O+ charge carrier. Our study shows that the doped edges characterized by more dispersive orbitals, namely boron and nitrogen, demonstrate more energetically favourable charge carrier exchange compared with oxygen, which features more localized orbitals. Extended calculations are carried out to examine proton transfer from the hydronium ion in the presence of explicit water, with results indicating that the basic mechanistic features of the simpler model are unchanged.
Abstract:
In the decision-making of multi-area ATC (Available Transfer Capacity) in an electricity market environment, the existing transmission network resources should be optimally dispatched and employed in a coordinated manner, on the premise that secure system operation is maintained and the associated risk is controllable. Non-sequential Monte Carlo simulation is used to determine the ATC probability density distribution of the specified areas under the influence of several uncertainty factors; based on this, a coordinated probabilistic optimal decision-making model with maximal risk benefit as its objective is developed for multi-area ATC. NSGA-II is applied to calculate the ATC of each area, considering the risk cost caused by the relevant uncertainty factors and the synchronous coordination among areas. The essential characteristics of the developed model and the employed algorithm are illustrated using the IEEE 118-bus test system. The simulation results show that the risk of multi-area ATC decision-making is influenced by the uncertainties in power system operation and the relative importance degrees of the different areas.
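The abstract outlines the approach only at a high level. A minimal non-sequential Monte Carlo sketch, with a toy transfer-capacity function and made-up line availabilities standing in for the real system model and OPF-based ATC evaluation, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-line capacities (MW) and availabilities (probability of being in service).
line_capacity = np.array([300.0, 250.0, 400.0])
line_availability = np.array([0.98, 0.95, 0.97])

def sampled_atc(in_service):
    # Toy stand-in for the real ATC evaluation (normally a transfer-limit / OPF study):
    # here ATC is simply the total capacity of the lines that happen to be in service.
    return float(np.sum(line_capacity * in_service))

n_samples = 20_000
samples = np.empty(n_samples)
for i in range(n_samples):
    in_service = rng.random(line_capacity.size) < line_availability  # non-sequential state sampling
    samples[i] = sampled_atc(in_service)

# Empirical ATC probability distribution, the input to risk-based decision-making.
print("mean ATC:", samples.mean(), "MW")
print("5th percentile:", np.percentile(samples, 5), "MW")
```

In the paper's setting, the resulting area-wise ATC distributions feed the risk-benefit objective that NSGA-II then optimises across areas.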
Abstract:
In this study, a treatment plan for a spinal lesion, with all beams transmitted through a titanium vertebral reconstruction implant, was used to investigate the potential effect of a high-density implant on a three-dimensional dose distribution for a radiotherapy treatment. The BEAMnrc/DOSXYZnrc and MCDTK Monte Carlo codes were used to simulate the treatment using both a simplified, rectilinear model and a detailed model incorporating the full complexity of the patient anatomy and treatment plan. The resulting Monte Carlo dose distributions showed that the commercial treatment planning system failed to accurately predict both the depletion of dose downstream of the implant and the increase in scattered dose adjacent to the implant. Overall, the dosimetric effect of the implant was underestimated by the commercial treatment planning system and overestimated by the simplified Monte Carlo model. The value of performing detailed Monte Carlo calculations, using the full patient and treatment geometry, was demonstrated.
Abstract:
Density functional theory (DFT) is a powerful approach to electronic structure calculations in extended systems, but currently suffers from inadequate incorporation of long-range dispersion, or van der Waals (VdW), interactions. VdW-corrected DFT is tested for interactions involving molecular hydrogen, graphite, single-walled carbon nanotubes (SWCNTs), and SWCNT bundles. The energy correction, based on an empirical London dispersion term with a damping function at short range, allows a reasonable physisorption energy and equilibrium distance to be obtained for H2 on a model graphite surface. The VdW-corrected DFT calculation for an (8, 8) nanotube bundle reproduces accurately the experimental lattice constant. For H2 inside or outside an (8, 8) SWCNT, we find the binding energies are respectively higher and lower than those on a graphite surface, correctly predicting the well-known curvature effect. We conclude that the VdW correction is a very effective method for augmenting DFT calculations, allowing a reliable description of both short-range chemical bonding and long-range dispersive interactions. The method will find powerful applications in areas of SWCNT research where empirical potential functions either have not been developed, or do not capture the necessary range of both dispersion and bonding interactions.
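The correction formula itself is not reproduced in the abstract. The damped empirical London dispersion term it describes is commonly written in the following DFT-D-style form, shown here as a representative expression rather than the paper's exact parameterization:

```latex
E_{\mathrm{disp}} = -\,s_6 \sum_{i<j} \frac{C_6^{ij}}{r_{ij}^{6}}\, f_{\mathrm{damp}}(r_{ij}),
\qquad
f_{\mathrm{damp}}(r_{ij}) = \frac{1}{1 + e^{-d\,(r_{ij}/R_{ij} - 1)}}
```

Here the $C_6^{ij}$ are pairwise dispersion coefficients, $R_{ij}$ is the sum of VdW radii, and the damping function switches the correction off at short range, where DFT already describes the bonding interaction.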
Abstract:
This paper takes its root in a trivial observation: management approaches are unable to provide relevant guidelines for coping with the uncertainty and trust issues of our modern worlds. Thus, managers look to reduce uncertainty through information-supported decision-making, sustained by ex-ante rationalization. They strive to achieve the best possible solution, stability, predictability, and control of the “future”. Hence, they turn to a plethora of “prescriptive panaceas” and “management fads” to bring simple solutions through best practices. However, these solutions are ineffective. They address only one part of a system (e.g. an organization) instead of the whole. They miss the interactions and interdependencies with other parts, leading to “suboptimization”. Further, classical cause-effect investigations and research are not very helpful in this regard. Where do we go from there? In this conversation, we want to challenge the assumptions supporting traditional management approaches and shed some light on the problem of management discourse fads, using the concept of maturity and maturity models in the context of temporary organizations as a support for reflection. The global economy is characterized by the use and development of standards, and compliance with standards as a practice is said to enable better decision-making by managers under uncertainty, control of complexity, and higher performance. Amongst the plethora of standards, organizational maturity and maturity models hold a specific place due to the general belief in organizational performance as a dependent variable of continuous (business) process improvement, grounded on a kind of evolutionary metaphor. Our intention is neither to offer a new “evidence-based management fad” to practitioners, nor to suggest a research gap to scholars. Rather, we want to open an assumption-challenging conversation with regard to mainstream approaches (neo-classical economics and organization theory), turning “our eyes away from the blinding light of eternal certitude towards the refracted world of turbid finitude” (Long, 2002, p. 44) generating what Bernstein has named “Cartesian Anxiety” (Bernstein, 1983, p. 18), and to revisit the conceptualization of maturity and maturity models. We rely on conventions theory and a systemic-discursive perspective. These two lenses have both information & communication and self-producing systems as common threads. Furthermore, the narrative approach is well suited to exploring complex ways of thinking about organizational phenomena as complex systems. This approach is relevant to our object of curiosity, i.e. the concept of maturity and maturity models, as maturity models (as standards) are discourses and systems of regulations. The main contribution of this conversation is that we suggest moving from a neo-classical “theory of the game”, which aims at making the complex world simpler in playing the game, to a “theory of the rules of the game”, which aims at influencing and challenging the rules of the game constitutive of maturity models (the conventions and governing systems) that make individual calculation compatible with the social context and make possible the coordination of relationships and cooperation between agents with differing or potentially divergent interests and values. A second contribution is the reconceptualization of maturity as a structural coupling between conventions, rather than as an independent variable leading to organizational performance.
Abstract:
Reliability of the performance of biometric identity verification systems remains a significant challenge. Individual biometric samples of the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects that are interdependent, so it is difficult to reduce the rate of one type of error without increasing the other. The focus of this dissertation is to investigate a method based on classifier fusion techniques to better control the trade-off between the verification errors, using text-dependent speaker verification as the test platform. A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed. This fusion method enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived from the base classifier performances. As this assumption may not always be valid, these expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors. The architecture is empirically evaluated on text-dependent speaker verification using Hidden Markov Model based, digit-dependent speaker models in each stage, with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled using two parameters, the number of decision stages (instances) and the number of attempts at each decision stage (samples), fine-tuned on an evaluation/tuning set. The derived expressions for the error estimates are statistically validated on test data. The performance of the sequential method is further shown to depend on the order of the combination of digits (instances) and the nature of the repetitive attempts (samples). The false rejection and false acceptance rates for the proposed fusion are estimated using the base classifier performances, the variance in correlation between classifier decisions, and the sequence of classifiers with favourable dependence selected using the 'Sequential Error Ratio' criterion. The error rates are better estimated by incorporating user-dependent information (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent information (such as client-impostor dependent favourable combinations and class-error based threshold estimation). The proposed architecture is desirable in most speaker verification applications, such as remote authentication and telephone and internet shopping. The tuning of the parameters - the number of instances and samples - serves both the security and user convenience requirements of speaker-specific verification. The architecture investigated here is applicable to verification using other biometric modalities such as handwriting, fingerprints and keystrokes.
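The dissertation's analytical expressions are not reproduced in the abstract. Under the statistical-independence assumption mentioned above, a minimal sketch of how stage-level error rates combine in a serial accept-at-every-stage architecture (with hypothetical per-attempt error rates, not values from the study) is:

```python
import numpy as np

# Hypothetical per-attempt error rates for each digit classifier (one entry per stage/instance).
far_attempt = np.array([0.05, 0.04, 0.06])   # false accept rate per attempt
frr_attempt = np.array([0.08, 0.07, 0.09])   # false reject rate per attempt
attempts_per_stage = 2                        # "samples": repeated attempts, any accept passes the stage

# Within a stage: an impostor is accepted if any attempt is accepted;
# a genuine client is rejected only if every attempt is rejected (independence assumed).
far_stage = 1.0 - (1.0 - far_attempt) ** attempts_per_stage
frr_stage = frr_attempt ** attempts_per_stage

# Across stages: the claimant must be accepted at every stage (instance).
far_total = np.prod(far_stage)                 # impostor must slip through all stages
frr_total = 1.0 - np.prod(1.0 - frr_stage)     # client is rejected if any stage rejects

print(f"FAR={far_total:.4%}, FRR={frr_total:.4%}")
```

Adding stages drives the overall false accept rate down while pushing the false reject rate up, and adding attempts per stage does the reverse, which is the trade-off the two tuning parameters control.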
Abstract:
Background: Transfusion-related acute lung injury (TRALI) is a serious and potentially fatal consequence of transfusion. A two-event TRALI model demonstrated that date-of-expiry day (D)5 platelet (PLT) and D42 packed red blood cell (PRBC) supernatants (SN) induced TRALI in LPS-treated sheep. We have adapted a whole blood transfusion culture model as an investigative bridge between the ovine TRALI model and human responses to transfusion. Methods: A whole blood transfusion model was adapted to replicate the ovine model, specifically +/- 0.23 μg/mL LPS as the first event and 10% SN volume (transfusion) as the second event. Four pooled SN from blood products previously used in the ovine TRALI model were investigated: D1-PLT, D5-PLT, D1-PRBC, and D42-PRBC. Fresh human whole blood (recipient) was mixed with combinations of LPS and BP-SN stimuli and incubated in vitro for 6 hrs. Addition of Golgi plug enabled measurement of monocyte cytokine production (IL-6, IL-8, IL-10, IL-12, TNF-α, IL-1α, CXCL-5, IP-10, MIP-1α, MCP-1) using multi-colour flow cytometry. Responses for 6 recipients were assessed. Results: In the presence of LPS, D42-PRBC-SN significantly increased monocyte IL-6 (P=0.031), IL-8 (P=0.016) and IL-1α (P=0.008) production compared to D1-PRBC-SN. This response to D42-PRBC-SN was LPS-dependent and was not evident in non-LPS-stimulated controls. The response was also specific to D42-PRBC-SN, as similar changes were not evident for D5-PLT-SN compared to D1-PLT-SN, regardless of the presence of LPS. D5-PLT-SN did, however, significantly increase IL-12 production (P=0.024) compared to D1-PLT-SN, and this response was again LPS-dependent. Conclusions: These data demonstrate a novel two-event mechanism of monocyte inflammatory response that was dependent upon both the presence of date-of-expiry blood product SN and LPS. Further, these results demonstrate different cytokine responses induced by date-of-expiry PLT-SN and PRBC-SN. These data are consistent with the evidence from the ovine TRALI model and enhance its relevance to transfusion-related changes in humans.
Abstract:
The formalin test is increasingly applied as a model of inflammatory pain using high formalin concentrations (5–15%). However, little is known about the effects of low formalin concentrations on the related behavioural responses. To examine this, rat pups were subjected to various concentrations of formalin at four developmental stages: 7, 13, 22, and 82 days of age. At postnatal day (PND) 7, sex differences in flinching but not licking responses were observed, with 0.5% formalin evoking higher flinching in males than in females. A dose response was evident in that 0.5% formalin also produced higher licking responses compared to 0.3% or 0.4% formalin. At PND 13, a concentration of 0.8% formalin evoked a biphasic response. At PND 22, a concentration of 1.1% evoked higher flinching and licking responses during the late phase (10–30 min) in both males and females. During the early phase (0–5 min), 1.1% evoked higher licking responses compared to 0.9% or 1% formalin. Formalin at 1.1% produced a biphasic response that was not evident with 0.9% or 1%. At PND 82, rats displayed a biphasic pattern in response to three formalin concentrations (1.25%, 1.75% and 2.25%), with an interphase present for both 1.75% and 2.25% but not for 1.25%. These data suggest that low formalin concentrations induce fine-tuned responses that are not apparent with the high formalin concentrations commonly used in the formalin test. These data also show that the developing nociceptive system is very sensitive to subtle changes in formalin concentration.
Abstract:
Recently, it has been suggested that osteocytes control the activities of bone formation (osteoblasts) and resorption (osteoclasts), indicating their important regulatory role in bone remodelling. However, to date, the role of osteocytes in controlling bone vascularisation remains unknown. Our aim was to investigate the interaction between endothelial cells and osteocytes and to explore the possible molecular mechanisms during angiogenesis. To model osteocyte/endothelial cell interactions, we co-cultured an osteocyte cell line (MLO-Y4) with an endothelial cell line (HUVECs). Co-cultures were performed in a 1:1 mixture of osteocytes and endothelial cells or by using the conditioned media (CM) transfer method. Real-time cell migration of HUVECs was measured with the transwell migration assay and the xCELLigence system. Expression levels of angiogenesis-related genes were measured by quantitative real-time polymerase chain reaction (qRT-PCR). The effects of vascular endothelial growth factor (VEGF) and mitogen-activated protein kinase (MAPK) signaling were monitored by western blotting using the relevant antibodies and inhibitors. During bone formation, it was noted that osteocyte dendritic processes were closely connected to the blood vessels. The CM generated from MLO-Y4 cells activated proliferation, migration, tube-like structure formation, and upregulation of angiogenic genes in endothelial cells, suggesting that secretory factor(s) from osteocytes could be responsible for angiogenesis. Furthermore, we identified that VEGF secreted from MLO-Y4 cells activated VEGFR2–MAPK–ERK signaling pathways in HUVECs. Inhibiting the VEGF and/or MAPK–ERK pathways abrogated osteocyte-mediated angiogenesis in HUVEC cells. Our data suggest an important role of osteocytes in regulating angiogenesis.
Abstract:
INTRODUCTION: There is evidence that the reduction of blood perfusion caused by closed soft tissue trauma (CSTT) delays the healing of the affected soft tissues and bone [1]. We hypothesise that the characterisation of vascular morphology changes (VMC) following injury allows us to determine the effect of the injury on tissue perfusion and thereby the severity of the injury. This research therefore aims to assess the VMC following CSTT in a rat model using contrast-enhanced micro-CT imaging. METHODOLOGY: A reproducible CSTT was created on the left leg of anaesthetized rats (male, 12 weeks) with an impact device. After euthanizing the animals at 6 and 24 hours following trauma, the vasculature was perfused with a contrast agent (Microfil, Flowtech, USA). Both hind-limbs were dissected and imaged using micro-CT for qualitative comparison of the vascular morphology and quantification of the total vascular volume (VV). In addition, biopsy samples were taken from the CSTT region and scanned to compare morphological parameters of the vasculature between the injured and control limbs. RESULTS AND DISCUSSION: While visual observation of the hind-limb scans showed consistent perfusion of the microvasculature with Microfil, enabling the identification of all major blood vessels, no clear differences in the vascular architecture were observed between injured and control limbs. However, the overall VV within the region of interest (ROI) was measured to be higher for the injured limbs after 24 h. Also, scans of the biopsy samples demonstrated that vessel diameter and density were higher in the injured legs 24 h after impact. CONCLUSION: We believe these results will contribute to the development of objective diagnostic methods for CSTT based on changes to the microvascular morphology, as well as aiding in the validation of future non-invasive clinical assessment modalities.
Abstract:
Microvessel density (MVD) is a widely used surrogate measure of angiogenesis in pathological specimens and tumour models. Measurement of MVD can be achieved by several methods, and automation of counting methods aims to increase the speed, reliability and reproducibility of these techniques. The image analysis system described here enables MVD measurement to be carried out with minimal expense in any reasonably equipped pathology department or laboratory. It is demonstrated that the system translates easily, with minimal calibration, between tumour types that are suitably stained. The aim of this paper is to offer this technique to a wider field of researchers in angiogenesis.
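The paper's own system is not reproduced in the abstract. As a hedged sketch of the kind of automated vessel counting it describes, using scikit-image on a hypothetical immunostained field (the file name, threshold choice and field area are all illustrative assumptions, not the authors' method):

```python
import numpy as np
from skimage import io, color, filters, measure, morphology

# Hypothetical input: an RGB "hotspot" field from an immunostained section.
image = io.imread("hotspot_field.png")
gray = color.rgb2gray(image)

# Segment stained vessel profiles: global Otsu threshold, then remove small speckle.
threshold = filters.threshold_otsu(gray)
vessel_mask = gray < threshold                                   # stained structures darker than background
vessel_mask = morphology.remove_small_objects(vessel_mask, min_size=30)

# Each remaining connected component is counted as one microvessel profile.
labels = measure.label(vessel_mask)
vessel_count = labels.max()
field_area_mm2 = 0.59                                            # assumed field area for the objective used
print(f"Microvessel density: {vessel_count / field_area_mm2:.1f} vessels/mm^2")
```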
Abstract:
Introduction: Malignant pleural mesothelioma (MPM) is a rapidly fatal malignancy that is increasing in incidence. The caspase 8 inhibitor FLIP is an anti-apoptotic protein over-expressed in several cancer types, including MPM. The histone deacetylase (HDAC) inhibitor vorinostat (SAHA) is currently being evaluated in relapsed mesothelioma. We examined the roles of FLIP and caspase 8 in regulating SAHA-induced apoptosis in MPM. Methods: The mechanism of SAHA-induced apoptosis was assessed in 7 MPM cell lines and in a multicellular spheroid model. siRNA and overexpression approaches were used, and cell death was assessed by flow cytometry, Western blotting and clonogenic assays. Results: RNAi-mediated FLIP silencing resulted in caspase 8-dependent apoptosis in MPM cell line models. SAHA potently down-regulated FLIP protein expression in all 7 MPM cell lines and in a multicellular spheroid model of MPM. In 6/7 MPM cell lines, SAHA treatment resulted in significant levels of apoptosis induction. Moreover, this apoptosis was caspase 8-dependent in all six sensitive cell lines. SAHA-induced apoptosis was also inhibited by stable FLIP overexpression. In contrast, down-regulation of HR23B, a candidate predictive biomarker for HDAC inhibitors, significantly inhibited SAHA-induced apoptosis in only 1/6 SAHA-sensitive MPM cell lines. Analysis of MPM patient samples demonstrated significant inter-patient variation in FLIP and caspase 8 expression. In addition, SAHA enhanced cisplatin-induced apoptosis in a FLIP-dependent manner. Conclusions: These results indicate that FLIP is a major target for SAHA in MPM and identify FLIP, caspase 8 and associated signalling molecules as candidate biomarkers for SAHA in this disease. © 2011 Elsevier Ltd. All rights reserved.
Abstract:
The method of generalized estimating equations (GEE) is a popular tool for analysing longitudinal (panel) data. Often, the covariates collected are time-dependent in nature, for example, age, relapse status, or monthly income. When using GEE to analyse longitudinal data with time-dependent covariates, crucial assumptions about the covariates are necessary for valid inferences to be drawn. When those assumptions do not hold or cannot be verified, Pepe and Anderson (1994, Communications in Statistics - Simulation and Computation 23, 939–951) advocated using an independence working correlation assumption in the GEE model as a robust approach. However, using GEE with the independence correlation assumption may lead to significant efficiency loss (Fitzmaurice, 1995, Biometrics 51, 309–317). In this article, we propose a method that extracts additional information from the estimating equations that are excluded by the independence assumption. The method always includes the estimating equations under the independence assumption, and the contribution from the remaining estimating equations is weighted according to the likelihood of each equation being a consistent estimating equation and the information it carries. We apply the method to a longitudinal study of the health of a group of Filipino children.
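The article's weighting scheme is not reproduced in the abstract. As a baseline, the robust independence-working-correlation GEE fit that the proposed method builds on can be set up with statsmodels as follows; the variable names and simulated data are hypothetical, not the Filipino children's health study:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical longitudinal data: repeated health scores for children, with
# time-dependent covariates (age, relapse status). Not the study's data.
rng = np.random.default_rng(1)
n_children, n_visits = 50, 4
data = pd.DataFrame({
    "child": np.repeat(np.arange(n_children), n_visits),
    "age": np.tile(np.arange(n_visits), n_children) + rng.normal(6, 1, n_children * n_visits),
    "relapse": rng.integers(0, 2, n_children * n_visits),
})
data["health"] = 50 + 2.0 * data["age"] - 3.0 * data["relapse"] + rng.normal(0, 2, len(data))

# GEE with an independence working correlation: the robust choice advocated by
# Pepe and Anderson (1994) when the time-dependent-covariate assumptions cannot be verified.
model = smf.gee("health ~ age + relapse", groups="child", data=data,
                cov_struct=sm.cov_struct.Independence(),
                family=sm.families.Gaussian())
result = model.fit()
print(result.summary())
```

The article's contribution is to recover part of the efficiency this independence fit gives up, by weighting the additional estimating equations it excludes.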