199 results for Rate equation model
Abstract:
A complex low-pressure argon discharge plasma containing dust grains is studied using a Boltzmann equation for the electrons and fluid equations for the ions. Local effects, such as the spatial distribution of the dust density and external electric field, are included, and their effect on the electron energy distribution, the electron and ion number densities, the electron temperature, and the dust charge are investigated. It is found that dust particles can strongly affect the plasma parameters by modifying the electron energy distribution, the electron temperature, the creation and loss of plasma particles, as well as the spatial distributions of the electrons and ions. In particular, for sufficiently high grain density and/or size, in a low-pressure argon glow discharge, the Druyvesteyn-like electron distribution in pristine plasmas can become nearly Maxwellian. Electron collection by the dust grains is the main cause for the change in the electron distribution function.
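The shift from a Druyvesteyn-like distribution toward a near-Maxwellian one can be visualised with the standard one-parameter family of electron energy distribution functions. The sketch below is illustrative only and is not taken from the paper's Boltzmann solver; the 3 eV mean energy is an arbitrary placeholder.

```python
# Illustrative sketch (not the paper's model): the usual generalized EEDF,
# f(E) ~ sqrt(E) * exp(-(E/E_hat)**x), with x = 1 (Maxwellian) and
# x = 2 (Druyvesteyn), normalised in terms of the mean energy <E>.
import numpy as np
from scipy.special import gamma

def generalized_eedf(E, mean_energy, x):
    """Normalised so that integral f dE = 1 and integral E*f dE = mean_energy."""
    c1 = x * gamma(2.5 / x) ** 1.5 / gamma(1.5 / x) ** 2.5
    c2 = (gamma(2.5 / x) / gamma(1.5 / x)) ** x
    return (c1 * np.sqrt(E) / mean_energy ** 1.5
            * np.exp(-c2 * (E / mean_energy) ** x))

E = np.linspace(0.01, 20.0, 500)           # electron energy grid (eV)
maxwellian = generalized_eedf(E, 3.0, 1)   # x = 1: Maxwellian limit
druyvesteyn = generalized_eedf(E, 3.0, 2)  # x = 2: Druyvesteyn limit
print((maxwellian * (E[1] - E[0])).sum())  # ~1, crude normalisation check
```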
Abstract:
The aim of this paper is to determine the strain-rate-dependent mechanical behavior of living and fixed osteocytes and chondrocytes, in vitro. Firstly, Atomic Force Microscopy (AFM) was used to obtain the force-indentation curves of these single cells at four different strain-rates. These results were then employed in inverse finite element analysis (FEA) using a Modified Standard neo-Hookean Solid (MSnHS) idealization of these cells to determine their mechanical properties. In addition, an FEA model with a newly developed spring element was employed to accurately simulate the AFM evaluation. We report that both the cytoskeleton (CSK) and the intracellular fluid govern the strain-rate-dependent mechanical properties of living cells, whereas the intracellular fluid plays the predominant role in the behavior of fixed cells. In addition, the comparisons show that osteocytes are stiffer than chondrocytes at all strain-rates tested, indicating that these cells could serve as biomarkers of their tissue origin. Finally, we report that the MSnHS is able to capture the strain-rate-dependent mechanical behavior of both living and fixed osteocytes and chondrocytes. We therefore conclude that the MSnHS is a good model for exploring the mechanical deformation responses of single osteocytes and chondrocytes. This study could open a new avenue for the analysis of the mechanical behavior of osteocytes and chondrocytes, as well as other similar cell types.
Abstract:
We have developed a Hierarchical Look-Ahead Trajectory Model (HiLAM) that incorporates the firing pattern of medial entorhinal grid cells in a planning circuit that includes interactions with the hippocampus and prefrontal cortex. We show the model’s flexibility in representing large real-world environments using odometry information obtained from challenging video sequences. We acquire the visual data from a camera mounted on a small tele-operated vehicle. The camera has a panoramic field of view with its focal point approximately 5 cm above ground level, similar to what would be expected from a rat’s point of view. Using established algorithms for calculating perceptual speed from the apparent rate of visual change over time, we generate raw dead-reckoning information, which loses spatial fidelity over time due to error accumulation. We rectify this loss of fidelity by exploiting the loop-closure detection ability of a biologically inspired robot navigation model termed RatSLAM. The rectified motion information serves as a velocity input to HiLAM to encode the environment in the form of grid cell and place cell maps. Finally, we show goal-directed path planning results of HiLAM in two different environments: an indoor square maze used in rodent experiments and an outdoor arena more than two orders of magnitude larger than the indoor maze. Together these results bridge for the first time the gap between higher-fidelity bio-inspired navigation models (HiLAM) and more abstracted but highly functional bio-inspired robotic mapping systems (RatSLAM), and move from simulated environments into real-world studies in rodent-sized arenas and beyond.
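As a rough illustration of the dead-reckoning step described above (per-frame speed estimates integrated into a path that drifts without loop closure), here is a minimal sketch. It is not the RatSLAM or HiLAM implementation; the speed and yaw-rate inputs are assumed to come from the visual odometry stage.

```python
# Minimal dead-reckoning sketch (assumed interface, not the RatSLAM code):
# integrate per-frame speed and heading-change estimates into a 2D path.
# Without loop-closure corrections, the accumulated error grows with time.
import numpy as np

def dead_reckon(speeds, yaw_rates, dt):
    """speeds: m/s per frame, yaw_rates: rad/s per frame, dt: frame period (s)."""
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    for v, w in zip(speeds, yaw_rates):
        heading += w * dt
        x += v * dt * np.cos(heading)
        y += v * dt * np.sin(heading)
        path.append((x, y))
    return np.array(path)

# Toy usage: constant speed with a gentle turn
path = dead_reckon(speeds=[0.2] * 100, yaw_rates=[0.05] * 100, dt=0.1)
print(path[-1])
```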
Abstract:
DNA methylation at promoter CpG islands (CGI) is an epigenetic modification associated with inappropriate gene silencing in multiple tumor types. In the absence of a human pituitary tumor cell line, small interfering RNA-mediated knockdown of the maintenance methyltransferase DNA methyltransferase (cytosine 5)-1 (Dnmt1) was used in the murine pituitary adenoma cell line AtT-20. Sustained knockdown induced reexpression of the fully methylated and normally imprinted gene neuronatin (Nnat) in a time-dependent manner. Combined bisulfite restriction analysis (COBRA) revealed that reexpression of Nnat was associated with partial CGI demethylation, which was also observed at the H19 differentially methylated region. Subsequent genome-wide microarray analysis identified 91 genes that were significantly differentially expressed in Dnmt1 knockdown cells (10% false discovery rate). The analysis showed that genes associated with the induction of apoptosis, signal transduction, and developmental processes were significantly overrepresented in this list (P < 0.05). Following validation by reverse transcription-PCR and detection of inappropriate CGI methylation by COBRA, four genes (ICAM1, NNAT, RUNX1, and S100A10) were analyzed in primary human pituitary tumors, each displaying significantly reduced mRNA levels relative to normal pituitary (P < 0.05). For two of these genes, NNAT and S100A10, decreased expression was associated with increased promoter CGI methylation. Induced expression of Nnat in stably transfected AtT-20 cells inhibited cell proliferation. To our knowledge, this is the first report of array-based "epigenetic unmasking" in combination with Dnmt1 knockdown, and it reveals the potential of this strategy for identifying genes silenced by epigenetic mechanisms across species boundaries.
Abstract:
It is well known that, although a uniform magnetic field inhibits the onset of small-amplitude thermal convection in a layer of fluid heated from below, isolated convection cells may persist if the fluid motion within them is sufficiently vigorous to expel magnetic flux. Such fully nonlinear (“convecton”) solutions for magnetoconvection have been investigated by several authors. Here we explore a model amplitude equation describing this separation of a fluid layer into a vigorously convecting part and a magnetically dominated part at rest. Our analysis elucidates the origin of the scaling laws observed numerically to form the boundaries in parameter space of the region of existence of these localised states and, importantly, of the lowest thermal forcing required to sustain them.
Abstract:
A Monte Carlo model of an Elekta iViewGT amorphous silicon electronic portal imaging device (a-Si EPID) has been validated for pre-treatment verification of clinical IMRT treatment plans. The simulations used the BEAMnrc and DOSXYZnrc Monte Carlo codes to predict the response of the iViewGT a-Si EPID model. The predicted EPID images were compared with measured images obtained by delivering a photon beam from an Elekta Synergy linac to the Elekta iViewGT a-Si EPID. The a-Si EPID was used with no additional build-up material. Frame-averaged EPID images were acquired and processed using in-house software. The agreement between the predicted and measured images was analyzed using the gamma analysis technique with acceptance criteria of 3%/3 mm. The predicted EPID images for four clinical IMRT treatment plans agree well with the measured EPID signal: three prostate IMRT plans had average gamma pass rates of more than 95.0%, and a spinal IMRT plan had an average gamma pass rate of 94.3%. During this work a routine MLC calibration was performed and one of the IMRT treatments was re-measured with the EPID; a change in the gamma pass rate for one field was observed. This motivated a series of experiments to investigate the sensitivity of the method by introducing delivery errors (MLC position and dosimetric overshoot) into the simulated EPID images. The method was found to be sensitive to 1 mm leaf position errors and 10% overshoot errors.
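For readers unfamiliar with the 3%/3 mm criterion, the sketch below shows a brute-force global gamma pass-rate calculation. It is a simplified stand-in for the analysis software actually used, without the interpolation and search optimisations of clinical tools.

```python
# Simplified global gamma-analysis sketch (3% / 3 mm), for illustration only.
import numpy as np

def gamma_pass_rate(measured, predicted, pixel_mm, dose_tol=0.03, dist_mm=3.0):
    """Percentage of points with gamma <= 1; dose tolerance is relative to the
    global maximum of the measured image (global gamma)."""
    dose_ref = dose_tol * measured.max()
    ny, nx = measured.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    gammas = np.zeros_like(measured, dtype=float)
    for i in range(ny):
        for j in range(nx):
            dist2 = ((yy - i) ** 2 + (xx - j) ** 2) * pixel_mm ** 2
            ddose2 = (predicted - measured[i, j]) ** 2
            gammas[i, j] = np.sqrt(
                np.min(dist2 / dist_mm ** 2 + ddose2 / dose_ref ** 2))
    return 100.0 * np.mean(gammas <= 1.0)

# Toy usage with synthetic images (not EPID data)
rng = np.random.default_rng(0)
img = rng.random((32, 32))
print(gamma_pass_rate(img, img * 1.01, pixel_mm=0.25))
```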
Abstract:
Trade flows of commodities are generally governed by the principles of comparative advantage under free trade. However, trade flows can be enhanced or distorted not only by various government interventions but also by exchange rate fluctuations, among other factors. This study applies a commodity-specific gravity model to selected vegetable trade flows among Organization for Economic Co-operation and Development (OECD) countries to determine the effects of exchange rate uncertainty on trade. Using data from 1996 to 2002, the results show that, while exchange rate uncertainty significantly reduces trade in the majority of commodity flows, there is evidence that both short- and long-term volatility have a positive effect on the trade flows of specific commodities. The study also tests regional preferential trade agreements, such as the North American Free Trade Agreement (NAFTA), the Asia-Pacific Economic Cooperation (APEC) and the EU, for their differing effects on individual commodities.
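A commodity-specific gravity specification of this kind is typically estimated as a log-linear regression with volatility measures and trade-bloc dummies. The sketch below is a hedged illustration with synthetic stand-in data; the variable names and the estimator shown are assumptions, and the paper's exact specification and panel structure may differ.

```python
# Hedged sketch of a log-linear gravity equation with exchange-rate volatility
# terms and trade-bloc dummies (synthetic data, not the study's panel).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "log_gdp_exporter": rng.normal(27, 1, n),
    "log_gdp_importer": rng.normal(27, 1, n),
    "log_distance": rng.normal(8, 0.5, n),
    "short_run_volatility": rng.gamma(2, 0.01, n),
    "long_run_volatility": rng.gamma(2, 0.01, n),
    "nafta": rng.integers(0, 2, n),
    "apec": rng.integers(0, 2, n),
    "eu": rng.integers(0, 2, n),
})
df["log_trade"] = (0.8 * df.log_gdp_exporter + 0.7 * df.log_gdp_importer
                   - 1.1 * df.log_distance - 5.0 * df.short_run_volatility
                   + rng.normal(0, 1, n))

model = smf.ols(
    "log_trade ~ log_gdp_exporter + log_gdp_importer + log_distance"
    " + short_run_volatility + long_run_volatility + nafta + apec + eu",
    data=df,
).fit()
print(model.summary())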
Abstract:
This paper provides a first look at the acceptance of Accountable-eHealth (AeH) systems, a new genre of eHealth systems designed to manage the information privacy concerns that hinder the proliferation of eHealth. The underlying concept of AeH systems is appropriate use of information through after-the-fact accountability for intentional misuse of information by healthcare professionals. An online questionnaire survey was used to collect data from three educational institutions in Queensland, Australia. A total of 23 hypotheses relating to 9 constructs were tested using a structural equation modelling technique. A total of 334 valid responses were received. The cohort consisted of medical, nursing and other health-related students studying at various levels in both undergraduate and postgraduate courses. The hypothesis testing did not support 7 of the hypotheses. The empirical research model developed was capable of predicting 47.3% of healthcare professionals’ perceived intention to use AeH systems. Validation of the model with a wider survey cohort would be useful to confirm the current findings.
Abstract:
Rating systems are used by many websites to allow customers to rate available items according to their own experience. Reputation models are then used to aggregate the available ratings into reputation scores for those items. A problem with current reputation models is that they focus on improving accuracy for sparse datasets without considering how the models perform on dense datasets. In this paper, we propose a novel reputation model that generates more accurate reputation scores for items on any dataset, whether dense or sparse. The proposed model is a weighted average method in which the weights are generated using the normal distribution. Experiments show promising results for the proposed model over state-of-the-art models on both sparse and dense datasets.
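The abstract describes the model only as a weighted average with normally distributed weights. One plausible reading, shown purely as a sketch, is to weight each rating by a Gaussian kernel centred on the current mean so that outlying ratings contribute less; the exact weighting scheme in the paper may differ.

```python
# Hedged sketch of a normally-weighted average reputation score; this is one
# possible interpretation of "weights generated using the normal distribution",
# not necessarily the paper's formulation.
import numpy as np
from scipy.stats import norm

def reputation_score(ratings):
    ratings = np.asarray(ratings, dtype=float)
    mu = ratings.mean()
    sigma = ratings.std(ddof=0) or 1.0            # guard against zero spread
    weights = norm.pdf(ratings, loc=mu, scale=sigma)  # Gaussian weights
    return np.average(ratings, weights=weights)

print(reputation_score([5, 4, 5, 1, 5]))  # the outlier rating gets a small weight
```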
Abstract:
Land-use regression (LUR) is a technique that can improve the accuracy of air pollution exposure assessment in epidemiological studies. Most LUR models are developed for single cities, which places limitations on their applicability to other locations. We sought to develop a model to predict nitrogen dioxide (NO2) concentrations with national coverage of Australia by using satellite observations of tropospheric NO2 columns combined with other predictor variables. We used a generalised estimating equation (GEE) model to predict annual and monthly average ambient NO2 concentrations measured by a national monitoring network from 2006 through 2011. The best annual model explained 81% of spatial variation in NO2 (absolute RMS error=1.4 ppb), while the best monthly model explained 76% (absolute RMS error=1.9 ppb). We applied our models to predict NO2 concentrations at the ~350,000 census mesh blocks across the country (a mesh block is the smallest spatial unit in the Australian census). National population-weighted average concentrations ranged from 7.3 ppb (2006) to 6.3 ppb (2011). We found that a simple approach using tropospheric NO2 column data yielded models with slightly better predictive ability than those produced using a more involved approach that required simulation of surface-to-column ratios. The models captured within-urban variability in NO2 and can be used to estimate ambient NO2 concentrations at monthly and annual time scales across Australia from 2006 to 2011. We are making our model predictions freely available for research.
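A GEE-based land-use regression of this kind can be fitted with standard statistical software. The sketch below is a hedged illustration using synthetic data; the predictor names and the monitor grouping are placeholders rather than the study's actual variable set.

```python
# Hedged GEE sketch in the spirit of the LUR model described above
# (synthetic monitor-by-month data, hypothetical predictors).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_sites, n_months = 40, 24
df = pd.DataFrame({
    "monitor_id": np.repeat(np.arange(n_sites), n_months),
    "satellite_no2_column": rng.gamma(3, 1.0, n_sites * n_months),
    "major_road_density": np.repeat(rng.gamma(2, 1.0, n_sites), n_months),
})
df["no2_ppb"] = (2.0 + 1.5 * df.satellite_no2_column
                 + 0.8 * df.major_road_density
                 + rng.normal(0, 1, len(df)))

# GEE accounts for repeated (correlated) measurements at the same monitor
model = smf.gee(
    "no2_ppb ~ satellite_no2_column + major_road_density",
    groups="monitor_id",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(model.summary())
```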
Abstract:
‘Approximate Bayesian Computation’ (ABC) represents a powerful methodology for the analysis of complex stochastic systems for which the likelihood of the observed data under an arbitrary set of input parameters may be entirely intractable – the latter condition rendering useless the standard machinery of tractable likelihood-based Bayesian statistical inference [e.g. conventional Markov chain Monte Carlo (MCMC) simulation]. In this paper, we demonstrate the potential of ABC for astronomical model analysis by application to a case study in the morphological transformation of high-redshift galaxies. To this end, we develop, first, a stochastic model for the competing processes of merging and secular evolution in the early Universe, and secondly, through an ABC-based comparison against the observed demographics of massive (Mgal > 10^11 M⊙) galaxies (at 1.5 < z < 3) in the Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS)/Extended Groth Strip (EGS) data set, we derive posterior probability densities for the key parameters of this model. The ‘Sequential Monte Carlo’ implementation of ABC exhibited herein, featuring both a self-generating target sequence and a self-refining MCMC kernel, is amongst the most efficient of contemporary approaches to this important statistical algorithm. Through our chosen case study we also highlight the value of careful summary statistic selection, and demonstrate two modern strategies for assessment and optimization in this regard. Ultimately, our ABC analysis of the high-redshift morphological mix returns tight constraints on the evolving merger rate in the early Universe and favours major merging (with disc survival or rapid reformation) over secular evolution as the mechanism most responsible for building up the first generation of bulges in early-type discs.
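The core ABC idea underlying the analysis can be stated in a few lines of rejection sampling. The paper's Sequential Monte Carlo implementation with adaptive tolerances and an MCMC kernel is far more efficient, so the sketch below is only a conceptual illustration; the simulator, prior and summary functions are hypothetical stand-ins for the galaxy-evolution model and the CANDELS summary statistics.

```python
# Bare-bones rejection-ABC sketch (conceptual illustration only).
import numpy as np

def abc_rejection(observed_summary, prior_sampler, simulate, summarise,
                  n_draws=20_000, tolerance=0.1):
    """Keep prior draws whose simulated summaries land near the observed ones."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()                 # draw from the prior
        s = summarise(simulate(theta))          # simulate and summarise
        if np.linalg.norm(s - observed_summary) < tolerance:
            accepted.append(theta)              # approximate posterior draw
    return np.array(accepted)

# Toy usage: infer the mean of a Gaussian from its sample mean
rng = np.random.default_rng(2)
observed = np.array([1.3])                      # "observed" summary statistic
posterior = abc_rejection(
    observed,
    prior_sampler=lambda: rng.uniform(-5, 5),
    simulate=lambda mu: rng.normal(mu, 1.0, size=200),
    summarise=lambda x: np.array([x.mean()]),
)
print(posterior.mean(), posterior.std())
```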
Abstract:
Objective. Leconotide (CVID, AM336, CNSB004) is an omega-conopeptide similar to ziconotide, which blocks voltage-sensitive calcium channels. However, unlike ziconotide, which must be administered intrathecally, leconotide can be given intravenously because it is less toxic. This study investigated the antihyperalgesic potency of leconotide given intravenously, alone and in combination with morphine administered intraperitoneally, in a rat model of bone cancer pain. Design. Syngeneic rat prostate cancer cells (AT3B-1) were injected into one tibia of male Wistar rats. The tumor expanded within the bone, causing hyperalgesia to heat applied to the ipsilateral hind paw. The maximum dose (MD) of morphine and of leconotide, given alone and in combination, that caused no effect in an open-field activity monitor, on the rotarod, or on blood pressure and heart rate was determined. Paw withdrawal thresholds from noxious heat were measured. Dose-response curves for morphine (0.312–5.0 mg/kg intraperitoneal) and leconotide (0.002–200 µg/kg intravenous) given alone were plotted, and the responses were compared with those caused by morphine and leconotide in combination. Results. Leconotide caused minimal antihyperalgesic effects when administered alone. Morphine given alone intraperitoneally caused dose-related antihyperalgesic effects (ED50 = 2.40 ± 1.24 mg/kg), which were increased by coadministration of leconotide at 20 µg/kg (morphine ED50 = 0.16 ± 1.30 mg/kg), 0.2 µg/kg (morphine ED50 = 0.39 ± 1.27 mg/kg) and 0.02 µg/kg (morphine ED50 = 1.24 ± 1.30 mg/kg). Conclusions. Leconotide caused a significant increase in the reversal by morphine of the bone cancer-induced hyperalgesia without increasing the side effect profile of either drug. Clinical Implication. Translation of the method of analgesia described here into clinical practice would improve the quantity and quality of analgesia in patients with bone metastases. The use of an ordinary parenteral route for administration of the calcium channel blocker (leconotide) at low dose opens up the technique to the large number of patients who could not have an intrathecal catheter for drug administration. Furthermore, the potentiating synergistic effect with morphine on hyperalgesia without increased side effects should lead to greater analgesia with improved quality of life.
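ED50 values like those quoted are obtained by fitting a sigmoidal dose-response curve. The sketch below shows a generic Hill-type fit with made-up dose and response values; it is not the study's data or its exact curve-fitting procedure.

```python
# Illustrative ED50 estimation by fitting a Hill-type dose-response curve
# (doses and responses below are invented placeholders).
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, bottom, top, ed50, slope):
    return bottom + (top - bottom) / (1.0 + (ed50 / dose) ** slope)

doses = np.array([0.312, 0.625, 1.25, 2.5, 5.0])    # mg/kg (illustrative)
effect = np.array([12.0, 25.0, 48.0, 71.0, 88.0])   # % reversal (illustrative)
params, _ = curve_fit(hill, doses, effect, p0=[0.0, 100.0, 2.0, 1.0])
print(f"estimated ED50 = {params[2]:.2f} mg/kg")
```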
Abstract:
Solid-extracellular fluid interaction is believed to play an important role in the strain-rate-dependent mechanical behavior of shoulder articular cartilage. The kangaroo shoulder joint is anatomically and biomechanically similar to the human shoulder joint and is readily available in Australia; kangaroo humeral head cartilage was therefore used as the test tissue in this study. Indentation tests from quasi-static (10^-4/s) to moderately high strain rates (10^-2/s) were conducted on kangaroo humeral head cartilage to investigate its strain-rate-dependent behavior. A finite element (FE) model was then developed in which cartilage was conceptualized as a porous solid matrix filled with incompressible fluid. In this model, the solid matrix was modeled as an isotropic hyperelastic material and the percolating fluid follows Darcy’s law. Using an inverse FE procedure, the constitutive parameters related to the stiffness and compressibility of the solid matrix and to the permeability were obtained from the experimental results. The effect of solid-extracellular fluid interaction and drag force (the resistance to fluid movement) on the strain-rate-dependent behavior was investigated by comparing the influence of constant, strain-dependent and strain-rate-dependent permeability on the FE model predictions. The newly developed porohyperelastic cartilage model with strain-rate-dependent permeability was found to be able to predict the strain-rate-dependent behavior of the cartilage.
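The comparison of constant, strain-dependent and strain-rate-dependent permeability can be made concrete with simple permeability laws. The exponential forms and the coefficients in the sketch below are illustrative assumptions, not the calibrated functions or values reported in the paper.

```python
# Hedged sketch of the three permeability laws being compared: constant,
# strain-dependent, and strain-rate-dependent. Forms and coefficients are
# illustrative placeholders only.
import numpy as np

K0 = 1e-15   # baseline permeability, m^4/(N*s) (illustrative)
M = 5.0      # strain-sensitivity coefficient (illustrative)
C = 10.0     # strain-rate-sensitivity coefficient, s (illustrative)

def k_constant(strain, strain_rate):
    return K0

def k_strain_dependent(strain, strain_rate):
    # permeability decreases as the solid matrix is compressed (strain < 0)
    return K0 * np.exp(M * strain)

def k_strain_rate_dependent(strain, strain_rate):
    # additional reduction at higher loading rates, raising the drag force
    return K0 * np.exp(M * strain) / (1.0 + C * abs(strain_rate))

print(k_strain_dependent(-0.1, 0.0), k_strain_rate_dependent(-0.1, 1e-2))
```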
Abstract:
Structural equation modeling (SEM) is a versatile multivariate statistical technique whose applications have been increasing since its introduction in the 1980s. This paper provides a critical review of 84 articles that used SEM to address construction-related problems over the period 1998–2012, drawn from, but not limited to, seven top construction research journals. A yearly publication trend analysis shows that SEM applications have been accelerating over time. However, there are inconsistencies in the recorded applications and several recurring problems. The important issues that need to be considered in research design, model development and model evaluation are examined and discussed in detail with reference to current applications. A particularly important issue concerns construct validity. Relevant topics for sound research design also include longitudinal versus cross-sectional studies, mediation and moderation effects, sample size and software selection. A guideline framework is provided to help future researchers apply SEM in construction research.
Abstract:
The ambiguity acceptance test is an important quality control procedure in high-precision GNSS data processing. Although ambiguity acceptance test methods have been extensively investigated, their threshold determination is still not well understood. Currently, the threshold is determined either empirically or with the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis, while the FF-approach is theoretically rigorous but computationally demanding. The key to the threshold determination problem is therefore how to determine the threshold efficiently in a theoretically sound way. In this study, a new threshold determination method, named the threshold function method, is proposed to reduce the complexity of the FF-approach. The threshold function method simplifies the FF-approach through a modeling procedure and an approximation procedure. The modeling procedure uses a rational function model to describe the relationship between the FF-difference test threshold and the integer least-squares (ILS) success rate. The approximation procedure replaces the ILS success rate with the easy-to-calculate integer bootstrapping (IB) success rate. The corresponding modeling and approximation errors are analysed with simulated data to avoid nuisance biases and the impact of an unrealistic stochastic model. The results indicate that the proposed method greatly simplifies the FF-approach without introducing significant modeling error, making fixed failure rate threshold determination feasible for real-time applications.
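The threshold-function idea can be summarised as evaluating a pre-fitted rational function of the integer bootstrapping success rate in place of the full fixed failure rate computation. In the sketch below, the rational-function coefficients are hypothetical placeholders standing in for coefficients that would be fitted offline for a chosen failure rate.

```python
# Sketch of the threshold-function approach: a rational function of the
# (easy-to-compute) IB success rate approximates the fixed-failure-rate
# difference-test threshold. Coefficients a0..a2, b0..b2 are placeholders.
import numpy as np

def ff_threshold(ib_success_rate, a=(0.5, -0.3, 0.1), b=(1.0, -0.8, 0.2)):
    """mu(Ps) = (a0 + a1*Ps + a2*Ps^2) / (b0 + b1*Ps + b2*Ps^2),
    evaluated with the IB success rate Ps as a proxy for the ILS success rate."""
    ps = np.asarray(ib_success_rate, dtype=float)
    num = a[0] + a[1] * ps + a[2] * ps ** 2
    den = b[0] + b[1] * ps + b[2] * ps ** 2
    return num / den

print(ff_threshold(0.99))  # threshold for a 99% bootstrapping success rate
```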