626 results for loss, PBEE, PEER method, earthquake engineering
Abstract:
Objective: To use our Bayesian method of motor unit number estimation (MUNE) to evaluate lower motor neuron degeneration in ALS. Methods: In subjects with ALS we performed serial MUNE studies. We examined the repeatability of the test and then determined whether the loss of MUs was better fitted by an exponential or a Weibull distribution. Results: The decline in motor unit (MU) numbers was well fitted by an exponential decay curve. We calculated the half-life of MUs in the abductor digiti minimi (ADM), abductor pollicis brevis (APB) and/or extensor digitorum brevis (EDB) muscles. The mean half-life of the MUs of the ADM muscle was greater than those of the APB or EDB muscles. The half-life of MUs was shorter in the ADM muscle of subjects with upper limb onset than in those with lower limb onset. Conclusions: The rate of loss of lower motor neurons in ALS is exponential, the motor units of the APB decay more quickly than those of the ADM muscle, and the rate of loss of motor units is greater at the site of onset of disease. Significance: This shows that the Bayesian MUNE method is useful in following the course and exploring the clinical features of ALS. © 2012 International Federation of Clinical Neurophysiology.
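As an illustration of the exponential fit described above, the following sketch fits N(t) = N0·exp(-kt) to serial motor unit counts and converts the fitted decay rate to a half-life. The serial MUNE values are hypothetical and this is not the authors' Bayesian procedure.

```python
# Minimal sketch: fit an exponential decay to serial motor unit (MU) counts
# and convert the decay rate to a half-life. Data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, n0, k):
    """N(t) = N0 * exp(-k t); half-life = ln(2) / k."""
    return n0 * np.exp(-k * t)

t_months = np.array([0.0, 3.0, 6.0, 9.0, 12.0])       # months since first study
mu_count = np.array([120.0, 95.0, 78.0, 60.0, 49.0])  # hypothetical MUNE values

(n0, k), _ = curve_fit(exp_decay, t_months, mu_count, p0=(mu_count[0], 0.05))
print(f"N0 = {n0:.1f}, decay rate = {k:.3f}/month, half-life = {np.log(2) / k:.1f} months")
```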
Abstract:
Motor unit number estimation (MUNE) is a method which aims to provide a quantitative indicator of progression of diseases that lead to loss of motor units, such as motor neurone disease. However, the development of a reliable, repeatable and fast real-time MUNE method has hitherto proved elusive. Ridall et al. (2007) implemented a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm to produce a posterior distribution for the number of motor units, using a Bayesian hierarchical model that takes into account biological information about motor unit activation. However, we find that the approach can be unreliable for some datasets since it can suffer from poor cross-dimensional mixing. Here we focus on improved inference by marginalising over the latent variables to create the likelihood. In particular, we explore how this can improve the RJMCMC mixing, and we investigate alternative approaches that utilise the likelihood (e.g. DIC (Spiegelhalter et al., 2002)). For this model, the marginalisation over the latent variables is, for a larger number of motor units, an intractable summation over all combinations of a set of latent binary variables whose joint sample space grows exponentially with the number of motor units. We provide a tractable and accurate approximation for this quantity and also investigate simulation approaches incorporated into RJMCMC using the results of Andrieu and Roberts (2009).
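To make the dimensionality problem concrete, the sketch below evaluates the marginal likelihood of a single observation by brute force, summing over all 2^N binary firing patterns under a simplified Gaussian model. The amplitudes, firing probabilities and noise level are hypothetical stand-ins for the hierarchical model, and the cost of this exhaustive loop is what a tractable approximation must avoid.

```python
# Illustrative only: exact marginalisation over the 2^N binary firing
# indicators of N motor units under a simplified Gaussian observation model.
import itertools
import numpy as np
from scipy.stats import norm

def marginal_likelihood(y, amplitudes, fire_probs, sigma):
    """p(y) = sum over firing patterns z of p(z) * Normal(y | z . a, sigma^2)."""
    total = 0.0
    for z in itertools.product([0, 1], repeat=len(amplitudes)):  # 2^N terms
        z = np.array(z)
        prior = np.prod(np.where(z == 1, fire_probs, 1.0 - fire_probs))
        total += prior * norm.pdf(y, loc=z @ amplitudes, scale=sigma)
    return total

a = np.array([0.3, 0.5, 0.8, 1.1])   # hypothetical unit amplitudes (mV)
p = np.array([0.2, 0.5, 0.7, 0.9])   # hypothetical firing probabilities
print(marginal_likelihood(y=1.4, amplitudes=a, fire_probs=p, sigma=0.1))
```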
Abstract:
BACKGROUND There is little doubt that our engineering graduates' ability to identify cultural differences and their potential impact on engineering projects, and to work effectively with these differences, is of key importance in modern engineering practice. Within engineering degree programs themselves there is also a significant need to recognise the impact of changing student and staff profiles on what happens in the classroom. The research described in this paper forms part of a larger project exploring issues of intercultural competence in engineering. PURPOSE This paper presents an observational and survey study of undergraduate and postgraduate engineering students from four institutions working in groups on tasks with a purely technical focus, or with a cultural and humanitarian element. The study sought to explore how students rate their own intercultural competence and team process and whether any differences exist depending on the nature of the task they are working on. We also investigated whether any differences were evident between groups of first year, second year and postgraduate students. DESIGN/METHOD The study used the miniCQS instrument (Ang & Van Dyne, 2008) and a scale based on Bales Interaction Process Analysis (Bales, 1950; Carney, 1976) to collect students' self-ratings of group process, task management, and cultural experience and behaviour. The Bales IPA was also used for coding video observations of students working in groups. Survey data were used to form descriptive variables to compare outcomes across the different tasks and contexts. Observations analysed in NVivo were used to provide commentary and additional detail on the quantitative data. RESULTS The results of the survey indicated consistent mean scores on each survey item for each group of students, despite vastly different tasks, student backgrounds and educational contexts. Some small, statistically significant mean differences existed, offering some basic insights into how task and student group composition could affect self-ratings. Overall, though, the results suggest minimal shift in how students view group function and their intercultural experience, irrespective of differing educational experience. CONCLUSIONS The survey results, contrasted with the group observations, indicate either that students are not translating their experience in the group tasks into critical self-assessment of their cultural competence and teamwork, or that they become more critical of team performance and cultural competence as their competence in these areas grows, so their ratings remain consistent. Both outcomes indicate that students need more intensive guidance to build their critical self- and peer-assessment skills in these areas, irrespective of their year level of study.
Abstract:
We consider the space fractional advection–dispersion equation, which is obtained from the classical advection–diffusion equation by replacing the spatial derivatives with a generalised derivative of fractional order. We derive a finite volume method that utilises fractionally-shifted Grünwald formulae for the discretisation of the fractional derivative, to numerically solve the equation on a finite domain with homogeneous Dirichlet boundary conditions. We prove that the method is stable and convergent when coupled with an implicit timestepping strategy. Results of numerical experiments are presented that support the theoretical analysis.
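A minimal sketch of the shifted Grünwald approximation that such discretisations build on is given below. The grid, test function and fractional order are generic illustrations under homogeneous Dirichlet boundary conditions, not the paper's finite volume scheme or its stability analysis.

```python
# Shifted Grünwald approximation of a left-sided fractional derivative of
# order alpha (1 < alpha <= 2) on a uniform grid; illustrative only.
import numpy as np

def grunwald_weights(alpha, n):
    """w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k, i.e. (-1)^k * C(alpha, k)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def shifted_grunwald_derivative(u, alpha, dx):
    """D^alpha u(x_i) ~ dx^(-alpha) * sum_k w_k u_{i-k+1} (shift p = 1)."""
    n = len(u)
    w = grunwald_weights(alpha, n)
    d = np.zeros(n)
    for i in range(n):
        for k in range(i + 2):
            j = i - k + 1
            if 0 <= j < n:
                d[i] += w[k] * u[j]
    return d / dx**alpha

x = np.linspace(0.0, 1.0, 21)
print(shifted_grunwald_derivative(x**2, alpha=1.8, dx=x[1] - x[0])[:5])
```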
Abstract:
In this paper, a new comprehensive planning methodology is proposed for implementing distribution network reinforcement. Load growth, voltage profile, distribution line loss, and reliability are considered in this procedure. A time-segmentation technique is employed to reduce the computational load. The options considered range from supporting the load growth using the traditional approach of upgrading the conventional equipment in the distribution network, through to the use of dispatchable distributed generators (DDGs). The objective function is composed of the construction cost, loss cost and reliability cost. As constraints, the bus voltages and the feeder currents must be maintained within standard limits, and the DDG output power must not fall below a specified fraction of its rated power, for efficiency reasons. A hybrid optimization method, called modified discrete particle swarm optimization, is employed to solve this nonlinear and discrete optimization problem. A comparison is performed between the solution optimized on the basis of planning capacitors, tap-changing transformers and line upgrades alone, and the solution obtained when DDGs are also included in the optimization.
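The discrete particle swarm mechanics can be sketched generically as below; the binary decision vector, toy cost function and parameters are hypothetical placeholders, not the paper's modified algorithm or its construction/loss/reliability objective.

```python
# Generic binary particle swarm optimisation loop for a discrete planning
# problem; the cost function here is a hypothetical stand-in.
import numpy as np

rng = np.random.default_rng(0)

def toy_cost(x):
    """Hypothetical cost: distance from a target pattern of reinforcement options."""
    target = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    return int(np.sum(np.abs(x - target)))

def binary_pso(cost, n_bits=8, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5):
    x = rng.integers(0, 2, size=(n_particles, n_bits))
    v = rng.normal(0.0, 1.0, size=(n_particles, n_bits))
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = (rng.random(x.shape) < 1.0 / (1.0 + np.exp(-v))).astype(int)  # sigmoid mapping
        costs = np.array([cost(p) for p in x])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = x[better], costs[better]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, cost(gbest)

print(binary_pso(toy_cost))
```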
Abstract:
We have compared the effects of different sterilization techniques on the properties of Bombyx mori silk fibroin thin films, with a view to their subsequent use for corneal tissue engineering. The transparency, tensile properties, corneal epithelial cell attachment and degradation of the films were used to evaluate the suitability of several sterilization techniques, including gamma-irradiation (in air or nitrogen), steam treatment and immersion in aqueous ethanol. The investigations showed that gamma-irradiation, performed either in air or in a nitrogen atmosphere, did not significantly alter the properties of the films. The films sterilized by gamma-irradiation or by immersion in ethanol had a transparency greater than 98% and tensile properties comparable to human cornea and amniotic membrane, the materials of choice in the reconstruction of the ocular surface. Although steam-sterilization produced stronger, stiffer films, they were less transparent, and cell attachment was affected by the variable topography of these films. It was concluded that gamma-irradiation should be considered the most suitable method for the sterilization of silk fibroin films; however, treatment with ethanol is also an acceptable method.
Abstract:
This paper presents a method for investigating ship emissions, the plume capture and analysis system (PCAS), and its application in measuring airborne pollutant emission factors (EFs) and particle size distributions. The current investigation was conducted in situ, aboard two dredgers (Amity, a cutter suction dredger, and Brisbane, a hopper suction dredger), but the PCAS is also capable of performing such measurements remotely at a distant point within the plume. EFs were measured relative to the fuel consumption using the fuel-combustion-derived plume CO2. All plume measurements were corrected by subtracting background concentrations sampled regularly from upwind of the stacks. Each measurement typically took 6 minutes to complete, and during one day 40 to 50 measurements were possible. The relationship between the EFs and plume sample dilution was examined to determine the plume dilution range over which the technique could deliver consistent results when measuring EFs for particle number (PN), NOx, SO2 and PM2.5, within a targeted dilution factor range of 50-1000 suitable for remote sampling. The EFs for NOx, SO2 and PM2.5 were found to be independent of dilution for dilution factors within that range. The EF measurement for PN was corrected for coagulation losses by applying a time-dependent particle loss correction to the particle number concentration data. For the Amity, the EF ranges were PN: 2.2-9.6 × 10^15 (kg-fuel)^-1; NOx: 35-72 g(NO2).(kg-fuel)^-1; SO2: 0.6-1.1 g(SO2).(kg-fuel)^-1; and PM2.5: 0.7-6.1 g(PM2.5).(kg-fuel)^-1. For the Brisbane they were PN: 1.0-1.5 × 10^16 (kg-fuel)^-1; NOx: 3.4-8.0 g(NO2).(kg-fuel)^-1; SO2: 1.3-1.7 g(SO2).(kg-fuel)^-1; and PM2.5: 1.2-5.6 g(PM2.5).(kg-fuel)^-1. The results are discussed in terms of the operating conditions of the vessels' engines. Particle number emission factors as a function of size, as well as the count median diameter (CMD) and geometric standard deviation of the size distributions, are provided. The size distributions were found to be consistently unimodal in the range below 500 nm, and this mode was within the accumulation mode range for both vessels. The representative CMDs for the various activities performed by the dredgers ranged from 94-131 nm in the case of the Amity, and 58-80 nm for the Brisbane. A strong inverse relationship between CMD and EF(PN) was observed.
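The fuel-based emission factor calculation described above follows a standard form: background-subtract the plume concentrations, then scale the pollutant-to-CO2 molar ratio by the CO2 emitted per kilogram of fuel. The sketch below illustrates this for NOx (expressed as NO2); the carbon fraction and concentrations are hypothetical and this is not the PCAS processing code.

```python
# Fuel-based emission factor from background-corrected plume concentrations.
M_CO2, M_NO2 = 44.01, 46.01                            # g/mol
carbon_fraction = 0.865                                # assumed fuel carbon mass fraction
ef_co2 = carbon_fraction * (M_CO2 / 12.01) * 1000.0    # g CO2 emitted per kg fuel

def emission_factor(plume_ppb, bg_ppb, plume_co2_ppm, bg_co2_ppm, molar_mass):
    """EF [g/(kg fuel)] = (dX/dCO2 in mol/mol) * (M_X / M_CO2) * EF_CO2."""
    d_pollutant = (plume_ppb - bg_ppb) * 1e-9          # excess mole fraction
    d_co2 = (plume_co2_ppm - bg_co2_ppm) * 1e-6        # excess mole fraction
    return (d_pollutant / d_co2) * (molar_mass / M_CO2) * ef_co2

# Hypothetical plume and upwind background sample
print(emission_factor(plume_ppb=450.0, bg_ppb=10.0,
                      plume_co2_ppm=480.0, bg_co2_ppm=400.0, molar_mass=M_NO2))
```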
Abstract:
An analytical method for the detection of carbonaceous gases by a non-dispersive infrared (NDIR) sensor has been developed. Calibration plots for six carbonaceous gases, including CO2, CH4, CO, C2H2, C2H4 and C2H6, were obtained and the reproducibility determined to verify the feasibility of this gas monitoring method. The results show that the squared correlation coefficients for the six gas measurements are greater than 0.999. The reproducibility is excellent, indicating that this analytical method is useful for determining the concentrations of carbonaceous gases.
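A calibration plot and its squared correlation coefficient can be checked with a few lines of code; the reference concentrations and sensor responses below are hypothetical, not the reported data.

```python
# Linear calibration of one NDIR channel and its squared correlation coefficient.
import numpy as np

conc_ppm = np.array([0.0, 100.0, 200.0, 400.0, 800.0])    # reference concentrations
response = np.array([0.002, 0.101, 0.199, 0.402, 0.801])  # hypothetical sensor readings

slope, intercept = np.polyfit(conc_ppm, response, 1)
predicted = slope * conc_ppm + intercept
r_squared = 1.0 - np.sum((response - predicted) ** 2) / np.sum((response - response.mean()) ** 2)
print(f"slope = {slope:.3e}, intercept = {intercept:.3e}, R^2 = {r_squared:.5f}")
```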
Abstract:
Background Loss of heterozygosity (LOH) is an important marker for one of the 'two hits' required for tumor suppressor gene inactivation. Traditional methods for mapping LOH regions require the comparison of both tumor and patient-matched normal DNA samples. However, for many archival samples, patient-matched normal DNA is not available, leading to the under-utilization of this important resource in LOH studies. Here we describe a new method for LOH analysis that relies on the genome-wide comparison of heterozygosity of single nucleotide polymorphisms (SNPs) between cohorts of cases and unmatched healthy control samples. Regions of LOH are defined by consistent decreases in heterozygosity across a genetic region in the case cohort compared to the control cohort. Methods DNA was collected from 20 Follicular Lymphoma (FL) tumor samples, 20 Diffuse Large B-cell Lymphoma (DLBCL) tumor samples, neoplastic B-cells of 10 B-cell Chronic Lymphocytic Leukemia (B-CLL) patients, and buccal cell samples matched to 4 of these B-CLL patients. The cohort heterozygosity comparison method was developed and validated using LOH derived in a small cohort of B-CLL by traditional comparisons of tumor and normal DNA samples, and compared to the only alternative method for LOH analysis without patient-matched controls. LOH candidate regions were then generated for enlarged cohorts of B-CLL, FL and DLBCL samples using our cohort heterozygosity comparison method in order to evaluate potential LOH candidate regions in these non-Hodgkin's lymphoma tumor subtypes. Results Using a small cohort of B-CLL samples with patient-matched normal DNA, we have validated the utility of this method and shown that it displays more accuracy and sensitivity in detecting LOH candidate regions than the only alternative method, the Hidden Markov Model (HMM) method. Subsequently, using B-CLL, FL and DLBCL tumor samples, we have utilised cohort heterozygosity comparisons to localise LOH candidate regions in these subtypes of non-Hodgkin's lymphoma. Detected LOH regions included both previously described regions of LOH and novel genomic candidate regions. Conclusions We have proven the efficacy of the use of cohort heterozygosity comparisons for genome-wide mapping of LOH and shown it to be in many ways superior to the HMM method. Additionally, the use of this method to analyse SNP microarray data from 3 common forms of non-Hodgkin's lymphoma yielded interesting tumor suppressor gene candidates, including the ETV3 gene, which was highlighted in both B-CLL and FL.
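The core of a cohort heterozygosity comparison can be sketched as follows: compute the per-SNP heterozygous call rate in each cohort and flag runs of SNPs where the case rate falls consistently below the control rate. The genotype matrices, threshold and run length here are hypothetical simplifications, not the published pipeline.

```python
# Conceptual cohort heterozygosity comparison for candidate LOH regions.
import numpy as np

def heterozygosity(calls):
    """Per-SNP fraction of heterozygous calls (encode AA=0, AB=1, BB=2; columns are SNPs)."""
    return np.mean(calls == 1, axis=0)

def candidate_loh_regions(case_calls, control_calls, drop=0.25, min_run=3):
    """Index ranges where case heterozygosity is below control by 'drop' for >= min_run SNPs."""
    reduced = heterozygosity(case_calls) < heterozygosity(control_calls) - drop
    regions, start = [], None
    for i, flag in enumerate(np.append(reduced, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_run:
                regions.append((start, i - 1))
            start = None
    return regions

rng = np.random.default_rng(1)
controls = rng.choice([0, 1, 2], size=(40, 50), p=[0.25, 0.5, 0.25])  # 40 controls, 50 SNPs
cases = controls.copy()
cases[:, 20:30] = rng.choice([0, 2], size=(40, 10))                   # simulated LOH block
print(candidate_loh_regions(cases, controls))                         # expect [(20, 29)]
```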
Abstract:
Railway bridges deteriorate over time due to different critical factors, including flood, wind, earthquake, collision, and environmental factors such as corrosion, wear and termite attack. In current practice, the contributions of these critical factors to the deterioration of railway bridges, which indicate their criticality, are not appropriately taken into account. In this paper, a new method for quantifying the criticality of these factors will be introduced. The available knowledge, as well as the risk analyses conducted in different Australian standards developed for bridge design, will be adopted. The analytic hierarchy process (AHP) is utilized for prioritising the factors. The method is used in the synthetic rating of railway bridges developed by the authors of this paper. Enhancing the reliability of predicting the vulnerability of railway bridges to the critical factors will be the significant achievement of this research.
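The AHP step can be illustrated with a generic priority-vector calculation: pairwise comparisons of the critical factors are collected in a reciprocal matrix whose principal eigenvector gives the factor weights, checked with a consistency ratio. The comparison values below are hypothetical, not the ratings used in the paper.

```python
# Generic AHP weighting of critical factors from a pairwise comparison matrix.
import numpy as np

factors = ["flood", "wind", "earthquake", "collision", "corrosion"]
A = np.array([                 # hypothetical Saaty-scale comparisons, A[i, j] = i over j
    [1,   3,   5,   4,   2  ],
    [1/3, 1,   3,   2,   1/2],
    [1/5, 1/3, 1,   1/2, 1/4],
    [1/4, 1/2, 2,   1,   1/3],
    [1/2, 2,   4,   3,   1  ],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                        # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = len(factors)
ci = (eigvals.real[k] - n) / (n - 1)               # consistency index
cr = ci / 1.12                                     # random index for n = 5
for f, w in zip(factors, weights):
    print(f"{f:10s} weight = {w:.3f}")
print(f"consistency ratio = {cr:.3f} (acceptable if < 0.1)")
```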
Abstract:
The condition of bridges deteriorates with age due to different critical factors, including changes in loading, fatigue, environmental effects and natural events. In order to rate a network of bridges based on their structural condition, the condition of the components of a bridge and their effects on the behaviour of the bridge should be reliably estimated. In this paper, a new method for quantifying the criticality and vulnerability of the components of the railway bridges in a network will be introduced. The type of structural analysis for identifying the criticality of the components for carrying train loads will be determined. In addition, the analytical methods for identifying the vulnerability of the components to natural events whose probability of occurrence is important, such as flood, wind, earthquake and collision, will be determined. In order to keep this method practical for application to a network of thousands of railway bridges, the simplicity of the structural analysis has been taken into account. Demand-by-capacity ratios of the components at both the safety and serviceability condition states, as well as the weighting factors used in current bridge management systems (BMS), are taken into consideration. It will be explained what types of information related to the structural condition of a bridge need to be obtained, recorded and analysed. The authors of this paper will use this method in a new rating system introduced previously. Enhancing the accuracy and reliability of evaluating and predicting the vulnerability of railway bridges to environmental effects and natural events will be the significant achievement of this research.
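One simple way to combine component-level results into a bridge-level figure, in the spirit of the demand-by-capacity ratios and weighting factors mentioned above, is a weighted average; the component names, ratios and weights below are purely hypothetical.

```python
# Hypothetical weighted combination of component demand-by-capacity ratios (DCRs).
components = {
    # name: (DCR at the governing condition state, weighting factor)
    "main girder":  (0.85, 0.35),
    "cross girder": (0.60, 0.20),
    "pier":         (0.90, 0.30),
    "bearing":      (0.40, 0.15),
}

score = sum(dcr * w for dcr, w in components.values()) / sum(w for _, w in components.values())
print(f"weighted demand-by-capacity score = {score:.2f}")
```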
Abstract:
This study presented a novel method for the purification of three different grades of diatomite from China by a scrubbing technique using sodium hexametaphosphate (SHMP) as a dispersant, combined with centrifugation. The effects of pH value and dispersant amount on the grade of the purified diatomite were studied and the optimum experimental conditions were obtained. The original diatomite and the products derived after purification were characterized by scanning electron microscopy (SEM), X-ray diffraction (XRD), infrared spectroscopy (IR) and specific surface area analysis (BET). The results indicated that the pore size distribution, impurity content and bulk density of the purified diatomite were improved significantly. The dispersive effect of pH and SHMP on the separation of diatomite from clay minerals was discussed systematically through zeta potential tests. Additionally, a possible purification mechanism was proposed in the light of the experimental results obtained.
Abstract:
The numerical solution in one space dimension of advection-reaction-diffusion systems with nonlinear source terms may incur a high computational cost when presently available methods are used. Numerous examples of finite volume schemes with high-order spatial discretisations, together with various techniques for approximating the advection term, can be found in the literature. Almost all such techniques result in a nonlinear system of equations as a consequence of the finite volume discretisation, especially when there are nonlinear source terms in the associated partial differential equation models. This work introduces a new technique that avoids generating such nonlinear systems of equations in the spatial discretisation process when the nonlinear source terms in the model equations can be expanded in positive powers of the dependent function of interest. The basis of the method is a new linearisation technique for the temporal integration of the nonlinear source terms, used as a supplement to a more typical finite volume method. The resulting linear system of equations is shown to be both accurate and significantly faster to solve than methods that necessitate the use of solvers for nonlinear systems of equations.
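The general flavour of such a linearisation can be sketched for u_t = D u_xx + S(u) with a polynomial source S(u) = a u + b u^2: expanding S about the previous time level, S(u^{n+1}) ≈ S(u^n) + S'(u^n)(u^{n+1} - u^n), turns the implicit step into a single linear solve. This is a generic Taylor-type linearisation with hypothetical coefficients, not necessarily the paper's scheme.

```python
# Implicit step for u_t = D u_xx + S(u), S(u) = a*u + b*u^2, with the source
# linearised about the previous time level so only a linear system is solved.
import numpy as np

def step(u, dt, dx, D, a, b):
    n = len(u)
    S = a * u + b * u**2                 # source at previous level
    dS = a + 2.0 * b * u                 # its derivative
    r = D * dt / dx**2
    A = np.zeros((n, n))                 # (I - dt*L - dt*diag(dS)), Dirichlet BCs
    for i in range(n):
        A[i, i] = 1.0 + 2.0 * r - dt * dS[i]
        if i > 0:
            A[i, i - 1] = -r
        if i < n - 1:
            A[i, i + 1] = -r
    return np.linalg.solve(A, u + dt * (S - dS * u))

nodes = np.linspace(0.0, 1.0, 51)
u = np.sin(np.pi * nodes)[1:-1]          # interior values, u = 0 at both boundaries
for _ in range(100):
    u = step(u, dt=1e-3, dx=nodes[1] - nodes[0], D=0.1, a=1.0, b=-0.5)
print(u.max())
```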
Abstract:
The experiences of loss reduction projects in the electric power distribution companies (EPDCs) of Iran are presented. The loss reduction methods proposed individually by 14 EPDCs, together with the corresponding energy savings (ES), investment costs (IC) and loss rate reductions, are provided. In order to illustrate the effectiveness and performance of the loss reduction methods, three parameters are proposed: energy saving per investment cost (ESIC), energy saving per quantity (ESPQ), and investment cost per quantity (ICPQ). The overall ESIC of the 14 EPDCs, as well as the average and standard deviation of the ESIC for each individual method, are presented and compared. In addition, the average and standard deviation of the ESPQ and ICPQ for each loss reduction method are provided and investigated. These parameters are useful as a benchmark and as background for planning purposes for EPDCs that intend to reduce electric losses in distribution networks.
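The three benchmarking ratios are straightforward to compute once the per-method totals are known, as in the hypothetical example below; the figures are illustrative only, not the reported Iranian EPDC data.

```python
# Benchmarking ratios ESIC, ESPQ and ICPQ for hypothetical loss reduction methods.
methods = {
    # name: (energy saving [MWh/yr], investment cost [k$], quantity of installations)
    "reconductoring":          (1200.0, 800.0, 40),
    "capacitor placement":     (450.0, 120.0, 60),
    "transformer replacement": (300.0, 200.0, 25),
}

for name, (es, ic, qty) in methods.items():
    esic, espq, icpq = es / ic, es / qty, ic / qty
    print(f"{name:24s} ESIC={esic:5.2f} MWh/k$  ESPQ={espq:6.1f} MWh/unit  ICPQ={icpq:5.1f} k$/unit")
```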
Abstract:
The increasing importance and use of infrastructure such as bridges demands more effective structural health monitoring (SHM) systems. SHM has addressed damage detection through several methods, such as modal strain energy (MSE). Many of the available MSE methods either have been validated only for limited types of structures, such as beams, or do not perform satisfactorily; they therefore require further improvement and validation for different types of structures. In this study, an MSE method was mathematically improved to precisely quantify structural damage at an early stage of formation. The MSE equation was first formulated accurately, taking the damaged stiffness into account, and was then used to derive a more accurate sensitivity matrix. The improved method was verified on two plane structures: a steel truss bridge model and a concrete frame bridge model, representative of short- and medium-span bridges. Two damage scenarios, single- and multiple-damage, were considered for each structure. Then, for each structure, both intact and damaged, modal analysis was performed using STRAND7. The effects of up to 5 per cent noise were also included. The simulated mode shapes and natural frequencies were then imported into a MATLAB code. The results indicate that the improved method converges quickly and performs well, in agreement with the numerical assumptions, within a few computational cycles. It also performs well in the presence of some noise. The findings of this study can be extended numerically to 2D infrastructure, particularly short- and medium-span bridges, to detect and quantify damage more accurately. The method is capable of providing proper SHM that facilitates timely maintenance of bridges to minimise the possible loss of life and property.
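The underlying idea of locating damage from changes in element modal strain energy can be illustrated on a simple spring-mass chain: the element MSE, computed from the mode shape and the element stiffness, is compared between the intact and damaged states, and the largest change points to the damaged element. This is the classic element-MSE comparison on a hypothetical model, not the improved formulation or sensitivity matrix of the study.

```python
# Element modal strain energy (MSE) comparison on a fixed-free spring-mass chain.
import numpy as np
from scipy.linalg import eigh

def chain_stiffness(k_elems):
    """Global stiffness of a fixed-free chain; DOF i is the displacement of node i+1."""
    n = len(k_elems)
    K = np.zeros((n, n))
    for e, k in enumerate(k_elems):
        K[e, e] += k
        if e > 0:
            K[e - 1, e - 1] += k
            K[e - 1, e] -= k
            K[e, e - 1] -= k
    return K

def element_mse(phi, k_elems):
    """Element MSE = 0.5 * k_e * (relative displacement across element e)^2."""
    disp = np.concatenate(([0.0], phi))      # prepend the fixed support
    return 0.5 * k_elems * np.diff(disp) ** 2

n = 8
k_intact = np.full(n, 1000.0)
k_damaged = k_intact.copy()
k_damaged[4] *= 0.7                          # 30% stiffness loss in element 5

M = np.eye(n)                                # unit masses
_, phi_i = eigh(chain_stiffness(k_intact), M)
_, phi_d = eigh(chain_stiffness(k_damaged), M)

# First-mode element MSE change, evaluated with the baseline stiffness
change = element_mse(phi_d[:, 0], k_intact) - element_mse(phi_i[:, 0], k_intact)
print("suspected damaged element:", int(np.argmax(np.abs(change))) + 1)
```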