146 results for Process control - Statistical methods


Relevance:

100.00%

Publisher:

Abstract:

Background: Loss of heterozygosity (LOH) is an important marker for one of the 'two hits' required for tumor suppressor gene inactivation. Traditional methods for mapping LOH regions require the comparison of both tumor and patient-matched normal DNA samples. However, for many archival samples patient-matched normal DNA is not available, leading to the under-utilization of this important resource in LOH studies. Here we describe a new method for LOH analysis that relies on the genome-wide comparison of heterozygosity of single nucleotide polymorphisms (SNPs) between cohorts of cases and unmatched healthy control samples. Regions of LOH are defined by consistent decreases in heterozygosity across a genetic region in the case cohort compared to the control cohort. Methods: DNA was collected from 20 follicular lymphoma (FL) tumor samples, 20 diffuse large B-cell lymphoma (DLBCL) tumor samples, neoplastic B-cells of 10 B-cell chronic lymphocytic leukemia (B-CLL) patients, and buccal cell samples matched to 4 of these B-CLL patients. The cohort heterozygosity comparison method was developed and validated using LOH derived in a small cohort of B-CLL by traditional comparisons of tumor and normal DNA samples, and compared to the only alternative method for LOH analysis without patient-matched controls. LOH candidate regions were then generated for enlarged cohorts of B-CLL, FL and DLBCL samples using our cohort heterozygosity comparison method in order to evaluate potential LOH candidate regions in these non-Hodgkin's lymphoma tumor subtypes. Results: Using a small cohort of B-CLL samples with patient-matched normal DNA, we validated the utility of this method and showed that it detects LOH candidate regions with greater accuracy and sensitivity than the only alternative method, the Hidden Markov Model (HMM) method. Subsequently, using B-CLL, FL and DLBCL tumor samples, we utilised cohort heterozygosity comparisons to localise LOH candidate regions in these subtypes of non-Hodgkin's lymphoma. Detected LOH regions included both previously described regions of LOH and novel genomic candidate regions. Conclusions: We have demonstrated the efficacy of cohort heterozygosity comparisons for genome-wide mapping of LOH and shown the approach to be in many ways superior to the HMM method. Additionally, applying this method to SNP microarray data from three common forms of non-Hodgkin's lymphoma yielded interesting tumor suppressor gene candidates, including the ETV3 gene, which was highlighted in both B-CLL and FL.
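The core of the cohort heterozygosity comparison can be sketched in a few lines. The sketch below is an illustrative toy, not the published pipeline: the genotype coding, the `het_fraction` helper and the fixed `drop` threshold are all assumptions made for demonstration.

```python
# Illustrative sketch (not the authors' pipeline): flag candidate LOH SNPs
# by comparing the fraction of heterozygous calls in an unmatched case
# cohort against a control cohort.

def het_fraction(calls):
    """Fraction of genotype calls that are heterozygous ('AB')."""
    return sum(1 for c in calls if c == "AB") / len(calls)

def loh_candidates(case_calls, control_calls, drop=0.3):
    """Return SNP indices whose case-cohort heterozygosity falls at
    least `drop` below the control-cohort heterozygosity."""
    flagged = []
    for i, (case, ctrl) in enumerate(zip(case_calls, control_calls)):
        if het_fraction(ctrl) - het_fraction(case) >= drop:
            flagged.append(i)
    return flagged

# Toy data: 3 SNPs x 4 samples per cohort; SNP 1 shows loss of heterozygosity.
cases    = [["AB", "AA", "AB", "AB"],   # SNP 0: het 0.75
            ["AA", "AA", "BB", "AA"],   # SNP 1: het 0.00
            ["AB", "AB", "AA", "AB"]]   # SNP 2: het 0.75
controls = [["AB", "AB", "AA", "AB"],   # het 0.75
            ["AB", "AB", "AB", "AA"],   # het 0.75
            ["AB", "AB", "AB", "AB"]]   # het 1.00

print(loh_candidates(cases, controls))  # → [1]
```

In the real method the decrease must be consistent across a genetic region rather than at a single SNP; a per-SNP threshold is used here only to keep the sketch short.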

Relevance:

100.00%

Publisher:

Abstract:

Objective: To analyze the epidemiological trend of hepatitis B from 1990 to 2007 in Shandong province, and to identify the high-risk population so as to explore further control strategies. Methods: Based on the routine reporting incidence data of hepatitis B and demographic data of Shandong province, the incidence rates and the sex-specific and age-specific incidence rates of hepatitis B were calculated and statistically analyzed with a simple linear regression model. Results: The total number of hepatitis B cases was 437 094, and the annual average morbidity was 27.32 per 100 000 population during 1990 to 2007. The incidence for men (38.42 per 100 000) was higher than that for women (15.83 per 100 000). The annual incidence rate of hepatitis B showed an increasing trend for the whole population, while a decreasing trend for 0-9 year-old children was present over the past 18 years, indicating that the average age of onset shifted to older age groups. Conclusion: Young adult men are the high-risk group for the onset of hepatitis B. For the prevention of hepatitis B, hepatitis B vaccination should be enhanced for other groups, especially the high-risk population, on the basis of improving the immunization coverage rate for newborns.
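The trend analysis described in the Methods is a simple linear regression of annual incidence on calendar year. A minimal sketch, with hypothetical incidence figures rather than the Shandong data:

```python
# Illustrative sketch of the trend analysis: ordinary least-squares fit of
# annual incidence on calendar year. The data below are made up for
# demonstration, not the Shandong figures.

def linear_fit(xs, ys):
    """OLS slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

years     = list(range(1990, 1998))
incidence = [20.1, 21.4, 22.9, 24.3, 25.0, 26.8, 27.9, 29.5]  # per 100 000 (hypothetical)
slope, intercept = linear_fit(years, incidence)
print(round(slope, 2))  # → 1.32; a positive slope indicates a rising trend
```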

Relevance:

100.00%

Publisher:

Abstract:

Microwave power is used for heating and drying processes because of its fast, volumetric heating capability. Non-uniform temperature distribution during microwave application is a major drawback of these processes. Intermittent application of microwave power can reduce the impact of non-uniformity and improve energy efficiency by allowing the temperature to redistribute. However, temperature redistribution during intermittent microwave heating has not been investigated adequately. Consequently, in this study, a coupled electromagnetic, heat and mass transfer model was developed using the finite element method embedded in COMSOL Multiphysics software. In particular, the temperature redistribution due to intermittent heating was investigated. A series of experiments was performed to validate the simulation. The test specimen was an apple, and the temperature distribution was closely monitored with a thermal imaging camera (TIC). The simulated temperature profile matched closely with the thermal images obtained from the experiments.
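The tempering effect described above can be illustrated with a far simpler model than the paper's coupled electromagnetic simulation: a 1D finite-difference conduction sketch in which a non-uniform heat source is switched off and diffusion narrows the gap between hot and cold spots. All parameters below are assumed for illustration only.

```python
# Minimal 1D sketch (assumed parameters, not the paper's coupled COMSOL
# model): explicit finite-difference conduction with a non-uniform
# heat source, showing how an "off" (tempering) interval redistributes
# temperature and narrows the hot/cold-spot gap.

import numpy as np

n, alpha, dt, dx = 50, 1.4e-7, 0.05, 1e-3   # nodes, diffusivity, time/space steps
r = alpha * dt / dx**2                      # explicit scheme stable for r <= 0.5
T = np.full(n, 20.0)                        # initial temperature, deg C
q = np.zeros(n); q[20:30] = 0.1             # non-uniform volumetric heating, K per step

def step(T, heating):
    Tn = T.copy()
    Tn[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])  # conduction
    if heating:
        Tn += q                                      # microwave source on
    return Tn

for _ in range(200):    # microwave on
    T = step(T, True)
gap_on = T.max() - T.min()
for _ in range(2000):   # microwave off: tempering period
    T = step(T, False)
gap_off = T.max() - T.min()
print(gap_off < gap_on)  # → True: tempering evens out the profile
```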

Relevance:

100.00%

Publisher:

Abstract:

The aim of this work is to develop a demand-side response model which assists electricity consumers exposed to the market price to independently and proactively manage air-conditioning peak electricity demand. The main contribution of this research is to show how consumers can optimize the energy cost of the air-conditioning load considering several cases, e.g. a normal price, a price spike, and the probability of a price spike. The model also investigates how air conditioning can apply a pre-cooling method when there is a substantial risk of a price spike. The results indicate the potential of the scheme to achieve financial benefits for consumers and to target the best economic performance for electricity generation, distribution and transmission. The model was tested with Queensland electricity market data from the Australian Energy Market Operator and Brisbane temperature data from the Bureau of Statistics for hot days from 2011 to 2012.
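The spike-probability logic can be sketched as an expected-cost comparison: pre-cool at today's price if the probability-weighted cost of waiting is higher. The prices, the spike probability and the `precool_overhead` loss factor below are illustrative assumptions, not the paper's model.

```python
# Hedged sketch of the decision logic (illustrative numbers, not the
# paper's model): pre-cool at the current price if the expected cost of
# waiting, weighted by the probability of a price spike, is higher.

def expected_cost_wait(p_spike, normal_price, spike_price, kwh_needed):
    """Expected energy cost if cooling is deferred to the next interval."""
    return kwh_needed * (p_spike * spike_price + (1 - p_spike) * normal_price)

def should_precool(p_spike, normal_price, spike_price,
                   kwh_needed, precool_overhead=1.15):
    """Pre-cooling buys the cooling early at today's price; the overhead
    factor (assumed) reflects thermal losses from cooling ahead of time."""
    cost_now = kwh_needed * precool_overhead * normal_price
    return cost_now < expected_cost_wait(p_spike, normal_price,
                                         spike_price, kwh_needed)

# $60/MWh normally vs a $12,000/MWh spike (order of magnitude seen in the NEM):
print(should_precool(0.0005, 60.0, 12000.0, kwh_needed=0.01))  # → False: spike unlikely, wait
print(should_precool(0.05,  60.0, 12000.0, kwh_needed=0.01))   # → True: pre-cool now
```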

Relevance:

100.00%

Publisher:

Abstract:

A high-level relation between Karl Popper’s ideas on “falsifiability of scientific theories” and the notion of “overfitting” in statistical learning theory can be easily traced. However, it has been pointed out that at the level of technical details the two concepts are significantly different. One possible explanation that we suggest is that the process of falsification is an active process, whereas statistical learning theory is mainly concerned with supervised learning, which is a passive process of learning from examples arriving from a stationary distribution. We show that concepts that are closer (although still distant) to Karl Popper’s definitions of falsifiability can be found in the domain of learning using membership queries, and derive relations between Popper’s dimension, the exclusion dimension, and the VC dimension.

Relevance:

100.00%

Publisher:

Abstract:

Purpose: This study evaluated the predictive validity of three previously published ActiGraph energy expenditure (EE) prediction equations developed for children and adolescents. Methods: A total of 45 healthy children and adolescents (mean age: 13.7 +/- 2.6 yr) completed four 5-min activity trials (normal walking, brisk walking, easy running, and fast running) in an indoor exercise facility. During each trial, participants wore an ActiGraph accelerometer on the right hip. EE was monitored breath by breath using the Cosmed K4b(2) portable indirect calorimetry system. Differences and associations between measured and predicted EE were assessed using dependent t-tests and Pearson correlations, respectively. Classification accuracy was assessed using percent agreement, sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve. Results: None of the equations accurately predicted mean EE across all four activity trials. Each equation, however, accurately predicted mean EE in at least one activity trial. The Puyau equation accurately predicted EE during slow walking, the Trost equation during slow running, and the Freedson equation during fast running. None of the three equations accurately predicted EE during brisk walking. The equations exhibited fair to excellent classification accuracy with respect to activity intensity, with the Trost equation exhibiting the highest classification accuracy and the Puyau equation the lowest. Conclusions: These data suggest that the three accelerometer prediction equations do not accurately predict EE on a minute-by-minute basis in children and adolescents during overground walking and running. The equations may be useful, however, for estimating participation in moderate and vigorous activity.
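The dependent t-test and Pearson correlation used for the measured-versus-predicted comparison can be sketched directly. The EE values below are toy numbers, not the study's data.

```python
# Sketch of the validation statistics named above: Pearson r and the
# paired (dependent) t statistic between measured and predicted energy
# expenditure. Toy values, not the study's data.

import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs)
                    * sum((y - my) ** 2 for y in ys))
    return num / den

def paired_t(xs, ys):
    """t statistic for the mean of paired differences (dependent t-test)."""
    d = [x - y for x, y in zip(xs, ys)]
    n, md = len(d), sum(d) / len(d)
    sd = math.sqrt(sum((di - md) ** 2 for di in d) / (n - 1))
    return md / (sd / math.sqrt(n))

measured  = [3.2, 4.8, 7.1, 9.6]   # kcal/min per trial (hypothetical)
predicted = [3.0, 4.5, 7.4, 9.0]
print(round(pearson_r(measured, predicted), 3))  # → 0.991
print(round(paired_t(measured, predicted), 3))   # → 1.069
```

A high correlation with a non-significant paired t statistic is the pattern one would expect when a prediction equation tracks EE well on average but not minute by minute.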

Relevance:

100.00%

Publisher:

Abstract:

Most pan stages in Australian factories use only five or six batch pans for high grade massecuite production and operate these in a fairly rigid repeating production schedule. It is common for some of the pans to be of large dropping capacity, e.g. 150 to 240 t. Because of the relatively small number and large sizes of the pans, steam consumption varies widely through the schedule, often by ±30% about the mean value. Large fluctuations in steam consumption have implications for the steam generation/condensate management of the factory and for the evaporators when bleed vapour is used. The objectives of a project to develop a supervisory control system for a pan stage include (a) reducing the average steam consumption and (b) reducing the variation in steam consumption. The operation of each of the high grade pans within the schedule at Macknade Mill was analysed to determine the idle (or buffer) time, the time allocations for essential but unproductive operations (e.g. pan turn-round, charging, slow ramping up of steam rates on pan start), and the productive time, i.e. the time during boil-on of liquor and molasses feed. Empirical models were developed for each high grade pan on the stage to define the interdependence of the production rate and the evaporation rate for the different phases of each pan's cycle. The data were analysed in a spreadsheet model to attempt to reduce and smooth the total steam consumption. This paper reports on the methodology developed in the model and the results of the investigations for the pan stage at Macknade Mill. It was found that the operation of the schedule severely restricted the ability to reduce the average steam consumption and smooth the steam flows. While longer cycle times provide increased flexibility, the steam consumption profile was changed only slightly. The ability to cut massecuite on the run among pans, or the use of a high grade seed vessel, would assist in reducing the average steam consumption and the magnitude of the variations in steam flow.
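The smoothing problem can be illustrated with a toy schedule model (not the Macknade spreadsheet model): each pan cycle has a steam-demand profile, and staggering pan start times within the schedule flattens the stage's total draw.

```python
# Toy sketch of the smoothing idea (not the Macknade spreadsheet model):
# each pan's cycle has a steam-demand profile; staggering pan start
# times within the repeating schedule flattens total steam demand.

# One pan cycle, steam demand per hour: ramp-up, full boil-on, turn-round idle.
cycle = [2, 6, 8, 8, 8, 4, 0, 0]

def total_steam(offsets, cycle):
    """Hourly stage demand for pans whose cycles start at the given offsets."""
    period = len(cycle)
    return [sum(cycle[(t - o) % period] for o in offsets) for t in range(period)]

aligned   = total_steam([0, 0, 0, 0], cycle)   # all pans in phase
staggered = total_steam([0, 2, 4, 6], cycle)   # starts spread over the cycle

spread = lambda xs: max(xs) - min(xs)
print(spread(aligned), spread(staggered))  # → 32 0: staggering removes the swing
```

The average consumption is unchanged (the same total steam is used per cycle); only the variation about the mean is reduced, which mirrors the paper's finding that the schedule constrains the average far more than the profile.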

Relevance:

100.00%

Publisher:

Abstract:

This work presents the details of the numerical model used in simulating the self-organization of nano-islands on solid surfaces during plasma-assisted assembly of quantum dot structures. The model includes the near-substrate non-neutral layer (plasma sheath) and a nanostructured solid deposition surface, and accounts for the incoming flux and energy of ions from the plasma, surface temperature-controlled adatom migration about the surface, adatom collisions with other adatoms and nano-islands, adatom inflow to the growing nano-islands from the plasma and from the two-dimensional vapour on the surface, and particle evaporation to the ambient space and the two-dimensional vapour. The differences in surface concentrations of adatoms in different areas within the quantum dot pattern significantly affect the self-organization of the nano-islands. The model allows one to formulate the conditions under which certain islands grow while others shrink or even dissolve, and to relate these conditions to the process control parameters. Surface coverage by self-organized quantum dots obtained from the numerical simulation appears to be in reasonable agreement with the available experimental results.

Relevance:

100.00%

Publisher:

Abstract:

The influence of ion current density on the thickness of coatings deposited in a vacuum arc setup has been investigated in order to optimize the coating porosity. A planar probe was used to measure the ion current density distribution across the plasma flux. Current densities from 20 to 50 A/m² were obtained, depending on the probe position relative to the substrate center. TiN coatings were deposited onto cutting inserts placed at different locations on the substrate, and SEM was used to characterize the surfaces of the coatings. It was found that low-density coatings were formed at decreased ion current density. A quantitative dependence of the coating thickness on the ion current density in the range of 20-50 A/m² was obtained for films deposited at a substrate bias of 200 V and a nitrogen pressure of 0.1 Pa, and the coating porosity was calculated. The coated cutting inserts were tested by lathe machining of the martensitic stainless steel AISI 431. The results may be useful for controlling the ion flux distribution over large industrial-scale substrates.

Relevance:

100.00%

Publisher:

Abstract:

Both environmental economists and policy makers have shown a great deal of interest in the effect of pollution abatement on environmental efficiency. Despite the modern computational resources now available, however, little contribution has been made to the environmental economics field using Markov chain Monte Carlo (MCMC) methods, which enable simulation from the distribution of a Markov chain by running the chain until it approaches equilibrium. The resulting probability density functions gained prominence through their advantages over classical statistical methods, namely simultaneous inference and the incorporation of any prior information on all model parameters. This paper concentrates on this point by applying MCMC to data from China, the largest developing country, which has experienced rapid economic growth and serious environmental pollution in recent years. The variables cover economic output and pollution abatement cost from 1992 to 2003. We test the causal direction between pollution abatement cost and environmental efficiency with MCMC simulation. We find that pollution abatement cost causes an increase in environmental efficiency, which suggests that environmental policy makers should take more substantial measures to reduce pollution in the near future.
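The MCMC machinery named above, simulating from a Markov chain until it approaches its equilibrium distribution, can be illustrated with a minimal Metropolis-Hastings sampler. This is a generic sketch with a toy one-parameter target, not the paper's model of the Chinese data.

```python
# Generic Metropolis-Hastings sketch: simulate from a Markov chain whose
# equilibrium distribution is the target posterior. The target here is a
# toy standard-normal log-posterior, not the paper's model.

import math, random

def log_post(theta):
    """Toy log-posterior: standard normal (up to a constant)."""
    return -0.5 * theta * theta

def metropolis(n_samples, step=1.0, seed=1):
    rng = random.Random(seed)
    theta, out = 0.0, []
    for _ in range(n_samples):
        prop = theta + rng.gauss(0, step)          # random-walk proposal
        if math.log(rng.random()) < log_post(prop) - log_post(theta):
            theta = prop                           # accept the proposal
        out.append(theta)                          # else keep current state
    return out

draws = metropolis(20000)[5000:]                   # discard burn-in
mean = sum(draws) / len(draws)
print(round(mean, 3))  # close to the posterior mean of 0
```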

Relevance:

100.00%

Publisher:

Abstract:

This article presents mathematical models to simulate coupled heat and mass transfer during convective drying of food materials using three different effective diffusivities: shrinkage-dependent, temperature-dependent, and the average of the two. The engineering simulation software COMSOL Multiphysics was used to simulate the model in 2D and 3D. The simulation results were compared with experimental data. It was found that the temperature-dependent effective diffusivity model predicts the moisture content more accurately at the initial stage of drying, whereas the shrinkage-dependent effective diffusivity model is better for the final stage of drying. The model with shrinkage-dependent effective diffusivity shows an evaporative cooling phenomenon at the initial stage of drying; this phenomenon was investigated and explained. Three-dimensional temperature and moisture profiles show that even when the surface is dry, the inside of the sample may still contain a large amount of moisture. Therefore, the drying process should be managed carefully, otherwise microbial spoilage may start from the centre of the ‘dried’ food. A parametric investigation was conducted after validation of the model.
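Temperature-dependent effective diffusivity in drying models is commonly given an Arrhenius form; below is a minimal sketch with assumed placeholder constants, not the paper's fitted values.

```python
# Sketch of a temperature-dependent effective diffusivity in Arrhenius
# form, D_eff = D0 * exp(-Ea / (R*T)). D0 and Ea are assumed placeholder
# values for illustration, not the paper's fitted parameters.

import math

R = 8.314  # J/(mol K), universal gas constant

def d_eff(temp_c, d0=1.0e-6, ea=25_000.0):
    """Effective moisture diffusivity (m^2/s) at temperature temp_c (deg C)."""
    t_k = temp_c + 273.15
    return d0 * math.exp(-ea / (R * t_k))

# Diffusivity rises with temperature, which is why a temperature-dependent
# model tracks the fast early stage of drying well:
print(d_eff(60.0) > d_eff(30.0))  # → True
```

A shrinkage-dependent model would instead make `d_eff` a decreasing function of the moisture-driven shrinkage, capturing the slow late stage as the matrix compacts.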

Relevance:

100.00%

Publisher:

Abstract:

Introduction: Built environment interventions designed to reduce non-communicable diseases and health inequity complement urban planning agendas focused on creating more ‘liveable’, compact, pedestrian-friendly, less automobile-dependent and more socially inclusive cities. However, what constitutes a ‘liveable’ community is not well defined. Moreover, there appears to be a gap between the concept and the delivery of ‘liveable’ communities. The recently funded NHMRC Centre of Research Excellence (CRE) in Healthy Liveable Communities, established in early 2014, has defined ‘liveability’ from a social determinants of health perspective. Using purpose-designed multilevel longitudinal data sets, it addresses five themes targeting key evidence-base gaps for building healthy and liveable communities. The CRE in Healthy Liveable Communities seeks to generate and exchange new knowledge about: 1) measurement of policy-relevant built environment features associated with leading non-communicable disease risk factors (physical activity, obesity), health outcomes (cardiovascular disease, diabetes) and mental health; 2) causal relationships and thresholds for built environment interventions using data from longitudinal studies and natural experiments; 3) thresholds for built environment interventions; 4) economic benefits of built environment interventions designed to influence health and wellbeing outcomes; and 5) factors, tools, and interventions that facilitate the translation of research into policy and practice. This evidence is critical to inform future policy and practice in health, land use, and transport planning. Moreover, to ensure policy relevance and facilitate research translation, the CRE in Healthy Liveable Communities builds upon ongoing, and has established new, multi-sector collaborations with national and state policy-makers and practitioners. 
The symposium will commence with a brief introduction to embed the research within an Australian health and urban planning context, as well as to provide an overall outline of the CRE in Healthy Liveable Communities, its structure and team. Next, an overview of the five research themes will be presented. Following these presentations, the Discussant will consider the implications of the research and opportunities for translation and knowledge exchange. Theme 2 will establish whether, and to what extent, the neighbourhood environment (built and social) is causally related to physical and mental health and associated behaviours and risk factors. In particular, research conducted as part of this theme will use data from large-scale, longitudinal multilevel studies (HABITAT, RESIDE, AusDiab) to: examine relationships that meet causality criteria via statistical methods such as longitudinal mixed-effect and fixed-effect models, multilevel models and structural equation models; analyse data on residential preferences to investigate confounding due to neighbourhood self-selection, using measurement and analysis tools such as propensity score matching and ‘within-person’ change modelling to address confounding; analyse data about individual-level factors that might confound, mediate or modify relationships between the neighbourhood environment and health and well-being (e.g., psychosocial factors, knowledge, perceptions, attitudes, functional status); and analyse data on both objective neighbourhood characteristics and residents’ perceptions of these objective features to more accurately assess the relative contribution of objective and perceptual factors to outcomes such as health and well-being, physical activity, active transport, obesity, and sedentary behaviour. 
At the completion of Theme 2, we will have demonstrated and applied statistical methods appropriate for determining causality and generated evidence about causal relationships between the neighbourhood environment, health, and related outcomes. This will provide planners and policy makers with a more robust (valid and reliable) basis on which to design healthy communities.

Relevance:

100.00%

Publisher:

Abstract:

Bounds on the expectation and variance of errors at the output of a multilayer feedforward neural network with perturbed weights and inputs are derived. It is assumed that errors in weights and inputs to the network are statistically independent and small. The bounds obtained are applicable to both digital and analogue network implementations and are shown to be of practical value.
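The setting can be checked empirically: perturb the weights and inputs of a tiny feedforward network with small independent noise and estimate the output error statistics by Monte Carlo. This is an illustrative experiment, not the paper's analytic bounds; the network and noise level are arbitrary choices.

```python
# Illustrative Monte Carlo check of the setting above (not the paper's
# analytic bounds): perturb the weights and inputs of a tiny feedforward
# network with small, statistically independent noise and estimate the
# mean and variance of the output error empirically.

import math, random

def forward(x, w1, w2):
    """2-2-1 feedforward network with tanh hidden units."""
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    return sum(wi * hi for wi, hi in zip(w2, h))

rng = random.Random(0)
x  = [0.5, -0.3]
w1 = [[0.2, -0.4], [0.7, 0.1]]
w2 = [0.6, -0.5]
y0 = forward(x, w1, w2)            # unperturbed output

eps = 0.01                         # small perturbation scale
errs = []
for _ in range(5000):
    xp  = [v + rng.gauss(0, eps) for v in x]
    w1p = [[v + rng.gauss(0, eps) for v in row] for row in w1]
    w2p = [v + rng.gauss(0, eps) for v in w2]
    errs.append(forward(xp, w1p, w2p) - y0)

mean = sum(errs) / len(errs)
var  = sum((e - mean) ** 2 for e in errs) / (len(errs) - 1)
print(abs(mean) < 1e-2, var < 1e-3)  # small error mean and variance, as expected
```

For small independent perturbations the error is approximately linear in the noise, so its mean is near zero and its variance scales with the squared perturbation size, which is the regime the derived bounds address.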