849 results for Propagation prediction models
Abstract:
We present a systematic, practical approach to developing risk prediction systems, suitable for use with large databases of medical information. An important part of this approach is a novel feature selection algorithm which uses the area under the receiver operating characteristic (ROC) curve to measure the expected discriminative power of different sets of predictor variables. We describe this algorithm and use it to select variables to predict risk of a specific adverse pregnancy outcome: failure to progress in labour. Neural network, logistic regression and hierarchical Bayesian risk prediction models are constructed, all of which achieve close to the limit of performance attainable on this prediction task. We show that better prediction performance requires more discriminative clinical information rather than improved modelling techniques. It is also shown that better diagnostic criteria in clinical records would greatly assist the development of systems to predict risk in pregnancy.
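The AUC-driven feature selection the abstract describes can be gestured at with a greedy forward search. This is a minimal sketch, not the authors' algorithm: the scoring rule (an unweighted mean of the chosen columns) and all names and data are illustrative stand-ins for the real risk models.

```python
# Hedged sketch: greedy forward selection of predictor variables that
# maximises the area under the ROC curve. The mean-of-columns score is
# a toy stand-in for a fitted risk model.
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def forward_select(X, y, k):
    """Greedily add the column that most improves the AUC of a mean score."""
    chosen = []
    for _ in range(k):
        best, best_auc = None, -1.0
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            score = X[:, chosen + [j]].mean(axis=1)
            a = auc(score, y)
            if a > best_auc:
                best, best_auc = j, a
        chosen.append(best)
    return chosen, best_auc
```

In practice the candidate scoring step would refit the actual prediction model (logistic regression, neural network, etc.) on each candidate set; the greedy outer loop is the part that mirrors the abstract's idea.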
Abstract:
This paper investigated the use of different parts of woody plants (leaves, branches and bark) and physiological indicators such as stomatal resistance to indicate and monitor atmospheric SO2, TSP (total suspended particulates) and heavy-metal pollution, and used tree rings to reconstruct the history and severity of air pollution. The results indicate that air pollution in Chengde began with urbanization in 1703 but reached serious levels only from the 1950s onwards, especially during the intensified urbanization and industrialization of the past 10-20 years. SO2 is the dominant pollutant, rising from < 0.1 μg m⁻³ before the imperial Mountain Resort was built to about 30 μg m⁻³ at present. Heavy-metal pollution by Fe appeared after the Damiao iron mine was worked in 1927-45, while Mn, Ni and Pb appeared during the past 40-50 years of industrialization; the contents of these pollutants rise markedly in the xylem rings, with S increasing more than tenfold and Pb by 560% (P < 0.001). Bark pH and stomatal resistance differ among urban functional zones, mainly in relation to atmospheric SO2 and TSP, and can accordingly be used to monitor these pollutants: for bark pH, elm, Canadian poplar, weeping willow and Chinese scholar tree performed best, with correlation coefficients of -0.8384 (P < 0.01), -0.7447, -0.6904 and -0.6552 (P < 0.05), respectively; for stomatal resistance, the lower leaf epidermis of ash and of Salix matsudana was best, with correlation coefficients of 0.9968 and 0.9951 (P < 0.001). Scanning electron microscopy showed stomata blocked to varying degrees by TSP via two main routes: small particles (< 5 μm) entering the stomatal cavity, and large particles (> 30 μm) capping the stomata. Among plant organs, pollutant contents were highest in bark, followed by branches and leaves, so leaves or branches are the appropriate parts for indication or monitoring. Principal component analysis showed that S is the dominant atmospheric pollutant in Chengde, with some contribution from the heavy metals Fe, Zn and Mn; Pb appeared only in the busy-road zone. Seasonally, pollutant contents were highest during dormancy, intermediate in the early growing season and lowest at the height of the growing season; S and Pb, for example, rose from 0.75 mg g⁻¹ and 0.7 mg g⁻¹ to 1.5 mg g⁻¹ and 2.0 mg g⁻¹, respectively (P < 0.001). These seasonal changes in plant pollutant content reflect the seasonal pattern of atmospheric pollution and can therefore be used to indicate or monitor air pollution, especially SO2 pollution. A multi-organ multiple-correlation model for black locust (Robinia pseudoacacia) monitored best, with a multiple correlation coefficient of 0.987; single organs of some species also performed well: among leaves, Sorbaria sorbifolia was best (r = 0.8695, P < 0.001); among branches, Chinese pine (Pinus tabuliformis), Sorbaria sorbifolia and weeping willow (r ≥ 0.8, P < 0.001); and among bark, black locust (r = 0.8615, P < 0.01). Pollutant contents of different plant parts can also be used to evaluate air quality: a composite pollution index evaluates overall atmospheric quality, while S and heavy-metal pollution indices evaluate SO2, heavy-metal and TSP pollution, in broad agreement with direct measurement of pollutant concentrations. For Chinese pine, tree rings indicate the history of SO2 pollution while the needles monitor its present state, and a map of the distribution of atmospheric SO2 was drawn from the needle monitoring results. In summary, using pollutant contents in the rings of old pines and in the branches, leaves and bark of present-day urban plants, together with bark acidity, this paper indicates and monitors the history and present state of air pollution in Chengde: SO2 has been the dominant pollutant from past to present; different plant parts can monitor and evaluate atmospheric SO2 pollution very effectively, with multi-organ multiple-correlation models predicting best; and the SO2 distribution map drawn from plant monitoring accurately reveals the present spatial pattern of SO2 in Chengde.
Abstract:
This paper presents a pseudo-time-step method to calculate a (vector) Green function for the adjoint linearised Euler equations as a scattering problem in the frequency domain, for use as a jet-noise propagation prediction tool. A method of selecting the acoustics-related solution in a truncated spatial domain while suppressing any possible shear-layer-type instability is presented. Numerical tests for 3-D axisymmetrical parallel mean flows against semi-analytical reference solutions indicate that the new iterative algorithm is capable of producing accurate solutions with modest computational requirements.
Abstract:
Copyright © 2014 John Wiley & Sons, Ltd. Summary: A field-programmable gate array (FPGA) based model predictive controller for two phases of spacecraft rendezvous is presented. Linear time-varying prediction models are used to accommodate elliptical orbits, and a variable prediction horizon is used to facilitate finite-time completion of the longer-range manoeuvres, whilst a fixed, receding prediction horizon is used for fine-grained tracking at close range. The resulting constrained optimisation problems are solved using a primal-dual interior point algorithm. The majority of the computational demand lies in solving a system of simultaneous linear equations at each iteration of this algorithm. To accelerate these operations, a custom circuit is implemented, using a combination of MathWorks HDL Coder and Xilinx System Generator for DSP, and used as a peripheral to a MicroBlaze soft-core processor on the FPGA, on which the remainder of the system is implemented. Certain logic that could be hard-coded for fixed-size problems is implemented to be configurable online, in order to accommodate the varying problem sizes associated with the variable prediction horizon. The system is demonstrated in closed loop by linking the FPGA over Ethernet with a simulation of the spacecraft dynamics running in Simulink on a PC. Timing comparisons indicate that the custom implementation is substantially faster than pure embedded software-based interior point methods running on the same MicroBlaze and could be competitive with a pure custom hardware implementation.
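The linear time-varying prediction models mentioned in the abstract can be illustrated by stacking the state equations x_{k+1} = A_k x_k + B_k u_k over the horizon into condensed prediction matrices. A minimal sketch, assuming invented example matrices rather than the paper's spacecraft dynamics:

```python
# Hedged sketch: condensed prediction matrices for a linear time-varying
# model over a horizon N, so that x_stack = F x0 + G u_stack.
import numpy as np

def prediction_matrices(A_seq, B_seq):
    """Build (F, G) from per-step matrices A_0..A_{N-1}, B_0..B_{N-1}."""
    N = len(A_seq)
    n = A_seq[0].shape[0]
    m = B_seq[0].shape[1]
    F = np.zeros((N * n, n))
    G = np.zeros((N * n, N * m))
    Phi = np.eye(n)
    for k in range(N):
        Phi = A_seq[k] @ Phi                 # transition from x0 to x_{k+1}
        F[k*n:(k+1)*n, :] = Phi
        for j in range(k + 1):
            # effect of u_j on x_{k+1}: A_k ... A_{j+1} B_j
            M = B_seq[j]
            for i in range(j + 1, k + 1):
                M = A_seq[i] @ M
            G[k*n:(k+1)*n, j*m:(j+1)*m] = M
    return F, G
```

In an MPC formulation like the one described, these stacked matrices feed the quadratic program whose KKT system the interior point method solves at each iteration; with a variable horizon N, their sizes change online, which is what motivates the configurable logic in the abstract.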
Abstract:
Understanding rill and interrill erosion processes on hillslopes is fundamental to building erosion prediction models, but these processes are difficult to study in depth with traditional methods. Using the 7Be tracing technique combined with simulated rainfall, and accounting for deposition at the slope foot, the dynamics of rill and interrill erosion were studied during individual rainfall events on 25° sloping-cropland runoff plots. The results show that the time at which distinct rills appeared on the slope, calculated from changes in the 7Be content of sediment leaving the plots, lagged the actual appearance of rills by 45 min and 11 min in plots A and B, respectively, because of deposition at the slope foot. Based on conservation of total 7Be between the slope and the eroded sediment and on the principle of sediment mass balance, the proportions of interrill and rill erosion in total slope erosion, in the sediment deposited at the slope foot and in the sediment leaving the plots were quantitatively separated. Overall, the proportion of interrill erosion in the runoff sediment decreased gradually while that of rill erosion increased. In both plots, rill erosion and slope-foot deposition calculated by 7Be tracing showed small relative errors compared with measured values, so the 7Be tracing technique can quantify soil erosion with reasonable accuracy.
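The partitioning idea behind the 7Be mass balance can be shown in miniature: interrill erosion detaches 7Be-labelled surface soil while rills cut below the shallow 7Be-bearing layer, so the 7Be concentration of exported sediment splits total erosion into the two components. A minimal sketch; the function and all numbers are illustrative, not the study's measured values:

```python
# Hedged sketch of a two-component 7Be mixing balance:
#   interrill fraction = C_sediment / C_surface
# assuming rill-derived sediment carries negligible 7Be.
def partition_erosion(total_sediment, c_sediment, c_surface):
    """Split sediment mass (e.g. kg) into interrill and rill parts."""
    interrill = total_sediment * c_sediment / c_surface
    rill = total_sediment - interrill
    return interrill, rill
```

For example, 100 kg of exported sediment with a 7Be activity of 2 Bq kg⁻¹ against a surface-soil activity of 5 Bq kg⁻¹ would be attributed 40% to interrill and 60% to rill erosion under these assumptions.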
Abstract:
The main research achievements in soil erosion prediction modelling in China and abroad are reviewed. Besides the well-known USLE/RUSLE, WEPP, LISEM and EUROSEM, the foreign models introduced include the Ephemeral Gully Erosion Model (EGEM) and gully erosion prediction models. Chinese erosion prediction models mainly include a GIS-supported hillslope erosion model for steep slopes that incorporates ephemeral gully erosion, hillslope erosion models with a degree of physical basis, and watershed prediction models. On the basis of this summary and evaluation of models at home and abroad, directions for future research on soil erosion prediction modelling in China are proposed.
Abstract:
Stochastic reservoir modeling is a technique for reservoir description. It allows multiple data sources at different scales to be integrated into the reservoir model and conveys the model's uncertainty to researchers and supervisors. Because its models are digital, its scales are adjustable, it honors known information and data, and it conveys uncertainty, stochastic reservoir modeling provides a mathematical framework within which researchers can integrate multi-scale data sources and information into their prediction models. As a relatively new method, stochastic reservoir modeling is on the upswing. Building on related work, this paper starts from the Markov property in reservoirs and shows how to construct spatial models for categorical and continuous variables using Markov random fields. To explore reservoir properties, researchers must study the rocks embedded in reservoirs; apart from laboratory methods, geophysical measurements and their interpretation are the main sources of information and data in petroleum exploration and production. Building a model for flow simulation from incomplete information amounts to predicting the spatial distributions of the reservoir variables. Considering data sources, degree of digitization and methods, reservoir modeling can be classified into four kinds: methods based on reservoir sedimentology, seismic reservoir prediction, kriging and stochastic reservoir modeling. The third part of the paper introduces the application of Markov chain models to the analysis of sedimentary strata: the concept of a Markov chain model, the N-step transition probability matrix, the stationary distribution, estimation of the transition probability matrix, testing for the Markov property, two ways of discretizing sections (equal intervals and rock facies), the embedded Markov matrix, the semi-Markov chain model, the hidden Markov chain model, and so on.
The fourth part discusses the conditional 1-D Markov chain model, built on the 1-D Markov chain model, and extends it to conditional 2-D and 3-D Markov chain models; it also covers estimation of the vertical and lateral transition probabilities and initialization of the top boundary, with numerical models used to illustrate and verify the discussion. The fifth part, building on the fourth and on the application of Markov random fields (MRFs) in image analysis, discusses an MRF-based method for simulating the spatial distribution of categorical reservoir variables: the probability of a particular configuration of categorical variables, the definition of the energy function for a categorical-variable field as a Markov random field, the Strauss model, and estimation of the components of the energy function, again illustrated and verified with numerical models. For simulating the spatial distribution of continuous reservoir variables, the sixth part explores two methods. The first is a pure Gaussian MRF (GMRF) method, covering the GMRF model and its neighborhood, parameter estimation and MCMC iteration, illustrated with a numerical example. The second is a two-stage method:
starting from the simulated distribution of the categorical variables, it takes a GMRF as the prior distribution of the continuous variables and uses the relationship between categorical variables such as rock facies and continuous variables such as porosity, permeability and fluid saturation to generate a series of stochastic images of the spatial distribution of the continuous variables. Integrating multiple data sources into the reservoir model is one of the merits of stochastic reservoir modeling; after discussing how to model the spatial distributions of categorical and continuous reservoir variables, the paper explores how to combine conceptual depositional models, well logs, cores, seismic attributes and production history.
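The 1-D Markov chain machinery the abstract describes for sedimentary strata can be sketched in a few lines: estimate a facies transition probability matrix from a vertical sequence, then obtain its stationary distribution. The facies codes and sequence below are invented for illustration, not data from the paper:

```python
# Hedged sketch: 1-D Markov chain analysis of a facies sequence.
import numpy as np

def transition_matrix(seq, n_states):
    """Row-normalised counts of one-step transitions in a state sequence."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        C[a, b] += 1
    return C / C.sum(axis=1, keepdims=True)

def stationary(P, iters=200):
    """Power-iterate a row-stochastic matrix to its stationary vector."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi
```

The N-step transition matrix mentioned in the abstract is simply `np.linalg.matrix_power(P, N)`; conditioning on observed wells and extending to 2-D/3-D are what the paper's later parts add on top of this building block.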
Abstract:
Wind energy is the energy source that contributes most to the renewable energy mix of European countries. While there are good wind resources throughout Europe, the intermittency of the wind represents a major problem for the deployment of wind energy into the electricity networks. To ensure grid security, a Transmission System Operator today needs, for each kilowatt of wind energy, either an equal amount of spinning reserve or a forecasting system that can predict the amount of energy that will be produced from wind over a period of 1 to 48 hours. In the range from 5 m/s to 15 m/s, a wind turbine's production increases with the cube of the wind speed. For this reason, a Transmission System Operator requires an accuracy of 1 m/s for wind speed forecasts in this range. Forecasting wind energy with a numerical weather prediction model in this context forms the background of this work. The author's goal was to present a pragmatic solution to this specific problem in the "real world". This work therefore has to be seen in a technical context and neither provides nor intends to provide a general overview of the benefits and drawbacks of wind energy as a renewable energy source. In the first part of this work, the accuracy requirements of the energy sector for wind speed predictions from numerical weather prediction models are described and analysed. A unique set of numerical experiments was carried out in collaboration with the Danish Meteorological Institute to investigate the forecast quality of an operational numerical weather prediction model for this purpose. The results of this investigation revealed that the accuracy requirements for wind speed and wind power forecasts from today's numerical weather prediction models can only be met at certain times. This means that the uncertainty of the forecast quality becomes a parameter as important as the wind speed and wind power themselves.
Quantifying the uncertainty of a forecast valid for tomorrow requires an ensemble of forecasts. In the second part of this work, such an ensemble of forecasts was designed and verified for its ability to quantify the forecast error. This was accomplished by correlating the measured error and the forecast uncertainty of area-integrated wind speed and wind power in Denmark and Ireland, where a correlation of 93% was achieved. This method cannot by itself meet the accuracy requirements of the energy sector. By knowing the uncertainty of the forecasts, however, the focus can be put on the accuracy requirements at times when it is possible to accurately predict the weather. This result therefore presents a major step forward in making wind energy a compatible energy source for the future.
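The cubic dependence of turbine output on wind speed, which is why a 1 m/s forecast error matters so much between 5 and 15 m/s, follows the standard relation P = ½ρACpv³. A minimal sketch; the air density, rotor area and power coefficient are illustrative assumed values, not figures from this work:

```python
# Hedged sketch: cubic wind-power law P = 0.5 * rho * A * Cp * v**3.
# rho, area and cp below are assumptions (area ~ an 80 m diameter rotor).
def wind_power(v, rho=1.225, area=5027.0, cp=0.45):
    """Power in watts for wind speed v in m/s."""
    return 0.5 * rho * area * cp * v**3

# A 1 m/s over-prediction at 10 m/s inflates predicted power by ~33%:
err = wind_power(11.0) / wind_power(10.0) - 1.0
```

Since 11³/10³ = 1.331, the same 1 m/s speed error translates into a roughly one-third power error, which is the leverage effect the abstract's accuracy requirement is built on.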
Abstract:
An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.
This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.
On-demand digital-print service is representative of an enterprise requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital print service provider (PSP), to evaluate our optimization algorithms.
In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable for a high volume of orders and it provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.
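The genetic-algorithm idea behind the scheduler can be gestured at with a toy permutation GA over order-dispatch sequences. This is a sketch only: the makespan objective (a single machine with a sequence-dependent setup penalty) is an invented stand-in, and the incremental aspect of the thesis's IGA, reusing a previous population when new orders arrive, is omitted:

```python
# Hedged sketch: a toy GA over order-dispatch permutations with
# elitism and swap mutation. Objective and parameters are illustrative.
import random

def makespan(seq, proc, setup):
    """Total time: processing plus a setup penalty for each 'descent'."""
    t, prev = 0.0, None
    for o in seq:
        t += (setup if prev is not None and o < prev else 0.0) + proc[o]
        prev = o
    return t

def ga_schedule(proc, setup=1.0, pop=30, gens=60, seed=0):
    rng = random.Random(seed)
    n = len(proc)
    P = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda s: makespan(s, proc, setup))
        kids = P[:pop // 2]                  # keep the elite half
        while len(kids) < pop:
            a = rng.choice(P[:10])[:]        # copy a top parent
            i, j = sorted(rng.sample(range(n), 2))
            a[i], a[j] = a[j], a[i]          # swap mutation
            kids.append(a)
        P = kids
    return min(P, key=lambda s: makespan(s, proc, setup))
```

A production-scale version would add crossover, resource-balancing terms in the fitness, and population reuse across scheduling rounds, which is where the "incremental" in IGA comes from.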
We next discuss analysis and prediction of different attributes involved in the hierarchical components of an enterprise. We start from a study of the fundamental processes related to real-time prediction. Our process-execution-time and process-status prediction models integrate statistical methods with machine-learning algorithms. In addition to improved prediction accuracy compared with stand-alone machine-learning algorithms, they also provide a probabilistic estimate of the predicted status. An order generally consists of multiple serial and parallel processes. We next introduce an order-fulfillment prediction model that combines the advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce an enterprise's late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis, and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.
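The decompose-predict-aggregate strategy can be sketched in miniature: split a series into a periodic component and a residual, forecast each separately, and sum the forecasts. The naive per-component forecasters below are placeholders for illustration, not the thesis's models:

```python
# Hedged sketch: additive decomposition into periodic means + remainder,
# with trivially naive forecasts per component, aggregated at the end.
import numpy as np

def decompose(y, period):
    """Return (seasonal profile, residual after removing it)."""
    seasonal = np.array([y[i::period].mean() for i in range(period)])
    rep = np.tile(seasonal, len(y) // period + 1)[:len(y)]
    return seasonal, y - rep

def forecast(y, period, steps):
    """Forecast each component naively, then aggregate."""
    seasonal, resid = decompose(y, period)
    level = resid.mean()                      # naive residual forecast
    idx = np.arange(len(y), len(y) + steps) % period
    return seasonal[idx] + level
```

In the thesis's setting, each component would instead get its own univariate model (or a multivariate model for correlated components), but the aggregation step, summing component forecasts back into a forecast of the original series, is the same.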
In summary, this thesis research has led to a set of characterization, optimization, and prediction tools for an EIS to derive insightful knowledge from data and use it as guidance for production management. It is expected to provide solutions for enterprises to increase reconfigurability, automate more of their procedures, and obtain data-driven recommendations for effective decisions.
Abstract:
In-hospital worsening heart failure represents a clinical scenario wherein a patient hospitalized for acute heart failure experiences a worsening of their condition, requiring escalation of therapy. Worsening heart failure is associated with worse in-hospital and postdischarge outcomes. Worsening heart failure is increasingly being used as an endpoint or combined endpoint in clinical trials, as it is unique to episodes of acute heart failure and captures an important event during the inpatient course. While prediction models have been developed to identify worsening heart failure, there are no known FDA-approved medications associated with decreased worsening heart failure. Continued study is warranted.
Abstract:
Background: A number of factors are known to influence food preferences and the acceptability of new products, including their sensory characteristics and strong, innate neural influences. In designing foods for any target group, it is important to consider the intrinsic and extrinsic characteristics that may contribute to the palatability and acceptability of foods. Objective: To assess age and gender influences on sensory perceptions of novel low-cost nutrient-rich food products developed using traditional Ghanaian food ingredients. Materials and Methods: In this study, a range of food products was developed from Ghanaian traditional food sources using the Food Multimix (FMM) concept. These products were subjected to sensory evaluation to assess the role of sensory perception in their acceptability among different target age groups across the life cycle (ages 11-68 years) and to ascertain any possible influence of gender on preference and choice. Variables including taste, odour, texture, flavour and appearance were tested; responses were captured on a Likert scale and scores of liking and acceptability were analysed. Multivariate analyses were used to develop prediction models for targeted recipe development for different target groups. Multiple-factor analysis of variance (ANOVA) and logistic linear regression were employed to test the strength of acceptability and to ascertain age and gender influences on product preference. Results: The results showed a positive trend in acceptability (r = 0.602) which tended towards statistical significance (p = 0.065), with a very high product favourability rating (91% acceptability; p = 0.005). However, age [odds ratio = 1.44 (11-15 years); odds ratio = 2.01 (18-68 years)] and gender (p < 0.001) were major influences on product preference, with children and females (irrespective of age) showing clear preferences for, or dislike of, products containing certain ingredients.
Conclusion: These findings are potentially useful in planning recipes for feeding interventions involving different vulnerable and target groups.
Abstract:
OBJECTIVES: To determine effective and efficient monitoring criteria for ocular hypertension [raised intraocular pressure (IOP)] through (i) identification and validation of glaucoma risk prediction models; and (ii) development of models to determine optimal surveillance pathways.
DESIGN: A discrete event simulation economic modelling evaluation. Data from systematic reviews of risk prediction models and agreement between tonometers, secondary analyses of existing datasets (to validate identified risk models and determine optimal monitoring criteria) and public preferences were used to structure and populate the economic model.
SETTING: Primary and secondary care.
PARTICIPANTS: Adults with ocular hypertension (IOP > 21 mmHg) and the public (surveillance preferences).
INTERVENTIONS: We compared five pathways: two based on National Institute for Health and Clinical Excellence (NICE) guidelines with monitoring interval and treatment depending on initial risk stratification, 'NICE intensive' (4-monthly to annual monitoring) and 'NICE conservative' (6-monthly to biennial monitoring); two pathways, differing in location (hospital and community), with monitoring biennially and treatment initiated for a ≥ 6% 5-year glaucoma risk; and a 'treat all' pathway involving treatment with a prostaglandin analogue if IOP > 21 mmHg and IOP measured annually in the community.
MAIN OUTCOME MEASURES: Glaucoma cases detected; tonometer agreement; public preferences; costs; willingness to pay and quality-adjusted life-years (QALYs).
RESULTS: The best available glaucoma risk prediction model estimated the 5-year risk based on age and ocular predictors (IOP, central corneal thickness, optic nerve damage and index of visual field status). Taking the average of two IOP readings by tonometry, true change was detected at two years. Sizeable measurement variability was noted between tonometers. There was a general public preference for monitoring; good communication and understanding of the process predicted service value. 'Treat all' was the least costly and 'NICE intensive' the most costly pathway. Biennial monitoring reduced the number of cases of glaucoma conversion compared with a 'treat all' pathway and provided more QALYs, but the incremental cost-effectiveness ratio (ICER) was considerably more than £30,000. The 'NICE intensive' pathway also avoided glaucoma conversion, but NICE-based pathways were either dominated (more costly and less effective) by biennial hospital monitoring or had ICERs > £30,000. Results were not sensitive to the risk threshold for initiating surveillance but were sensitive to the risk threshold for initiating treatment, NHS costs and treatment adherence.
LIMITATIONS: Optimal monitoring intervals were based on IOP data. There were insufficient data to determine the optimal frequency of measurement of the visual field or optic nerve head for identification of glaucoma. The economic modelling took a 20-year time horizon which may be insufficient to capture long-term benefits. Sensitivity analyses may not fully capture the uncertainty surrounding parameter estimates.
CONCLUSIONS: For confirmed ocular hypertension, findings suggest that there is no clear benefit from intensive monitoring. Consideration of the patient experience is important. A cohort study is recommended to provide data to refine the glaucoma risk prediction model, determine the optimum type and frequency of serial glaucoma tests and estimate costs and patient preferences for monitoring and treatment.
FUNDING: The National Institute for Health Research Health Technology Assessment Programme.
Abstract:
Virtual metrology (VM) aims to predict metrology values using sensor data from production equipment and physical metrology values of preceding samples. VM is a promising technology for the semiconductor manufacturing industry as it can reduce the frequency of in-line metrology operations and provide supportive information for other operations such as fault detection, predictive maintenance and run-to-run control. The prediction models for VM can be from a large variety of linear and nonlinear regression methods and the selection of a proper regression method for a specific VM problem is not straightforward, especially when the candidate predictor set is of high dimension, correlated and noisy. Using process data from a benchmark semiconductor manufacturing process, this paper evaluates the performance of four typical regression methods for VM: multiple linear regression (MLR), least absolute shrinkage and selection operator (LASSO), neural networks (NN) and Gaussian process regression (GPR). It is observed that GPR performs the best among the four methods and that, remarkably, the performance of linear regression approaches that of GPR as the subset of selected input variables is increased. The observed competitiveness of high-dimensional linear regression models, which does not hold true in general, is explained in the context of extreme learning machines and functional link neural networks.
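Two of the regression methods compared in the abstract, ordinary least squares and Gaussian process regression, can be sketched side by side in plain NumPy. This is a minimal sketch, not the paper's experimental setup: the RBF kernel hyperparameters are fixed guesses rather than tuned values, and the data are synthetic:

```python
# Hedged sketch: GP regression (RBF kernel, fixed hyperparameters) next
# to an OLS baseline, mirroring the kind of comparison described above.
import numpy as np

def rbf(X1, X2, length=0.5):
    """Squared-exponential kernel; `length` is an assumed, untuned value."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def gpr_predict(Xtr, ytr, Xte, noise=1e-2):
    """GP posterior mean: k(X*, X) (K + sigma^2 I)^{-1} y."""
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    return rbf(Xte, Xtr) @ np.linalg.solve(K, ytr)

def ols_predict(Xtr, ytr, Xte):
    """Least-squares linear fit with intercept."""
    A = np.c_[np.ones(len(Xtr)), Xtr]
    w, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return np.c_[np.ones(len(Xte)), Xte] @ w
```

On a nonlinear target, the GP's kernel gives it an obvious edge over a plain linear fit; the abstract's more interesting finding is that this gap narrows for high-dimensional linear models with enough selected inputs, which the functional-link/extreme-learning-machine view explains.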
Abstract:
The global prevalence of diabetic nephropathy is rising in parallel with the increasing incidence of diabetes in most countries. Unfortunately, up to 40 % of persons diagnosed with diabetes may develop kidney complications. Diabetic nephropathy is associated with substantially increased risks of cardiovascular disease and premature mortality. An inherited susceptibility to diabetic nephropathy exists, and progress is being made unravelling the genetic basis for nephropathy thanks to international research collaborations, shared biological resources and new analytical approaches. Multiple epidemiological studies have highlighted the clinical heterogeneity of nephropathy and the need for better phenotyping to help define important subgroups for analysis and increase the power of genetic studies. Collaborative genome-wide association studies for nephropathy have reported unique genes, highlighted novel biological pathways and suggested new disease mechanisms, but progress towards clinically relevant risk prediction models for diabetic nephropathy has been slow. This review summarises the current status, recent developments and ongoing challenges elucidating the genetics of diabetic nephropathy.
Abstract:
The European Eye Epidemiology (E3) consortium is a recently formed consortium of 29 groups from 12 European countries. It already comprises 21 population-based studies and 20 other studies (case-control, cases only, randomized trials), providing ophthalmological data on approximately 170,000 European participants. The aim of the consortium is to promote and sustain collaboration and sharing of data and knowledge in the field of ophthalmic epidemiology in Europe, with particular focus on the harmonization of methods for future research, estimation and projection of frequency and impact of visual outcomes in European populations (including temporal trends and European subregions), identification of risk factors and pathways for eye diseases (lifestyle, vascular and metabolic factors, genetics, epigenetics and biomarkers) and development and validation of prediction models for eye diseases. Coordinating these existing data will allow a detailed study of the risk factors and consequences of eye diseases and visual impairment, including study of international geographical variation which is not possible in individual studies. It is expected that collaborative work on these existing data will provide additional knowledge, despite the fact that the risk factors and the methods for collecting them differ somewhat among the participating studies. Most studies also include biobanks of various biological samples, which will enable identification of biomarkers to detect and predict occurrence and progression of eye diseases. This article outlines the rationale of the consortium, its design and presents a summary of the methodology.