324 results for spine modeling
Abstract:
Large power transformers are critical components of the power supply chain, and these networks of engineering assets form an essential part of a nation's energy infrastructure. This research identifies the key factors influencing normal transformer operating conditions and predicts asset lifespan for management purposes. Engineering asset research has produced few lifespan forecasting methods that combine real-time monitoring with transformer maintenance and replacement planning. Utilizing the rich data source of a remote terminal unit (RTU) system for sensor-data-driven analysis, this research develops an innovative real-time lifespan forecasting approach that applies logistic regression based on the Weibull distribution. The methodology and its implementation prototype are verified on data series from 161 kV transformers to evaluate their efficiency and accuracy for energy-sector applications. Asset stakeholders and suppliers benefit significantly from real-time power transformer lifespan evaluation in maintenance and replacement decision support.
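The abstract does not spell out the forecasting model, so the following is only a minimal sketch, assuming hypothetical transformer failure ages, of one plausible building block: fitting a Weibull lifetime distribution and computing a conditional survival probability for an in-service unit. The paper's actual Weibull-based logistic regression on RTU sensor data is not reproduced here.

```python
# Minimal sketch (not the paper's implementation): fit a Weibull lifetime
# distribution to hypothetical transformer failure ages and report the
# conditional probability of surviving a further period.
import numpy as np
from scipy.stats import weibull_min

# Hypothetical failure ages (years) harvested from historical asset records.
failure_ages = np.array([18.2, 22.5, 25.1, 27.8, 30.4, 31.9, 33.5, 36.0, 38.7, 41.2])

# Fit the shape and scale parameters with the location fixed at zero.
shape, loc, scale = weibull_min.fit(failure_ages, floc=0)

def conditional_survival(current_age, horizon):
    """P(survive to current_age + horizon | survived to current_age)."""
    s_now = weibull_min.sf(current_age, shape, loc=loc, scale=scale)
    s_later = weibull_min.sf(current_age + horizon, shape, loc=loc, scale=scale)
    return s_later / s_now

if __name__ == "__main__":
    print(f"Weibull fit: shape={shape:.2f}, scale={scale:.1f} years")
    print(f"P(5 more years | age 25) = {conditional_survival(25.0, 5.0):.3f}")
```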
Abstract:
Background The Spine Functional Index (SFI) is a recently published, robust and clinimetrically valid patient-reported outcome measure. Objectives The purpose of this study was the adaptation and validation of a Spanish version (SFI-Sp) with cultural and linguistic equivalence. Methods A two-stage observational study was conducted. The SFI was cross-culturally adapted to Spanish through double forward and backward translation and then validated for its psychometric characteristics. Participants (n = 226) with various spine conditions of >12 weeks duration completed the SFI-Sp and a region-specific measure: for the back, the Roland Morris Questionnaire (RMQ) and Backache Index (BADIX); for the neck, the Neck Disability Index (NDI); for general health, the EQ-5D and SF-12. The full sample was employed to determine internal consistency, concurrent criterion validity by region and health, construct validity and factor structure. A subgroup (n = 51) was used to determine reliability at seven days. Results The SFI-Sp demonstrated high internal consistency (α = 0.85) and reliability (r = 0.96). The factor structure was one-dimensional and supported construct validity. Criterion-specific validity for function was high with the RMQ (r = 0.79), moderate with the BADIX (r = 0.59) and low with the NDI (r = 0.46). For general health it was low and inversely correlated with the EQ-5D (r = −0.42) and fair and inversely correlated with the Physical and Mental Components of the SF-12 (r = −0.56 and r = −0.48, respectively). The study limitations included the lack of longitudinal data regarding other psychometric properties, specifically responsiveness. Conclusions The SFI-Sp was demonstrated to be a valid and reliable spine-regional outcome measure. The psychometric properties were comparable to and supported those of the English version; however, further longitudinal investigations are required.
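For readers unfamiliar with the statistics quoted above, the sketch below shows how internal consistency (Cronbach's α) and test-retest reliability (Pearson r) are typically computed; it uses simulated item scores, not the SFI-Sp data.

```python
# Minimal sketch (assumed item-level data, not the SFI-Sp dataset): Cronbach's
# alpha for internal consistency and Pearson r for test-retest reliability.
import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    n_items = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=200)                                     # common trait
items = latent[:, None] + rng.normal(scale=0.8, size=(200, 25))   # 25 simulated items

test = items.sum(axis=1)
retest = test + rng.normal(scale=2.0, size=test.shape)            # simulated 7-day retest

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
print(f"Test-retest r:    {pearsonr(test, retest)[0]:.2f}")
```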
Abstract:
Statistical comparison of oil samples is an integral part of oil spill identification, which deals with the process of linking an oil spill with its source of origin. In current practice, a frequentist hypothesis test is often used to evaluate evidence in support of a match between a spill and a source sample. As frequentist tests are only able to evaluate evidence against a hypothesis but not in support of it, we argue that this leads to unsound statistical reasoning. Moreover, currently only verbal conclusions on a very coarse scale can be made about the match between two samples, whereas a finer quantitative assessment would often be preferred. To address these issues, we propose a Bayesian predictive approach for evaluating the similarity between the chemical compositions of two oil samples. We derive the underlying statistical model from some basic assumptions on modeling assays in analytical chemistry, and to further facilitate and improve numerical evaluations, we develop analytical expressions for the key elements of Bayesian inference for this model. The approach is illustrated with both simulated and real data and is shown to have appealing properties in comparison with both standard frequentist and Bayesian approaches.
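The authors' analytical expressions are not reproduced in the abstract; as a hedged illustration of the predictive idea, the sketch below scores a hypothetical spill sample against the posterior predictive distribution learned from a source sample under a simple conjugate normal model with a known measurement standard deviation.

```python
# Generic conjugate-normal sketch (not the authors' model): score how well a
# "spill" sample is predicted by the posterior learned from a "source" sample,
# assuming a known measurement standard deviation for one chemical ratio.
import numpy as np
from scipy.stats import norm

sigma = 0.05            # assumed known analytical measurement sd
m0, tau0 = 0.0, 10.0    # vague normal prior on the true compositional ratio

rng = np.random.default_rng(1)
source = rng.normal(1.20, sigma, size=12)   # hypothetical source-sample assays
spill = rng.normal(1.21, sigma, size=8)     # hypothetical spill-sample assays

# Posterior of the true ratio given the source sample (normal-normal conjugacy).
n = source.size
tau_n2 = 1.0 / (1.0 / tau0**2 + n / sigma**2)
mu_n = tau_n2 * (m0 / tau0**2 + source.sum() / sigma**2)

# Posterior predictive for a new assay: N(mu_n, tau_n^2 + sigma^2).
pred_sd = np.sqrt(tau_n2 + sigma**2)
log_pred = norm.logpdf(spill, loc=mu_n, scale=pred_sd).sum()
print(f"Posterior predictive log-density of spill sample: {log_pred:.2f}")
```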
Abstract:
Structural equation modeling (SEM) is a powerful statistical approach for the testing of networks of direct and indirect theoretical causal relationships in complex data sets with intercorrelated dependent and independent variables. SEM is commonly applied in ecology, but the spatial information commonly found in ecological data remains difficult to model in an SEM framework. Here we propose a simple method for spatially explicit SEM (SE-SEM) based on the analysis of variance/covariance matrices calculated across a range of lag distances. This method provides readily interpretable plots of the change in path coefficients across scale and can be implemented using any standard SEM software package. We demonstrate the application of this method using three studies examining the relationships between environmental factors, plant community structure, nitrogen fixation, and plant competition. By design, these data sets had a spatial component, but were previously analyzed using standard SEM models. Using these data sets, we demonstrate the application of SE-SEM to regularly spaced, irregularly spaced, and ad hoc spatial sampling designs and discuss the increased inferential capability of this approach compared with standard SEM. We provide an R package, sesem, to easily implement spatial structural equation modeling.
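The published implementation is the R package sesem; purely as an illustration of the underlying idea, the NumPy sketch below builds the per-lag variance/covariance matrices from simulated site data. Fitting the SEM to each matrix would then be done with standard SEM software.

```python
# Sketch of the lag-binned variance/covariance idea behind spatially explicit
# SEM: build one cross-site covariance matrix per lag-distance bin.
import numpy as np

def lag_covariances(coords, data, bin_edges):
    """coords: (n, 2) site locations; data: (n, p) observed variables.
    Returns one p x p cross-site covariance matrix per lag-distance bin."""
    centered = data - data.mean(axis=0)
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    mats = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        i, j = np.where((dists > lo) & (dists <= hi))
        keep = i < j                       # count each unordered site pair once
        i, j = i[keep], j[keep]
        # average (symmetrized) cross-products between sites separated by this lag
        mat = (centered[i].T @ centered[j] + centered[j].T @ centered[i]) / (2 * len(i))
        mats.append(mat)
    return mats

rng = np.random.default_rng(2)
coords = rng.uniform(0, 100, size=(60, 2))
data = rng.normal(size=(60, 4))   # e.g. environment, community, N-fixation, competition
for k, m in enumerate(lag_covariances(coords, data, [0, 25, 50, 100])):
    print(f"lag bin {k}: covariance matrix shape {m.shape}")
```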
Abstract:
A single-generation dataset consisting of 1,730 records from a selection program for high growth rate in giant freshwater prawn (GFP, Macrobrachium rosenbergii) was used to derive prediction equations for meat weight and meat yield. Models were based on body traits [body weight, total length and abdominal width (AW)] and carcass measurements (tail weight and exoskeleton-off weight). Lengths and width were adjusted for the systematic effects of selection line, male morphotypes and female reproductive status, and for the covariables of age at slaughter within sex and body weight. Body and meat weights adjusted for the same effects (except body weight) were used to calculate meat yield (expressed as percentage of tail weight/body weight and exoskeleton-off weight/body weight). The edible meat weight and yield in this GFP population ranged from 12 to 15 g and 37 to 45%, respectively. The simple (Pearson) correlation coefficients between body traits (body weight, total length and AW) and meat weight were moderate to very high and positive (0.75–0.94), but the correlations between body traits and meat yield were negative (−0.47 to −0.74). There were strong linear positive relationships between measurements of body traits and meat weight, whereas relationships of body traits with meat yield were moderate and negative. Stepwise multiple regression analysis showed that the best model to predict meat weight included all body traits, with a coefficient of determination (R²) of 0.99 and a correlation between observed and predicted values of meat weight of 0.99. The corresponding figures for meat yield were 0.91 and 0.95, respectively. Body weight or length was the best predictor of meat weight, explaining 91–94% of observed variance when it was fitted alone in the model. By contrast, tail width explained a lower proportion (69–82%) of total variance in the single-trait models. It is concluded that in practical breeding programs, improvement of meat weight can easily be made through indirect selection for body trait combinations. The improvement of meat yield, albeit more difficult, is possible by genetic means, with 91% of the variation in the trait explained by the body and carcass traits examined in this study.
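As a worked illustration of the regression summaries quoted above (R² and the observed-versus-predicted correlation), the sketch below fits a multiple regression of meat weight on body traits using simulated data with assumed coefficients, not the prawn dataset.

```python
# Minimal sketch (simulated data, not the GFP dataset): regress meat weight on
# body traits and report R^2 and the observed-vs-predicted correlation.
import numpy as np

rng = np.random.default_rng(3)
n = 500
body_weight = rng.normal(50, 8, n)          # hypothetical units: g
total_length = rng.normal(15, 1.5, n)       # cm
abdominal_width = rng.normal(2.5, 0.3, n)   # cm
meat_weight = (0.25 * body_weight + 0.3 * total_length + 1.0 * abdominal_width
               + rng.normal(scale=0.5, size=n))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), body_weight, total_length, abdominal_width])
coef, *_ = np.linalg.lstsq(X, meat_weight, rcond=None)
pred = X @ coef

ss_res = np.sum((meat_weight - pred) ** 2)
ss_tot = np.sum((meat_weight - meat_weight.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
r_obs_pred = np.corrcoef(meat_weight, pred)[0, 1]
print(f"R^2 = {r2:.2f}, corr(observed, predicted) = {r_obs_pred:.2f}")
```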
Abstract:
This paper demonstrates procedures for the probabilistic assessment of a pesticide fate and transport model, PCPF-1, to elucidate modeling uncertainty using the Monte Carlo technique. Sensitivity analyses are performed to investigate the influence of herbicide characteristics and related soil properties on model outputs using four popular rice herbicides: mefenacet, pretilachlor, bensulfuron-methyl and imazosulfuron. Uncertainty quantification showed that the simulated concentrations in paddy water varied more than those in paddy soil. This tendency decreased as the simulation proceeded to a later period, but remained important for herbicides having either high solubility or a high first-order dissolution rate. The sensitivity analysis indicated that the PCPF-1 parameters requiring careful determination are primarily those involved in herbicide adsorption (the organic carbon content, the bulk density and the volumetric saturated water content), secondarily those related to herbicide mass distribution between paddy water and soil (first-order desorption and dissolution rates) and, lastly, those involved in herbicide degradation.
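PCPF-1 itself is not shown in the abstract, so the sketch below demonstrates the Monte Carlo technique on a toy first-order dissipation surrogate with assumed parameter distributions: sample the inputs, propagate them to a day-7 concentration proxy, and rank inputs by Spearman correlation with the output.

```python
# Monte Carlo uncertainty/sensitivity sketch. PCPF-1 is not implemented here;
# a toy partitioning-plus-decay surrogate stands in so the technique
# (parameter sampling + rank-correlation sensitivity) can be shown end to end.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n = 5000

koc = rng.lognormal(mean=np.log(100), sigma=0.4, size=n)    # sorption coefficient (L/kg)
f_oc = rng.uniform(0.01, 0.04, size=n)                      # soil organic-carbon fraction
k_deg = rng.lognormal(mean=np.log(0.15), sigma=0.3, size=n) # degradation rate (1/day)

kd = koc * f_oc                        # linear sorption coefficient
fraction_in_water = 1.0 / (1.0 + kd)   # crude water/soil partitioning
c7 = 100.0 * fraction_in_water * np.exp(-k_deg * 7.0)   # concentration proxy at day 7

print(f"day-7 concentration proxy: median={np.median(c7):.2f}, "
      f"95% interval=({np.percentile(c7, 2.5):.2f}, {np.percentile(c7, 97.5):.2f})")
for name, x in [("Koc", koc), ("f_oc", f_oc), ("k_deg", k_deg)]:
    rho, _ = spearmanr(x, c7)
    print(f"Spearman rank correlation with {name}: {rho:+.2f}")
```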
Abstract:
Pesticide use in paddy rice production may contribute to adverse ecological effects in surface waters. Risk assessments conducted for regulatory purposes depend on the use of simulation models to determine predicted environmental concentrations (PECs) of pesticides. Often tiered approaches are used, in which assessments at lower tiers are based on relatively simple models with conservative scenarios, while those at higher tiers have more realistic representations of physical and biochemical processes. This chapter reviews models commonly used for predicting the environmental fate of pesticides in rice paddies. Theoretical considerations, unique features, and applications are discussed. This review is expected to provide information to guide model selection for pesticide registration, regulation, and mitigation in rice production areas.
Abstract:
Analyzing and redesigning business processes is a complex task, which requires the collaboration of multiple actors. Current approaches focus on collaborative modeling workshops where process stakeholders verbally contribute their perspective on a process while modeling experts translate their contributions and integrate them into a model using traditional input devices. Limiting participants to verbal contributions not only affects the outcome of collaboration but also collaboration itself. We created CubeBPM – a system that allows groups of actors to interact with process models through a touch-based interface on a large interactive touch display wall. We are currently in the process of conducting a study that aims at assessing the impact of CubeBPM on collaboration and modeling performance. Initial results presented in this paper indicate that the setting helped participants to become more active in collaboration.
Abstract:
Analyzing and redesigning business processes is a complex task, which requires the collaboration of multiple actors. Current approaches focus on workshops where process stakeholders together with modeling experts create a graphical visualization of a process in a model. Within these workshops, stakeholders are mostly limited to verbal contributions, which are integrated into a process model by a modeling expert using traditional input devices. This limitation negatively affects the collaboration outcome and also the perception of the collaboration itself. In order to overcome this problem, we created CubeBPM – a system that allows groups of actors to interact with process models through a touch-based interface on a large interactive touch display wall. Using this system for collaborative modeling, we expect to provide a more effective collaboration environment, thus improving modeling performance and collaboration.
Abstract:
Introduction: Two symposia on "cardiovascular diseases and vulnerable plaques". Cardiovascular disease (CVD) is the leading cause of death worldwide. Huge effort has been made in many disciplines, including medical imaging, computational modeling, biomechanics, bioengineering, medical devices, animal and clinical studies, population studies, as well as genomic, molecular, cellular and organ-level studies, seeking improved methods for early detection, diagnosis, prevention and treatment of these diseases [1-14]. However, the mechanisms governing the initiation, progression and occurrence of final acute clinical CVD events are still poorly understood. A large number of victims of these diseases who are apparently healthy die suddenly without prior symptoms. Available screening and diagnostic methods are insufficient to identify the victims before the event occurs [8,9]. Most cardiovascular diseases are associated with vulnerable plaques. A grand challenge here is to develop new imaging techniques, predictive methods and patient screening tools to identify vulnerable plaques and patients who are more vulnerable to plaque rupture and associated clinical events such as stroke and heart attack, and to recommend proper treatment plans to prevent those clinical events from happening. Articles in this special issue came from two symposia held recently focusing on "Cardiovascular Diseases and Vulnerable Plaques: Data, Modeling, Predictions and Clinical Applications." One was held at Worcester Polytechnic Institute (WPI), Worcester, MA, USA, July 13-14, 2014, right after the 7th World Congress of Biomechanics. This symposium was endorsed by the World Council of Biomechanics and partially supported by a grant from the NIH National Institute of Biomedical Imaging and Bioengineering. The other was held at Southeast University (SEU), Nanjing, China, April 18-20, 2014.
Abstract:
A modeling paradigm is proposed for covariate, variance and working correlation structure selection for longitudinal data analysis. Appropriate selection of covariates is pertinent to correct variance modeling, and selecting the appropriate covariates and variance function is vital to correlation structure selection. This leads to a stepwise model selection procedure that deploys a combination of different model selection criteria. Although these criteria share a common theoretical root in approximating the Kullback-Leibler distance, they are designed to address different aspects of model selection and have different merits and limitations. For example, the extended quasi-likelihood information criterion (EQIC) with a covariance penalty performs well for covariate selection even when the working variance function is misspecified, but EQIC contains little information on correlation structures. The proposed model selection strategies are outlined and a Monte Carlo assessment of their finite sample properties is reported. Two longitudinal studies are used for illustration.
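The EQIC formula is not given in the abstract; to show only the mechanics of criterion-driven stepwise selection, the sketch below runs forward covariate selection against ordinary AIC from OLS fits on simulated data, as a stand-in for the paper's criteria.

```python
# Sketch of criterion-driven stepwise covariate selection. EQIC as defined in
# the paper is not reproduced; plain AIC from OLS fits stands in to show the
# mechanics of forward selection against an information criterion.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 300
X_all = rng.normal(size=(n, 5))
y = 1.5 * X_all[:, 0] - 0.8 * X_all[:, 2] + rng.normal(size=n)

selected = []
remaining = list(range(X_all.shape[1]))
best_aic = sm.OLS(y, np.ones((n, 1))).fit().aic   # intercept-only baseline

while remaining:
    # Score every remaining covariate when added to the current model.
    scores = [(sm.OLS(y, sm.add_constant(X_all[:, selected + [j]])).fit().aic, j)
              for j in remaining]
    aic, j = min(scores)
    if aic >= best_aic:        # stop when no candidate improves the criterion
        break
    best_aic = aic
    selected.append(j)
    remaining.remove(j)

print(f"selected covariate indices: {selected}, AIC = {best_aic:.1f}")
```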
Abstract:
Background Segmental biomechanics of the scoliotic spine are important since the overall spinal deformity is composed of the cumulative coronal and axial rotations of individual joints. This study investigates the coronal plane segmental biomechanics of adolescent idiopathic scoliosis patients in response to physiologically relevant axial compression. Methods Individual spinal joint compliance in the coronal plane was measured for a series of 15 idiopathic scoliosis patients using axially loaded magnetic resonance imaging. Each patient was first imaged in the supine position with no axial load, and then again following application of an axial compressive load. Coronal plane disc wedge angles in the unloaded and loaded configurations were measured. Joint moments exerted by the axial compressive load were used to derive estimates of individual joint compliance. Findings The mean standing major Cobb angle for this patient series was 46°. The mean intra-observer measurement error for endplate inclination was 1.6°. Following loading, initially highly wedged discs demonstrated a smaller change in wedge angle than less wedged discs at certain spinal levels (+2, +1, −2 relative to the apex; p < 0.05). Highly wedged discs were observed near the apex of the curve, which corresponded to lower joint compliance in the apical region. Interpretation While individual patients exhibit substantial variability in disc wedge angles and joint compliance, overall there is a pattern of increased disc wedging near the curve apex and reduced joint compliance in this region. Approaches such as this can provide valuable in vivo data on the biomechanics of the scoliotic spine for analysis of deformity progression and surgical planning.
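As a back-of-envelope illustration of the compliance estimate described above, the sketch below divides an assumed change in disc wedge angle by the moment produced by an assumed axial load and lever arm; all numbers are illustrative, not patient data.

```python
# Back-of-envelope sketch of the compliance estimate described in the abstract:
# compliance ~ (change in coronal disc wedge angle) / (moment from axial load).
# All values below are illustrative assumptions, not patient measurements.
axial_force_n = 0.5 * 60.0 * 9.81   # assumed compressive load ~ half body weight of a 60 kg patient
lever_arm_m = 0.03                  # assumed lateral offset of the joint from the load line (m)
wedge_unloaded_deg = 8.0            # assumed disc wedge angle before loading
wedge_loaded_deg = 9.5              # assumed disc wedge angle under load

moment_nm = axial_force_n * lever_arm_m
delta_wedge_deg = wedge_loaded_deg - wedge_unloaded_deg
compliance_deg_per_nm = delta_wedge_deg / moment_nm
print(f"moment = {moment_nm:.1f} N*m, compliance = {compliance_deg_per_nm:.2f} deg/(N*m)")
```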
Abstract:
With the rapid development of various technologies and applications in smart grid implementation, demand response has attracted growing research interest because of its potential to enhance power grid reliability with reduced system operation costs. This paper presents a new demand response model with elastic economic dispatch in a locational marginal pricing market. It models system economic dispatch as a feedback control process and introduces a flexible, adjustable load cost as a control signal to adjust demand response. Compared with the conventional "one-time-use" static load dispatch model, this dynamic feedback demand response model may adjust the load to a desired level in a finite number of time steps, and a proof of convergence is provided. In addition, Monte Carlo simulation and boundary calculation using interval mathematics are applied to describe the uncertainty of end-users' response to an independent system operator's expected dispatch. A numerical analysis based on the modified Pennsylvania-Jersey-Maryland power pool five-bus system is introduced for simulation, and the results verify the effectiveness of the proposed model. System operators may use the proposed model to obtain insights into demand response processes for their decision-making regarding system load levels and operating conditions.
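The paper's dispatch formulation and convergence proof are not reproduced here; the toy sketch below only illustrates the feedback idea, iteratively adjusting an assumed price signal so an elastic load converges to a target level, with Monte Carlo noise standing in for uncertain end-user response.

```python
# Toy feedback-control sketch (not the paper's dispatch model): adjust a price
# signal so that the aggregate elastic load converges to a desired level, with
# random noise standing in for uncertain end-user response.
import numpy as np

rng = np.random.default_rng(6)
target_load = 480.0        # MW, desired system load
load = 520.0               # MW, initial load
price = 30.0               # $/MWh control signal
elasticity = -1.5          # assumed MW change per $/MWh price increase
gain = 0.4                 # feedback gain on the load error

for step in range(1, 21):
    error = load - target_load
    price += gain * error                   # raise price while load is too high
    response = elasticity * gain * error    # mean elastic response to the price move
    noise = rng.normal(scale=1.0)           # uncertain end-user behaviour
    load += response + noise
    if abs(load - target_load) < 2.0:
        print(f"converged in {step} steps: load = {load:.1f} MW, price = {price:.2f} $/MWh")
        break
```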
Abstract:
Online dynamic load modeling has become possible with the availability of Static Var Compensator (SVC) and Phasor Measurement Unit (PMU) devices. The power response of the load to small, random, bounded voltage fluctuations caused by the SVC can be measured by the PMU for modeling purposes. The aim of this paper is to illustrate the capability of identifying an aggregated load model at the high-voltage substation level in an online environment. The induction motor is used as the main test subject since it contributes the majority of the dynamic loads. A test system representing a simple electromechanical generator model serving dynamic loads through a transmission network is used to verify the proposed method. In addition, dynamic loads with multiple induction motors are modeled to achieve a more realistic load representation.
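As a hedged illustration of identifying an aggregate load model from PMU-style measurements, the sketch below fits a generic first-order exponential recovery of active power after a small voltage step using least squares; it is a stand-in, not the induction-motor model used in the paper.

```python
# Sketch of identifying a first-order aggregate dynamic load response from
# PMU-style measurements after a small voltage step. This generic exponential
# recovery form is an assumption, not the specific model identified in the paper.
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, p_steady, p_transient, time_const):
    """Active power after a voltage step: jump to p_transient, recover to p_steady."""
    return p_steady + (p_transient - p_steady) * np.exp(-t / time_const)

rng = np.random.default_rng(7)
t = np.linspace(0, 10, 200)                              # seconds after the step
true = step_response(t, 0.95, 0.88, 1.8)                 # per-unit power, synthetic "truth"
measured = true + rng.normal(scale=0.003, size=t.size)   # PMU-like noisy samples

params, _ = curve_fit(step_response, t, measured, p0=[1.0, 0.9, 1.0])
print(f"identified: P_steady={params[0]:.3f} pu, "
      f"P_transient={params[1]:.3f} pu, T={params[2]:.2f} s")
```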