938 results for Predictive models


Relevance:

60.00%

Publisher:

Abstract:

A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done using machine learning algorithms, which use examples of fault-prone and not fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources, the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms are applied to the data - Naive Bayes and the Support Vector Machine - and predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class and a classification is determined based on the sum of ranks over features. A novel extension of this method is also described, based on an observed polarising of points by class when rank sum is applied to training data to convert it into 2D rank sum space. SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
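
As a hedged illustration of the rank-sum idea described in this abstract, the sketch below implements a minimal two-class version on synthetic data: feature values are binned, the per-class bin densities are ranked within each bin, and a test point is assigned to the class with the larger sum of ranks across features. The synthetic "metrics", bin count, and tie handling are assumptions for illustration, not details taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "software metrics": two features, two classes (0 = not fault prone, 1 = fault prone).
X0 = rng.normal(loc=[2.0, 5.0], scale=1.0, size=(200, 2))   # not fault prone
X1 = rng.normal(loc=[4.0, 8.0], scale=1.5, size=(60, 2))    # fault prone
X_train = np.vstack([X0, X1])
y_train = np.array([0] * len(X0) + [1] * len(X1))

n_bins = 10
edges = [np.histogram_bin_edges(X_train[:, j], bins=n_bins) for j in range(X_train.shape[1])]

def bin_ranks(X, y, edges):
    """Per feature: estimate class densities per bin, then rank the classes within each bin."""
    ranks = []
    for j, e in enumerate(edges):
        per_class = []
        for c in (0, 1):
            dens, _ = np.histogram(X[y == c, j], bins=e, density=True)
            per_class.append(dens)
        per_class = np.array(per_class)                       # shape (2, n_bins)
        r = per_class.argsort(axis=0).argsort(axis=0) + 1     # rank 1 = lower density, 2 = higher
        ranks.append(r)
    return ranks

ranks = bin_ranks(X_train, y_train, edges)

def rank_sum_predict(x, ranks, edges):
    """Classify by summing, over features, the rank of each class in the bin the point falls into."""
    sums = np.zeros(2)
    for j, e in enumerate(edges):
        b = np.clip(np.searchsorted(e, x[j]) - 1, 0, len(e) - 2)
        sums += ranks[j][:, b]
    return int(np.argmax(sums))

x_test = np.array([3.8, 7.5])
print("predicted class:", rank_sum_predict(x_test, ranks, edges))
```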

Relevance:

60.00%

Publisher:

Abstract:

Crisis holds the potential for profound change in organizations and industries. The past 50 years of crisis management highlight key shifts in crisis practice, creating opportunities for multiple theories and research tracks. Defining crises such as Tylenol, Exxon Valdez, and the September 11 terrorist attacks have influenced or challenged the best-practice principles of crisis communication in public relations. This study traces the development of crisis process and practice by identifying shifts in crisis research and models and mapping these against key management theories and practices. The findings define three crisis domains: crisis planning, building and testing predictive models, and mapping and measuring external environmental influences. These crisis domains mirror but lag the evolution of management theory, suggesting challenges for researchers to reshape the research agenda to close the gap and lead the next stage of development in the field of crisis communication for effective organizational outcomes.

Relevance:

60.00%

Publisher:

Abstract:

To date, the formation of deposits on heat exchanger surfaces is the least understood problem in the design of heat exchangers for processing industries. Dr East has related the structure of the deposits to solution composition and has developed predictive models for composite fouling of calcium oxalate and silica in sugar factory evaporators.

Relevance:

60.00%

Publisher:

Abstract:

Objectives: To establish injury rates among a population of elite athletes, to provide normative data for psychological variables hypothesised to be predictive of sport injuries, and to establish relations between measures of mood, perceived life stress, and injury characteristics as a precursor to introducing a psychological intervention to ameliorate the injury problem. Methods: As part of annual screening procedures, athletes at the Queensland Academy of Sport report medical and psychological status. Data from 845 screenings (433 female and 412 male athletes) were reviewed. Population specific tables of normative data were established for the Brunel mood scale and the perceived stress scale. Results: About 67% of athletes were injured each year, and about 18% were injured at the time of screening. Fifty percent of variance in stress scores could be predicted from mood scores, especially for vigour, depression, and tension. Mood and stress scores collectively had significant utility in predicting injury characteristics. Injury status (current, healed, no injury) was correctly classified with 39% accuracy, and back pain with 48% accuracy. Among a subset of 233 uninjured athletes (116 female and 117 male), five mood dimensions (anger, confusion, fatigue, tension, depression) were significantly related to orthopaedic incidents over the preceding 12 months, with each mood dimension explaining 6–7% of the variance. No sex differences in these relations were found. Conclusions: The findings support suggestions that psychological measures have utility in predicting athletic injury, although the relatively modest explained variance highlights the need to also include underlying physiological indicators of allostatic load, such as stress hormones, in predictive models.
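
A minimal sketch of the kind of analysis described above (classifying injury status from mood and stress scores). The data are simulated, and the choice of linear discriminant analysis and cross-validation here is an assumption for illustration, not necessarily the exact statistical method used in the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 845

# Simulated screening data: six mood dimensions plus perceived stress (arbitrary scales).
mood = rng.normal(size=(n, 6))   # tension, depression, anger, vigour, fatigue, confusion
stress = 0.5 * mood[:, 0] + 0.4 * mood[:, 1] - 0.3 * mood[:, 3] + rng.normal(scale=1.0, size=n)
X = np.column_stack([mood, stress])

# Simulated injury status: 0 = no injury, 1 = healed, 2 = current injury.
logits = np.column_stack([np.zeros(n),
                          0.3 * mood[:, 1] + 0.2 * stress,
                          0.5 * mood[:, 0] + 0.4 * stress])
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=row) for row in p])

# Cross-validated classification accuracy of injury status from mood and stress scores.
clf = LinearDiscriminantAnalysis()
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated classification accuracy: {acc.mean():.2f}")
```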

Relevance:

60.00%

Publisher:

Abstract:

One of the Department of Defense's most pressing environmental problems is the efficient detection and identification of unexploded ordnance (UXO). In regions of highly magnetic soils, magnetic and electromagnetic sensors often detect anomalies that are of geologic origin, adding significantly to remediation costs. In order to develop predictive models for magnetic susceptibility, it is crucial to understand the modes of formation and the spatial distribution of different iron oxides. Most rock types contain iron, and their magnetic susceptibility is determined by the amount and form of iron oxides present. When rocks weather, the amount and form of the oxides change, producing concomitant changes in magnetic susceptibility. The type of iron oxide found in the weathered rock or regolith is a function of the duration and intensity of weathering, as well as the original content of iron in the parent material. The rate of weathering is controlled by rainfall and temperature; thus, knowing the climate zone, the amount of iron in the lithology and the age of the surface will help predict the amount and forms of iron oxide. We have compiled analyses of the types, amounts, and magnetic properties of iron oxides from soils over a wide climate range, from semi-arid grasslands to temperate regions and tropical forests. We find there is a predictable range of iron oxide type and magnetic susceptibility according to the climate zone, the age of the soil and the amount of iron in the unweathered regolith.

Relevance:

60.00%

Publisher:

Abstract:

Objective: The aim of this study was to develop a model capable of predicting variability in the mental workload experienced by frontline operators under routine and nonroutine conditions. Background: Excess workload is a risk that needs to be managed in safety-critical industries. Predictive models are needed to manage this risk effectively yet are difficult to develop. Much of the difficulty stems from the fact that workload prediction is a multilevel problem. Method: A multilevel workload model was developed in Study 1 with data collected from an en route air traffic management center. Dynamic density metrics were used to predict variability in workload within and between work units while controlling for variability among raters. The model was cross-validated in Studies 2 and 3 with the use of a high-fidelity simulator. Results: Reported workload generally remained within the bounds of the 90% prediction interval in Studies 2 and 3. Workload crossed the upper bound of the prediction interval only under nonroutine conditions. Qualitative analyses suggest that nonroutine events caused workload to cross the upper bound of the prediction interval because the controllers could not manage their workload strategically. Conclusion: The model performed well under both routine and nonroutine conditions and over different patterns of workload variation. Application: Workload prediction models can be used to support both strategic and tactical workload management. Strategic uses include the analysis of historical and projected workflows and the assessment of staffing needs. Tactical uses include the dynamic reallocation of resources to meet changes in demand.
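
The sketch below illustrates the general shape of such a multilevel model (random intercepts for raters, fixed effects for traffic-density metrics) and a simple 90% prediction-interval check. The data, variable names, and the particular density metrics are invented for illustration and do not reproduce the dynamic density metrics used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_raters, n_obs = 20, 40

# Simulated data: workload ratings nested within controllers (raters).
rows = []
for r in range(n_raters):
    rater_effect = rng.normal(scale=0.5)
    for _ in range(n_obs):
        aircraft_count = rng.poisson(8)
        climbing = rng.poisson(2)
        workload = 1.0 + 0.4 * aircraft_count + 0.6 * climbing + rater_effect + rng.normal(scale=0.8)
        rows.append((f"r{r}", aircraft_count, climbing, workload))
df = pd.DataFrame(rows, columns=["rater", "aircraft_count", "climbing", "workload"])

# Multilevel model: fixed effects for traffic metrics, random intercept per rater.
model = smf.mixedlm("workload ~ aircraft_count + climbing", df, groups=df["rater"]).fit()
print(model.summary())

# Crude 90% prediction interval from fixed-effect predictions plus residual and rater variance.
pred = model.predict(df)
resid_sd = np.sqrt(model.scale + model.cov_re.iloc[0, 0])
lower, upper = pred - 1.645 * resid_sd, pred + 1.645 * resid_sd
print("share of observations inside the 90% interval:",
      np.mean((df["workload"] >= lower) & (df["workload"] <= upper)))
```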

Relevance:

60.00%

Publisher:

Abstract:

Spectral data were collected from intact and ground kernels using three instruments (with Si-PbS, Si, and InGaAs detectors), operating over different regions of the spectrum (between 400 and 2500 nm) and employing transmittance, interactance, and reflectance sample presentation strategies. Kernels were assessed on the basis of oil and water content, and with respect to the defect categories of insect damage, rancidity, discoloration, mould growth, germination, and decomposition. Predictive model performance statistics for oil content were acceptable on all instruments (R2 > 0.98; RMSECV < 2.5%, which is similar to reference analysis error), although performance for the instrument employing reflectance optics was inferior to that of models developed for the instruments employing transmission optics. The spectral positions of the calibration coefficients were consistent with absorbance due to the third overtones of CH2 stretching. Calibration models for moisture content in ground samples were acceptable on all instruments (R2 > 0.97; RMSECV < 0.2%), whereas calibration models for intact kernels were relatively poor. Calibration coefficients were more highly weighted around 1360, 740 and 840 nm, consistent with absorbance due to overtones and combination bands of O-H stretching. Intact kernels with brown centres or rancidity could be discriminated from each other and from sound kernels using principal component analysis. Part kernels affected by insect damage, discoloration, mould growth, germination, and decomposition could be discriminated from sound kernels. However, discrimination among these defect categories was not distinct and could not be validated on an independent set. It is concluded that there is good potential for a low-cost Si photodiode array instrument to be employed to identify some quality defects of intact macadamia kernels, to quantify oil and moisture content of kernels in the process laboratory, and to quantify oil content in-line. Further work is required to examine the robustness of predictive models across different populations, including growing districts, cultivars and times of harvest.
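
As a minimal sketch of the kind of calibration statistics reported above (R2 and RMSECV from a PLS model), the example below builds a PLS calibration on simulated absorbance spectra and computes cross-validated errors. The spectra, "oil" signal, wavelength grid, and number of latent variables are all assumptions; only the evaluation pattern mirrors the abstract.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n_samples, n_wavelengths = 120, 200

# Simulated absorbance spectra with a synthetic "oil" signal plus noise.
wavelengths = np.linspace(700, 1100, n_wavelengths)
oil = rng.uniform(60, 80, n_samples)                      # reference oil content (%)
peak = np.exp(-0.5 * ((wavelengths - 910) / 15) ** 2)     # band loosely standing in for a C-H overtone
X = oil[:, None] * peak[None, :] * 0.01 + rng.normal(scale=0.05, size=(n_samples, n_wavelengths))
y = oil

# Cross-validated PLS predictions, then RMSECV and R2 against the reference values.
pls = PLSRegression(n_components=6)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()

rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
r2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"RMSECV = {rmsecv:.2f} %   R^2 = {r2:.3f}")
```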

Relevance:

60.00%

Publisher:

Abstract:

Predictive models based on near infra-red spectroscopy for the assessment of fruit internal quality attributes must exhibit a degree of robustness across the parameters of variety, district and time to be of practical use in fruit grading. At the time this thesis was initiated, while there were a number of published reports on the development of near infra-red based calibration models for the assessment of internal quality attributes of intact fruit, there were no reports of the reliability ("robustness") of such models across time, cultivars or growing regions. As existing published reports varied in the instrumentation employed, a re-analysis of existing data was not possible. An instrument platform, based on partial transmittance optics, a halogen light source and a detector (Zeiss MMS 1) operating in the short wavelength near infra-red region, was developed for use in the assessment of intact fruit. This platform was used to assess populations of macadamia kernels, melons and mandarin fruit for total soluble solids, dry matter and oil concentration. Calibration procedures were optimised and robustness assessed across growing areas, time of harvest, season and variety. In general, global modified partial least squares regression (MPLS) calibration models based on derivatised absorbance data were better than either multiple linear regression or 'local' MPLS models in the prediction of independent validation populations. Robustness was most affected by growing season, relative to the growing district or variety. Various calibration updating procedures were evaluated in terms of calibration robustness. Random selection of samples from the validation population for addition to the calibration population was equivalent to or better than other methods of sample addition (methods based on the Mahalanobis distance of samples from either the centroid of the population or from neighbourhood samples). In these exercises the global Mahalanobis distance (GH) was calculated by applying the scores and loadings from the calibration population to the independent validation population. In practice, it is recommended that model predictive performance be monitored in terms of predicted sample GH, with model updating using as few as 10 samples from the new population undertaken when the average GH value exceeds 1.0.
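
A hedged sketch of the monitoring rule recommended above: project new samples onto the calibration PLS scores, compute a global Mahalanobis distance (GH) per sample, and flag the need for model updating when the average GH exceeds 1.0. The GH scaling convention used here (squared Mahalanobis distance divided by the number of latent variables, so that the calibration average is roughly 1) is an assumption and may differ from the exact convention in the thesis.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
n_cal, n_new, n_wl, n_lv = 150, 30, 100, 8

# Simulated calibration spectra/reference values, and a "new population" with a shifted mean.
X_cal = rng.normal(size=(n_cal, n_wl))
y_cal = X_cal[:, :5].sum(axis=1) + rng.normal(scale=0.3, size=n_cal)
X_new = rng.normal(loc=0.6, size=(n_new, n_wl))

pls = PLSRegression(n_components=n_lv).fit(X_cal, y_cal)

def gh(pls_model, X, X_ref_scores):
    """Global Mahalanobis distance in PLS score space, scaled so the calibration mean is ~1."""
    scores = pls_model.transform(X)
    mean = X_ref_scores.mean(axis=0)
    inv_cov = np.linalg.pinv(np.cov(X_ref_scores, rowvar=False))
    d = scores - mean
    md2 = np.einsum("ij,jk,ik->i", d, inv_cov, d)
    return md2 / X_ref_scores.shape[1]   # divide by the number of latent variables

cal_scores = pls.transform(X_cal)
gh_new = gh(pls, X_new, cal_scores)
print(f"average GH of new samples: {gh_new.mean():.2f}")
if gh_new.mean() > 1.0:
    print("Average GH > 1.0: select ~10 samples from the new population and update the calibration.")
```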

Relevance:

60.00%

Publisher:

Abstract:

Aim: To develop approaches to the evaluation of programmes whose strategic objectives are to halt or slow weed spread. Location: Australia. Methods: Key aspects in the evaluation of weed containment programmes are considered. These include the relevance of models that predict the effects of management intervention on spread, the detection of spread, evidence for containment failure and metrics for absolute or partial containment. Case studies documenting either near-absolute (Orobanche ramosa L., branched broomrape) or partial (Parthenium hysterophorus (L.) King and Robinson, parthenium) containment are presented. Results: While useful for informing containment strategies, predictive models cannot be employed in containment programme evaluation owing to the highly stochastic nature of realized weed spread. The quality of observations is critical to the timely detection of weed spread. Effectiveness of surveillance and monitoring activities will be improved by utilizing information on habitat suitability and identification of sites from which spread could most compromise containment. Proof of containment failure may be difficult to obtain. The default option of assuming that a new detection represents containment failure could lead to an underestimate of containment success, the magnitude of which will depend on how often this assumption is made. Main conclusions: Evaluation of weed containment programmes will be relatively straightforward if containment is either absolute or near-absolute and may be based on total containment area and direct measures of containment failure, for example, levels of dispersal, establishment and reproduction beyond (but proximal to) the containment line. Where containment is only partial, other measures of containment effectiveness will be required. These may include changes in the rates of detection of new infestations following the institution of interventions designed to reduce dispersal, the degree of compliance with such interventions, and the effectiveness of tactics intended to reduce fecundity or other demographic drivers of spread. © 2012 Blackwell Publishing Ltd.

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND: In order to rapidly and efficiently screen potential biofuel feedstock candidates for quintessential traits, robust high-throughput analytical techniques must be developed and honed. The traditional methods of measuring lignin syringyl/guaiacyl (S/G) ratio can be laborious, involve hazardous reagents, and/or be destructive. Vibrational spectroscopy can furnish high-throughput instrumentation without the limitations of the traditional techniques. Spectral data from mid-infrared, near-infrared, and Raman spectroscopies were combined with S/G ratios, obtained using pyrolysis molecular beam mass spectrometry, from 245 different eucalypt and Acacia trees across 17 species. Iterations of spectral processing allowed the assembly of robust predictive models using partial least squares (PLS). RESULTS: The PLS models were rigorously evaluated using three different randomly generated calibration and validation sets for each spectral processing approach. Root mean square errors of prediction for validation sets were lowest for models comprised of Raman (0.13 to 0.16) and mid-infrared (0.13 to 0.15) spectral data, while near-infrared spectroscopy led to more erroneous predictions (0.18 to 0.21). Correlation coefficients (r) for the validation sets followed a similar pattern: Raman (0.89 to 0.91), mid-infrared (0.87 to 0.91), and near-infrared (0.79 to 0.82). These statistics signify that Raman and mid-infrared spectroscopy led to the most accurate predictions of S/G ratio in a diverse consortium of feedstocks. CONCLUSION: Eucalypts present an attractive option for biofuel and biochemical production. Given the assortment of over 900 different species of Eucalyptus and Corymbia, in addition to various species of Acacia, it is necessary to isolate those possessing ideal biofuel traits. This research has demonstrated the validity of vibrational spectroscopy to efficiently partition different potential biofuel feedstocks according to lignin S/G ratio, significantly reducing experiment and analysis time and expense while providing non-destructive, accurate, global, predictive models encompassing a diverse array of feedstocks.
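
The sketch below mirrors the evaluation scheme described (three randomly generated calibration/validation splits, with RMSEP and correlation coefficient r reported per split) for a single simulated spectral block. The spectra and S/G values are synthetic, and the spectral preprocessing iterations used in the paper are omitted.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_samples, n_bands = 245, 300

# Simulated spectra correlated with a synthetic S/G ratio.
sg = rng.uniform(1.5, 3.5, n_samples)
basis = rng.normal(size=n_bands)
X = sg[:, None] * basis[None, :] * 0.1 + rng.normal(scale=0.2, size=(n_samples, n_bands))

# Three randomly generated calibration/validation sets, as in the evaluation described above.
for split_seed in (0, 1, 2):
    X_cal, X_val, y_cal, y_val = train_test_split(X, sg, test_size=0.3, random_state=split_seed)
    pls = PLSRegression(n_components=8).fit(X_cal, y_cal)
    y_pred = pls.predict(X_val).ravel()
    rmsep = np.sqrt(np.mean((y_val - y_pred) ** 2))
    r, _ = pearsonr(y_val, y_pred)
    print(f"split {split_seed}: RMSEP = {rmsep:.3f}, r = {r:.3f}")
```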

Relevance:

60.00%

Publisher:

Abstract:

Near infrared (NIR) spectroscopy was investigated as a potential rapid method of estimating fish age from whole otoliths of Saddletail snapper (Lutjanus malabaricus). Whole otoliths from 209 Saddletail snapper were extracted and their NIR spectral characteristics were acquired over a spectral range of 800–2780 nm. Partial least-squares (PLS) models were developed from the diffuse reflectance spectra and reference-validated age estimates (based on traditional sectioned otolith increments) to predict age for independent otolith samples. Predictive models developed for a specific season and geographical location performed poorly against a different season and geographical location. However, the overall PLS regression statistics for predicting a combined population incorporating both geographic location and season variables were: coefficient of determination (R2) = 0.94 and root mean square error of prediction (RMSEP) = 1.54 for age estimation, indicating that Saddletail age could be predicted to within 1.5 increment counts. This level of accuracy suggests the method warrants further development for Saddletail snapper and may have potential for other fish species. A rapid method of fish age estimation has the potential to greatly reduce both time and material costs in the assessment and management of commercial fisheries.

Relevance:

60.00%

Publisher:

Abstract:

The project aimed to evaluate the innovative application of NIRS as a reliable, repeatable, and cost-effective method of ageing fish, using otoliths of Barramundi and Snapper as study species. Specific research questions included assessing how geographic and seasonal variation in otoliths affects NIRS predictive models of fish age, as well as how the NIR spectra of otoliths change in the short-term (i.e., <12 months) and long-term (i.e., historical otolith collections) and what effect this has on the predictive ability of NIRS models. The cost-effectiveness of using NIRS to supplement standard fish ageing methods was also evaluated using a hypothetical case study of Barramundi.

Relevance:

60.00%

Publisher:

Abstract:

Solidification processes are complex in nature, involving multiple phases and several length scales. The properties of solidified products are dictated by the microstructure, the macrostructure, and various defects present in the casting. These, in turn, are governed by the multiphase transport phenomena occurring at different length scales. In order to control and improve the quality of cast products, it is important to have a thorough understanding of the various physical and physicochemical phenomena occurring at various length scales, preferably through predictive models and controlled experiments. In this context, the modeling of transport phenomena during alloy solidification has evolved over the last few decades due to the complex multiscale nature of the problem. Despite this, a model accounting for all the important length scales directly is computationally prohibitive. Thus, in the past, single-phase continuum models have often been employed with respect to a single length scale to model solidification processing. However, continuous development in understanding the physics of solidification at various length scales on one hand, and the phenomenal growth of computational power on the other, have allowed researchers to use increasingly complex multiphase/multiscale models in recent times. These models have allowed greater understanding of the coupled micro/macro nature of the process and have made it possible to predict solute segregation and microstructure evolution at different length scales. In this paper, a brief overview of the current status of modeling of convection and macrosegregation in alloy solidification processing is presented.

Relevance:

60.00%

Publisher:

Abstract:

Ruptured abdominal aortic aneurysm (RAAA) is a life-threatening event, and without operative treatment the patient will die. The overall mortality can be as high as 80-90%; thus repair of RAAA should be attempted whenever feasible. Quality of life (QoL) has become an increasingly important outcome measure in vascular surgery. The aim of the study was to evaluate outcomes of RAAA and to identify predictors of mortality. In the Helsinki and Uusimaa district, 626 patients were identified as having RAAA in 1996-2004. Altogether 352 of them were admitted to Helsinki University Central Hospital (HUCH). Based on the Finnvasc Registry, 836 patients underwent repair of RAAA in 1991-1999. The 30-day operative mortality and the hospital and population-based mortality were assessed, as were the effects of regional centralisation and improved in-hospital quality on the outcome of RAAA. QoL of survivors of RAAA was evaluated with the RAND-36 questionnaire. Quality-adjusted life years (QALYs), which combine length and quality of life, were calculated using the EQ-5D index and an estimation of life expectancy. Predictors of outcome after RAAA were assessed at admission and 48 hours after repair of RAAA. The 30-day operative mortality rate was 38% in HUCH and 44% nationwide, whereas hospital mortality was 45% in HUCH. Population-based mortality was 69% in 1996-2004 and 56% in 2003-2004. After organisational changes were undertaken, mortality decreased significantly at all levels. Among the survivors, QoL was almost equal to that of age- and sex-matched norms; only physical functioning was slightly impaired. Successful repair of RAAA gave a mean of 4.1 (range 0-30.9) QALYs across all RAAA patients, even with non-survivors included. The preoperative Glasgow Aneurysm Score was an independent predictor of 30-day operative mortality after RAAA, and it also predicted the 48-hour outcome for initial survivors of repair of RAAA. A high Glasgow Aneurysm Score and high age were associated with low numbers of achievable QALYs. Organ dysfunction measured by the Sequential Organ Failure Assessment (SOFA) score at 48 hours after repair of RAAA was the strongest predictor of death. In conclusion, surgery for RAAA is a life-saving and cost-effective procedure. The centralisation of vascular emergencies improved the outcome of RAAA patients, and the survivors had a good QoL after RAAA. Because of their moderate discriminatory value, predictive models can be used at the individual level only to provide supplementary information for clinical decision-making. These results support an active operation policy, as there is no reliable measure to predict the outcome after RAAA.

Relevance:

60.00%

Publisher:

Abstract:

Predicting temporal responses of ecosystems to disturbances associated with industrial activities is critical for their management and conservation. However, prediction of ecosystem responses is challenging due to the complexity and potential non-linearities stemming from interactions between system components and multiple environmental drivers. Prediction is particularly difficult for marine ecosystems due to their often highly variable and complex natures and the large uncertainties surrounding their dynamic responses. Consequently, current management of such systems often relies on expert judgement and/or complex quantitative models that consider only a subset of the relevant ecological processes. Hence there exists an urgent need for the development of whole-of-system predictive models to support decision and policy makers in managing complex marine systems in the context of industry-based disturbances. This paper presents Dynamic Bayesian Networks (DBNs) for predicting the temporal response of a marine ecosystem to anthropogenic disturbances. The DBN provides a visual representation of the problem domain in terms of factors (parts of the ecosystem) and their relationships. These relationships are quantified via Conditional Probability Tables (CPTs), which estimate the variability and uncertainty in the distribution of each factor. The combination of qualitative visual and quantitative elements in a DBN facilitates the integration of a wide array of data, published and expert knowledge, and other models. Such multiple sources are often essential, as a single source of information is rarely sufficient to cover the diverse range of factors relevant to a management task. Here, a DBN model is developed for tropical, annual Halophila and temperate, persistent Amphibolis seagrass meadows to inform dredging management and help meet environmental guidelines. Specifically, the impacts of capital (e.g. new port development) and maintenance (e.g. maintaining channel depths in established ports) dredging are evaluated with respect to the risk of permanent loss, defined as no recovery within 5 years (Environmental Protection Agency guidelines). The model is developed using expert knowledge, existing literature, statistical models of environmental light, and experimental data. The model is then demonstrated in a case study through the analysis of a variety of dredging, environmental and seagrass ecosystem recovery scenarios. In spatial zones significantly affected by dredging, such as the zone of moderate impact, shoot density has a very high probability of being driven to zero by capital dredging due to the duration of such dredging. Here, fast-growing Halophila species can recover; however, the probability of recovery depends on the presence of seed banks. On the other hand, slow-growing Amphibolis meadows have a high probability of suffering permanent loss. In the maintenance dredging scenario, however, owing to the shorter duration of dredging, Amphibolis is better able to resist the impacts of dredging. For both types of seagrass meadows, the probability of loss was strongly dependent on the biological and ecological status of the meadow, as well as environmental conditions post-dredging. The ability to predict the ecosystem response under cumulative, non-linear interactions across a complex ecosystem highlights the utility of DBNs for decision support and environmental management.
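
As a hedged illustration of how a two-slice DBN with hand-specified CPTs can propagate the probability of a seagrass state through a dredging and recovery period, the sketch below forward-simulates a single discretised variable (shoot density) under hypothetical transition tables. The states, probabilities, time step, and scenario durations are invented for illustration and are not the published model's actual CPTs.

```python
import numpy as np

# Discretised shoot-density states and an initial belief (hypothetical values).
states = ["zero", "low", "high"]
belief = np.array([0.05, 0.25, 0.70])

# Hypothetical monthly transition CPTs P(state_t | state_{t-1}) under dredging and recovery.
T_dredging = np.array([[1.00, 0.00, 0.00],    # from "zero"
                       [0.40, 0.55, 0.05],    # from "low"
                       [0.10, 0.50, 0.40]])   # from "high"
T_recovery = np.array([[0.80, 0.18, 0.02],    # seed-bank-dependent chance of leaving "zero"
                       [0.05, 0.60, 0.35],
                       [0.00, 0.15, 0.85]])

def propagate(belief, T, n_steps):
    """Propagate a belief vector through n_steps applications of the transition CPT."""
    for _ in range(n_steps):
        belief = belief @ T
    return belief

# 12 months of capital dredging followed by 60 months (5 years) of recovery.
after_dredging = propagate(belief, T_dredging, 12)
after_recovery = propagate(after_dredging, T_recovery, 60)

for name, b in [("after dredging", after_dredging), ("after 5-year recovery", after_recovery)]:
    print(name, {s: round(p, 3) for s, p in zip(states, b)})
print("probability of permanent loss (still zero after 5 years):", round(after_recovery[0], 3))
```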