939 results for model performance


Relevance: 70.00%

Abstract:

The robustness of multivariate calibration models, based on near infrared spectroscopy, for the prediction of total soluble solids (TSS) and dry matter (DM) of intact mandarin fruit (Citrus reticulata cv. Imperial) was assessed. TSS calibration model performance was validated in terms of prediction for populations of fruit outside the original population (different harvest days from a single tree, different harvest localities, different harvest seasons). Of these, calibration performance was most affected by validation across seasons (signal-to-noise statistic on root mean squared error of prediction of 3.8, compared with 20 and 13 for locality and harvest day, respectively). Procedures for selecting samples from the validation population for addition to the calibration population (‘model updating’) were considered for both TSS and DM models. Random selection from the validation group worked as well as more sophisticated selection procedures, with approximately 20 samples required. Models developed using samples at a range of temperatures were robust in validation for both TSS and DM.
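The key validation statistic above is the root mean squared error of prediction (RMSEP) and a signal-to-noise ratio built on it. A minimal Python sketch follows; since the abstract does not give the exact formula of its signal-to-noise statistic, the ratio below uses the common chemometric RPD form (reference standard deviation over RMSEP) as a stand-in, and the updating step mirrors the finding that roughly 20 randomly selected validation samples suffice:

    import numpy as np

    def rmsep(y_ref, y_pred):
        # root mean squared error of prediction
        y_ref = np.asarray(y_ref, float)
        y_pred = np.asarray(y_pred, float)
        return np.sqrt(np.mean((y_ref - y_pred) ** 2))

    def signal_to_noise(y_ref, y_pred):
        # stand-in definition (RPD): reference SD over RMSEP; the study's
        # exact statistic is not given in the abstract
        return np.std(y_ref, ddof=1) / rmsep(y_ref, y_pred)

    def random_update(cal_X, cal_y, val_X, val_y, n_add=20, seed=0):
        # 'model updating': move ~20 randomly selected validation samples
        # into the calibration set before refitting the model
        idx = np.random.default_rng(seed).choice(len(val_y), n_add, replace=False)
        return (np.vstack([cal_X, val_X[idx]]),
                np.concatenate([cal_y, val_y[idx]]))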

Relevance: 70.00%

Abstract:

Species distribution modelling (SDM) typically analyses species’ presence together with some form of absence information. Ideally, absences comprise observations or are inferred from comprehensive sampling. When such information is not available, pseudo-absences are often generated from the background locations within the study region of interest containing the presences, or else absence is implied through the comparison of presences to the whole study region, as in Maximum Entropy (MaxEnt) or Poisson point process modelling. However, the choice of which absence information to include can be both challenging and highly influential on SDM predictions (e.g. Oksanen and Minchin, 2002). In practice, the use of pseudo- or implied absences often leads to an imbalance where absences far outnumber presences. This leaves the analysis highly susceptible to ‘naughty noughts’: absences that occur beyond the envelope of the species, which can exert strong influence on the model and its predictions (Austin and Meyers, 1996). Also known as ‘excess zeros’, naughty noughts can be estimated via an overall proportion in simple hurdle or mixture models (Martin et al., 2005). However, absences, especially those that occur beyond the species envelope, can often be more diverse than presences. Here we consider an extension of excess zero models. The two-stage approach first exploits the compartmentalisation provided by classification trees (CTs) (as in O’Leary, 2008) to identify multiple sources of naughty noughts and simultaneously delineate several species envelopes. SDMs can then be fit separately within each envelope, and for this stage we examine both CTs (as in Falk et al., 2014) and the popular MaxEnt (Elith et al., 2006). We introduce a wider range of model performance measures to improve the treatment of naughty noughts in SDM. We retain an overall measure of model performance, the area under the Receiver Operating Characteristic (ROC) curve (AUC), but focus on its constituent measures, the false negative rate (FNR) and false positive rate (FPR), and on how these relate to the threshold in the predicted probability of presence that delimits predicted presence from absence. We also propose error rates more relevant to users of predictions: the false omission rate (FOR), the chance that a predicted absence corresponds to (and hence wastes) an observed presence, and the false discovery rate (FDR), reflecting those predicted (or potential) presences that correspond to absence. A high FDR may be desirable since it could help target future search efforts, whereas a zero or low FOR is desirable since it indicates that none of the (often valuable) presences have been ignored in the SDM. For illustration, we chose Bradypus variegatus, a species previously published as an exemplar species for MaxEnt, proposed by Phillips et al. (2006). We used CTs to increasingly refine the species envelope, starting with the whole study region (E0) and eliminating more and more potential naughty noughts (E1–E3). When combined with an SDM fit within the species envelope, the best CT SDM had similar AUC and FPR to the best MaxEnt SDM, but otherwise performed better. The FNR and FOR were greatly reduced, suggesting that CTs handle absences better. Interestingly, MaxEnt predictions showed low discriminatory performance, with the most common predicted probability of presence falling in the same range (0.00–0.20) for both true absences and presences. In summary, this example shows that SDMs can be improved by introducing an initial hurdle to identify naughty noughts and partition the envelope before applying SDMs. This improvement was barely detectable via AUC and FPR, yet visible in FOR, FNR, and the comparison of the distributions of predicted probability of presence for presences versus absences.
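The four error rates follow directly from the 2 × 2 confusion matrix of observed versus predicted presence. A minimal sketch with the standard definitions (presence is predicted wherever the modelled probability of presence reaches a chosen threshold):

    import numpy as np

    def confusion_rates(observed, predicted_prob, threshold=0.5):
        # presence predicted where the modelled probability reaches the threshold
        obs = np.asarray(observed, bool)
        pred = np.asarray(predicted_prob, float) >= threshold
        tp = np.sum(obs & pred)
        fn = np.sum(obs & ~pred)
        fp = np.sum(~obs & pred)
        tn = np.sum(~obs & ~pred)
        return {
            "FNR": fn / (fn + tp),  # observed presences predicted absent
            "FPR": fp / (fp + tn),  # observed absences predicted present
            "FOR": fn / (fn + tn),  # predicted absences hiding an observed presence
            "FDR": fp / (fp + tp),  # predicted presences that are absences
        }

Sweeping the threshold and recomputing these rates traces out the ROC curve (FPR against 1 − FNR), whose area is the AUC retained above.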

Relevance: 70.00%

Abstract:

Optimal allocation of water resources among various stakeholders often involves considerable complexity with several conflicting goals, which leads naturally to multi-objective optimization. To aid effective decision-making by water managers, there is, beyond the development of effective multi-objective mathematical models, a clear need to provide efficient Pareto optimal solutions to real-world problems. This study proposes a swarm-intelligence-based multi-objective technique, the elitist-mutated multi-objective particle swarm optimization technique (EM-MOPSO), for arriving at efficient Pareto optimal solutions to multi-objective water resource management problems. The EM-MOPSO technique is applied to a case study of a multi-objective reservoir operation problem. Model performance is evaluated by comparison with the results of a non-dominated sorting genetic algorithm (NSGA-II) model, and the EM-MOPSO method is found to perform better. The developed method can serve as an effective aid to multi-objective decision-making in integrated water resource management.
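The Pareto optimality underlying both EM-MOPSO and NSGA-II can be made concrete with a short dominance check; this is a generic minimisation sketch, not the paper's algorithm:

    import numpy as np

    def dominates(fa, fb):
        # fa Pareto-dominates fb if it is no worse in every objective
        # and strictly better in at least one (minimisation convention)
        fa, fb = np.asarray(fa, float), np.asarray(fb, float)
        return bool(np.all(fa <= fb) and np.any(fa < fb))

    def pareto_front(objs):
        # indices of non-dominated rows in an (n_solutions, n_objectives) array
        objs = np.asarray(objs, float)
        return [i for i in range(len(objs))
                if not any(dominates(objs[j], objs[i])
                           for j in range(len(objs)) if j != i)]

Both algorithms iteratively push a population of candidate reservoir-operation policies toward this non-dominated set; as its name suggests, EM-MOPSO adds an elitist-mutation step intended to improve the quality and diversity of solutions along the front.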

Relevance: 70.00%

Abstract:

This paper uses a new method for describing dynamic comovement and persistence in economic time series, building on the contemporaneous forecast error method developed in den Haan (2000). This data description method is then used to address issues in New Keynesian model performance in two ways. First, well-known data patterns, such as output and inflation leads and lags and inflation persistence, are decomposed into forecast horizon components to give a more complete description of the data patterns. These results show that the well-known lead and lag patterns between output and inflation arise mostly at medium-term forecast horizons. Second, the data summary method is used to investigate a rich New Keynesian model with many modeling features to see which of these features can reproduce the lead, lag, and persistence patterns seen in the data. Many studies have suggested that a backward-looking component in the Phillips curve is needed to match the data, but our simulations show this is not necessary. We show that a simple general equilibrium model with persistent IS curve shocks and persistent supply shocks can reproduce the lead, lag, and persistence patterns seen in the data.
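den Haan's statistic can be approximated as the correlation of h-step-ahead VAR forecast errors at each horizon h. A simplified Python sketch (den Haan's original procedure re-estimates the VAR recursively; the lag order and horizons here are illustrative):

    import numpy as np
    from statsmodels.tsa.api import VAR

    def comovement_by_horizon(data, lags=4, horizons=(1, 4, 8, 16)):
        # correlation of h-step-ahead forecast errors between the first two
        # series (columns of data), one value per forecast horizon
        data = np.asarray(data, float)
        res = VAR(data).fit(lags)
        out = {}
        for h in horizons:
            errs = []
            for t in range(lags, len(data) - h):
                fc = res.forecast(data[t - lags:t], steps=h)[-1]
                errs.append(data[t + h - 1] - fc)
            errs = np.asarray(errs)
            out[h] = np.corrcoef(errs[:, 0], errs[:, 1])[0, 1]
        return out

Short-horizon correlations capture near-contemporaneous comovement, while the medium-term horizons are where the paper locates the output-inflation lead and lag patterns.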

Relevance: 70.00%

Abstract:

This paper analyzes the cyclical properties of a generalized version of the Uzawa-Lucas endogenous growth model. We study the dynamic features of different cyclical components of this model as characterized by a variety of decomposition methods. The decomposition methods considered can be classified into two groups. On the one hand, we consider three statistical filters: the Hodrick-Prescott filter, the Baxter-King filter, and the Gonzalo-Granger decomposition. On the other hand, we use four model-based decomposition methods. The latter procedures share the property that the cyclical components they deliver preserve the log-linear approximation of the Euler-equation restrictions imposed by the agent’s intertemporal optimization problem. The paper shows that both model dynamics and model performance vary substantially across decomposition methods. A parallel exercise is carried out with a standard real business cycle model. The results should help researchers better understand the performance of the Uzawa-Lucas model relative to standard business cycle models under alternative definitions of the business cycle.
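The statistical filters in the first group are available off the shelf. A minimal sketch with statsmodels, using a random walk as a stand-in for a simulated log-output series and the usual quarterly settings (which are not necessarily the paper's):

    import numpy as np
    from statsmodels.tsa.filters.hp_filter import hpfilter
    from statsmodels.tsa.filters.bk_filter import bkfilter

    # stand-in for a simulated quarterly log-output series
    log_y = np.cumsum(np.random.default_rng(0).normal(0.005, 0.01, 200))

    cycle_hp, trend_hp = hpfilter(log_y, lamb=1600)   # Hodrick-Prescott
    cycle_bk = bkfilter(log_y, low=6, high=32, K=12)  # Baxter-King band-pass

    # the two cyclical components have different moments: what counts as
    # "the business cycle" depends on the decomposition method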

Relevance: 70.00%

Abstract:

Tagging experiments are a useful tool in fisheries for estimating mortality rates and abundance of fish. Unfortunately, nonreporting of recovered tags is a common problem in commercial fisheries which, if unaccounted for, can render these estimates meaningless. Observers are often employed to monitor a portion of the catches as a means of estimating reporting rates. In our study, observer data were incorporated into an integrated model for multiyear tagging and catch data to provide joint estimates of mortality rates (natural and fishing), abundance, and reporting rates. Simulations were used to explore model performance under a range of scenarios (e.g., different parameter values, parameter constraints, and numbers of release and recapture years). Overall, results indicated that all parameters can be estimated with reasonable accuracy, but that fishing mortality, reporting rates, and abundance can be estimated with much higher precision than natural mortality. An example of how the model can be applied to provide guidance on experimental design for a large-scale tagging study is presented. Such guidance can contribute to the successful and cost-effective management of tagging programs for commercial fisheries.
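The structure of such a tagging model can be illustrated by the expected tag recoveries from a single release cohort; this is a Brownie-style sketch under constant rates, not the paper's integrated multiyear formulation with observer data:

    import numpy as np

    def expected_returns(n_released, F, M, reporting_rate, n_years):
        # survivors to year t are exploited at rate u = F/Z * (1 - exp(-Z));
        # only a fraction of recovered tags is actually reported
        Z = F + M
        t = np.arange(n_years)
        survival = np.exp(-Z * t)          # fraction alive at start of year t
        u = (F / Z) * (1.0 - np.exp(-Z))   # annual exploitation fraction
        return n_released * survival * u * reporting_rate

    # e.g. 1000 tags, F = 0.2, M = 0.15, 60% of recovered tags reported
    print(expected_returns(1000, 0.2, 0.15, 0.6, n_years=5))

Estimating F, M, and the reporting rate jointly then amounts to matching observed annual returns (and observer-monitored recoveries) against these expectations, e.g. by maximum likelihood.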

Relevance: 70.00%

Abstract:

Brian Huntley, Rhys E. Green, Yvonne C. Collingham, Jane K. Hill, Stephen G. Willis, Patrick J. Bartlein, Wolfgang Cramer, Ward J. M. Hagemeijer and Christopher J. Thomas (2004). The performance of models relating species geographical distributions to climate is independent of trophic level. Ecology Letters, 7(5), 417-426. Sponsorship: NERC (awards: GR9/3016, GR9/04270, GR3/12542, NER/F/S/2000/00166) / RSPB RAE2008

Relevance: 70.00%

Abstract:

Much sensory-motor behavior develops through imitation, as during the learning of handwriting by children. Such complex sequential acts are broken down into distinct motor control synergies, or muscle groups, whose activities overlap in time to generate continuous, curved movements that obey an inverse relation between curvature and speed. The Adaptive Vector Integration to Endpoint (AVITEWRITE) model of Grossberg and Paine (2000) proposed how such complex movements may be learned through attentive imitation. The model suggests how frontal, parietal, and motor cortical mechanisms, such as difference vector encoding, under volitional control from the basal ganglia, interact with adaptively timed, predictive cerebellar learning during movement imitation and predictive performance. Key psychophysical and neural data about learning to make curved movements were simulated, including a decrease in writing time as learning progresses; generation of unimodal, bell-shaped velocity profiles for each movement synergy; size scaling with isochrony, and speed scaling with preservation of the letter shape and the shapes of the velocity profiles; an inverse relation between curvature and tangential velocity; and a Two-Thirds Power Law relation between angular velocity and curvature. However, the model learned from letter trajectories of only one subject, and only qualitative kinematic comparisons were made with previously published human data. The present work describes a quantitative test of AVITEWRITE through direct comparison of a corpus of human handwriting data with the model's performance when it learns by tracing human trajectories. The results show that model performance was variable across subjects, with an average correlation between the model and human data of 89 ± 10%. The present data from simulations using the AVITEWRITE model highlight some of its strengths while focusing attention on areas, such as novel shape learning in children, where all models of handwriting and learning of other complex sensory-motor skills would benefit from further research.
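The two kinematic regularities cited can be written compactly (standard forms from the motor-control literature, with k a movement-specific gain; the paper's own notation may differ). For angular velocity A(t), tangential velocity V(t), and curvature C(t):

    A(t) = k \, C(t)^{2/3}, \qquad V(t) = k \, C(t)^{-1/3}

The first is the Two-Thirds Power Law; since A = V·C, the equivalent one-third form on the right expresses the inverse relation between curvature and tangential velocity listed above.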

Relevance: 70.00%

Abstract:

INTRODUCTION: We previously reported models that characterized the synergistic interaction between remifentanil and sevoflurane in blunting responses to verbal and painful stimuli. This preliminary study evaluated the ability of these models to predict a return of responsiveness during emergence from anesthesia and a response to tibial pressure when patients required analgesics in the recovery room. We hypothesized that model predictions would be consistent with observed responses. We also hypothesized that under non-steady-state conditions, accounting for the lag time between the sevoflurane effect-site concentration (Ce) and end-tidal (ET) concentration would improve predictions. METHODS: Twenty patients received a sevoflurane, remifentanil, and fentanyl anesthetic. Two model predictions of responsiveness were recorded at emergence: an ET-based and a Ce-based prediction. Similarly, two predictions of a response to noxious stimuli were recorded when patients first required analgesics in the recovery room. Model predictions were compared with observations using graphical and temporal analyses. RESULTS: While patients were anesthetized, model predictions indicated a high likelihood that patients would be unresponsive (≥99%). However, after termination of the anesthetic, the models exhibited a wide range of predictions at emergence (1%–97%). Although wide, the Ce-based predictions of responsiveness were better distributed over a percentage ranking of observations than the ET-based predictions. For the ET-based model, 45% of the patients awoke within 2 min of the 50% model-predicted probability of unresponsiveness and 65% awoke within 4 min. For the Ce-based model, 45% of the patients awoke within 1 min of the 50% model-predicted probability of unresponsiveness and 85% awoke within 3.2 min. Predictions of a response to a painful stimulus in the recovery room were similar for the Ce- and ET-based models. DISCUSSION: The results confirmed, in part, our study hypothesis: accounting for the lag time between Ce and ET sevoflurane concentrations improved model predictions of responsiveness but had no effect on predicting a response to a noxious stimulus in the recovery room. These models may be useful in predicting events of clinical interest, but large-scale evaluations with numerous patients are needed to better characterize model performance.
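The lag between end-tidal and effect-site concentration that distinguishes the two models is conventionally described by a first-order process. A sketch of the standard PK/PD form (the rate constant ke0 is a placeholder, not the study's fitted value):

    import numpy as np

    def effect_site(et_conc, ke0, dt):
        # dCe/dt = ke0 * (Cet - Ce), integrated with an exact exponential
        # step, assuming Cet is constant over each interval and Ce(0) = 0
        ce = np.zeros(len(et_conc))
        decay = np.exp(-ke0 * dt)
        for i in range(1, len(et_conc)):
            ce[i] = et_conc[i] + (ce[i - 1] - et_conc[i]) * decay
        return ce

Because Ce lags ET, predictions driven by Ce decay more smoothly after the anesthetic is terminated, which is consistent with the improved emergence predictions reported above.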

Relevance: 70.00%

Abstract:

Intraoperative assessment of surgical margins is critical to ensuring that residual tumor does not remain in a patient. Previously, we developed a fluorescence structured illumination microscope (SIM) system with a single-shot field of view (FOV) of 2.1 × 1.6 mm (3.4 mm²) and sub-cellular resolution (4.4 μm). The goal of this study was to test the utility of this technology for the detection of residual disease in a genetically engineered mouse model of sarcoma. Primary soft tissue sarcomas were generated in the hindlimb, and after the tumor was surgically removed, the relevant margin was stained with acridine orange (AO), a vital stain that brightly stains cell nuclei and fibrous tissues. The tissues were imaged with the SIM system with the primary goal of visualizing fluorescent features from tumor nuclei. Given the heterogeneity of the background tissue (presence of adipose tissue and muscle), an algorithm known as maximally stable extremal regions (MSER) was optimized and applied to the images to specifically segment nuclear features. A logistic regression model was used to classify a tissue site as positive or negative from the area fraction and shape of the segmented features, and the resulting receiver operating characteristic (ROC) curve was generated by varying the probability threshold. Based on the ROC curves, the model was able to classify tumor and normal tissue with 77% sensitivity and 81% specificity (Youden's index). For an unbiased measure of model performance, the model was applied to a separate validation dataset, which yielded 73% sensitivity and 80% specificity. When this approach was applied to representative whole margins, for a tumor probability threshold of 50%, only 1.2% of all regions from the negative margin exceeded this threshold, while over 14.8% of all regions from the positive margin did.
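The classification and thresholding steps can be sketched with scikit-learn; the synthetic features below stand in for the area-fraction and shape descriptors of the MSER-segmented nuclei, and Youden's index selects the operating point as described:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_curve

    def youden_threshold(y_true, prob):
        # threshold maximising sensitivity + specificity - 1 = TPR - FPR
        fpr, tpr, thresholds = roc_curve(y_true, prob)
        return thresholds[np.argmax(tpr - fpr)]

    # stand-in data: rows = tissue sites, columns = segmented-feature metrics
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)  # 1 = tumor

    model = LogisticRegression().fit(X[:100], y[:100])    # training split
    prob = model.predict_proba(X[100:])[:, 1]             # validation split
    thr = youden_threshold(y[100:], prob)
    predicted_tumor = prob >= thr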

Relevance: 70.00%

Abstract:

This work demonstrates the importance of an adequate method for sub-sampling model results when comparing them with in situ measurements. A test of model skill was performed by employing a point-to-point method to compare a multi-decadal hindcast against a sparse, unevenly distributed historic in situ dataset. The point-to-point method masked out all hindcast cells that did not have a corresponding in situ measurement, in order to match each in situ measurement against its most similar cell from the model. The application of the point-to-point method showed that the model was successful at reproducing the inter-annual variability of the in situ datasets. However, this success was not apparent when the measurements were aggregated to regional averages. Time series, data density, and target diagrams were employed to illustrate the impact of switching from the regional average method to the point-to-point method. The comparison based on regional averages gave significantly different and sometimes contradictory results that could lead to erroneous conclusions about model performance. Furthermore, the point-to-point technique is a more appropriate method for exploiting sparse, uneven in situ data, since it compensates for the variability of their sampling. We therefore recommend that researchers take into account the limitations of in situ datasets and process the model to resemble the data as closely as possible.
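The point-to-point idea reduces to pairing each in situ sample with its most similar (here: nearest) model cell and comparing only those pairs. A minimal sketch for a regular latitude-longitude grid (matching in time or depth would extend the same lookup):

    import numpy as np

    def point_to_point(model_field, model_lat, model_lon,
                       obs_lat, obs_lon, obs_val):
        # pair each observation with its nearest model cell; every other
        # model cell is effectively masked out of the comparison
        matched = np.empty(len(obs_val))
        for k in range(len(obs_val)):
            i = np.abs(np.asarray(model_lat) - obs_lat[k]).argmin()
            j = np.abs(np.asarray(model_lon) - obs_lon[k]).argmin()
            matched[k] = model_field[i, j]
        return matched, np.asarray(obs_val, float)

Skill metrics (bias, correlation, target diagrams) are then computed on the matched pairs rather than on regional averages, so the model is sampled exactly where and when the observations sampled the ocean.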

Relevance: 70.00%

Abstract:

The motivation for this paper is to present an approach for rating the quality of the parameters in a computer-aided design model for use as optimization variables. Parametric effectiveness is computed as the ratio of the change in performance achieved by perturbing the parameters in the optimal way to the change in performance that would be achieved by allowing the boundary of the model to move without the constraint on shape change enforced by the CAD parameterization. The approach is applied in this paper to optimization based on adjoint shape sensitivity analyses. The derivation of parametric effectiveness is presented for optimization both with and without the constraint of constant volume. In both cases, the boundary movement is normalized with respect to a small root mean squared displacement of the boundary. The approach can be used to select an initial search direction in parameter space, or to select the sets of model parameters with the greatest ability to improve model performance. The approach is applied to a number of example 2D and 3D FEA and CFD problems.
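In symbols (notation ours, inferred from the description above): if δJ_CAD is the performance improvement from the optimal perturbation of the CAD parameters and δJ_free is the improvement from unconstrained boundary movement, both scaled to the same small root mean squared boundary displacement, then

    E = \delta J_{\mathrm{CAD}} / \delta J_{\mathrm{free}}, \qquad 0 \le E \le 1,

and a value of E near 1 indicates a parameterization that captures nearly all of the shape freedom the adjoint sensitivities could exploit.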

Relevance: 70.00%

Abstract:

The performance of the Weather Research and Forecasting (WRF) model in wind simulation was evaluated under different numerical and physical options for an area of Portugal located in complex terrain and characterized by its significant wind energy resource. Grid nudging and the integration time of the simulations were the numerical options tested. Since the goal is to simulate the near-surface wind, the physical parameterization schemes for the boundary layer were the ones under evaluation. The influences of local terrain complexity and simulation domain resolution on the model results were also studied. Data from three wind measuring stations located within the chosen area were compared with the model results in terms of Root Mean Square Error, Standard Deviation Error, and Bias. Wind speed histograms and occurrence and energy wind roses were also used for model evaluation. Globally, the model accurately reproduced the local wind regime, despite a significant underestimation of the wind speed. The wind direction is reasonably simulated by the model, especially in wind regimes with a clear dominant sector; in the presence of low wind speeds, however, the characterization of the wind direction (observed and simulated) is very subjective and led to higher deviations between simulations and observations. Within the tested options, results show that the use of grid nudging in simulations whose integration time does not exceed 2 days is the best numerical configuration, and that the parameterization set composed of the MM5–Yonsei University–Noah physical schemes is the most suitable for this site. Results were poorer at sites with higher terrain complexity, mainly due to limitations of the terrain data supplied to the model. Increasing the simulation domain resolution alone is not enough to significantly improve model performance. The results suggest that error minimization in wind simulation can be achieved by testing and choosing a suitable numerical and physical configuration for the region of interest, together with the use of high-resolution terrain data, if available.
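The three headline error measures are standard; a minimal sketch (the Standard Deviation Error is taken here as the spread of the error about the bias, one common convention, since the abstract does not spell out its formula):

    import numpy as np

    def wind_errors(sim, obs):
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        err = sim - obs
        rmse = np.sqrt(np.mean(err ** 2))   # Root Mean Square Error
        bias = np.mean(err)                 # mean over/underestimation
        sde = np.std(err, ddof=1)           # Standard Deviation Error
        return rmse, bias, sde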

Relevance: 70.00%

Abstract:

In recent decades, macro-scale hydrological models have become established as important tools for assessing the state of the world's renewable freshwater resources in a spatially comprehensive way. Today they are used to answer a wide range of scientific questions, in particular concerning the impacts of anthropogenic interventions on the natural flow regime and the impacts of global change and climate change on water resources. These impacts can be estimated through a variety of water-related indicators, such as renewable (ground)water resources, flood risk, droughts, water stress, and water scarcity. The advancement of macro-scale hydrological models has been favoured in particular by steadily increasing computing capacity, but also by the growing availability of remote-sensing data and derived data products that can be used to drive and improve the models. Like all macro- to global-scale modelling approaches, macro-scale hydrological simulations are subject to considerable uncertainties, which stem (i) from spatial input datasets, such as meteorological variables or land-surface parameters, and (ii) especially from the (often) simplified representation of physical processes in the model. Given these uncertainties, it is essential to verify the models' actual applicability and predictive capability under diverse climatic and physiographic conditions. To date, however, most evaluation studies have been carried out in only a few large river basins or have focused on continental water fluxes. This contrasts with many application studies whose analyses and conclusions rest on simulated state variables and fluxes at a much finer spatial resolution (grid cell). The core of this dissertation is a comprehensive evaluation of the general applicability of the global hydrological model WaterGAP3 for simulating monthly flow regimes and low and high flows, based on more than 2,400 discharge records for the period 1958-2010. The river basins considered represent a broad spectrum of climatic and physiographic conditions, with basin sizes ranging from 3,000 to several million square kilometres. The model evaluation pursues two objectives: first, the model performance achieved is to serve as a benchmark against which any further model improvements can be compared; second, a method for diagnostic model evaluation is to be developed and tested that points to concrete starting points for model improvement where performance is inadequate. To this end, complementary performance metrics are linked with nine basin descriptors that quantify the climatic and physiographic conditions and the degree of anthropogenic influence in the individual basins. WaterGAP3 achieves medium to high performance in simulating both monthly flow regimes and low and high flows, but clear spatial patterns are evident in all performance metrics considered. Of the nine basin characteristics, the degree of aridity and the mean basin slope in particular exert a strong influence on model performance. The model tends to overestimate the annual flow volume with increasing aridity. This behaviour is characteristic of macro-scale hydrological models and is attributable to the inadequate representation of runoff generation and concentration processes in water-limited regions. In steep basins, low model performance is found with respect to the representation of monthly flow variability and temporal dynamics, which is also reflected in the quality of the low- and high-flow simulations. This observation points to necessary model improvements regarding (i) the partitioning of total runoff into fast and delayed runoff components and (ii) the calculation of channel flow velocity. The method for diagnostic model evaluation developed in this dissertation, which links complementary performance metrics with catchment characteristics, was tested on the WaterGAP3 model as an example. It has proven to be an efficient tool for explaining spatial patterns in model performance and for identifying deficits in the model structure. The method is generally applicable to any hydrological model, but it is particularly relevant for macro-scale models and multi-basin studies, since it can partly compensate for the lack of site-specific knowledge and targeted measurement campaigns that catchment-scale modelling usually draws on.
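The diagnostic step, linking performance metrics to basin descriptors, can be sketched as a rank correlation of per-basin skill scores against each attribute (the descriptor names are illustrative; the dissertation's metrics and statistics may differ):

    import numpy as np
    from scipy.stats import spearmanr

    def skill_vs_attributes(skill, attrs, names):
        # rank-correlate a per-basin skill score (e.g. NSE) with each
        # catchment descriptor (e.g. aridity index, mean slope, ...)
        skill = np.asarray(skill, float)
        attrs = np.asarray(attrs, float)
        return {name: spearmanr(skill, attrs[:, k]).correlation
                for k, name in enumerate(names)}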

Relevance: 70.00%

Abstract:

This paper investigates the use of data assimilation in coastal area morphodynamic modelling, using Morecambe Bay as a study site. A simple model of the bay has been enhanced with a data assimilation scheme to better predict the large-scale changes in bathymetry observed in the bay over a 3-year period. The 2DH decoupled morphodynamic model developed for the work is described, as is the optimal interpolation scheme used to assimilate waterline observations into the model run. Each waterline was acquired from a SAR satellite image and is essentially a contour of the bathymetry at some level within the inter-tidal zone of the bay. For model parameters calibrated against validation observations, model performance is good even without data assimilation. However, the use of data assimilation successfully compensates for a particular failing of the model and helps to keep the model bathymetry on track. It also improves the ability of the model to predict future bathymetry. Although the benefits of data assimilation are demonstrated using waterline observations, any observations of morphology could potentially be used. These results suggest that data assimilation should be considered for use in future coastal area morphodynamic models.
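The optimal interpolation update at the heart of the assimilation scheme has the standard best-linear-unbiased form. A minimal sketch (here the observation operator H maps the model bathymetry state to the waterline observations, and the covariance matrices B and R are assumed given):

    import numpy as np

    def oi_update(xb, B, H, y, R):
        # xa = xb + K (y - H xb), with gain K = B H^T (H B H^T + R)^-1
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        return xb + K @ (y - H @ xb)

Each assimilation step nudges the background bathymetry xb toward the SAR-derived waterline y, weighted by the relative confidence encoded in B and R.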