889 results for spse model (situation, problem, solution, evaluation)


Relevance: 40.00%

Publisher:

Abstract:

There is something peculiar about aesthetic testimony. It seems more difficult to gain knowledge of aesthetic properties based solely upon testimony than it is in the case of other types of property. In this paper, I argue that we can provide an adequate explanation at the level of the semantics of aesthetic language, without defending any substantive thesis in epistemology or about aesthetic value/judgement. If aesthetic predicates are given a non-invariantist semantics, we can explain the supposed peculiar difficulty with aesthetic testimony.

Relevance: 40.00%

Publisher:

Abstract:

Atmospheric pollution over South Asia attracts special attention due to its effects on regional climate, the water cycle and human health. These effects are potentially growing owing to rising trends of anthropogenic aerosol emissions. In this study, the spatio-temporal aerosol distributions over South Asia from seven global aerosol models are evaluated against aerosol retrievals from NASA satellite sensors and ground-based measurements for the period 2000–2007. Overall, substantial underestimations of aerosol loading over South Asia are found systematically in most model simulations. Averaged over the entire region, the annual mean aerosol optical depth (AOD) is underestimated by 15 to 44% across models compared to MISR (Multi-angle Imaging SpectroRadiometer), which is the lowest bound among the various satellite AOD retrievals (MISR, SeaWiFS (Sea-Viewing Wide Field-of-View Sensor), and MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua and Terra). In particular, during the post-monsoon and wintertime periods (i.e., October–January), when agricultural waste burning and anthropogenic emissions dominate, models fail to capture AOD and aerosol absorption optical depth (AAOD) over the Indo-Gangetic Plain (IGP) compared to ground-based Aerosol Robotic Network (AERONET) sunphotometer measurements. The underestimations of aerosol loading in the models generally occur in the lower troposphere (below 2 km), based on comparisons of aerosol extinction profiles calculated by the models with those from Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) data. Furthermore, surface concentrations of all aerosol components (sulfate, nitrate, organic aerosol (OA) and black carbon (BC)) from the models are found to be much lower than in situ measurements in winter.
Several possible causes for this common underestimation of aerosols during the post-monsoon and wintertime periods are identified: (1) aerosol hygroscopic growth and the formation of secondary inorganic aerosol are suppressed because the relative humidity (RH) in the boundary layer is biased far too low and foggy conditions are therefore poorly represented in current models; (2) nitrate aerosol is either missing or inadequately accounted for; and (3) emissions from agricultural waste burning and biofuel usage are too low in the emission inventories. These common problems and possible causes, found across multiple models, point out directions for future model improvement in this important region.

Relevance: 40.00%

Publisher:

Abstract:

4-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least-squares objective function constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis, as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis: the model-error formulation and the state-estimation formulation. The 4DVAR objective function is traditionally solved using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised `inner-loop' objective function which, upon convergence, updates the solution of the non-linear `outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter in iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure of the sensitivity of the problem to input data. The condition number can also indicate the rate of convergence and solution accuracy of the minimisation algorithm. This thesis investigates the sensitivity of the process of minimising both wc4DVAR objective functions to the internal assimilation parameters composing the problem. We gain insight into these sensitivities by bounding the condition number of the Hessians of both objective functions. We also precondition the model-error objective function and show improved convergence. Using these bounds, we show that both formulations' sensitivities are related to the error variance balance, the assimilation window length and the correlation length-scales.
We further demonstrate this through numerical experiments on the condition number and data assimilation experiments using linear and non-linear chaotic toy models.
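The role the condition number plays here can be illustrated with a toy incremental-variational Hessian of the standard form S = B⁻¹ + HᵀR⁻¹H. The diagonal operators below are illustrative stand-ins, not the thesis's systems; they only show how a wide spread of background variances inflates the condition number and how a B^{1/2} control-variable transform (a common preconditioning) reduces it.

```python
import numpy as np

# Toy illustration: S = B^{-1} + H^T R^{-1} H for background-error covariance B,
# observation-error covariance R and (here trivial) observation operator H.
n = 5
B = np.diag(np.linspace(0.01, 1.0, n))   # wide spread of background variances
R = 0.5 * np.eye(n)
H = np.eye(n)

S = np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H
eig = np.linalg.eigvalsh(S)              # S is symmetric positive definite
kappa = eig.max() / eig.min()            # condition number = lambda_max / lambda_min

# Control-variable transform v = B^{-1/2} x gives the preconditioned Hessian
# S_pre = I + B^{1/2} H^T R^{-1} H B^{1/2}.
Bh = np.sqrt(B)                          # valid because B is diagonal here
S_pre = np.eye(n) + Bh @ H.T @ np.linalg.inv(R) @ Bh
eig_pre = np.linalg.eigvalsh(S_pre)
kappa_pre = eig_pre.max() / eig_pre.min()

print(f"condition number: {kappa:.1f} -> preconditioned: {kappa_pre:.2f}")
```

A smaller condition number loosens the classical error bounds for conjugate-gradient-type inner-loop solvers, which is why such bounds are informative about convergence speed.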

Relevance: 40.00%

Publisher:

Abstract:

We investigated how adult readers evaluate and revise their situation model during reading by monitoring their eye movements as they read narrative texts and subsequent critical sentences. In each narrative text, a short introduction primed a knowledge-based inference, followed by a target concept that was either expected (e.g., “oven”) or unexpected (e.g., “grill”) in relation to the inferred concept. Eye movements showed that readers detected a mismatch between the new unexpected information and their prior interpretation, confirming their ability to evaluate inferential information. Just below the narrative text, a critical sentence included a target word that was either congruent (e.g., “roasted”) or incongruent (e.g., “barbecued”) with the expected but not the unexpected concept. Readers spent less time reading the congruent than the incongruent target word, reflecting facilitation by prior information. In addition, when the unexpected (but not the expected) concept had been presented, participants with lower verbal (but not visuospatial) working memory span exhibited longer reading times and made more regressions (from the critical sentence to previous information) on encountering congruent information, indicating difficulty in inhibiting their initial incorrect interpretation and revising their situation model.

Relevance: 40.00%

Publisher:

Abstract:

The concentrations of sulfate, black carbon (BC) and other aerosols in the Arctic are characterized by high values in late winter and spring (so-called Arctic Haze) and low values in summer. Models have long struggled to capture this seasonality and especially the high concentrations associated with Arctic Haze. In this study, we evaluate sulfate and BC concentrations from eleven different models driven with the same emission inventory against a comprehensive pan-Arctic measurement data set over a period of 2 years (2008–2009). The set of models consisted of one Lagrangian particle dispersion model, four chemistry transport models (CTMs), one atmospheric chemistry-weather forecast model and five chemistry climate models (CCMs), of which two were nudged to meteorological analyses and three were running freely. The measurement data set consisted of surface measurements of equivalent BC (eBC) from five stations (Alert, Barrow, Pallas, Tiksi and Zeppelin), elemental carbon (EC) from Station Nord and Alert, and aircraft measurements of refractory BC (rBC) from six different campaigns. We find that the models generally captured the measured eBC or rBC and sulfate concentrations quite well relative to previous model comparisons. However, the aerosol seasonality at the surface is still too weak in most models. Concentrations of eBC and sulfate averaged over three surface sites are underestimated in winter/spring in all but one model (model means for January–March underestimated by 59 and 37 % for BC and sulfate, respectively), whereas concentrations in summer are overestimated in the model mean (by 88 and 44 % for July–September), with both overestimates and underestimates present in individual models. The most pronounced eBC underestimates, not included in the above multi-site average, are found for the station Tiksi in Siberia, where the measured annual mean eBC concentration is 3 times higher than the average annual mean for all other stations.
This suggests an underestimate of BC sources in Russia in the emission inventory used. Based on the campaign data, biomass burning was identified as another cause of the modeling problems. For sulfate, very large differences were found in the model ensemble, with an apparent anti-correlation between modeled surface concentrations and total atmospheric columns. There is a strong correlation between observed sulfate and eBC concentrations with consistent sulfate/eBC slopes found for all Arctic stations, indicating that the sources contributing to sulfate and BC are similar throughout the Arctic and that the aerosols are internally mixed and undergo similar removal. However, only three models reproduced this finding, whereas sulfate and BC are weakly correlated in the other models. Overall, no class of models (e.g., CTMs, CCMs) performed better than the others and differences are independent of model resolution.
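A sulfate/eBC slope of the kind reported here is an ordinary least-squares fit of co-located measurements. A minimal sketch with synthetic numbers (the real analysis uses the station data named above, not these values):

```python
import numpy as np

# Illustrative only: synthetic co-located concentrations, not the campaign data.
ebc = np.array([0.02, 0.05, 0.08, 0.11, 0.15])   # eBC, arbitrary units
sulfate = 10.0 * ebc + 0.1                       # constructed to be exactly linear

slope, intercept = np.polyfit(ebc, sulfate, 1)   # ordinary least-squares fit
r = np.corrcoef(ebc, sulfate)[0, 1]              # correlation coefficient
print(f"sulfate/eBC slope = {slope:.2f}, r = {r:.2f}")
```

A consistent slope across stations, as the abstract reports, is what suggests common sources and similar removal for the two species.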

Relevance: 40.00%

Publisher:

Abstract:

We test the ability of a two-dimensional flux model to simulate polynya events with narrow open-water zones by comparing model results to ice-thickness and ice-production estimates derived from thermal infrared Moderate Resolution Imaging Spectroradiometer (MODIS) observations in conjunction with an atmospheric dataset. Given a polynya boundary and an atmospheric dataset, the model correctly reproduces the shape of an 11-day-long event, using only a few simple conservation laws. Ice production is slightly overestimated by the model, owing to an underestimated ice thickness. We achieved the best model results with the consolidation thickness parameterization developed by Biggs and others (2000). Observed regional discrepancies between model and satellite estimates might be a consequence of the missing representation of the dynamics of thin-ice thickening (e.g. rafting). We conclude that this simplified polynya model is a valuable tool for studying polynya dynamics and estimating the associated fluxes of single polynya events.
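The conservation-law core of such a flux model includes an energy balance that converts surface heat loss into ice production. A minimal sketch of that balance with illustrative constants (not the paper's parameterization or forcing):

```python
# Energy-balance sketch: net heat loss Q_net (W m^-2) over open water or thin ice
# produces ice volume per unit area at rate dV/dt = Q_net / (rho_i * L_f).
# Constants and the example flux are illustrative, not taken from the paper.
RHO_ICE = 920.0      # ice density, kg m^-3
L_FREEZE = 3.34e5    # latent heat of fusion, J kg^-1

def ice_production_rate(q_net_w_m2: float) -> float:
    """Ice production rate in metres per day for a given net heat loss."""
    rate_m_s = q_net_w_m2 / (RHO_ICE * L_FREEZE)
    return rate_m_s * 86400.0

# A wintertime polynya heat loss of ~200 W m^-2 yields a few cm of ice per day.
print(f"{ice_production_rate(200.0):.3f} m/day")
```

This is why an underestimated ice thickness translates directly into overestimated ice production: thinner ice sustains a larger Q_net.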

Relevance: 40.00%

Publisher:

Abstract:

Heavy precipitation affected Central Europe in May/June 2013, triggering damaging floods on both the Danube and the Elbe rivers. Based on a modelling approach with COSMO-CLM, moisture fluxes, backward trajectories, cyclone tracks and precipitation fields are evaluated for the relevant period 30 May–2 June 2013. We identify potential moisture sources and quantify their contribution to the flood event, focusing on the Danube basin, through sensitivity experiments: control simulations are performed with undisturbed ERA-Interim boundary conditions, while multiple sensitivity experiments are driven with modified evaporation characteristics over selected marine and land areas. Two relevant cyclones are identified both in reanalysis and in our simulations, which moved counter-clockwise in a retrograde path from Southeastern Europe over Eastern Europe towards the northern slopes of the Alps. The control simulations represent the synoptic evolution of the event reasonably well, although the simulated precipitation event shows some differences in its spatial and temporal characteristics compared to observations. The main precipitation event can be separated into two phases with respect to the moisture sources. Our modelling results provide evidence that the two main sources contributing to the event were continental evapotranspiration (moisture recycling; both phases) and the North Atlantic Ocean (first phase only). The Mediterranean Sea played only a minor role as a moisture source. This study confirms the importance of continental moisture recycling for heavy precipitation events over Central Europe during the summer half-year.

Relevance: 40.00%

Publisher:

Abstract:

The Plant–Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed against the standard convection scheme, which includes only a simple stochastic element from random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant–Craig parameterization is found to improve probabilistic forecast measures, particularly for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant–Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.
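Probabilistic precipitation forecasts of this kind are commonly scored with measures such as the Brier score. A minimal sketch, assuming threshold-exceedance probabilities are taken as the fraction of ensemble members exceeding the threshold (the abstract does not specify which measures were used, so this is only an example of the class):

```python
import numpy as np

def brier_score(prob_forecasts, outcomes):
    """Mean squared difference between forecast probabilities and binary
    outcomes (0 = event did not occur, 1 = it did); lower is better."""
    p = np.asarray(prob_forecasts, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return float(np.mean((p - o) ** 2))

# Exceedance probability = fraction of the 24 members above the threshold
# (illustrative numbers, not MOGREPS-R output).
members_exceeding = np.array([18, 3, 12, 24, 0])
probs = members_exceeding / 24.0
observed = np.array([1, 0, 1, 1, 0])
print(f"Brier score: {brier_score(probs, observed):.4f}")
```

Scores like this are what allow the probabilistic comparison between the Plant-Craig and standard schemes to be made threshold by threshold.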

Relevance: 40.00%

Publisher:

Abstract:

Regional Climate Model version 3 (RegCM3) simulations of 17 summers (1988-2004) over the part of South America south of 5 degrees S were evaluated to identify model systematic errors. Model results were compared to different rainfall data sets (Climate Research Unit (CRU), Climate Prediction Center (CPC), Global Precipitation Climatology Project (GPCP), and National Centers for Environmental Prediction (NCEP) reanalysis), including the five-summer mean (1998-2002) precipitation diurnal cycle observed by the Tropical Rainfall Measuring Mission (TRMM)-Precipitation Radar (PR). In spite of regional differences, RegCM3 simulates the main observed aspects of summer climatology associated with precipitation (the northwest-southeast band of the South Atlantic Convergence Zone (SACZ)) and air temperature (warmer air in the central part of the continent and colder air in eastern Brazil and the Andes Mountains). At a regional scale, the main RegCM3 failures are the underestimation of precipitation in the northern branch of the SACZ and some unrealistically intense precipitation around the Andes Mountains. However, the RegCM3 seasonal precipitation is closer to the fine-scale analyses (CPC, CRU, and TRMM-PR) than is the NCEP reanalysis, which presents an incorrect north-south orientation of the SACZ and an overestimation of its intensity. The precipitation diurnal cycle observed by TRMM-PR shows pronounced contrasts between the Tropics and Extratropics and between land and ocean, and most of these features are simulated by RegCM3. The major similarities between simulation and observation, especially in the diurnal cycle phase, are found over the continental tropical and subtropical SACZ regions, which present an afternoon maximum (1500-1800 UTC) and a morning minimum (0900-1200 UTC). More specifically, over the core of the SACZ, the phase and amplitude of the simulated precipitation diurnal cycle are very close to the TRMM-PR observations.
Although there are amplitude differences, RegCM3 simulates the observed nighttime rainfall in the eastern Andes Mountains, over the Atlantic Ocean, and also over northern Argentina. The main simulation deficiencies are found over the Atlantic Ocean and near the Andes Mountains. Over the Atlantic Ocean the convective scheme is not triggered; thus the rainfall arises from the grid-scale scheme and therefore differs from the TRMM-PR observations. Near the Andes, the intense (nighttime and daytime) simulated precipitation could be a response to an incorrect circulation and topographic uplift. Finally, it is important to note that, unlike the reported bias of most global models, RegCM3 does not trigger moist convection just after sunrise over the southern part of the Amazon.

Relevance: 40.00%

Publisher:

Abstract:

Clustering quality or validation indices allow the evaluation of the quality of clustering in order to support the selection of a specific partition or clustering structure in its natural unsupervised environment, where the real solution is unknown or not available. In this paper, we investigate the use of quality indices mostly based on the concepts of clusters' compactness and separation, for the evaluation of clustering results (partitions in particular). This work intends to offer a general perspective regarding the appropriate use of quality indices for the purpose of clustering evaluation. After presenting some commonly used indices, as well as indices recently proposed in the literature, key issues regarding the practical use of quality indices are addressed. A general methodological approach is presented which considers the identification of appropriate indices thresholds. This general approach is compared with the simple use of quality indices for evaluating a clustering solution.
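A minimal sketch of a compactness/separation index of the kind discussed, here the silhouette coefficient, implemented directly for illustration (toy data, not from the paper):

```python
import numpy as np

def mean_silhouette(X, labels):
    """Mean silhouette coefficient: for each point, a = mean distance to its own
    cluster (compactness), b = mean distance to the nearest other cluster
    (separation); the score is (b - a) / max(a, b), averaged over all points."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    scores = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        own = labels == labels[i]
        a = d[own].sum() / (own.sum() - 1)  # exclude the point itself
        b = min(d[labels == l].mean() for l in set(labels) if l != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Two compact, well-separated clusters score close to +1 (illustrative data).
X = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]]
labels = [0, 0, 0, 1, 1, 1]
print(f"silhouette: {mean_silhouette(X, labels):.3f}")
```

Indices of this form are exactly what the paper's thresholding methodology is applied to: the raw value alone says little until it is compared against an appropriate reference threshold.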

Relevance: 40.00%

Publisher:

Abstract:

In this paper we present a finite difference method for solving two-dimensional viscoelastic unsteady free surface flows governed by the single equation version of the eXtended Pom-Pom (XPP) model. The momentum equations are solved by a projection method which uncouples the velocity and pressure fields. We are interested in low Reynolds number flows and, to enhance the stability of the numerical method, an implicit technique for computing the pressure condition on the free surface is employed. This strategy is invoked to solve the governing equations within a Marker-and-Cell type approach while simultaneously calculating the correct normal stress condition on the free surface. The numerical code is validated by performing mesh refinement on a two-dimensional channel flow. Numerical results include an investigation of the influence of the parameters of the XPP equation on the extrudate swelling ratio and the simulation of the Barus effect for XPP fluids. (C) 2010 Elsevier B.V. All rights reserved.
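A generic Chorin-type projection step conveys the velocity-pressure uncoupling idea (a sketch of the idea only; the paper's method additionally imposes the implicit pressure condition on the free surface):

```latex
\begin{aligned}
\tilde{\mathbf{u}} &= \mathbf{u}^{(n)} + \Delta t\left[-(\mathbf{u}\cdot\nabla)\mathbf{u}
      + \frac{1}{Re}\,\nabla^{2}\mathbf{u} + \nabla\cdot\boldsymbol{\tau}\right]^{(n)}, \\
\nabla^{2}\phi &= \frac{1}{\Delta t}\,\nabla\cdot\tilde{\mathbf{u}}, \qquad
\mathbf{u}^{(n+1)} = \tilde{\mathbf{u}} - \Delta t\,\nabla\phi,
\end{aligned}
```

so that the corrected velocity satisfies the divergence-free constraint by construction, with the potential φ updating the pressure field.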

Relevance: 40.00%

Publisher:

Abstract:

Foundries can be found all over Brazil and they are very important to its economy. In 2008, a mixed integer-programming model for small market-driven foundries was published, attempting to minimize delivery delays. We undertook a study of that model and here present a new approach based on decomposing the problem into two sub-problems: production planning of alloys and production planning of items. Both sub-problems are solved using a Lagrangian heuristic based on transferences. An important aspect of the proposed heuristic is its ability to take into account a secondary practical objective: reducing furnace waste. Computational tests show that the approach proposed here generates good-quality solutions that outperform prior results. Journal of the Operational Research Society (2010) 61, 108-114. doi:10.1057/jors.2008.151
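The Lagrangian-bound machinery behind such heuristics can be illustrated on a toy problem. This is a generic subgradient sketch on a 0-1 knapsack, not the paper's transference-based heuristic for the foundry model:

```python
# Generic Lagrangian-relaxation sketch on a toy 0-1 knapsack: relax the capacity
# constraint, solve the relaxed problem exactly, and update the multiplier by
# subgradient steps. By weak duality, every L(lmb) with lmb >= 0 upper-bounds
# the true optimum (13 here: the items with values 10 and 3).
values = [10.0, 7.0, 5.0, 3.0]
weights = [4.0, 3.0, 2.0, 1.0]
capacity = 5.0

def lagrangian(lmb):
    """Maximise sum((v_i - lmb*w_i) * x_i) + lmb*capacity over x in {0,1}^n."""
    x = [1 if v - lmb * w > 0 else 0 for v, w in zip(values, weights)]
    obj = sum((v - lmb * w) * xi for v, w, xi in zip(values, weights, x))
    return obj + lmb * capacity, x

lmb, step, best_bound = 0.0, 0.5, float("inf")
for _ in range(50):
    bound, x = lagrangian(lmb)
    best_bound = min(best_bound, bound)              # tightest upper bound so far
    violation = sum(w * xi for w, xi in zip(weights, x)) - capacity
    lmb = max(0.0, lmb + step * violation)           # subgradient step
    step *= 0.95                                     # diminishing step size
print(f"best Lagrangian bound: {best_bound:.2f}")    # matches the optimum, 13
```

In a Lagrangian heuristic, the relaxed solutions found along the way are additionally repaired into feasible production plans, which is where problem-specific moves such as transferences come in.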

Relevance: 40.00%

Publisher:

Abstract:

Two fundamental processes usually arise in the production planning of many industries. The first consists of deciding how many final products of each type have to be produced in each period of a planning horizon, the well-known lot sizing problem. The other consists of cutting raw materials in stock in order to produce smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper, the decision variables of these two problems are made dependent on each other in order to obtain a globally optimal solution. Setups, which are typically present in lot sizing problems, are relaxed together with the integer frequencies of cutting patterns in the cutting problem. A large-scale linear optimization problem therefore arises, which is solved exactly by a column generation technique. It is worth noting that this combined problem still takes into account the trade-off between storage costs (for final products and parts) and trim losses (in the cutting process). We present several sets of computational tests, analyzed over three different scenarios. The results show that, by combining the problems and using an exact method, it is possible to obtain significant gains compared to the usual industrial practice, which solves them in sequence. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
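Column generation alternates between a restricted master LP and a pricing subproblem; for cutting stock the classic Gilmore-Gomory pricing step is a knapsack over the raw length. A sketch of just that pricing step with illustrative duals (not the paper's combined lot-sizing/cutting model):

```python
def price_pattern(raw_len, piece_lens, duals):
    """Pricing step of Gilmore-Gomory column generation for cutting stock:
    an unbounded knapsack maximising the total dual value of pieces cut from
    one raw of integer length raw_len. A pattern whose value exceeds 1 has
    negative reduced cost and would enter the restricted master LP."""
    best = [0.0] * (raw_len + 1)      # best[c]: max dual value within capacity c
    choice = [None] * (raw_len + 1)   # which piece achieves best[c]
    for c in range(1, raw_len + 1):
        for i, (length, dual) in enumerate(zip(piece_lens, duals)):
            if length <= c and best[c - length] + dual > best[c]:
                best[c] = best[c - length] + dual
                choice[c] = i
    pattern, c = [0] * len(piece_lens), raw_len
    while c > 0 and choice[c] is not None:  # recover the pattern by backtracking
        pattern[choice[c]] += 1
        c -= piece_lens[choice[c]]
    return best[raw_len], pattern

# Illustrative master duals (not from a real solve): the best pattern cuts two
# pieces of length 3 and one of length 4 from a raw of length 10.
value, pattern = price_pattern(10, [3, 4, 5], [0.35, 0.45, 0.55])
print(value, pattern)
```

The outer loop re-solves the restricted master LP with the new column and repeats until no pattern prices out, yielding the exact LP optimum without enumerating all patterns.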

Relevance: 40.00%

Publisher:

Abstract:

Model trees are a particular case of decision trees employed to solve regression problems. They have the advantage of presenting an interpretable output, helping the end-user to gain more confidence in the prediction and providing a basis for the end-user to obtain new insight about the data, confirming or rejecting hypotheses previously formed. Moreover, model trees present an acceptable level of predictive performance in comparison to most techniques used for solving regression problems. Since generating the optimal model tree is an NP-Complete problem, traditional model tree induction algorithms make use of a greedy top-down divide-and-conquer strategy, which may not converge to the globally optimal solution. In this paper, we propose a novel algorithm based on the evolutionary algorithms paradigm as an alternative heuristic for generating model trees, in order to improve convergence to globally near-optimal solutions. We call our new approach evolutionary model tree induction (E-Motion). We test its predictive performance using public UCI data sets, and we compare the results to traditional greedy regression/model tree induction algorithms, as well as to other evolutionary approaches. The results show that our method presents a good trade-off between predictive performance and model comprehensibility, which may be crucial in many machine learning applications. (C) 2010 Elsevier Inc. All rights reserved.
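For contrast with the evolutionary approach, the greedy criterion it competes with can be sketched in a few lines: at each node, pick the split threshold minimising the children's summed squared error (illustrative 1-D data, not the UCI sets used in the paper):

```python
def best_split(xs, ys):
    """Greedy 1-D split search: choose the threshold minimising the summed
    squared error of the two children's mean predictions, the classic
    top-down criterion that evolutionary induction tries to improve on."""
    def sse(vals):
        if not vals:
            return 0.0
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals)

    pairs = sorted(zip(xs, ys))
    best = (float("inf"), None)
    for k in range(1, len(pairs)):
        left = [y for _, y in pairs[:k]]
        right = [y for _, y in pairs[k:]]
        threshold = (pairs[k - 1][0] + pairs[k][0]) / 2
        best = min(best, (sse(left) + sse(right), threshold))
    return best   # (total SSE after the split, threshold)

# Piecewise-constant data: the optimal threshold sits between x=3 and x=4.
xs = [1, 2, 3, 4, 5, 6]
ys = [1.0, 1.1, 0.9, 5.0, 5.1, 4.9]
print(best_split(xs, ys))
```

Because each split is chosen locally without look-ahead, a sequence of such choices can miss the globally best tree, which is the gap the evolutionary search targets.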

Relevance: 40.00%

Publisher:

Abstract:

This work deals with the development of a numerical technique for simulating three-dimensional viscoelastic free surface flows using the PTT (Phan-Thien-Tanner) nonlinear constitutive equation. In particular, we are interested in flows possessing moving free surfaces. The equations describing the numerical technique are solved by the finite difference method on a staggered grid. The fluid is modelled by a Marker-and-Cell type method and an accurate representation of the fluid surface is employed. The full free surface stress conditions are considered. The PTT equation is solved by a high order method, which requires the calculation of the extra-stress tensor on the mesh contours. To validate the numerical technique developed in this work, flow predictions for fully developed pipe flow are compared with an analytic solution from the literature. Then, results of complex free surface flows using the PTT equation, such as the transient extrudate swell problem and a jet flowing onto a rigid plate, are presented. An investigation of the effects of the parameters epsilon and xi on the extrudate swell and jet buckling problems is reported. (C) 2010 Elsevier B.V. All rights reserved.
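For reference, the linear form of the PTT model can be written as follows (a standard textbook statement, not reproduced from the paper; sign and velocity-gradient conventions vary between references):

```latex
\left(1 + \frac{\varepsilon\lambda}{\eta_{p}}\,\mathrm{tr}\,\boldsymbol{\tau}\right)\boldsymbol{\tau}
+ \lambda\,\overset{\square}{\boldsymbol{\tau}} = 2\eta_{p}\,\mathbf{D},
\qquad
\overset{\square}{\boldsymbol{\tau}} = \frac{D\boldsymbol{\tau}}{Dt}
- \left(\nabla\mathbf{u} - \xi\mathbf{D}\right)\boldsymbol{\tau}
- \boldsymbol{\tau}\left(\nabla\mathbf{u} - \xi\mathbf{D}\right)^{\top},
```

where epsilon controls the extensional response and xi the slip in the Gordon-Schowalter derivative, matching the two parameters whose effects on extrudate swell and jet buckling are investigated in the abstract.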