973 results for "cost prediction"


Relevance: 30.00%

Abstract:

The evolution of commodity computing led to the possibility of efficiently using interconnected machines to solve computationally intensive tasks, which were previously solvable only by expensive supercomputers. This, however, required new methods for process scheduling and distribution that consider network latency, communication cost, heterogeneous environments and distributed computing constraints. An efficient distribution of processes over such environments requires an adequate scheduling strategy, as the cost of inefficient process allocation is unacceptably high. Therefore, knowledge and prediction of application behavior are essential to perform effective scheduling. In this paper, we review the evolution of scheduling approaches, focusing on distributed environments. We also evaluate current approaches for process behavior extraction and prediction, aiming at selecting an adequate technique for online prediction of application execution. Based on this evaluation, we propose a novel model for application behavior prediction that considers the chaotic properties of such behavior and automatically detects critical execution points. The proposed model is applied and evaluated for process scheduling in cluster and grid computing environments. The obtained results demonstrate that prediction of process behavior is essential for efficient scheduling in large-scale and heterogeneous distributed environments, outperforming conventional scheduling policies by a factor of 10, and even more in some cases. Furthermore, the proposed approach proves to be efficient for online prediction due to its low computational cost and good precision.
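Since the abstract stays at a conceptual level, here is a hedged Python sketch of the general idea of behaviour-aware scheduling: an online predictor of a process's next CPU burst feeding a placement rule. The exponentially weighted moving average is a simple stand-in for the paper's chaos-based predictor, and all names and numbers are illustrative assumptions.

```python
# Online behaviour prediction for scheduling: an exponentially weighted moving
# average stands in for the paper's chaos-based predictor (illustrative only).
class BurstPredictor:
    def __init__(self, alpha: float = 0.5, initial: float = 10.0):
        self.alpha = alpha        # weight given to the newest observation
        self.estimate = initial   # predicted length of the next CPU burst (ms)

    def update(self, observed_burst: float) -> float:
        # O(1) online update: blend the new observation with the running estimate.
        self.estimate = self.alpha * observed_burst + (1 - self.alpha) * self.estimate
        return self.estimate


def pick_host(predicted_burst: float, hosts: list) -> dict:
    # Behaviour-aware placement: choose the host with the smallest predicted
    # completion time (current backlog plus burst, scaled by relative speed).
    return min(hosts, key=lambda h: (h["backlog_ms"] + predicted_burst) / h["speed"])


predictor = BurstPredictor()
for burst in (12.0, 8.0, 15.0, 9.0):          # observed CPU bursts of one process
    next_burst = predictor.update(burst)

hosts = [{"name": "node-a", "backlog_ms": 40.0, "speed": 1.0},
         {"name": "node-b", "backlog_ms": 90.0, "speed": 2.0}]
print(pick_host(next_burst, hosts)["name"])
```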

Relevance: 30.00%

Abstract:

Genomewide marker information can improve the reliability of breeding value predictions for young selection candidates in genomic selection. However, the cost of genotyping limits its use to elite animals, and how such selective genotyping affects the predictive ability of genomic selection models is an open question. We performed a simulation study to evaluate the quality of breeding value predictions for selection candidates based on different selective genotyping strategies in a population undergoing selection. The genome consisted of 10 chromosomes of 100 cM each. After 5,000 generations of random mating with a population size of 100 (50 males and 50 females), generation G(0) (reference population) was produced via a full factorial mating between the 50 males and 50 females from generation 5,000. Different levels of selection intensity (animals with the largest yield deviation values) in G(0), or random sampling (no selection), were used to produce the offspring of G(0) (generation G(1)). Five genotyping strategies were used to choose 500 animals in G(0) to be genotyped: 1) Random: randomly selected animals; 2) Top: animals with the largest yield deviation values; 3) Bottom: animals with the lowest yield deviation values; 4) Extreme: animals with the 250 largest and the 250 lowest yield deviation values; and 5) Less Related: the least genetically related animals. The number of individuals in G(0) and G(1) was fixed at 2,500 each, and different levels of heritability were considered (0.10, 0.25, and 0.50). Additionally, all 5 selective genotyping strategies (Random, Top, Bottom, Extreme, and Less Related) were applied to an indicator trait in generation G(0), and the results were evaluated for the target trait in generation G(1), with the genetic correlation between the 2 traits set to 0.50. The 5 genotyping strategies applied to individuals in G(0) (reference population) were compared in terms of their ability to predict the genetic values of the animals in G(1) (selection candidates). Lower correlations between genomic-based estimates of breeding values (GEBV) and true breeding values (TBV) were obtained when using the Bottom strategy. For the Random, Extreme, and Less Related strategies, the correlation between GEBV and TBV became slightly larger as selection intensity decreased and was largest when no selection occurred. These 3 strategies were better than the Top approach. In addition, the Extreme, Random, and Less Related strategies had smaller predictive mean squared errors (PMSE), followed by the Top and Bottom methods. Overall, the Extreme genotyping strategy led to the best predictive ability of breeding values, indicating that animals with extreme yield deviation values in a reference population are the most informative when training genomic selection models.
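The two figures of merit used to compare the genotyping strategies (the GEBV-TBV correlation and the PMSE) are straightforward to compute; a minimal sketch follows, using arbitrary simulated breeding values rather than the study's simulator.

```python
# Evaluation metrics for a genotyping strategy: accuracy (correlation between GEBV
# and TBV) and predictive mean squared error (PMSE). Simulated values are placeholders.
import numpy as np

def evaluate_strategy(gebv: np.ndarray, tbv: np.ndarray):
    accuracy = np.corrcoef(gebv, tbv)[0, 1]
    pmse = float(np.mean((gebv - tbv) ** 2))
    return accuracy, pmse

rng = np.random.default_rng(0)
tbv = rng.normal(size=2500)                      # selection candidates in G(1)
gebv = tbv + rng.normal(scale=0.8, size=2500)    # stand-in genomic predictions
accuracy, pmse = evaluate_strategy(gebv, tbv)
print(f"accuracy = {accuracy:.2f}, PMSE = {pmse:.2f}")
```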

Relevance: 30.00%

Abstract:

Background: The objective was to present a new ovarian response prediction index (ORPI), which was based on anti-Müllerian hormone (AMH) levels, antral follicle count (AFC) and age, and to verify whether it could be a reliable predictor of the ovarian stimulation response. Methods: A total of 101 patients enrolled in the ICSI programme were included. The ORPI values were calculated by multiplying the AMH level (ng/ml) by the number of antral follicles (2-9 mm), and the result was divided by the age (years) of the patient (ORPI = (AMH × AFC)/patient age). Results: The regression analysis demonstrated significant (P < 0.0001) positive correlations between the ORPI and the total number of oocytes and of MII oocytes collected. The logistic regression revealed that the ORPI values were significantly associated with the likelihood of pregnancy (odds ratio (OR): 1.86; P = 0.006) and of collecting ≥ 4 oocytes (OR: 49.25; P < 0.0001), ≥ 4 MII oocytes (OR: 6.26; P < 0.0001) and ≥ 15 oocytes (OR: 6.10; P < 0.0001). Regarding the probability of collecting ≥ 4 oocytes according to the ORPI value, the ROC curve showed an area under the curve (AUC) of 0.91 and an efficacy of 88% at a cut-off of 0.2. In relation to the probability of collecting ≥ 4 MII oocytes according to the ORPI value, the ROC curve had an AUC of 0.84 and an efficacy of 81% at a cut-off of 0.3. The ROC curve for the probability of collecting ≥ 15 oocytes resulted in an AUC of 0.89 and an efficacy of 82% at a cut-off of 0.9. Finally, regarding the probability of pregnancy occurrence according to the ORPI value, the ROC curve showed an AUC of 0.74 and an efficacy of 62% at a cut-off of 0.3. Conclusions: The ORPI exhibited an excellent ability to predict a low ovarian response and a good ability to predict a collection of ≥ 4 MII oocytes, an excessive ovarian response and the occurrence of pregnancy in infertile women. The ORPI might be used to improve the cost-benefit ratio of ovarian stimulation regimens by guiding the selection of medications and by modulating the doses and regimens according to the actual needs of the patients.
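Because the index is a closed-form expression, it can be computed directly from the three inputs; the sketch below simply transcribes the formula from the abstract (the example patient values are hypothetical).

```python
# ORPI = (AMH [ng/ml] x AFC [follicles of 2-9 mm]) / age [years], as defined above.
def orpi(amh_ng_ml: float, afc: int, age_years: float) -> float:
    return (amh_ng_ml * afc) / age_years

# Hypothetical patient: AMH 1.5 ng/ml, 8 antral follicles, 33 years old.
value = orpi(amh_ng_ml=1.5, afc=8, age_years=33.0)
print(round(value, 2))   # 0.36, above the 0.3 cut-off reported for >= 4 MII oocytes
```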

Relevance: 30.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 30.00%

Abstract:

Aquafeed production faces global issues related to the availability of feed ingredients. Feed manufacturers require greater flexibility in order to develop nutritional and cost-effective formulations that take into account the nutrient content and availability of ingredients. The search for appropriate ingredients requires detailed screening of their potential nutritional value and variability at the industrial level. In vitro digestion of feedstuffs by enzymes extracted from the target species has been correlated with apparent protein digestibility (APD) in fish and shrimp species. The present study verified the relationship between APD and the in vitro degree of protein hydrolysis (DH) with Litopenaeus vannamei hepatopancreas enzymes in several different ingredients (n = 26): blood meals, casein, corn gluten meal, crab meal, distiller's dried grains with solubles, feather meal, fish meals, gelatin, krill meals, poultry by-product meal, soybean meals, squid meals and wheat gluten. The relationship between APD and DH was further verified in diets formulated with these ingredients at 30% inclusion into a reference diet. APD was determined in vivo (30.1 +/- 0.5 °C, 32.2 +/- 0.4‰) with juvenile L. vannamei (9 to 12 g) after placement of the test ingredients into a reference diet (35 g kg⁻¹ CP; 8.03 g kg⁻¹ lipid; 2.01 kcal g⁻¹) with chromic oxide as the inert marker. In vitro DH was assessed in ingredients and diets with standardized hepatopancreas enzymes extracted from pond-reared shrimp. The DH of the ingredients was determined under different assay conditions to identify the most suitable in vitro protocol for APD prediction: different batches of enzyme extracts (HPf5 or HPf6), temperatures (25 or 30 °C) and enzyme activity (azocasein) to crude protein ratios (4 U: 80 mg CP or 4 U: 40 mg CP). DH was not affected by ingredient proximate composition. APD was significantly correlated with DH in regressions considering either ingredients or diets. The relationship between APD and DH of the ingredients could be suitably adjusted to a rational function (y = (a + bx)/(1 + cx + dx²); n = 26). The best in vitro APD predictions were obtained at 25 °C and 4 U: 80 mg CP, both for ingredients (R² = 0.86; P = 0.001) and test diets (R² = 0.96; P = 0.007). The regression model including all 26 ingredients generated higher prediction residuals (i.e., predicted APD - determined APD) for corn gluten meal, feather meal, poultry by-product meal and krill meal. The remaining test ingredients presented mean prediction residuals of 3.5 points. A model including only ingredients with APD > 80% showed higher prediction precision (R² = 0.98; P = 0.000004; n = 20), with an average residual of 1.8 points. Predictive models including only ingredients of the same origin (e.g., marine-based; R² = 0.98; P = 0.033) also displayed low residuals. Since in vitro techniques have usually been validated through regressions against in vivo APD, the DH predictive capacity may depend on the consistency of the in vivo methodology. The regressions between APD and DH suggest a close relationship between peptide bond breakage by hepatopancreas digestive proteases and apparent nitrogen assimilation in shrimp, and this may be a useful tool for providing rapid nutritional information.
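The reported rational function can be fitted with a generic nonlinear least-squares routine; the sketch below uses scipy.optimize.curve_fit on placeholder DH/APD pairs, which are not the study's measurements.

```python
# Fit y = (a + b*x) / (1 + c*x + d*x^2) relating in vitro DH (x) to in vivo APD (y).
import numpy as np
from scipy.optimize import curve_fit

def rational(x, a, b, c, d):
    return (a + b * x) / (1 + c * x + d * x ** 2)

dh = np.array([2.0, 3.5, 4.0, 5.2, 6.1, 7.3])         # degree of hydrolysis (%), placeholder
apd = np.array([62.0, 74.0, 78.0, 84.0, 88.0, 91.0])  # apparent protein digestibility (%), placeholder

params, _ = curve_fit(rational, dh, apd, p0=[1.0, 1.0, 0.01, 0.001], maxfev=20000)
pred = rational(dh, *params)
r2 = 1.0 - np.sum((apd - pred) ** 2) / np.sum((apd - np.mean(apd)) ** 2)
print(params.round(3), round(r2, 2))
```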

Relevance: 30.00%

Abstract:

The occupational exposure limits of different risk factors for the development of low back disorders (LBDs) have not yet been established. One of the main problems in setting such guidelines is the limited understanding of how different risk factors for LBDs interact in causing injury, since the nature and mechanism of these disorders are relatively unknown. The industrial ergonomist's role is further complicated because the potential risk factors that may contribute to the onset of LBDs interact in a complex manner, which makes it difficult to discriminate in detail among the jobs that place workers at high or low risk of LBDs. The purpose of this paper was to compare predictions based on the neural network model proposed by Zurada, Karwowski & Marras (1997) with those of a linear discriminant analysis model for classifying industrial jobs according to their potential risk of low back disorders due to workplace design. The results obtained by applying the discriminant analysis model showed that it is as effective as the neural network model. Moreover, the discriminant analysis model proved to be more advantageous in terms of cost and time savings for future data gathering.
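A minimal sketch of the discriminant-analysis side of the comparison, assuming toy workplace risk-factor features; it is not the data set or the exact model used in the paper.

```python
# Linear discriminant analysis classifying jobs as low (0) or high (1) LBD risk.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Columns (assumed): lift rate, peak load (kg), trunk flexion (deg), twist velocity (deg/s)
X = np.array([[2, 5, 10, 5], [12, 20, 45, 30], [3, 7, 15, 8],
              [10, 18, 40, 25], [1, 4, 8, 4], [11, 22, 50, 35]], dtype=float)
y = np.array([0, 1, 0, 1, 0, 1])

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[9, 16, 38, 22]]))   # predicted risk class for a new job
```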

Relevance: 30.00%

Abstract:

In protein databases there is a substantial number of proteins that are structurally determined but lack function annotation. Understanding the relationship between function and structure can be useful for predicting function on a large scale. We analyzed the similarities in global physicochemical parameters for a set of enzymes classified according to the four Enzyme Commission (EC) hierarchical levels. Using relevance theory, we introduced a distance between proteins in the space of physicochemical characteristics. This was done by minimizing a cost function of the metric tensor built to reflect the EC classification system. Using an unsupervised clustering method on a set of 1025 enzymes, we obtained no relevant cluster formation compatible with the EC classification. The distance distributions between enzymes from the same EC group and from different EC groups were compared by histograms. The same analysis was also performed using sequence alignment similarity as a distance. Our results suggest that global structure parameters are not sufficient to segregate enzymes according to the EC hierarchy, indicating that features essential for function are local rather than global. Consequently, methods for predicting function based on global attributes should not achieve high accuracy in predicting main EC classes without relying on similarities between enzymes from the training and validation datasets. Furthermore, these results are consistent with a substantial number of studies suggesting that function evolves fundamentally by recruitment, i.e., the same protein motif or fold can be used to perform different enzymatic functions, and a few specific amino acids (AAs) are actually responsible for enzyme activity. These essential amino acids should belong to active sites, and an effective method for predicting function should be able to recognize them.
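A sketch of the general recipe described above: a weighted ("metric-tensor"-like) distance over global physicochemical descriptors followed by unsupervised hierarchical clustering. The descriptors, weights and data are illustrative assumptions, not the learned metric or the 1025-enzyme set from the paper.

```python
# Weighted distance over global descriptors + hierarchical clustering (toy data).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))                   # 50 enzymes x 6 global descriptors (toy)
w = np.array([1.0, 0.5, 2.0, 1.0, 0.2, 1.5])   # diagonal metric weights (assumed)

d = pdist(X, metric="euclidean", w=w)          # d(x, y) = sqrt(sum_k w_k (x_k - y_k)^2)
labels = fcluster(linkage(d, method="average"), t=6, criterion="maxclust")
print(np.bincount(labels)[1:])                 # cluster sizes, to compare against EC classes
```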

Relevance: 30.00%

Abstract:

Introduction: Computed tomography (CT) is the gold standard for the evaluation of intra-abdominal (IAF) and total abdominal fat (TAF); however, the high cost of the procedure and exposure to radiation limit its routine use. Objective: To develop equations that use anthropometric measures to estimate IAF and TAF in obese women with polycystic ovary syndrome (PCOS). Methods: The weight, height, BMI, and abdominal (AC), waist (WC), chest (CC), and neck (NC) circumferences of thirty obese women with PCOS were measured, and their IAF and TAF were analyzed by CT. Results: The anthropometric variables AC, CC, and NC were chosen for the TAF linear regression model because they were better correlated with the fat deposited in this region. The model proposed for TAF (predicted) was: 4.63725 + 0.01483 × AC - 0.00117 × NC - 0.00177 × CC (R² = 0.78); and the model proposed for IAF was: IAF (predicted) = 1.88541 + 0.01878 × WC + 0.05687 × NC - 0.01529 × CC (R² = 0.51). AC was the only independent predictor of TAF (p < 0.01). Conclusion: The equations proposed showed good correlation with the real values measured by CT and can be used in clinical practice. (Nutr Hosp. 2012;27:1662-1666) DOI: 10.3305/nh.2012.27.5.5933
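The two published equations can be applied directly; the sketch below only transcribes them (the example circumferences are hypothetical, and inputs and outputs are assumed to be in the units used in the original study).

```python
# Predictive equations reported in the abstract (AC, WC, CC, NC = abdominal,
# waist, chest and neck circumferences).
def taf_predicted(ac: float, nc: float, cc: float) -> float:
    # Total abdominal fat model, R^2 = 0.78
    return 4.63725 + 0.01483 * ac - 0.00117 * nc - 0.00177 * cc

def iaf_predicted(wc: float, nc: float, cc: float) -> float:
    # Intra-abdominal fat model, R^2 = 0.51
    return 1.88541 + 0.01878 * wc + 0.05687 * nc - 0.01529 * cc

# Hypothetical measurements for illustration only.
print(round(taf_predicted(ac=110, nc=38, cc=105), 3),
      round(iaf_predicted(wc=100, nc=38, cc=105), 3))
```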

Relevance: 30.00%

Abstract:

Worldwide, 700,000 infants are infected annually by HIV-1, most of them in resource-limited settings. Care for these children requires simple, inexpensive tests. We have evaluated HIV-1 p24 antigen for antiretroviral treatment (ART) monitoring in children. p24 by boosted enzyme-linked immunosorbent assay of heated plasma and HIV-1 RNA were measured prospectively in 24 HIV-1-infected children receiving ART. p24 and HIV-1 RNA concentrations and their changes between consecutive visits were related to the respective CD4+ changes. Age at study entry was 7.6 years; follow-up was 47.2 months, yielding 18 visits at an interval of 2.8 months (medians). There were 399 complete visit data sets and 375 interval data sets. Controlling for variation between individuals, there was a positive relationship between concentrations of HIV-1 RNA and p24 (P < 0.0001). While controlling for initial CD4+ count, age, sex, days since start of ART, and days between visits, the relative change in CD4+ count between 2 successive visits was negatively related to the corresponding relative change in HIV-1 RNA (P = 0.009), but not to the initial HIV-1 RNA concentration (P = 0.94). Similarly, we found a negative relationship with the relative change in p24 over the interval (P < 0.0001), whereas the initial p24 concentration showed a trend (P = 0.08). Statistical support for the p24 model and the HIV-1 RNA model was similar. p24 may be an accurate low-cost alternative to monitor ART in pediatric HIV-1 infection.
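A hedged sketch of the kind of repeated-measures model described, with a random effect per child; the variable names and simulated data are assumptions for illustration, not the study's dataset or exact covariate set.

```python
# Mixed-effects model: relative CD4+ change between visits vs. relative p24 change.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_children, n_intervals = 24, 16
child = np.repeat(np.arange(n_children), n_intervals)
child_effect = rng.normal(0.0, 0.05, n_children)[child]            # per-child random intercept
p24_change = rng.normal(0.0, 0.3, child.size)                       # relative change in p24
cd4_change = -0.5 * p24_change + child_effect + rng.normal(0.0, 0.1, child.size)
df = pd.DataFrame({"child": child, "p24_change": p24_change, "cd4_change": cd4_change})

model = smf.mixedlm("cd4_change ~ p24_change", data=df, groups=df["child"])
print(model.fit().params)   # the p24_change slope recovers the (toy) negative relationship
```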

Relevance: 30.00%

Abstract:

BACKGROUND Many preschool children have wheeze or cough, but only some have asthma later. Existing prediction tools are difficult to apply in clinical practice or exhibit methodological weaknesses. OBJECTIVE We sought to develop a simple and robust tool for predicting asthma at school age in preschool children with wheeze or cough. METHODS From a population-based cohort in Leicestershire, United Kingdom, we included 1- to 3-year-old subjects seeing a doctor for wheeze or cough and assessed the prevalence of asthma 5 years later. We considered only noninvasive predictors that are easy to assess in primary care: demographic and perinatal data, eczema, upper and lower respiratory tract symptoms, and family history of atopy. We developed a model using logistic regression, avoided overfitting with the least absolute shrinkage and selection operator penalty, and then simplified it to a practical tool. We performed internal validation and assessed its predictive performance using the scaled Brier score and the area under the receiver operating characteristic curve. RESULTS Of 1226 symptomatic children with follow-up information, 345 (28%) had asthma 5 years later. The tool consists of 10 predictors yielding a total score between 0 and 15: sex, age, wheeze without colds, wheeze frequency, activity disturbance, shortness of breath, exercise-related and aeroallergen-related wheeze/cough, eczema, and parental history of asthma/bronchitis. The scaled Brier scores for the internally validated model and tool were 0.20 and 0.16, and the areas under the receiver operating characteristic curves were 0.76 and 0.74, respectively. CONCLUSION This tool represents a simple, low-cost, and noninvasive method to predict the risk of later asthma in symptomatic preschool children, which is ready to be tested in other populations.
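A sketch of the modelling recipe described (penalised logistic regression, Brier score, AUC); the simulated cohort and the penalty strength are assumptions, not the Leicestershire data or the published tool.

```python
# L1-penalised logistic regression with Brier score (raw and scaled) and ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score

rng = np.random.default_rng(0)
n = 1226
X = rng.integers(0, 2, size=(n, 10)).astype(float)     # 10 binary predictors (toy)
logit = -1.5 + X @ rng.normal(0.4, 0.2, size=10)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
p = model.predict_proba(X)[:, 1]

brier = brier_score_loss(y, p)
brier_ref = np.mean((y - y.mean()) ** 2)                # non-informative reference model
print(f"scaled Brier = {1 - brier / brier_ref:.2f}, AUC = {roc_auc_score(y, p):.2f}")
```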

Relevance: 30.00%

Abstract:

This study compared four alternative approaches (Taylor, Fieller, percentile bootstrap, and bias-corrected bootstrap methods) to estimating confidence intervals (CIs) around a cost-effectiveness (CE) ratio. The study consisted of two components: (1) a Monte Carlo simulation was conducted to identify characteristics of hypothetical cost-effectiveness data sets that might lead one CI estimation technique to outperform another, and (2) these results were matched to the characteristics of an extant data set derived from the National AIDS Demonstration Research (NADR) project. The four methods were used to calculate CIs for this data set, and the results were then compared. The main performance criterion in the simulation study was the percentage of times the estimated CIs contained the "true" CE ratio. A secondary criterion was the average width of the confidence intervals. For the bootstrap methods, bias was estimated. Simulation results for the Taylor and Fieller methods indicated that the CIs estimated using the Taylor series method contained the true CE ratio more often than did those obtained using the Fieller method, but the opposite was true when the correlation was positive and the CV of effectiveness was high for each value of the CV of costs. Similarly, the CIs obtained by applying the Taylor series method to the NADR data set were wider than those obtained using the Fieller method for positive correlation values and for values for which the CV of effectiveness was not equal to 30% for each value of the CV of costs. The general trend for the bootstrap methods was that the percentage of times the true CE ratio was contained in the CIs was higher for the percentile method at higher values of the CV of effectiveness, given the correlation between average costs and effects and the CV of effectiveness. The results for the data set indicated that the bias-corrected CIs were wider than the percentile method CIs, in accordance with the prediction derived from the simulation experiment. Generally, the bootstrap methods are more favorable for the parameter specifications investigated in this study. However, the Taylor method is preferred for a low CV of effect, and the percentile method is more favorable for a higher CV of effect.
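For illustration, a percentile-bootstrap CI for a CE ratio can be computed as below; the simulated costs and effects are placeholders, not the NADR data.

```python
# Percentile bootstrap CI for a cost-effectiveness ratio (mean cost / mean effect).
import numpy as np

rng = np.random.default_rng(42)
costs = rng.normal(1000.0, 300.0, size=200)    # per-patient incremental costs (toy)
effects = rng.normal(0.5, 0.15, size=200)      # per-patient incremental effects (toy)

def ce_ratio(c: np.ndarray, e: np.ndarray) -> float:
    return float(c.mean() / e.mean())

boot = []
for _ in range(5000):
    idx = rng.integers(0, costs.size, costs.size)   # resample patients with replacement
    boot.append(ce_ratio(costs[idx], effects[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"CE ratio = {ce_ratio(costs, effects):.1f}, 95% percentile CI = ({lo:.1f}, {hi:.1f})")
```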

Relevance: 30.00%

Abstract:

A Payment Cost Minimization (PCM) auction has been proposed as an alternative to the Offer Cost Minimization (OCM) auction for use in wholesale electric power markets, with the intention of lowering the procurement cost of electricity. Efficiency concerns about this proposal have relied on the assumption of true production cost revelation. Using an experimental approach, I compare the two auctions, strictly controlling for the level of unilateral market power. A specific feature of these complex-offer auctions is that sellers submit not only the quantities and the minimum prices at which they are willing to sell, but also start-up fees that are designed to reimburse the fixed start-up costs of the generation plants. I find that both auctions result in start-up fees that are significantly higher than the start-up costs. Overall, the two auctions perform similarly in terms of procurement cost and efficiency. Surprisingly, I do not find a substantial difference between the designs with less and with more market power. Both designs result in similar inefficiencies and equally higher procurement costs relative to the competitive prediction. The PCM auction tends to have lower price volatility than the OCM auction when market power is minimal, but this property vanishes in the designs with market power. These findings lead me to conclude that neither the PCM nor the OCM auction belongs to the class of truth-revealing mechanisms, and neither easily elicits competitive behavior.

Relevance: 30.00%

Abstract:

Service compositions put together loosely-coupled component services to perform more complex, higher level, or cross-organizational tasks in a platform-independent manner. Quality-of-Service (QoS) properties, such as execution time, availability, or cost, are critical for their usability, and permissible boundaries for their values are defined in Service Level Agreements (SLAs). We propose a method whereby constraints that model SLA conformance and violation are derived at any given point of the execution of a service composition. These constraints are generated using the structure of the composition and properties of the component services, which can be either known or empirically measured. Violation of these constraints means that the corresponding scenario is unfeasible, while satisfaction gives values for the constrained variables (start / end times for activities, or number of loop iterations) which make the scenario possible. These results can be used to perform optimized service matching or trigger preventive adaptation or healing.
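As a toy illustration of the constraint idea, the sketch below checks, at a given execution point of a sequential composition, whether an execution-time SLA is already guaranteed, already unfeasible, or at risk; the activity names, bounds and deadline are assumptions, not the paper's constraint formalism.

```python
# Interval-style check of an execution-time SLA at the current point of a composition.
def remaining_bounds(remaining_activities):
    lo = sum(a["min_s"] for a in remaining_activities)
    hi = sum(a["max_s"] for a in remaining_activities)
    return lo, hi

def check_sla(elapsed_s, remaining_activities, deadline_s):
    lo, hi = remaining_bounds(remaining_activities)
    if elapsed_s + lo > deadline_s:
        return "violation unavoidable"           # even the best case misses the deadline
    if elapsed_s + hi <= deadline_s:
        return "conformance guaranteed"          # even the worst case meets the deadline
    return "at risk: trigger preventive adaptation"

remaining = [{"name": "payment", "min_s": 1.0, "max_s": 4.0},
             {"name": "shipping", "min_s": 2.0, "max_s": 6.0}]
print(check_sla(elapsed_s=5.0, remaining_activities=remaining, deadline_s=12.0))
```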

Relevance: 30.00%

Abstract:

Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within industry to model fluid flow phenomena. Several fluid flow model equations have been employed in the last decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy are strongly dependent on the fluid flow model equation and the spatial dimension of the problem considered. While simple models based on perfect flows, like panel methods or potential flow models, can be very fast to solve, they usually suffer from poor accuracy when simulating real flows (transonic, viscous). On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions but at a much higher computational cost. Thus, a good compromise between accuracy and computational time has to be fixed for engineering applications. A discretisation technique widely used within industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretises the flow motion equations onto a set of elements which form a mesh, a discrete representation of the continuous domain. Using this approach, for a given flow model equation, the accuracy and computational time mainly depend on the distribution of the nodes forming the mesh. Therefore, a good compromise between accuracy and computational time might be obtained by carefully defining the mesh. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, and in most cases simply guessing which regions of the computational domain most affect the accuracy is impossible. Thus, it is desirable to have an automated remeshing tool, which is more flexible with unstructured meshes than its structured counterpart. However, adaptive methods currently in use still leave an open question: how to efficiently drive the adaptation? Pioneering sensors based on flow features generally suffer from a lack of reliability, so in the last decade more effort has been put into developing numerical error-based sensors, such as adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive, as it requires solving a dual set of equations and computing the sensor on an embedded mesh. Therefore, it would be desirable to develop a more affordable numerical error estimation method. The current work aims at estimating the truncation error, which arises when discretising a partial differential equation: the higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretisation. On the one hand, it is a very reliable measure of the quality of the mesh and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so that a careful estimation actually gives information on how well a given equation is solved, which may be useful in the context of τ-extrapolation or zonal modelling. The work is organized as follows: Chap. 1 contains a short review of mesh adaptation techniques as well as numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented.
Sec. 1.2 introduces the definitions of the errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches to predict them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, as well as a complete verification procedure. Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions and non-converged numerical solutions. This verification part has been submitted and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted and accepted by the AIAA Journal. An extension to the Reynolds-Averaged Navier-Stokes equations is also included, where τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering. Keywords: mesh adaptation, numerical error prediction, finite volume
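To make the truncation-error idea concrete, a minimal 1D sketch follows: the discrete operator of a second-order central scheme is applied to a smooth exact solution of a model Poisson problem, and the non-zero residual that results is the local truncation error, decaying at the expected rate under refinement. The model problem and grids are assumptions chosen for brevity; this is not the thesis' finite-volume implementation.

```python
# Local truncation error of a second-order central scheme for the model problem u'' = f.
import numpy as np

def discrete_residual(u, f, h):
    # r_i = (u_{i-1} - 2 u_i + u_{i+1}) / h^2 - f_i at interior nodes
    return (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2 - f[1:-1]

for n in (21, 41, 81):                       # successively refined grids
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u_exact = np.sin(np.pi * x)              # exact solution of u'' = -pi^2 sin(pi x)
    f = -np.pi**2 * np.sin(np.pi * x)
    tau = discrete_residual(u_exact, f, h)   # local truncation error estimate
    print(f"h = {h:.4f}   max|tau| = {np.abs(tau).max():.3e}")   # decreases as O(h^2)
```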

Relevance: 30.00%

Abstract:

A novel pedestrian motion prediction technique is presented in this paper. Its main advantage is that no previous observation, no knowledge of pedestrian trajectories and no set of possible destinations are required, which makes it useful for autonomous surveillance applications. Prediction only requires the initial position of the pedestrian and a 2D representation of the scenario as an occupancy grid. First, the Fast Marching Method (FMM) is used to calculate the pedestrian arrival time for each position in the map; then, the likelihood that the pedestrian reaches those positions is estimated. The technique has been tested on synthetic and real scenarios. In all cases, accurate probability maps, as well as their representative graphs, were obtained with low computational cost.
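A rough sketch of the pipeline: compute an arrival-time map from the initial position over the occupancy grid and convert it into a reachability likelihood. Plain Dijkstra on the grid is used here as a simple stand-in for the Fast Marching Method, and the grid, speed and decay constant are assumptions.

```python
# Arrival-time map over an occupancy grid, then a simple reachability likelihood.
import heapq
import numpy as np

def arrival_time(grid, start, speed=1.0):
    # grid: 2D array, 1 = obstacle, 0 = free space; start: (row, col)
    t = np.full(grid.shape, np.inf)
    t[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        cur, (r, c) = heapq.heappop(heap)
        if cur > t[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1] and grid[nr, nc] == 0:
                nt = cur + 1.0 / speed              # one cell per step at unit speed
                if nt < t[nr, nc]:
                    t[nr, nc] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return t

grid = np.zeros((20, 20), dtype=int)
grid[5:15, 10] = 1                                  # a wall splitting the scenario
times = arrival_time(grid, start=(10, 2))
likelihood = np.exp(-0.2 * times)                   # earlier-reachable cells are more likely
likelihood[np.isinf(times)] = 0.0                   # unreachable cells get zero likelihood
print(likelihood.round(2)[10, :])
```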