951 results for Worst-case execution-time


Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

Global competitiveness has increased significantly in the last decade and, as a consequence, companies are always looking to develop better supply chain processes in order to keep their costs competitive and stay in business. Logistics operations represent a large part of a product's final cost; transportation alone can sometimes account for more than fifty percent of it. Solutions to cutting and packing problems consist of simple, low-investment actions, such as improving the arrangement of the transported load in order to reduce both material and space waste. For these reasons, the objective of this paper is to present and analyze a real application of a mathematical model to solve a manufacturer's pallet-loading problem, comparing the results of the model execution with the solution proposed by the company studied. This study not only finds the best arrangement for loading pallets (which optimizes the storage and transportation processes), but also checks the effectiveness of existing models in the literature. A computational package was used, consisting of the GAMS modeling language with the CPLEX optimization solver, together with two other commercial software packages; all of them indicate that an exact mathematical model for solving this kind of problem in a two-dimensional approach is difficult to find and entails a long execution time. However, the study and the software results indicate that the problem can easily be solved by heuristics in a shorter execution time.
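The two-dimensional pallet-loading problem the abstract describes can also be attacked with simple block heuristics, as its conclusion suggests. The sketch below is my own illustration of one such heuristic (a "two-block" layout), not the paper's GAMS/CPLEX model or the company's method:

```python
# Pack identical (w x l) boxes on a (W x L) pallet: try both orientations
# for a main block of boxes, then fill a leftover strip with rotated boxes.
# A crude heuristic; exact models can sometimes place a few more boxes.

def block_heuristic(W, L, w, l):
    """Return the number of boxes placed by the best two-block layout."""
    best = 0
    for bw, bl in ((w, l), (l, w)):            # orientation of the main block
        if bw > W or bl > L:
            continue
        main = (W // bw) * (L // bl)           # boxes in the main block
        strip_L = L - (L // bl) * bl           # leftover strip along the length
        strip_W = W - (W // bw) * bw           # leftover strip along the width
        rot1 = (W // bl) * (strip_L // bw)     # rotated boxes in the L-strip
        rot2 = (strip_W // bl) * (L // bw)     # rotated boxes in the W-strip
        best = max(best, main + rot1, main + rot2)
    return best

# e.g. a 1000 x 1000 mm pallet with 200 x 300 mm boxes
n = block_heuristic(1000, 1000, 200, 300)  # 15 boxes (the optimum here is 16)
```

The gap between the heuristic's 15 boxes and the known optimum of 16 for this instance illustrates the trade-off the paper reports: heuristics are fast but the exact model can recover the last few percent of pallet area.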

Relevance:

100.00%

Publisher:

Abstract:

In worldwide aviation operations, bird collisions with aircraft and ingestions into engine inlets present safety hazards and cause financial loss through equipment damage, loss of service and disruption to operations. The problem is encountered by all types of aircraft, both military and commercial. Modern aircraft engines have achieved a high level of reliability, while manufacturers and users continually strive to further improve the safety record. A major safety concern today is common-cause events involving significant power loss on more than one engine. These are externally inflicted occurrences, the most frequent being encounters with flocks of birds. These encounters occur most frequently during flight operations on or near airports, close to the ground rather than at cruise altitude. This paper focuses on the increasing threat to aircraft and engines posed by the recorded growth in goose populations in North America. Service data show that goose strikes are increasing, especially in North America, consistent with the growing resident goose populations estimated by the United States Department of Agriculture (USDA). Airport managers, along with the governmental authorities, need to develop a strategy to address this large flocking bird issue. This paper also presents statistics on the overall status of the bird threat for birds of all sizes in North America relative to other geographic regions. Overall, the data show that Canada and the USA have made marked improvements in controlling the threat from damaging birds - except for the increase in goose strikes. To reduce bird ingestion hazards, more aggressive corrective measures are needed in international air transport to reduce the chances of serious incidents or accidents from bird ingestion encounters. Air transport authorities must continue to take preventative and avoidance actions to counter the threat of birdstrikes to aircraft.
The primary objective of this paper is to increase awareness of, and focus attention on, the safety hazards presented by large flocking birds such as geese. In the worst case, multiple engine power loss due to large bird ingestion could result in an off-airport forced landing accident. Hopefully, such awareness will prompt governmental regulatory agencies to address the hazards associated with growing populations of geese in North America.


Relevance:

100.00%

Publisher:

Abstract:

There are several techniques to characterize the elastic modulus of wood; those based on the natural frequencies of vibration stand out as non-destructive techniques, producing results that can be repeated and compared over time. This study reports on the effectiveness of testing methods based on the natural frequencies of vibration versus static bending for obtaining the elastic properties of reforested structural wood components usually employed in civil construction. The following components were evaluated: 24 beams of Eucalyptus sp. with nominal dimensions of 40 x 60 x 2,000 mm and 14 beams of Pinus oocarpa with nominal dimensions of 45 x 90 x 2,300 mm, both untreated; 30 boards with nominal dimensions of 40 x 240 x 2,010 mm and 30 boards with nominal dimensions of 40 x 240 x 3,050 mm, both of Pinus oocarpa and treated with chromated copper arsenate (CCA) preservative. The results obtained in this work show good correlation with those obtained by the static bending mechanical method, especially when applying the natural frequency of longitudinal vibration. The use of the longitudinal frequency was reliable and practical, and is therefore recommended for determining the modulus of elasticity of structural wood elements. It was also found that the longitudinal-frequency method requires neither specific supports for the specimens nor prior calibration, reducing the execution time and making it possible to test many samples.
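The longitudinal-frequency method rests on the standard free-free bar relation E = 4·ρ·(L·f₁)², where f₁ is the first longitudinal natural frequency, ρ the density and L the specimen length. A minimal sketch with illustrative values (not the paper's measurements):

```python
# Dynamic modulus of elasticity from the first longitudinal natural
# frequency of a free-free prismatic bar: E = rho * c^2 with wave speed
# c = 2 * L * f1, hence E = 4 * rho * (L * f1)**2.

def modulus_from_longitudinal_frequency(rho, length, f1):
    """rho in kg/m^3, length in m, f1 in Hz; returns E in Pa."""
    return 4.0 * rho * (length * f1) ** 2

# e.g. a 2.0 m beam, density 700 kg/m^3, measured f1 = 1000 Hz
E = modulus_from_longitudinal_frequency(700.0, 2.0, 1000.0)  # 1.12e10 Pa = 11.2 GPa
```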

Relevance:

100.00%

Publisher:

Abstract:

Current scientific applications produce large amounts of data, whose processing, handling and analysis require large-scale computing infrastructures such as clusters and grids. In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. To achieve this goal, distributed storage systems have employed techniques of data replication, migration, distribution, and access parallelism. However, the main drawback of those studies is that they do not take application behavior into account when performing data access optimization. This limitation motivated the present paper, which applies strategies for the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Based on these properties, the approach selects modeling techniques to represent the series and produce predictions, which are later used to optimize data access operations. This new approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that the approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
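As a toy illustration of property-based model selection (not the paper's actual classifiers or predictors), one can inspect a simple property of the series, its trend strength, and pick a predictor accordingly; the threshold below is an arbitrary assumption:

```python
# Classify a short access-pattern series by trend strength, then predict
# the next value with a matching model: linear extrapolation for trended
# series, a recent-mean fallback otherwise.

def predict_next(series, trend_threshold=0.5):
    """Least-squares slope decides the model; returns the predicted next value."""
    n = len(series)
    mean_x = (n - 1) / 2
    mean_y = sum(series) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(series))
    var = sum((x - mean_x) ** 2 for x in range(n))
    slope = cov / var
    spread = max(series) - min(series) or 1.0   # avoid dividing by zero
    if abs(slope) * n / spread > trend_threshold:
        return mean_y + slope * (n - mean_x)    # extrapolate the fitted line
    return sum(series[-3:]) / len(series[-3:])  # recent mean

predict_next([1, 2, 3, 4, 5])       # trended series -> 6.0
predict_next([5.0, 5.0, 5.0, 5.0])  # flat series    -> 5.0
```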

Relevance:

100.00%

Publisher:

Abstract:

Abstract. Background: The multi-relational approach has emerged as an alternative for analyzing structured data such as relational databases, since it allows applying data mining to multiple tables directly, thus avoiding expensive join operations and semantic losses; this work proposes an algorithm with a multi-relational approach. Methods: Aiming to compare the performance of the traditional and multi-relational approaches for mining association rules, this paper presents an empirical study of PatriciaMine, a traditional algorithm, and its proposed multi-relational counterpart, MR-Radix. Results: This work showed the performance advantages of the multi-relational approach over several tables, which avoids the high cost of join operations across multiple tables and the associated semantic losses. MR-Radix ran faster than PatriciaMine, despite handling complex multi-relational patterns. Memory usage shows a more conservative growth curve for MR-Radix than for PatriciaMine: an increase in the number of frequent items in MR-Radix does not produce the significant memory growth observed in PatriciaMine. Conclusion: The comparative study between PatriciaMine and MR-Radix confirmed the efficacy of the multi-relational approach in the data mining process, both in terms of execution time and memory usage. Moreover, the proposed multi-relational algorithm, unlike other algorithms of this approach, is efficient for use on large relational databases.

Relevance:

100.00%

Publisher:

Abstract:

Modern GPUs are well suited to intensive computational tasks and massive parallel computation. Sparse matrix-vector multiplication and the linear triangular solver are among the most important and heavily used kernels in scientific computation, and several challenges in developing a high-performance kernel with these two modules are investigated. The main interest is to solve linear systems derived from elliptic equations with triangular elements; the resulting linear system has a symmetric positive definite matrix. The sparse matrix is stored in the compressed sparse row (CSR) format. A CUDA algorithm is proposed to execute the matrix-vector multiplication directly on the CSR format. A dependence-tree algorithm is used to determine which variables the linear triangular solver can compute in parallel. To increase the number of parallel threads, a graph coloring algorithm is implemented to reorder the mesh numbering in a pre-processing phase. The proposed method is compared with available parallel and serial libraries. The results show that the proposed method improves the computational cost of the matrix-vector multiplication, and the pre-processing associated with the triangular solver needs to be executed only once. The conjugate gradient method was implemented and showed a similar convergence rate for all the compared methods, with the proposed method achieving a significantly smaller execution time.
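The CSR layout and the row-wise matrix-vector product it enables can be sketched in plain Python; in a CUDA kernel of the kind the abstract describes, each iteration of the outer loop would map to one thread. The 2x2 SPD matrix below is an illustrative example, not data from the paper:

```python
# y = A @ x with A in compressed sparse row (CSR) format:
#   row_ptr[i]..row_ptr[i+1] delimits row i's entries in col_idx/values.

def csr_matvec(row_ptr, col_idx, values, x):
    """Dense result of multiplying a CSR matrix by a dense vector."""
    y = []
    for row in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[row], row_ptr[row + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

# SPD example: [[4, 1], [1, 3]] @ [1, 2] = [6, 7]
y = csr_matvec([0, 2, 4], [0, 1, 0, 1], [4.0, 1.0, 1.0, 3.0], [1.0, 2.0])
```

Rows are independent, which is why this product parallelizes trivially, while the triangular solve needs the dependence-tree and coloring analysis described above.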

Relevance:

100.00%

Publisher:

Abstract:

[EN] New demand-side management studies require new approaches in which the electrical grid is analyzed as a complex system, formed by a large number of strongly interlinked entities. This work takes on the challenge of adding interaction capabilities to a simulation of complex systems at execution time. But how can a complex system be represented in such a way that it is easily manageable by a person? And how can a simple way of altering the simulation be offered? From this idea was born Simulation Gateway Interface, a framework that makes simulations accessible through a graphical interface.

Relevance:

100.00%

Publisher:

Abstract:

[EN] When analysing the seismic response of pile groups, a vertically-incident wavefield is usually employed even though it does not necessarily correspond to the worst-case scenario. This work aims to study the influence of both the type of seismic body wave and its angle of incidence on the dynamic response of pile foundations.

Relevance:

100.00%

Publisher:

Abstract:

[EN]A new parallel algorithm for simultaneous untangling and smoothing of tetrahedral meshes is proposed in this paper. We provide a detailed analysis of its performance on shared-memory many-core computer architectures. This performance analysis includes the evaluation of execution time, parallel scalability, load balancing, and parallelism bottlenecks. Additionally, we compare the impact of three previously published graph coloring procedures on the performance of our parallel algorithm. We use six benchmark meshes with a wide range of sizes. Using these experimental data sets, we describe the behavior of the parallel algorithm for different data sizes. We demonstrate that this algorithm is highly scalable when it runs on two different high-performance many-core computers with up to 128 processors...
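The coloring step behind such parallel mesh algorithms can be sketched as a generic greedy coloring of the adjacency graph: nodes of the same color share no edge, so they can be untangled or smoothed concurrently without write conflicts. This is an illustrative sketch, not one of the three published coloring procedures the paper compares:

```python
# Greedy graph coloring: visit nodes in a fixed order and give each the
# smallest color not used by an already-colored neighbor. Each color class
# is an independent set, i.e. a batch of safely parallel updates.

def greedy_coloring(adjacency):
    """adjacency: dict node -> iterable of neighbors. Returns node -> color."""
    colors = {}
    for node in adjacency:
        taken = {colors[n] for n in adjacency[node] if n in colors}
        c = 0
        while c in taken:          # smallest free color
            c += 1
        colors[node] = c
    return colors

# a triangle (needs 3 colors) plus a pendant vertex
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
cols = greedy_coloring(g)
```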

Relevance:

100.00%

Publisher:

Abstract:

In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected over many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from the collected disease counts and the expected disease counts calculated from reference population disease rates, an SMR is derived in each area as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the small population underlying the area or the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classic and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method, which focuses on multiple testing control, without however abandoning the preliminary-study perspective required of an analysis of SMR indicators. We implement control of the FDR, a quantity widely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value have weak power in small areas, where the expected number of disease cases is small. Moreover, the tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous.
The Bayesian paradigm offers a way to overcome the inappropriateness of p-value based methods. Another peculiarity of the present work is to propose a hierarchical full Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations on the map under study, rather than just the single observation. This improves the test power in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (FDR-hat) can be calculated for any set of b_i's relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the b_i's themselves. FDR-hat can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that FDR-hat does not exceed a prefixed value; we call these FDR-hat based decision (or selection) rules. The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate, over-estimation of the FDR causing a loss of power and under-estimation of the FDR producing a loss of specificity. Moreover, our model has the interesting feature of still providing an estimate of the relative risk values, as in the Besag, York and Mollié model (1991).
A simulation study was set up to evaluate the model's FDR estimation accuracy, the sensitivity and specificity of the decision rule, and the goodness of the relative risk estimates. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true and the risk level in the latter areas. In summarizing the simulation results we always consider the FDR estimation in sets constituted by all the b_i's lower than a threshold t. We show graphs of FDR-hat and of the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between FDR-hat and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against FDR-hat we can check the sensitivity and specificity of the corresponding FDR-hat based decision rules. To investigate the over-smoothing of the relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained with both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in the scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of the FDR-hat based decision rules is generally low but the specificity is high, and the use of an FDR-hat = 0.05 or FDR-hat = 0.10 based selection rule can be suggested.
In cases where the number of true alternative hypotheses (the number of truly high-risk areas) is small, FDR values up to 0.15 are also well estimated, and FDR-hat = 0.15 based decision rules gain power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity for an FDR-hat = 0.05 based decision rule. In such scenarios FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 based decision rules cannot be suggested, because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and the FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
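The FDR-based selection rule sketched in the abstract (reject the most suspect areas while the average posterior null probability of the rejected set stays below a prefixed value) can be illustrated as follows; the function, its inputs and the example probabilities are illustrative, assuming the per-area posterior null probabilities have already been estimated by MCMC:

```python
# Given b[i] = posterior probability that area i is NOT at elevated risk
# (null true), select as many areas as possible while the estimated FDR
# (the mean b[i] over the selected set) stays at or below a target.

def fdr_selection(b, target_fdr):
    """Return (selected area indices, estimated FDR of the selected set)."""
    order = sorted(range(len(b)), key=lambda i: b[i])   # most suspect first
    selected, total = [], 0.0
    for i in order:
        if (total + b[i]) / (len(selected) + 1) > target_fdr:
            break                                       # adding i would exceed target
        selected.append(i)
        total += b[i]
    return selected, (total / len(selected) if selected else 0.0)

# illustrative posterior null probabilities for 5 areas, target FDR 0.05
sel, fdr_hat = fdr_selection([0.01, 0.40, 0.03, 0.90, 0.08], 0.05)  # areas 0, 2, 4
```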

Relevance:

100.00%

Publisher:

Abstract:

Hybrid vehicles represent the future for automakers, since they improve fuel economy and reduce pollutant emissions. A key component of the hybrid powertrain is the Energy Storage System, which determines the vehicle's ability to store and reuse energy. Although electrified Energy Storage Systems (ESS), based on batteries and ultracapacitors, are a proven technology, Alternative Energy Storage Systems (AESS), based on mechanical, hydraulic and pneumatic devices, are gaining interest because they make low-cost mild-hybrid vehicles possible. Currently, most of the design-methodology literature focuses on electric ESS, which is not suitable for AESS design. In this context, The Ohio State University has developed an Alternative Energy Storage System design methodology. This work focuses on the development of a driving cycle analysis methodology, a key component of the Alternative Energy Storage System design procedure. The proposed methodology is based on a statistical approach to analyzing driving schedules that represent typical vehicle use. Driving data are broken up into a sequence of power events, namely traction and braking events, and for each of them energy-related and dynamic metrics are calculated. By means of a clustering process and statistical synthesis methods, statistically relevant metrics are determined. These metrics define cycle-representative braking events. By using these events as inputs to the Alternative Energy Storage System design methodology, different system designs are obtained, each characterized by attributes such as system volume and weight. In the last part of the work, the designs are evaluated in simulation by introducing and calculating a metric related to the energy conversion efficiency. Finally, the designs are compared accounting for attributes and efficiency values.
To automate the driving data extraction and synthesis process, a dedicated Matlab script has been developed. Results show that the driving cycle analysis methodology, based on the statistical approach, allows extracting and synthesizing cycle-representative data. The designs based on the cycle's statistically relevant metrics are properly sized and have satisfactory efficiency values with respect to expectations. An exception is the design based on the cycle's worst-case scenario, corresponding to the approach adopted by conventional electric ESS design methodologies; in this case, a heavy system with poor efficiency is produced. The proposed new methodology appears to be a valid and consistent support for Alternative Energy Storage System design.
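The first step of the driving-cycle analysis, splitting a power trace into traction and braking events with a per-event energy metric, can be sketched as below. The sign convention (positive power = traction) and the unit timestep are assumptions for illustration, not taken from the thesis:

```python
# Split a sampled power trace (W) into maximal same-sign runs and report
# each run as a (kind, energy in joules) event, where kind is 'traction'
# for positive power and 'braking' for negative power.

def segment_events(power, dt=1.0):
    """Return a list of (kind, energy_J) tuples, one per power event."""
    events, kind, energy = [], None, 0.0
    for p in power:
        k = 'traction' if p > 0 else 'braking' if p < 0 else None
        if k != kind:                     # event boundary (or idle sample)
            if kind is not None:
                events.append((kind, energy))
            kind, energy = k, 0.0
        if k is not None:
            energy += abs(p) * dt         # rectangle-rule energy integral
    if kind is not None:
        events.append((kind, energy))
    return events

ev = segment_events([10.0, 20.0, -5.0, -5.0, 0.0, 15.0])
# -> [('traction', 30.0), ('braking', 10.0), ('traction', 15.0)]
```

Per-event metrics like these energies would then feed the clustering and statistical synthesis steps the abstract describes.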

Relevance:

100.00%

Publisher:

Abstract:

The low wear rate and high mechanical strength of cross-linked, additive-stabilized UHMWPE have drawn attention to the use of femoral heads with larger diameters to reduce the risk of impingement and dislocation. This requires acetabular liners thinner than those currently on the market. In this light, particular attention must be paid to the mechanical strength of thinner liners, and to the effectiveness of vitamin E in countering the oxidation that follows the cross-linking process. The aim of this work is therefore to study a liner thinner than those currently on the market and verify its performance. The study covers a range of sizes with thicknesses of 5.6 mm and 3.6 mm (including a liner produced ad hoc with a thickness of 3.6 mm), from which the worst case is isolated via FEM analysis. Experimental tests assess the mechanical strength of the worst case, and the deformations and oxidation of the specimen are monitored. The FEM studies showed that the stresses are on average the same in all specimens, although slightly higher stresses were recorded in the intermediate size. Contrary to expectations, the size with the maximum stresses is size F (not the liner with the smallest diameter). After tuning the FEM model, a friction value lower than expected was identified: the cup-liner friction values reported in the literature are larger than the value identified through the FEM simulations. Based on the FEM results, the worst case is isolated and subjected to a 6-million-cycle dynamic test to evaluate its performance. The thinner liners showed no visible damage, and their structural integrity was unchanged.
These preliminary findings are confirmed by mechanical probe inspection and by chemical analysis, neither of which revealed particular problems. These checks show that the effect of creep in the accelerated test is practically negligible and that there are no significant dimensional variations. The chemical analysis of the specimens likewise shows no oxidation: the oxidation values of the tested liner are similar to those of the untested specimen, and even when compared with the virgin UHMWPE liner, the latter shows much higher oxidation. This proves that vitamin E inhibits the free radicals, which therefore do not cause the oxidation, and consequent failure, of the liner. The results show that the specimens suffer no significant damage; the elastic deformations monitored in the dynamic test are practically nil, as are the creep effects measured from the mechanical probe data. Thanks to the presence of vitamin E there is no oxidation; what little is detected is close to zero and attributable to machining. Based on these considerations, it can be stated that reducing the liner thickness from 5.6 mm to 3.6 mm has no critical consequences on liner behavior and does not lead to device failure.

Relevance:

100.00%

Publisher:

Abstract:

Data Distribution Management (DDM) is a core part of the High Level Architecture standard; its goal is to optimize the resources used by simulation environments to exchange data. It has to filter and match the set of information generated during a simulation so that each federate, i.e. each simulation entity, only receives the information it needs. It is important that this is done quickly and accurately, in order to achieve better performance and avoid the transmission of irrelevant data; otherwise network resources may saturate quickly. The main topic of this thesis is the implementation of an impartial (super partes) DDM testbed. It evaluates the quality of DDM approaches of all kinds: it supports both region-based and grid-based approaches, and it may accommodate other, still unknown, methods as well. It ranks them using three factors: execution time, memory usage and distance from the optimal solution. A prearranged set of instances is already available, but we also allow the creation of instances with user-provided parameters. The thesis is structured as follows. We start by introducing what DDM and HLA are and what they do in detail. Then, in the first chapter, we describe the state of the art, providing an overview of the best-known resolution approaches and pseudocode for the most interesting ones. The third chapter describes how the testbed we implemented is structured. In the fourth chapter we present and compare the results obtained by executing the four approaches we implemented. The result of the work described in this thesis can be downloaded from SourceForge at the following link: https://sourceforge.net/projects/ddmtestbed/. It is licensed under the GNU General Public License version 3.0 (GPLv3).
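The matching task that DDM must solve can be illustrated with the brute-force baseline: regions are boxes given by per-dimension extents, and an update region matches a subscription region when the boxes overlap in every dimension. A sketch under these assumptions (the region-based and grid-based approaches the testbed evaluates aim to beat this O(n·m) scan):

```python
# Brute-force DDM matching: test every (update, subscription) pair.
# A region is a list of (low, high) extents, one pair per dimension.

def regions_overlap(a, b):
    """Two boxes overlap iff their intervals overlap on every dimension."""
    return all(al <= bh and bl <= ah for (al, ah), (bl, bh) in zip(a, b))

def brute_force_matching(updates, subscriptions):
    """Return all (update_index, subscription_index) matching pairs."""
    return [(i, j)
            for i, u in enumerate(updates)
            for j, s in enumerate(subscriptions)
            if regions_overlap(u, s)]

# two 2-D update regions vs two 2-D subscription regions
matches = brute_force_matching(
    [[(0, 5), (0, 5)], [(10, 12), (0, 2)]],
    [[(4, 8), (4, 8)], [(6, 9), (0, 3)]])   # only update 0 meets subscription 0
```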