987 results for Benchmark results
Abstract:
This paper introduces a new version of the multi-objective Alliance Algorithm (MOAA) applied to the optimization of the NACA 0012 airfoil section, minimizing the drag coefficient and maximizing the lift coefficient as functions of eight section shape parameters. Two software packages are used: XFoil, which evaluates each candidate airfoil section in terms of its aerodynamic efficiency, and a Free-Form Deformation tool, which manages the modifications to the section geometry. Two versions of the problem are formulated with different design-variable bounds. The performance of this approach is compared, using two indicators and a statistical test, with that obtained using NSGA-II and multi-objective Tabu Search (MOTS) to guide the optimization. The results show that the MOAA outperforms MOTS and obtains results comparable with NSGA-II on the first problem, while on the second problem NSGA-II is unable to find feasible solutions and the MOAA again outperforms MOTS. © 2013 IEEE.
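Where the abstract mentions "two indicators and a statistical test", such comparisons are typically made by repeating each optimizer several times, computing a quality indicator per run, and testing the two samples. The sketch below uses per-run hypervolume values and a Mann-Whitney U test as one plausible, assumed choice (the paper does not name its test); all numbers are placeholders.

import numpy as np
from scipy.stats import mannwhitneyu

# Placeholder indicator values: one hypervolume per independent optimizer run.
hv_moaa  = np.array([0.71, 0.69, 0.72, 0.70, 0.73])
hv_nsga2 = np.array([0.70, 0.68, 0.71, 0.69, 0.70])

stat, p = mannwhitneyu(hv_moaa, hv_nsga2, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")  # small p suggests a significant difference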
Abstract:
A new version of the Multi-objective Alliance Algorithm (MOAA) is described. The MOAA's performance is compared with that of NSGA-II using the epsilon and hypervolume indicators to evaluate the results. The benchmark functions chosen for the comparison are from the ZDT and DTLZ families and the main classical multi-objective (MO) problems. The results show that the new MOAA version is able to outperform NSGA-II on almost all the problems.
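For reference, the epsilon and hypervolume indicators used in such comparisons can be computed as in the minimal numpy sketch below, assuming a bi-objective minimization problem; the approximation front, reference front and reference point are made up for illustration.

import numpy as np

def hypervolume_2d(front, ref):
    # Area dominated by a non-dominated bi-objective front (minimization),
    # bounded above by the reference point `ref`.
    pts = front[np.argsort(front[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

def additive_epsilon(approx, reference):
    # Smallest eps such that shifting `approx` by eps in every objective
    # makes it weakly dominate every point of the reference front.
    diffs = approx[:, None, :] - reference[None, :, :]
    return diffs.max(axis=2).min(axis=0).max()

approx = np.array([[0.1, 0.9], [0.4, 0.5], [0.8, 0.2]])      # made-up front
reference = np.array([[0.1, 0.8], [0.3, 0.5], [0.8, 0.1]])   # made-up reference front
print(hypervolume_2d(approx, ref=np.array([1.0, 1.0])))
print(additive_epsilon(approx, reference))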
Abstract:
Although the definition of a single-program benchmark is relatively straightforward (a benchmark is a program plus a specific input), the definition of multi-program benchmarks is more complex. Each program may have a different runtime, and the programs may interact differently depending on how they align with each other. While prior work has focused on sampling multi-program benchmarks, little attention has been paid to defining the benchmarks in their entirety. In this work, we propose a four-tuple that formally defines multi-program benchmarks in a well-defined way. We then examine how four different classes of benchmarks, created by varying the elements of this tuple, align with real-world use cases. We evaluate the impact of these variations on real hardware and see drastic variations in results between different benchmarks constructed from the same programs. Notable differences include significant speedups versus slowdowns (e.g., +57% vs -5% or +26% vs -18%), and large differences in magnitude even when the results are in the same direction (e.g., 67% versus 11%). © 2015 IEEE.
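The abstract does not enumerate the four elements of the proposed tuple; as a purely hypothetical illustration of how "a program plus a specific input" might generalize to the multi-program case, a four-field record could look like the following (all field names are assumptions, not the paper's definition).

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class MultiProgramBenchmark:
    # Hypothetical fields only; the paper's actual four-tuple is not given here.
    programs: Tuple[str, ...]        # the co-running programs
    inputs: Tuple[str, ...]          # one specific input per program
    start_alignment: str             # how the programs are aligned in time
    termination_rule: str            # when the measured region ends

bench = MultiProgramBenchmark(
    programs=("bzip2", "mcf"),
    inputs=("input.source", "inp.in"),
    start_alignment="simultaneous start",
    termination_rule="stop when the slowest program finishes",
)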
Abstract:
Objectives: To investigate the quality of end-of-life care for patients with metastatic non-small cell lung cancer (NSCLC). Design and participants: Retrospective cohort study of patients from first hospitalisation for metastatic disease until death, using hospital, emergency department and death registration data from Victoria, Australia, between 1 July 2003 and 30 June 2010. Main outcome measures: Emergency department and hospital use; aggressiveness of care, including intensive care and chemotherapy in the last 30 days; palliative and supportive care provision; and place of death. Results: Patients with metastatic NSCLC underwent limited aggressive treatment such as intensive care (5%) and chemotherapy (< 1%) at the end of life; however, a large proportion died in acute hospitals (42%) and 61% had a length of stay greater than 14 days in the last month of life. Although 62% were referred to palliative care services, this occurred late in the illness. In a logistic regression model adjusted for year of metastasis, age, sex, metastatic site and survival, the odds ratio (OR) of dying in an acute hospital bed compared with death at home or in a hospice unit decreased with receipt of palliative care (OR, 0.25; 95% CI, 0.21–0.30) and multimodality supportive care (OR, 0.65; 95% CI, 0.56–0.75). Conclusion: Because early palliative care for patients with metastatic NSCLC is recommended, we propose that this group be considered a benchmark of quality end-of-life care. Future work is required to determine appropriate quality-of-care targets in this and other cancer patient cohorts, with a particular focus on the timeliness of palliative care engagement.
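The adjusted odds ratios quoted above come from a multivariable logistic regression; a minimal sketch of how such ORs and confidence intervals are computed is shown below, with entirely synthetic data and simplified covariates standing in for the study's dataset and model.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "died_acute": rng.integers(0, 2, n),        # 1 = died in an acute hospital bed
    "palliative_care": rng.integers(0, 2, n),   # 1 = received palliative care
    "supportive_care": rng.integers(0, 2, n),   # 1 = received multimodality supportive care
    "age": rng.integers(40, 90, n),
    "male": rng.integers(0, 2, n),
})

fit = smf.logit("died_acute ~ palliative_care + supportive_care + age + male", df).fit(disp=0)
odds_ratios = np.exp(fit.params)        # adjusted odds ratios
conf_int = np.exp(fit.conf_int())       # 95% confidence intervals on the OR scale
print(odds_ratios)
print(conf_int)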
Abstract:
This paper presents a hierarchical control structure for a wind turbine, with an event-based supervisor at the higher level and a fractional-order proportional integral (FOPI) controller at the lower level. The event-based supervisor analyzes the operating conditions to determine the state of the wind turbine. The controller operates in the full load region, and the main objective is to capture maximum power while ensuring the performance and reliability required for a wind turbine to be integrated into an electric grid. The main contribution focuses on the use of the fractional-order proportional integral controller, which benefits from the introduction of one more tuning parameter, the integral fractional order, giving it an advantage over the integer-order proportional integral (PI) controller. Comparisons between fractional-order pitch control and a default proportional integral pitch controller applied to a wind turbine benchmark are given, and simulation results in Matlab/Simulink are shown to demonstrate the effectiveness of the proposed approach.
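The extra tuning parameter is the order lambda of the fractional integral in u = Kp*e + Ki*I^lambda(e). Below is a minimal discrete-time sketch of such a controller using a truncated Grünwald-Letnikov approximation of the fractional integral; the gains, the order and the first-order test plant are illustrative assumptions, not the paper's wind turbine benchmark.

import numpy as np

def gl_weights(lam, n):
    # Grünwald-Letnikov coefficients for a fractional integral of order lam (0 < lam <= 1).
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 + lam) / j
    return w

def fopi_step(err_hist, Kp, Ki, lam, h, w):
    # err_hist: error samples, newest first; w: precomputed GL weights.
    k = len(err_hist)
    frac_int = (h ** lam) * np.dot(w[:k], err_hist)   # I^lam of the error
    return Kp * err_hist[0] + Ki * frac_int

# Tiny closed-loop demo on a first-order plant dx/dt = -x + u (illustrative only).
h, lam, Kp, Ki = 0.01, 0.8, 2.0, 1.5
N = 500
w = gl_weights(lam, N)
x, setpoint, errors = 0.0, 1.0, []
for _ in range(N):
    errors.insert(0, setpoint - x)           # keep newest error first
    u = fopi_step(np.array(errors), Kp, Ki, lam, h, w)
    x += h * (-x + u)                         # explicit Euler update of the plant
print(f"final output = {x:.3f}")             # approaches the setpoint 1.0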
Abstract:
This paper presents a comparison between proportional integral control approaches for variable-speed wind turbines. Integer- and fractional-order controllers are designed using a linearized wind turbine model, whilst the fuzzy controller also takes into account system nonlinearities. These controllers operate in the full load region, and the main objective is to extract maximum power from the wind turbine while ensuring the performance and reliability required for integration into an electric grid. The main contribution focuses on the use of the fractional-order proportional integral (FOPI) controller, which benefits from the introduction of one more tuning parameter, the integral fractional order, giving it an advantage over the integer-order proportional integral (PI) controller. A comparison between the proposed control approaches for variable-speed wind turbines is presented using a wind turbine benchmark model in the Matlab/Simulink environment. Results show that the FOPI improves system performance compared with the classical PI, and that the fuzzy PI controller outperforms both the integer- and fractional-order controllers due to its capability to deal with system nonlinearities and uncertainties. © 2014 IEEE.
Abstract:
This project aims to prepare Worten Empresas (WE) to fulfil the increasing market demand through process changes, focusing on the Portuguese market and particularly on internal B2B clients. Several methods were used to measure the service level currently provided: process mapping, resource assessment, benchmarking and a survey. The results were then compared against the service level actually desired by WE's customers, in order to identify the performance gaps in response times and in the quality of follow-up during the sales process. To bridge the identified gaps, a set of recommendations and an implementation plan were suggested to improve and monitor the customer experience. This study concluded that it is possible to fulfil the increasing level of demand and at the same time improve customer satisfaction by implementing changes at the operations level.
Abstract:
To obtain a state-of-the-art benchmark potential energy surface (PES) for the archetypal oxidative addition of the methane C-H bond to the palladium atom, we have explored this PES using a hierarchical series of ab initio methods (Hartree-Fock, second-order Møller-Plesset perturbation theory, fourth-order Møller-Plesset perturbation theory with single, double and quadruple excitations, coupled cluster theory with single and double excitations (CCSD), and with triple excitations treated perturbatively [CCSD(T)]) and hybrid density functional theory using the B3LYP functional, in combination with a hierarchical series of ten Gaussian-type basis sets, up to g polarization. Relativistic effects are taken into account either through a relativistic effective core potential for palladium or through a full four-component all-electron approach. Counterpoise-corrected relative energies of stationary points are converged to within 0.1-0.2 kcal/mol as a function of the basis-set size. Our best estimates of the kinetic and thermodynamic parameters are -8.1 (-8.3) kcal/mol for the formation of the reactant complex, 5.8 (3.1) kcal/mol for the activation energy relative to the separate reactants, and 0.8 (-1.2) kcal/mol for the reaction energy (zero-point vibrational energy-corrected values in parentheses). This agrees well with available experimental data. Our work highlights the importance of sufficient higher angular momentum polarization functions, f and g, for correctly describing metal d-electron correlation and, thus, for obtaining reliable relative energies. We show that standard basis sets, such as LANL2DZ+1f for palladium, are not sufficiently polarized for this purpose and lead to erroneous CCSD(T) results. B3LYP is associated with smaller basis set superposition errors and shows faster convergence with basis-set size, but yields relative energies (in particular, a reaction barrier) that are ca. 3.5 kcal/mol higher than the corresponding CCSD(T) values.
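For context, the counterpoise correction referred to above is the standard Boys-Bernardi scheme, in which each fragment is re-evaluated in the full basis of the complex (ghost functions on the partner fragment):

E^{\mathrm{CP}}_{\mathrm{int}} = E^{AB}_{AB} - E^{AB}_{A} - E^{AB}_{B}

where subscripts denote the system evaluated and superscripts the basis set used; the difference from the uncorrected interaction energy estimates the basis set superposition error.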
Abstract:
The European Cancer Registry-based project on hematologic malignancies (HAEMACARE), set up to improve the availability and standardization of data on hematologic malignancies in Europe, used the European Cancer Registry-based project on survival and care of cancer patients (EUROCARE-4) database to produce a new grouping of hematologic neoplasms (defined by the International Classification of Diseases for Oncology, Third Edition and the 2001/2008 World Health Organization classifications) for epidemiological and public health purposes. We analyzed survival for lymphoid neoplasms in Europe by disease group, comparing survival between different European regions by age and sex. Design and Methods: Incident neoplasms recorded between 1995 and 2002 in 48 population-based cancer registries in 20 countries participating in EUROCARE-4 were analyzed. The period approach was used to estimate 5-year relative survival rates for patients diagnosed in 2000-2002, who did not have 5 years of follow-up. Results: The 5-year relative survival rate was 57% overall but varied markedly between the defined groups. Variation in survival within the groups was relatively limited across European regions and less than in previous years. Survival differences between men and women were small. The relative survival for patients with all lymphoid neoplasms decreased substantially after the age of 50. The proportion of 'not otherwise specified' diagnoses increased with advancing age. Conclusions: This is the first study to analyze survival of patients with lymphoid neoplasms divided into groups characterized by similar epidemiological and clinical characteristics, providing a benchmark for more detailed analyses. This Europe-wide study suggests that previously noted differences in survival between regions have tended to decrease. The survival of patients with all neoplasms decreased markedly with age, while the proportion of 'not otherwise specified' diagnoses increased with advancing age. Thus the quality of diagnostic work-up and care decreased with age, suggesting that older patients may not be receiving optimal treatment.
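The relative survival rates quoted above are, in essence, the observed survival of the patient cohort divided by the survival expected for a comparable group from the general population; the trivial sketch below, with made-up numbers rather than HAEMACARE data, illustrates the quantity being reported.

# Made-up numbers for illustration only.
observed_5yr = 0.46   # observed 5-year survival of the patient cohort
expected_5yr = 0.80   # expected 5-year survival from population life tables
relative_survival = observed_5yr / expected_5yr
print(f"5-year relative survival = {relative_survival:.1%}")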
Abstract:
Quasi-Newton-Raphson minimization and conjugate gradient minimization have been used to solve the crystal structures of famotidine form B and capsaicin from X-ray powder diffraction data and to characterize the χ² agreement surfaces. One million quasi-Newton-Raphson minimizations found the famotidine global minimum with a frequency of ca. 1 in 5000 and the capsaicin global minimum with a frequency of ca. 1 in 10 000. These results, which are corroborated by conjugate gradient minimization, demonstrate the existence of numerous pathways from some of the highest points on these χ² agreement surfaces to the respective global minima, which are passable using only downhill moves. This important observation has significant ramifications for the development of improved structure determination algorithms.
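The experiment described above amounts to multistart local minimization of a χ² objective, counting how often the global minimum is reached. A small sketch of that protocol using scipy is shown below; a rugged stand-in function replaces the powder-diffraction χ², since the real objective requires the structural model and the observed pattern.

import numpy as np
from scipy.optimize import minimize

def chi2(x):
    # Stand-in for the chi^2 agreement surface: rugged, global minimum of 0 at the origin.
    return np.sum(x**2) + 2.0 * np.sum(1.0 - np.cos(3.0 * x))

rng = np.random.default_rng(1)
n_starts, hits = 1000, 0
for _ in range(n_starts):
    x0 = rng.uniform(-4, 4, size=2)                  # random starting point
    res = minimize(chi2, x0, method="BFGS")          # quasi-Newton; use method="CG" for conjugate gradient
    if res.fun < 1e-6:                               # reached the global minimum
        hits += 1
print(f"global minimum found in {hits}/{n_starts} downhill runs")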
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on a core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend for shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and non-standard task-to-core mappings can dramatically alter performance. Finding this out, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, with interpolation between results as necessary.
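As a rough illustration of the methodology in the last few sentences, the sketch below interpolates between benchmark measurements of the two work types to predict the per-step time of a hypothetical deployment; all numbers, and the assumption that compute and halo exchange do not overlap, are illustrative rather than taken from the study.

import numpy as np

# Hypothetical compute benchmarks: local grid points per task -> seconds per step.
compute_sizes = np.array([1e4, 1e5, 1e6, 1e7])
compute_times = np.array([2e-4, 1.8e-3, 2.1e-2, 2.4e-1])

# Hypothetical halo-exchange benchmarks: message size (bytes) -> seconds per exchange.
halo_sizes = np.array([1e3, 1e4, 1e5, 1e6])
halo_times = np.array([5e-6, 1.2e-5, 9e-5, 8e-4])

def predict_step_time(global_points, tasks, halo_bytes_per_task):
    local_points = global_points / tasks
    t_compute = np.interp(local_points, compute_sizes, compute_times)
    t_halo = np.interp(halo_bytes_per_task, halo_sizes, halo_times)
    return t_compute + t_halo   # assumes no compute/communication overlap

print(predict_step_time(global_points=4e6, tasks=32, halo_bytes_per_task=4e4))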
Abstract:
The purpose of this paper is to analyze the performance of Histograms of Oriented Gradients (HOG) as descriptors for traffic sign recognition. The test dataset consists of speed-limit traffic signs because of their high inter-class similarities. HOG features of speed-limit signs, extracted from different traffic scenes, were computed, and a Gentle AdaBoost classifier was invoked to evaluate the different features. The performance of HOG was tested with a dataset consisting of 1727 Swedish speed-sign images. Different numbers of HOG features per descriptor, ranging from 36 up to 396 features, were computed for each traffic sign in the benchmark testing. The results show that HOG features achieve a high classification rate, with a Gentle AdaBoost classification rate of 99.42%, and that they are suitable for real-time traffic sign recognition. However, it was found that changing the number of orientation bins has an insignificant effect on the classification rate. In addition, HOG descriptors are not robust with respect to sign orientation.
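A minimal sketch of this kind of pipeline is shown below, with random synthetic patches standing in for the Swedish speed-sign images and scikit-learn's AdaBoostClassifier standing in for Gentle AdaBoost (which scikit-learn does not provide directly); the HOG parameters are illustrative choices.

import numpy as np
from skimage.feature import hog
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
images = rng.random((200, 32, 32))        # placeholder grey-level patches
labels = rng.integers(0, 2, 200)          # placeholder class labels

# One HOG descriptor per image.
features = np.array([
    hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for img in images
])

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2%}")   # near chance on random data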
Abstract:
This work studies the return differential between equity funds benchmarked against fixed-income indices and equity funds benchmarked against equity indices. Choosing a fixed-income index as the benchmark for an equity fund (FIA) tends, on average, to be worse for the investor, because it creates a potential financial gain for the manager that is not associated with the real value the manager creates. Therefore, since managers' compensation through the performance fee depends in part on the chosen benchmark, funds with fixed-income benchmarks should deliver better performance in order to compensate their investors for this cost. The results suggest that managers of funds with fixed-income benchmarks obtain a higher return, net of performance and management fees, for their investors, and also show a lower correlation with the Bovespa Index.
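The cost being discussed can be illustrated with simple performance-fee arithmetic; the fee schedule and the returns below are made-up assumptions, not figures from the study.

def net_return(gross, benchmark, mgmt_fee=0.02, perf_fee=0.20):
    # Performance fee charged only on the excess over the chosen benchmark
    # (one common schedule, assumed here for illustration).
    excess = max(0.0, gross - mgmt_fee - benchmark)
    return gross - mgmt_fee - perf_fee * excess

gross = 0.18
print(net_return(gross, benchmark=0.06))   # low fixed-income benchmark: larger fee, lower net return
print(net_return(gross, benchmark=0.15))   # equity-index benchmark: smaller fee, higher net return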
Abstract:
The first LHC pp collisions at centre-of-mass energies of 0.9 and 2.36 TeV were recorded by the CMS detector in December 2009. The trajectories of charged particles produced in the collisions were reconstructed using the all-silicon Tracker, and their momenta were measured in the 3.8 T axial magnetic field. Results from the Tracker commissioning are presented, including studies of timing, efficiency, signal-to-noise, resolution, and ionization energy loss. Reconstructed tracks are used to benchmark the performance in terms of track and vertex resolutions, reconstruction of decays, estimation of ionization energy loss, as well as identification of photon conversions, nuclear interactions, and heavy-flavour decays.
Abstract:
Open surgical repair of complex abdominal aortic aneurysms requires more extensive dissection and aortic clamping above the renal or mesenteric arteries. Although the results of open surgical series have varied, morbidity and mortality are higher than for infrarenal aortic aneurysm repair. Potential complications include renal insufficiency, mesenteric ischemia, multisystem organ failure, and death. Although endovascular treatment with fenestrated and branched endografts might potentially decrease the risk of complications and mortality, its role is not yet defined and the technology is not widely available. Issues related to the durability of the procedure and to secondary interventions might limit its application to patients at higher risk or those with hostile anatomy. This article summarizes the clinical results of open surgical repair of pararenal abdominal aortic aneurysms to provide a benchmark for comparison with the results of endovascular treatment using fenestrated and branched techniques. © Annals of Vascular Surgery Inc.