502 results for BENCHMARKS


Relevance:

10.00%

Publisher:

Abstract:

In this action research study of my classroom of 7th-grade students enrolled in Pre-Algebra (an 8th-grade course), I investigated: the rate of homework completion when homework was not counted as part of the academic grade, cognizant self-assessment and its effect on mastery of objectives, and the use of self-assessment to guide instruction and the re-teaching of classroom objectives. I learned that without sufficient accountability, homework completion rates drop over time. Similarly, students can be overconfident in their abilities yet unmoved when their summative reports do not match their initially perceived formative benchmarks. Finally, due in part to our society’s reactive nature, students find it more practical to play catch-up than to stay caught up. As a result of this research, I plan to create, with the help of the students, an accountability policy to help students stay caught up in their understanding of the objectives, and to allow the additional time and energy spent by both student and teacher to address gaps in student knowledge within a day or two rather than a week or two later.

Relevance:

10.00%

Publisher:

Abstract:

Observability measures the support of computer systems for accurately capturing, analyzing, and presenting (collectively, observing) internal information about the systems. Observability frameworks play important roles in program understanding, troubleshooting, performance diagnosis, and optimization. However, traditional solutions are either expensive or coarse-grained, consequently compromising their utility for today’s increasingly complex software systems. New solutions are emerging for VM-based languages due to the full control language VMs have over program execution. Existing solutions of this kind, nonetheless, still lack flexibility, have high overhead, or provide limited context information for developing powerful dynamic analyses. In this thesis, we present a VM-based infrastructure, called the marker tracing framework (MTF), to address the deficiencies of existing solutions and provide better observability for VM-based languages. MTF serves as a solid foundation for implementing fine-grained, low-overhead program instrumentation. Specifically, MTF allows analysis clients to: 1) define custom events with rich semantics; 2) specify precisely the program locations where the events should trigger; and 3) adaptively enable/disable the instrumentation at runtime. In addition, MTF-based analysis clients are more powerful because they have access to all information available to the VM. To demonstrate the utility and effectiveness of MTF, we present two analysis clients: 1) dynamic typestate analysis with adaptive online program analysis (AOPA); and 2) selective probabilistic calling context analysis (SPCC). In addition, we evaluate the runtime performance of MTF and the typestate client with the DaCapo benchmarks. The results show that: 1) MTF has acceptable runtime overhead when tracing moderate numbers of marker events; 2) AOPA is highly effective in reducing the event frequency for the dynamic typestate analysis; and 3) language VMs can be exploited to offer greater observability.
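
A purely illustrative sketch of the kind of interface such an analysis client might use is shown below; MTF itself lives inside a language VM, and the names here (MarkerEvent, TracingClient, attach, fire) are hypothetical, not the actual MTF API. The point is only the three capabilities listed above: custom events, precise trigger locations, and runtime enabling/disabling.

```python
# Illustrative-only sketch (not the actual MTF API): a minimal client that
# defines a custom marker event, binds it to a program location, and can
# enable/disable the instrumentation at runtime.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class MarkerEvent:
    """A custom event with user-defined semantics (hypothetical structure)."""
    name: str
    payload_keys: Tuple[str, ...] = ()


class TracingClient:
    """Toy stand-in for an analysis client: registers handlers per location."""

    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[dict], None]]] = {}
        self._enabled: Dict[str, bool] = {}

    def attach(self, location: str, event: MarkerEvent,
               handler: Callable[[dict], None]) -> None:
        # 'location' would be a precise program point (e.g. method + bytecode index).
        self._handlers.setdefault(location, []).append(handler)
        self._enabled[location] = True

    def set_enabled(self, location: str, enabled: bool) -> None:
        # Adaptive enabling/disabling of instrumentation at runtime.
        self._enabled[location] = enabled

    def fire(self, location: str, payload: dict) -> None:
        # Called by the (hypothetical) VM hook when execution reaches 'location'.
        if self._enabled.get(location, False):
            for handler in self._handlers.get(location, []):
                handler(payload)


# Usage sketch: trace calls to a hypothetical File.close site.
client = TracingClient()
close_event = MarkerEvent("file-close", payload_keys=("object_id",))
client.attach("File.close:entry", close_event,
              lambda p: print("close observed for", p["object_id"]))
client.fire("File.close:entry", {"object_id": 42})
client.set_enabled("File.close:entry", False)  # disable once enough data is gathered
```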

Relevance:

10.00%

Publisher:

Abstract:

Resistance of cacao progenies to Ceratocystis wilt. Seedlings from open-pollinated progenies of 20 cocoa (Theobroma cacao) clones were inoculated with the fungus Ceratocystis cacaofunesta, the causal agent of Ceratocystis wilt, and their response was assessed based on the percentage of dead plants. Open-pollinated progenies of clones TSH1188 and VB1151 were used as standards for resistance, while CCN51 and SJ02 served as standards for susceptibility. Contrasts between these benchmarks and the progenies studied were estimated and evaluated by Dunnett's t test (alpha = 0.05). The progenies showed different responses to C. cacaofunesta, and it was possible to classify them into three groups: resistant (FCB01, CSG70, BOBA01, VB902, TSH1188, VB1151, PS1319 and MAC01), moderately susceptible (HW25, PM02, FA13, PH15, M05 and BJ11) and susceptible (CCN51, FB206, PH16, SJ02, CCN10 and FSU77).
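
As a rough illustration of the kind of contrast-against-a-standard comparison described above, the sketch below runs Dunnett's test on made-up mortality percentages (not the study's data); it assumes SciPy >= 1.11 for scipy.stats.dunnett.

```python
# Minimal sketch (hypothetical data) of a Dunnett-style comparison of progenies
# against a resistance standard, in the spirit of the analysis described above.
# Requires SciPy >= 1.11 for scipy.stats.dunnett.

import numpy as np
from scipy.stats import dunnett

rng = np.random.default_rng(0)

# Percentage of dead plants per replicate (made-up numbers, not the study's data).
control_tsh1188 = rng.normal(15, 5, size=6)   # resistance standard
progeny_a = rng.normal(20, 5, size=6)         # candidate progeny A
progeny_b = rng.normal(60, 8, size=6)         # candidate progeny B

result = dunnett(progeny_a, progeny_b, control=control_tsh1188)
for name, p in zip(["progeny_a", "progeny_b"], result.pvalue):
    verdict = "differs from standard" if p < 0.05 else "comparable to standard"
    print(f"{name}: p = {p:.3f} -> {verdict}")
```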

Relevance:

10.00%

Publisher:

Abstract:

Current SoC design trends are characterized by the integration of a growing number of IPs targeting a wide range of application fields. Such multi-application systems are constrained by a set of requirements. In this scenario, networks-on-chip (NoCs) are becoming more important as the on-chip communication structure. Designing an optimal NoC that satisfies the requirements of each individual application requires the specification of a large set of configuration parameters, leading to a wide solution space. It has been shown that IP mapping is one of the most critical parameters in NoC design, strongly influencing SoC performance. IP mapping has been solved for single-application systems using single- and multi-objective optimization algorithms. In this paper we propose the use of a multi-objective adaptive immune algorithm (M(2)AIA), an evolutionary approach, to solve the multi-application NoC mapping problem. Latency and power consumption were adopted as the target objective functions. To compare the efficiency of our approach, our results are compared with those of genetic and branch-and-bound multi-objective mapping algorithms. We tested 11 well-known benchmarks, including random and real applications, combining up to 8 applications on the same SoC. The experimental results show that M(2)AIA reduces power consumption and latency by, on average, 27.3% and 42.1% compared to the branch-and-bound approach and by 29.3% and 36.1% compared to the genetic approach.
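
The sketch below illustrates only the multi-objective framing: candidate IP-to-tile mappings are scored on latency and power and filtered to the Pareto-optimal set. The cost functions and the random mappings are placeholders, not the NoC models or the M(2)AIA algorithm from the paper.

```python
# Illustrative sketch of the multi-objective view of the mapping problem:
# each candidate IP-to-tile mapping is scored on (latency, power), and only
# Pareto-optimal candidates are kept. Cost functions are placeholders.

from typing import Dict, List, Tuple
import random

def evaluate(mapping: Dict[str, int]) -> Tuple[float, float]:
    # Placeholder objectives: pretend latency/power depend on tile indices.
    latency = sum(abs(tile - i) for i, tile in enumerate(mapping.values()))
    power = sum(tile * 0.1 for tile in mapping.values())
    return latency, power

def dominates(a: Tuple[float, float], b: Tuple[float, float]) -> bool:
    # a dominates b if it is no worse in every objective and better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates: List[Dict[str, int]]) -> List[Dict[str, int]]:
    scored = [(m, evaluate(m)) for m in candidates]
    return [m for m, s in scored
            if not any(dominates(s2, s) for _, s2 in scored if s2 != s)]

# Usage: random mappings of 4 IPs onto 8 tiles.
random.seed(1)
ips = ["cpu", "dsp", "mem", "io"]
candidates = [{ip: random.randrange(8) for ip in ips} for _ in range(50)]
print(f"{len(pareto_front(candidates))} Pareto-optimal mappings out of {len(candidates)}")
```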

Relevance:

10.00%

Publisher:

Abstract:

This work proposes a computational tool to assist power system engineers in the field tuning of power system stabilizers (PSSs) and automatic voltage regulators (AVRs). The outcome of this tool is a range of gain values for these controllers within which there is a theoretical guarantee of stability for the closed-loop system. This range is given as a set of limit values for the static gains of the controllers of interest, in such a way that the engineer responsible for the field tuning of PSSs and/or AVRs can be confident with respect to system stability when adjusting the corresponding static gains within this range. This feature of the proposed tool is highly desirable from a practical viewpoint, since the PSS and AVR commissioning stage always involves some readjustment of the controller gains to account for the differences between the nominal model and the actual behavior of the system. By capturing these differences as uncertainties in the model, the tool is able to guarantee stability for the whole uncertain model using an approach based on linear matrix inequalities. It is also important to remark that the proposed tool can be applied to other parameters of PSSs or power oscillation dampers, as well as to other types of controllers (such as speed governors, for example). To show its effectiveness, applications of the proposed tool to two benchmarks for small-signal stability studies are presented at the end of this paper.
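
A minimal sketch of the kind of LMI feasibility test underlying such a guarantee is given below: quadratic (common-Lyapunov-function) stability of a polytopic family of closed-loop state matrices, e.g. the system evaluated at the candidate gain limits. The matrices are toy placeholders rather than a power-system model, and cvxpy with an SDP-capable solver is assumed; this is a generic stand-in, not the paper's exact formulation.

```python
# Hedged sketch of an LMI feasibility test in the spirit of the tool described:
# check quadratic stability of a polytopic family of closed-loop state matrices.
# Toy matrices only; requires cvxpy with an SDP-capable solver (e.g. SCS).

import numpy as np
import cvxpy as cp

def quadratically_stable(vertices, eps=1e-6):
    """Return True if a single P > 0 certifies A_i^T P + P A_i < 0 for all vertices."""
    n = vertices[0].shape[0]
    P = cp.Variable((n, n), symmetric=True)
    constraints = [P >> eps * np.eye(n)]
    for A in vertices:
        constraints.append(A.T @ P + P @ A << -eps * np.eye(n))
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()
    return problem.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

# Toy closed-loop matrices at two gain settings (placeholders).
A_low = np.array([[0.0, 1.0], [-2.0, -1.0]])
A_high = np.array([[0.0, 1.0], [-3.0, -1.5]])
print("gain range certified stable:", quadratically_stable([A_low, A_high]))
```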

Relevance:

10.00%

Publisher:

Abstract:

Vectorization is a powerful form of data-parallelism exploitation that, when used well, yields better performance from application execution. For this reason, many processors today include vector extensions in their instruction set. For machines based on these processors, there are many compilers able to exploit vectorization. However, not all applications experience a performance improvement when they are vectorized, and not all compilers are able to extract the same vector performance from an application. This work presents an exhaustive study of the performance of several numerical applications, with the goal of determining the degree of effective utilization of the vector unit. After selecting the Polyhedron, Mantevo, Sequoia, SPECfp, and NPB benchmarks, they were compiled with vectorization enabled and simulated on a modified version of the CMPSim cache simulator, extended with a core based on the Intel Xeon Phi coprocessor. In cases where utilization was low, a software-level diagnosis of the source of the problem was carried out and improvements that could increase the effective use of the vector unit were proposed. For memory-bound applications, a hardware-level diagnosis was performed in order to determine to what extent the machine design affects application performance when the vector unit is used well.

Relevance:

10.00%

Publisher:

Abstract:

The identification of people by measuring traits of individual anatomy or physiology has led to a specific research area called biometric recognition. This thesis focuses on improving fingerprint recognition systems by considering three important problems: fingerprint enhancement, fingerprint orientation extraction, and automatic evaluation of fingerprint algorithms. Effective extraction of salient fingerprint features depends on the quality of the input fingerprint: if the fingerprint is very noisy, a reliable set of features cannot be detected. A new fingerprint enhancement method, which is both iterative and contextual, is proposed. This approach detects high-quality regions in fingerprints, selectively applies contextual filtering, and iteratively expands, wildfire-like, toward low-quality regions. A precise estimation of the orientation field greatly simplifies the estimation of other fingerprint features (singular points, minutiae) and improves the performance of a fingerprint recognition system. Fingerprint orientation extraction is improved along two directions. First, after the introduction of a new taxonomy of fingerprint orientation extraction methods, several variants of baseline methods are implemented and, by pointing out the role of pre- and post-processing, we show how the extraction can be improved. Second, a new hybrid orientation extraction method, which follows an adaptive scheme, significantly improves orientation extraction in noisy fingerprints. Scientific papers typically propose recognition systems that integrate many modules; an automatic evaluation of fingerprint algorithms is therefore needed to isolate the contributions that determine actual progress in the state of the art. The lack of a publicly available framework to compare fingerprint orientation extraction algorithms motivates the introduction of a new benchmark area called FOE (including fingerprints and manually marked orientation ground truth), along with fingerprint matching benchmarks, in the FVC-onGoing framework. The success of this framework is demonstrated by relevant statistics: more than 1450 algorithms submitted and two international competitions.
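
The sketch below is a schematic reading of the "expand from high-quality regions" idea only: blocks of a quality map are visited best-first, so that processing always proceeds from already-trusted context toward lower-quality areas. It is not the thesis' enhancement algorithm, and the quality map and threshold are invented.

```python
# Schematic sketch (not the thesis' algorithm) of the "expand from high-quality
# regions" idea: blocks of a fingerprint quality map are visited best-first,
# so contextual filtering would always move from trusted toward noisy areas.

import heapq
import numpy as np

def expansion_order(quality, seed_threshold=0.8):
    """Return block coordinates in the order a wildfire-like expansion would visit them."""
    h, w = quality.shape
    visited = np.zeros_like(quality, dtype=bool)
    heap = [(-quality[i, j], (i, j))          # max-heap on quality via negation
            for i in range(h) for j in range(w)
            if quality[i, j] >= seed_threshold]
    heapq.heapify(heap)
    for _, ij in heap:
        visited[ij] = True
    order = []
    while heap:
        _, (i, j) = heapq.heappop(heap)
        order.append((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not visited[ni, nj]:
                visited[ni, nj] = True
                heapq.heappush(heap, (-quality[ni, nj], (ni, nj)))
    return order

# Toy 4x4 quality map: the top-left corner is the high-quality seed region.
q = np.array([[0.9, 0.8, 0.3, 0.2],
              [0.7, 0.6, 0.4, 0.1],
              [0.3, 0.4, 0.2, 0.1],
              [0.2, 0.1, 0.1, 0.0]])
print(expansion_order(q)[:5])
```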

Relevance:

10.00%

Publisher:

Abstract:

The objective of this thesis was to improve the commercial CFD software Ansys Fluent to obtain a tool able to perform accurate simulations of flow boiling in the slug flow regime. A reliable numerical framework allows a better understanding of the bubble and flow dynamics induced by the evaporation and makes it possible to predict wall heat transfer trends. In order to save computational time, the flow is modeled with an axisymmetric formulation. Vapor and liquid phases are treated as incompressible and in laminar flow. By means of a single-fluid approach, the flow equations are written as for a single-phase flow, but discontinuities at the interface and interfacial effects need to be accounted for and discretized properly. Ansys Fluent provides a Volume of Fluid technique to advect the interface and to map the discontinuous fluid properties throughout the flow domain. Interfacial effects are dominant in boiling slug flow, and the accuracy of their estimation is fundamental for the reliability of the solver. Self-implemented functions, developed ad hoc, are introduced in the numerical code to compute the surface tension force and the rates of mass and energy exchange at the interface due to evaporation. Several validation benchmarks confirm the better performance of the improved software. Various adiabatic configurations are simulated in order to test the capability of the numerical framework in modeling actual flows, and the comparison with experimental results is very positive. The simulation of a single evaporating bubble underlines that the local transient heat convection in the liquid after the bubble transit has a dominant effect on the global heat transfer rate. The simulation of multiple evaporating bubbles flowing in sequence shows that their mutual influence can strongly enhance the heat transfer coefficient, up to twice the single-phase flow value.
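
As a hedged illustration of what such self-implemented interfacial source terms typically compute, the sketch below converts the net heat flux reaching the interface into an evaporation mass source via the latent heat; the formulation and the numbers are generic placeholders, not the thesis' actual implementation.

```python
# Hedged sketch of one common way to turn the interfacial energy balance into
# VOF mass/energy source terms: the net conductive heat flux reaching the
# interface is converted into an evaporation rate via the latent heat.

def evaporation_sources(q_liquid, q_vapor, area_density, h_lv):
    """
    q_liquid, q_vapor : heat fluxes toward the interface from each side [W/m^2]
    area_density      : interfacial area per unit cell volume [1/m]
    h_lv              : latent heat of vaporization [J/kg]
    Returns (mass source [kg/(m^3 s)], energy sink [W/m^3]) for an interface cell.
    """
    mass_flux = (q_liquid + q_vapor) / h_lv       # kg/(m^2 s) evaporated at the interface
    mass_source = mass_flux * area_density        # volumetric source for the vapor phase
    energy_sink = mass_source * h_lv              # latent heat removed from the liquid
    return mass_source, energy_sink

# Example with rough water-like numbers at atmospheric pressure.
m_src, e_snk = evaporation_sources(q_liquid=5e4, q_vapor=0.0,
                                   area_density=300.0, h_lv=2.26e6)
print(f"mass source ~ {m_src:.3f} kg/(m^3 s), energy sink ~ {e_snk:.0f} W/m^3")
```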

Relevance:

10.00%

Publisher:

Abstract:

The study of the efficiency of an equity index has grown in importance in the asset management industry following the spread of benchmarking and indexed investing. This work evaluates the level of efficiency of the main stock market indices of the United States, the Euro Area, and Italy. The empirical study relies on four efficiency measures: the GRS test, a small-sample multivariate test based on the CAPM; the large-sample Wald test, implemented through a bootstrap simulation; the GMM test, applied in a non-Gaussian setting through a block bootstrap simulation; and the relative efficiency measure of Kandel and Stambaugh. The empirical results provide clear evidence of the superior efficiency of equally weighted indices. This conclusion is interpreted in light of the existing scientific literature, by analyzing the various theoretical and empirical explanations that have been proposed.
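
For concreteness, the sketch below computes one common single-factor (CAPM) form of the GRS statistic on simulated returns; degrees-of-freedom conventions differ slightly across references, and the data are not the indices studied in this work.

```python
# Hedged sketch of one common form of the GRS test for K = 1 factor (the CAPM case).
# Simulated data only; conventions for the small-sample correction vary by reference.

import numpy as np
from scipy import stats

def grs_test(excess_assets, excess_factor):
    """excess_assets: (T, N) excess returns; excess_factor: (T,) market excess returns."""
    T, N = excess_assets.shape
    X = np.column_stack([np.ones(T), excess_factor])        # regressors [1, f_t]
    coeffs, *_ = np.linalg.lstsq(X, excess_assets, rcond=None)
    alphas = coeffs[0]                                       # intercepts, shape (N,)
    residuals = excess_assets - X @ coeffs
    sigma = residuals.T @ residuals / T                      # MLE residual covariance
    mu_f, var_f = excess_factor.mean(), excess_factor.var()  # MLE factor moments
    stat = ((T - N - 1) / N) * (alphas @ np.linalg.solve(sigma, alphas)) \
           / (1.0 + mu_f**2 / var_f)
    pvalue = stats.f.sf(stat, N, T - N - 1)
    return stat, pvalue

# Simulated example: 5 assets, 240 months, returns generated to satisfy the CAPM.
rng = np.random.default_rng(0)
f = rng.normal(0.005, 0.04, size=240)
betas = rng.uniform(0.5, 1.5, size=5)
R = np.outer(f, betas) + rng.normal(0, 0.02, size=(240, 5))
print("GRS statistic and p-value:", grs_test(R, f))
```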

Relevance:

10.00%

Publisher:

Abstract:

This thesis addresses the problem of scaling reinforcement learning to high-dimensional and complex tasks. Reinforcement learning here denotes a class of learning methods based on approximate dynamic programming that is used especially in artificial intelligence and can serve for the autonomous control of simulated agents or real hardware robots in dynamic and uncertain environments. To this end, regression on samples is used to determine a function that solves an optimality equation (Bellman) and from which approximately optimal decisions can be derived. A major hurdle is the dimensionality of the state space, which is often high and therefore poorly accessible to traditional grid-based approximation schemes. The goal of this thesis is to make reinforcement learning applicable to, in principle, arbitrarily high-dimensional problems by means of non-parametric function approximation (more precisely, regularization networks). Regularization networks are a generalization of ordinary basis-function networks that parameterize the sought solution by the data, so that the explicit choice of nodes/basis functions is avoided and the "curse of dimensionality" can be circumvented for high-dimensional inputs. At the same time, regularization networks are linear approximators that are technically easy to handle and for which the existing convergence results for reinforcement learning remain valid (unlike, for example, feed-forward neural networks). All these theoretical advantages, however, come with a very practical problem: the computational cost of regularization networks inherently scales as O(n^3), where n is the number of data points. This is particularly problematic because in reinforcement learning the learning process is online: the samples are generated by an agent/robot while it interacts with the environment, so updates to the solution must be made immediately and with little computational effort. The contribution of this thesis therefore consists of two parts. In the first part, we formulate an efficient learning algorithm for regularization networks for solving general regression tasks, tailored specifically to the requirements of online learning. Our approach is based on recursive least squares but can, at constant cost per step, insert not only new data but also new basis functions into the existing model. This is made possible by the "subset of regressors" approximation, in which the kernel is approximated by a strongly reduced selection of training data, and by a greedy selection procedure that picks these basis elements directly from the data stream at runtime. In the second part, we carry this algorithm over to approximate policy evaluation by means of least-squares-based temporal-difference learning and integrate this building block into a complete system for the autonomous learning of optimal behavior. Overall, we develop a highly data-efficient method that is particularly suited to learning problems from robotics with continuous, high-dimensional state spaces and stochastic state transitions. The method does not rely on a model of the environment, works largely independently of the dimension of the state space, achieves convergence with relatively few agent-environment interactions, and, thanks to the efficient online algorithm, can also operate in time-critical real-time applications. We demonstrate the capability of our approach on two realistic and complex application examples: the RoboCup Keepaway problem and the control of a (simulated) octopus tentacle.
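
A minimal sketch of the "incorporate new data at constant per-sample cost" ingredient is given below: recursive least squares with a rank-one (Sherman-Morrison) update of the inverse Gram matrix. The thesis additionally grows the basis online via the subset-of-regressors approximation and a greedy selection; that part is not shown here.

```python
# Minimal sketch of online recursive least squares: each new sample updates the
# weight vector and the inverse (regularized) Gram matrix via a rank-one
# Sherman-Morrison step, i.e. in O(n_features^2) rather than O(n^3).

import numpy as np

class RecursiveLeastSquares:
    def __init__(self, n_features, reg=1.0):
        self.P = np.eye(n_features) / reg   # inverse of the regularized Gram matrix
        self.w = np.zeros(n_features)       # current weight estimate

    def update(self, phi, y):
        """Incorporate one sample (feature vector phi, target y)."""
        Pphi = self.P @ phi
        k = Pphi / (1.0 + phi @ Pphi)       # gain vector
        self.w += k * (y - phi @ self.w)    # correct the prediction error
        self.P -= np.outer(k, Pphi)         # Sherman-Morrison downdate of P

# Usage: fit y = 2*x1 - x2 online from streaming samples.
rng = np.random.default_rng(0)
model = RecursiveLeastSquares(n_features=2, reg=1e-3)
for _ in range(500):
    phi = rng.normal(size=2)
    model.update(phi, 2.0 * phi[0] - phi[1] + rng.normal(scale=0.01))
print("estimated weights:", model.w)
```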

Relevance:

10.00%

Publisher:

Abstract:

The aim of this work is to present various aspects of numerical simulation of particle and radiation transport for industrial and environmental protection applications, enabling the analysis of complex physical processes in a fast, reliable, and efficient way. In the first part we deal with the speed-up of numerical simulation of neutron transport for nuclear reactor core analysis. The convergence properties of the source iteration scheme of the Method of Characteristics applied to heterogeneous structured geometries have been enhanced by means of Boundary Projection Acceleration, enabling the study of 2D and 3D geometries with transport theory without spatial homogenization. The computational performance has been verified with the C5G7 2D and 3D benchmarks, showing a considerable reduction of iterations and CPU time. The second part is devoted to the study of temperature-dependent elastic scattering of neutrons for heavy isotopes near the thermal region. A numerical computation of the Doppler convolution of the elastic scattering kernel based on the gas model is presented, for a general energy-dependent cross section and scattering law in the center-of-mass system. The range of integration has been optimized by employing a numerical cutoff, allowing a faster numerical evaluation of the convolution integral. Legendre moments of the transfer kernel are subsequently obtained by direct quadrature, and a numerical analysis of the convergence is presented. In the third part we focus our attention on remote sensing applications of radiative transfer employed to investigate the Earth's cryosphere. The photon transport equation is applied to simulate the reflectivity of glaciers while varying the age of the layer of snow or ice, its thickness, the presence or absence of other underlying layers, and the amount of dust included in the snow, creating a framework able to decipher spectral signals collected by orbiting detectors.
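
As a small illustration of the final step mentioned above, the sketch below obtains Legendre moments of an angular kernel by direct Gauss-Legendre quadrature; the kernel is a toy forward-peaked function and the (2l+1)/2 normalization is one common convention, not necessarily the one used in the thesis.

```python
# Sketch of obtaining Legendre moments of an angular kernel by direct quadrature.
# Toy kernel only, not the Doppler-broadened elastic scattering kernel itself.

import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

def legendre_moments(kernel, order, n_quad=64):
    """Return f_l = (2l+1)/2 * integral of kernel(mu) * P_l(mu) over [-1, 1], l = 0..order."""
    mu, w = leggauss(n_quad)                       # Gauss-Legendre nodes/weights on [-1, 1]
    values = kernel(mu)
    return np.array([(2 * l + 1) / 2.0 * np.sum(w * values * eval_legendre(l, mu))
                     for l in range(order + 1)])

# Toy forward-peaked kernel (strongly favors small scattering angles, mu near 1).
def toy_kernel(mu):
    return np.exp(3.0 * (mu - 1.0))

print(np.round(legendre_moments(toy_kernel, order=5), 4))
```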

Relevance:

10.00%

Publisher:

Abstract:

In technical design processes in the automotive industry, digital prototypes rapidly gain importance because they allow for the detection of design errors in early development stages. The technical design process includes the computation of swept volumes for maintainability analysis and clearance checks. The swept volume is very useful, for example, to identify problem areas where a safety distance might not be kept. With the explicit construction of the swept volume, an engineer obtains evidence of how the shape of components that come too close has to be modified. In this thesis a concept for the approximation of the outer boundary of a swept volume is developed. For safety reasons, it is essential that the approximation be conservative, i.e., that the swept volume is completely enclosed by the approximation. On the other hand, one wishes to approximate the swept volume as precisely as possible. In this work, we show that the one-sided Hausdorff distance is the adequate measure for the error of the approximation when the intended usage is clearance checks, continuous collision detection, and maintainability analysis in CAD. We present two implementations that apply the concept and generate a manifold triangle mesh approximating the outer boundary of a swept volume. Both algorithms are two-phased: a sweeping phase, which generates a conservative voxelization of the swept volume, and the actual mesh generation, which is based on restricted Delaunay refinement. This approach ensures a high precision of the approximation while respecting conservativeness. The benchmarks for our tests are, among others, real-world scenarios from the automotive industry. Further, we introduce a method to relate parts of an already computed swept volume boundary to those triangles of the generator that come closest during the sweep. We use this to verify as well as to colorize meshes resulting from our implementations.
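
A small sketch of the error measure discussed above: the one-sided Hausdorff distance from a point sampling of one surface to a point sampling of another, computed with a k-d tree. The point sets here are random placeholders standing in for dense samplings of the exact boundary and its approximation.

```python
# Sketch of the one-sided Hausdorff distance between two sampled surfaces:
# the largest distance from any point of A to its nearest neighbor in B.

import numpy as np
from scipy.spatial import cKDTree

def one_sided_hausdorff(points_a, points_b):
    """max over a in A of the distance from a to its nearest neighbor in B."""
    tree = cKDTree(points_b)
    distances, _ = tree.query(points_a)
    return distances.max()

rng = np.random.default_rng(0)
exact_surface = rng.uniform(-1.0, 1.0, size=(2000, 3))
approximation = exact_surface + rng.normal(scale=0.01, size=(2000, 3))  # slightly offset copy
d = one_sided_hausdorff(exact_surface, approximation)
print(f"one-sided Hausdorff (exact -> approximation): {d:.4f}")
```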