946 results for incremental computation


Relevance:

10.00%

Publisher:

Abstract:

The determination of characteristic cardiac parameters, such as displacement, stress and strain distributions, is essential for an understanding of the mechanics of the heart. The calculation of these parameters has until recently been limited by the use of idealised mathematical representations of biventricular geometries and by the application of simple material laws. On the basis of 20 short-axis heart slices, and considering both linear and nonlinear material behaviour, we have developed a FE model with about 100,000 degrees of freedom. Marching Cubes and Phong's incremental shading technique were used to visualise the three-dimensional geometry. In a quasi-static FE analysis, continuous distributions of regional stress and strain corresponding to the end-systolic state were calculated. Substantial regional variation of the von Mises stress and the total strain energy was observed at all levels of the heart model. The results of the linear elastic model and of the model with a nonlinear material description (Mooney-Rivlin) were compared. While the stress distribution and peak stress values were found to be comparable, the displacement vectors obtained with the nonlinear model were generally larger than in the linear elastic case, indicating the need to include nonlinear effects.
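
To make the key reported quantity concrete, the following minimal sketch computes the von Mises equivalent stress from a symmetric 3x3 Cauchy stress tensor; the tensor values are hypothetical placeholders, not output of the study's FE model.

```python
import numpy as np

def von_mises(sigma):
    """von Mises equivalent stress of a symmetric 3x3 Cauchy stress tensor."""
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)  # deviatoric part
    return np.sqrt(1.5 * np.sum(s * s))

# Hypothetical end-systolic stress state at one element (kPa)
sigma = np.array([[12.0,  2.0,  0.5],
                  [ 2.0,  8.0,  1.0],
                  [ 0.5,  1.0,  5.0]])
print(f"von Mises stress: {von_mises(sigma):.2f} kPa")
```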

Relevance:

10.00%

Publisher:

Abstract:

The geodynamic forces acting in the Earth's interior manifest themselves in a variety of ways. Volcanoes are amongst the most impressive examples in this respect but, as with an iceberg, they represent only the tip of a more extensive system hidden underground. This system consists of a source region where melt forms and accumulates, feeder connections in which magma is transported towards the surface, and different reservoirs where it is stored before it eventually erupts to form a volcano. A magma represents a mixture of melt and crystals. The latter can be extracted from the source region or form anywhere along the path towards their final crystallization place, and they retain information about the overall plumbing system. The host rocks of an intrusion, in contrast, provide information at the emplacement level: they record the effects of the thermal and mechanical forces imposed by the magma. For a better understanding of the system, both parts, magmatic and metamorphic petrology, have to be integrated. I demonstrate in this thesis that information from both is complementary, in an iterative process that uses constraints from one field to better constrain the other. Reading the history of the host rocks is not always straightforward. This is shown in chapter two, where a model for the formation of the clustered garnets observed in the contact aureole is proposed. Fragments of garnets older than the intrusive rocks are overgrown by garnet crystallizing during the reheating caused by emplacement of the adjacent pluton. The formation of the clusters is therefore not a single event, as generally assumed, but the result of a two-stage process, namely the alteration of the old grains followed by the overgrowth and amalgamation of new garnet rims. This makes an important difference when applying petrological methods such as thermobarometry, geochronology or grain size distributions. The thermal conditions in the aureole are a strong function of the emplacement style of the pluton; it is therefore necessary to understand the pluton before drawing conclusions about its aureole. A study investigating the intrusive rocks by means of field, geochemical, geochronological and structural methods is presented in chapter three. It provides important information about the assembly of the intrusion, as well as new insights into the nature of large, homogeneous plutons and the structure of the plumbing system in general. The incremental nature of the emplacement of the Western Adamello tonalite is documented, and the existence of an intermediate reservoir beneath homogeneous plutons is proposed. In chapter four it is demonstrated that information extracted from the host rock provides further constraints on the emplacement process of the intrusion. The temperatures obtained by combining field observations with phase-petrology modeling are used together with thermal models to constrain the magmatic activity adjacent to the aureole. Instead of using the thermal models to check the petrological results, the inverse is done: the model parameters were varied until a match with the aureole temperatures was obtained. It is shown that only a few parameter combinations give a match, and that temperature estimates from the aureole can thus constrain the injection frequency of ancient magmatic systems. In the fifth chapter, the anisotropy of magnetic susceptibility of intrusive rocks is compared to 3D tomography.
The obtained signal is a function of the shape and distribution of ferromagnetic grains and is often used to infer flow directions of magma. It turns out that the signal is dominated by the shape of the magnetic crystals and, where they form tight clusters, also by their distribution. This is in good agreement with the predictions made in the theoretical and experimental literature. In the sixth chapter, arguments for partial melting of host rock carbonates are presented. While at first very surprising, this is to be expected when considering the prior results from the intrusive study and experiments from the literature. Partial melting is documented by compelling microstructures as well as geochemical and structural data. The necessary conditions are far from extreme, and this process might be more frequent than previously thought. The carbonate melt is highly mobile and can move along grain boundaries, infiltrating other rocks and ultimately altering the existing mineral assemblage. Finally, a mineralogical curiosity is presented in chapter seven: the mineral assemblage magnesite and calcite in apparent equilibrium. It is well known that these two carbonates are not stable together in the system CaO-MgO-FeO-CO2; indeed, magnesite and calcite should react to dolomite during metamorphism. The explanation presented for this "forbidden" assemblage is that a calcite melt infiltrated the magnesite-bearing rock along grain boundaries and caused the peculiar microstructure. This is supported by isotopic disequilibrium between calcite and magnesite. A further implication of partially molten carbonates is that the host rock drastically loses its strength, so that its physical properties may become comparable to those of the intrusive rocks. This contrasting behavior of the host rock may ease the emplacement of the intrusion. The circle thus closes, and the iterative process of better constraining the emplacement could start again.
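
The inverse use of thermal models described in chapter four can be illustrated with a deliberately crude sketch: a 1D conductive cooling model in which sills are injected periodically, and the recurrence interval is scanned until the peak temperature at a point in the aureole matches a target estimate. All numbers (diffusivity, temperatures, distances) are placeholder assumptions, not values from the thesis.

```python
import numpy as np

def peak_aureole_temp(recurrence_yr, n_sills=10, kappa=1e-6, dx=50.0,
                      T_host=300.0, T_magma=900.0, obs_m=500.0):
    """Peak temperature (degC) at obs_m from the contact under periodic sill
    injection. 1D explicit conduction; the leftmost 200 m are reset to
    T_magma at each injection. Purely illustrative geometry and properties."""
    yr = 3.15e7                       # seconds per year
    x = np.arange(0.0, 3000.0, dx)    # metres from the intrusion centre
    T = np.full_like(x, T_host)
    dt = 0.4 * dx**2 / kappa          # stable explicit time step
    t_end = n_sills * recurrence_yr * yr
    next_inj, t, peak = 0.0, 0.0, T_host
    i_obs = int((200.0 + obs_m) / dx) # observation point in the aureole
    while t < t_end:
        if t >= next_inj:
            T[x <= 200.0] = T_magma   # inject a sill: reset the intrusion zone
            next_inj += recurrence_yr * yr
        T[1:-1] += kappa * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        peak = max(peak, T[i_obs])
        t += dt
    return peak

# Scan recurrence intervals until the modelled peak matches a target estimate
target = 550.0                        # hypothetical peak aureole temperature
for rec in (500, 1000, 2000, 5000, 10000):
    print(rec, "yr ->", round(peak_aureole_temp(rec), 1), "degC (target", target, ")")
```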

Relevance:

10.00%

Publisher:

Abstract:

Sound localization relies on the analysis of interaural time and intensity differences, as well as of attenuation patterns produced by the outer ear. We investigated the relative contributions of interaural time and intensity difference cues to sound localization by testing 60 subjects, among them 25 with focal left and 25 with focal right hemispheric brain damage. Group and single-case behavioural analyses, as well as anatomo-clinical correlations, confirmed that deficits were more frequent and much more severe after right than left hemispheric lesions, and for the processing of interaural time rather than intensity difference cues. For spatial processing based on interaural time difference cues, different error types were evident in the individual data. Deficits in discriminating between neighbouring positions occurred in both hemispaces after focal right hemispheric brain damage, but were restricted to the contralesional hemispace after focal left hemispheric brain damage. Alloacusis (perceptual shifts across the midline) occurred only after focal right hemispheric brain damage and was associated with minor or severe deficits in position discrimination. During spatial processing based on interaural intensity cues, deficits were less severe in the right hemispheric brain damage group than in the left hemispheric brain damage group, and no alloacusis occurred. These results, matched to anatomical data, suggest the existence of a binaural sound localization system predominantly based on interaural time difference cues and primarily supported by the right hemisphere. More generally, our data suggest that two distinct mechanisms contribute to: (i) the precise computation of spatial coordinates, allowing spatial comparison within the contralateral hemispace for the left hemisphere and the whole space for the right hemisphere; and (ii) the building up of global auditory spatial representations in right temporo-parietal cortices.
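
As a generic illustration of the dominant cue (not the authors' psychoacoustic procedure), the sketch below estimates an interaural time difference by cross-correlating the two ear signals; the sampling rate and delay are invented.

```python
import numpy as np

fs = 44100                       # sample rate (Hz)
itd_true = 0.0004                # hypothetical 400-microsecond ITD
n = int(0.05 * fs)               # 50 ms broadband noise burst
left = np.random.randn(n)
delay = int(round(itd_true * fs))
right = np.roll(left, delay)     # right-ear signal lags the left

# Estimate the ITD as the lag maximizing the cross-correlation
corr = np.correlate(right, left, mode="full")
lag = np.argmax(corr) - (n - 1)
print(f"estimated ITD: {lag / fs * 1e6:.0f} microseconds")
```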

Relevance:

10.00%

Publisher:

Abstract:

We propose an elementary theory of wars fought by fully rational contenders. Two parties play a Markov game that combines stages of bargaining with stages where one side has the ability to impose surrender on the other. Under uncertainty and incomplete information, long confrontations occur in the unique equilibrium of the game: war arises when reality disappoints initial (rational) optimism, and it persists longer when both agents are optimists but reality proves both wrong. Bargaining proposals that are rejected initially might eventually be accepted after several periods of confrontation. We provide an explicit computation of the equilibrium, evaluating the probability of war and its expected losses as a function of (i) the costs of confrontation, (ii) the asymmetry of the split imposed under surrender, and (iii) the strengths of the contenders at attack and defense. Changes in these parameters display non-monotonic effects.
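
The role of mutual optimism can be made concrete with a toy bargaining-range check that is far simpler than the paper's Markov game; the beliefs and costs below are invented numbers.

```python
def war_occurs(p_a, p_b, cost_a, cost_b):
    """Toy rationalist check: with beliefs p_a, p_b of winning a unit prize
    and per-war costs, a peaceful split x (A gets x, B gets 1 - x) acceptable
    to both exists iff p_a - cost_a <= x <= 1 - (p_b - cost_b)."""
    lo = p_a - cost_a               # the least A will accept
    hi = 1 - (p_b - cost_b)         # the most A can get while B still accepts
    return lo > hi                  # empty bargaining range -> war

# Mutually optimistic beliefs (summing to more than 1) can close the range
print(war_occurs(0.7, 0.7, 0.1, 0.1))  # True: optimism outweighs the costs
print(war_occurs(0.5, 0.5, 0.1, 0.1))  # False: a split both accept exists
```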

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes full-Bayes priors for time-varying parameter vector autoregressions (TVP-VARs) which are more robust and objective than existing choices proposed in the literature. We formulate the priors so that they allow for straightforward posterior computation, require minimal input by the user, and result in shrinkage posterior representations, thus making them appropriate for models of large dimensions. A comprehensive forecasting exercise involving TVP-VARs of different dimensions establishes the usefulness of the proposed approach.
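
As a hedged sketch of what time variation in parameters means computationally, here is a toy univariate TVP-AR(1) with a random-walk coefficient filtered by a Kalman recursion; this is a textbook device, not the paper's prior construction.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
# Simulate y_t = b_t * y_{t-1} + e_t with a slowly drifting coefficient b_t
b = np.cumsum(0.02 * rng.standard_normal(T)) + 0.5
y = np.zeros(T)
for t in range(1, T):
    y[t] = b[t] * y[t - 1] + 0.3 * rng.standard_normal()

# Kalman filter for the state b_t under the random walk b_t = b_{t-1} + w_t
b_hat, P = 0.0, 1.0            # prior mean and variance of the coefficient
q, r = 0.02**2, 0.3**2         # state/measurement noise variances (assumed known)
est = np.zeros(T)
for t in range(1, T):
    P += q                                 # predict
    H = y[t - 1]                           # observation "loading"
    K = P * H / (H * H * P + r)            # Kalman gain
    b_hat += K * (y[t] - H * b_hat)        # update with the forecast error
    P *= (1 - K * H)
    est[t] = b_hat
print("final true b:", round(b[-1], 3), "filtered:", round(est[-1], 3))
```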

Relevance:

10.00%

Publisher:

Abstract:

This work analyzes the performance of four shared-memory multiprocessor compute nodes solving the N-body problem. The serial algorithm is parallelized and coded in C extended with OpenMP. The result is two variants obeying two different optimization criteria: minimizing memory requirements and minimizing the volume of computation. A performance analysis of the program on the compute nodes is then carried out. The performance of the sequential and parallel variants of the application, and of the compute nodes, is modeled; the programs are instrumented and executed to obtain results in the form of several metrics; finally, the results are presented and interpreted, providing clues that explain inefficiencies and performance bottlenecks, along with possible lines of improvement. The experience of this concrete study has allowed us to sketch an incipient methodology for performance analysis, problem identification, and tuning of algorithms to shared-memory multiprocessor compute nodes.
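
The study's C/OpenMP sources are not reproduced here; the sketch below only illustrates, in Python, the trade-off the two variants plausibly embody: evaluating every ordered pair (more arithmetic, no shared writes) versus exploiting force symmetry (half the arithmetic, but parallel threads would then contend on shared accumulations).

```python
import numpy as np

def forces_all_pairs(pos, mass, eps=1e-3):
    """Evaluate every ordered pair: twice the arithmetic, but each i writes
    only to F[i], which eases parallelization."""
    n = len(pos)
    F = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = pos[j] - pos[i]
                F[i] += mass[i] * mass[j] * d / (np.linalg.norm(d)**3 + eps)
    return F

def forces_symmetric(pos, mass, eps=1e-3):
    """Use F_ij = -F_ji to halve the computation; parallel threads would now
    contend on F[j], the classic trade-off between the two variants."""
    n = len(pos)
    F = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            f = mass[i] * mass[j] * d / (np.linalg.norm(d)**3 + eps)
            F[i] += f
            F[j] -= f
    return F

pos = np.random.rand(50, 3); mass = np.ones(50)
assert np.allclose(forces_all_pairs(pos, mass), forces_symmetric(pos, mass))
```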

Relevance:

10.00%

Publisher:

Abstract:

PURPOSE: This study investigated the maximal cardiometabolic response while running on a lower body positive pressure treadmill (antigravity treadmill, AG), which reduces body weight (BW) and impact. The AG is used in the rehabilitation of injuries but could have potential for high-speed running, if workload can be maximally elevated. METHODS: Fourteen trained runners (nine male; age 27 ± 5 yr; 10-km personal best, 38.1 ± 1.1 min) completed a treadmill incremental test (CON) to measure aerobic capacity and maximal heart rate (VO2max and HRmax). They then completed four identical tests (48 h apart, randomized order) on the AG at BW of 100%, 95%, 90%, and 85% (AG100 to AG85). Stride length and rate were measured at peak velocities (Vpeak). RESULTS: VO2max (mL·kg(-1)·min(-1)) was similar across all conditions (men: CON = 66.6 (3.0), AG100 = 65.6 (3.8), AG95 = 65.0 (5.4), AG90 = 65.6 (4.5), and AG85 = 65.0 (4.8); women: CON = 63.0 (4.6), AG100 = 61.4 (4.3), AG95 = 60.7 (4.8), AG90 = 61.4 (3.3), and AG85 = 62.8 (3.9)). Similar results were found for HRmax, except for AG85 in men and AG100 and AG90 in women, which were lower than CON. Vpeak (km·h(-1)) in men was 19.7 (0.9) in CON, which was lower than in every other condition: AG100 = 21.0 (1.9) (P < 0.05), AG95 = 21.4 (1.8) (P < 0.01), AG90 = 22.3 (2.1) (P < 0.01), and AG85 = 22.6 (1.6) (P < 0.001). In women, Vpeak (km·h(-1)) was similar between CON (17.8 (1.1)) and AG100 (19.3 (1.0)) but higher at AG95 = 19.5 (0.4) (P < 0.05), AG90 = 19.5 (0.8) (P < 0.05), and AG85 = 21.2 (0.9) (P < 0.01). CONCLUSIONS: The AG can be used at maximal exercise intensities at BW of 85% to 95%, reaching faster running speeds than normally feasible. The AG could be used for overspeed running programs at the highest metabolic response levels.

Relevance:

10.00%

Publisher:

Abstract:

This paper discusses current strategies for the development of AIDS vaccines which allow immunization to disturb the natural course of HIV at different stages of its life cycle. Mathematical models describing the main biological phenomena (i.e., virus- and vaccine-induced T4 cell growth; virus- and vaccine-induced activation of latently infected T4 cells; incremental changes in the immune response as infection progresses; antibody-dependent enhancement and neutralization of infection) and allowing for different vaccination strategies serve as a background for computer simulations. The mathematical models reproduce updated information on the behavior of immune cells, antibody concentrations and free virus. The results point to some controversial outcomes of an AIDS vaccine, such as an early increase in virus concentration among vaccinated when compared to nonvaccinated individuals.
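
A hedged sketch of the kind of within-host model the abstract describes, using a generic T4-cell/virus ODE system with a crude vaccine-induced activation term; the equations and parameters are standard textbook forms, not the paper's.

```python
def simulate(v_boost=0.0, days=365, dt=0.01,
             beta=2e-5, s=10.0, d=0.01, delta=0.5, p=500.0, c=5.0):
    """T: healthy T4 cells, I: infected T4 cells, V: free virus.
    v_boost crudely models vaccine-induced T4 activation; forward Euler."""
    T, I, V = 1000.0, 0.0, 1e-3
    peak_V = V
    for _ in range(int(days / dt)):
        dT = s * (1 + v_boost) - d * T - beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T += dt * dT; I += dt * dI; V += dt * dV
        peak_V = max(peak_V, V)
    return peak_V

# More activated target cells can transiently raise viral load, the kind of
# controversial vaccine outcome the paper's simulations point to.
print("peak V unvaccinated:", round(simulate(0.0), 1))
print("peak V vaccinated:  ", round(simulate(0.5), 1))
```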

Relevance:

10.00%

Publisher:

Abstract:

In his timely article, Cherniss offers his vision for the future of "Emotional Intelligence" (EI). However, his goal of clarifying the concept by distinguishing definitions from models, and his support for "Emotional and Social Competence" (ESC) models, will not, in our opinion, make the field advance. To be upfront, we agree that emotions are important for effective decision-making, leadership, performance and the like; however, at this time, EI and ESC have not yet demonstrated incremental validity over and above IQ and personality tests in meta-analyses (Harms & Credé, 2009; Van Rooy & Viswesvaran, 2004). If there is a future for EI, we see it in the ability model of Mayer, Salovey and associates (e.g., Mayer, Caruso, & Salovey, 2000), which detractors and supporters agree holds the most promise (Antonakis, Ashkanasy, & Dasborough, 2009; Zeidner, Roberts, & Matthews, 2008). With its use of quasi-objective scoring measures, the ability model grounds EI in existing frameworks of intelligence, thus differentiating itself from ESC models and their self-rated trait inventories. In fact, we do not see the value of ESC models: they overlap too much with current personality models to offer anything new for science and practice (Zeidner et al., 2008). In this commentary we raise three concerns with Cherniss's suggestions for ESC models: (1) there are important conceptual problems in both the definition of ESC and the distinction of ESC from EI; (2) Cherniss's interpretation of neuroscience findings as supporting the constructs of EI and ESC is outdated; and (3) his interpretation of the famous marshmallow experiment as indicating the existence of ESCs is flawed. Building on the promise of ability models, we conclude by providing suggestions to improve research in EI.

Relevance:

10.00%

Publisher:

Abstract:

The aim of this study is to perform a thorough comparison of quantitative susceptibility mapping (QSM) techniques and of their dependence on the assumptions made. The compared methodologies were: two iterative single-orientation methodologies minimizing the l2 and l1TV norms with prior knowledge of the edges of the object, one over-determined multiple-orientation method (COSMOS), and a newly proposed modulated closed-form solution (MCF). The performance of these methods was compared using a numerical phantom and in vivo high-resolution (0.65 mm isotropic) brain data acquired at 7 T using a new coil combination method. For all QSM methods, the relevant regularization and prior-knowledge parameters were systematically varied in order to evaluate the optimal reconstruction in the presence and absence of a ground truth. Additionally, the QSM contrast was compared to conventional gradient recalled echo (GRE) magnitude and R2* maps obtained from the same dataset. The QSM reconstruction results of the single-orientation methods show comparable performance. The MCF method has the highest correlation (corrMCF = 0.95, r2MCF = 0.97) with the state-of-the-art method (COSMOS), with the additional advantage of an extremely fast computation time. The l-curve method gave the visually most satisfactory balance between reduction of streaking artifacts and over-regularization, with the latter being overemphasized when using the COSMOS susceptibility maps as ground truth. R2* and susceptibility maps, when calculated from the same datasets, although based on distinct features of the data, have a comparable ability to distinguish deep gray matter structures.
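
For orientation, here is a hedged sketch of the simplest closed-form dipole inversion (thresholded k-space division, a common baseline rather than one of the compared methods): the field map is divided by the unit dipole kernel in k-space, with the kernel clamped near its zero cone.

```python
import numpy as np

def tkd_qsm(field, voxel=(0.65, 0.65, 0.65), thresh=0.2):
    """Thresholded k-space division: chi = IFFT[ FFT(field) / D ], with the
    dipole kernel D = 1/3 - kz^2/|k|^2 clamped where |D| < thresh."""
    nx, ny, nz = field.shape
    kx = np.fft.fftfreq(nx, voxel[0])
    ky = np.fft.fftfreq(ny, voxel[1])
    kz = np.fft.fftfreq(nz, voxel[2])
    KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    k2[0, 0, 0] = np.inf                     # avoid division by zero at DC
    D = 1.0 / 3.0 - KZ**2 / k2               # unit dipole kernel (B0 along z)
    D_clamped = np.where(np.abs(D) < thresh, thresh * np.sign(D + 1e-12), D)
    return np.real(np.fft.ifftn(np.fft.fftn(field) / D_clamped))

# Hypothetical field perturbation map (ppm), e.g. from unwrapped GRE phase
field = np.random.randn(64, 64, 64) * 0.01
chi = tkd_qsm(field)
print(chi.shape)
```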

Relevance:

10.00%

Publisher:

Abstract:

This project deals with the optimization and implementation of the acquisition stage of a GPS receiver. It also includes a brief review of the GPS system and its operating principles. The acquisition process has been studied in detail and programmed in the Matlab and Simulink environments. Implementing this stage in two different environments proved very useful both for learning and for cross-checking the results obtained. The main objective of the work is the design of a Simulink model capable of acquiring a signal captured with real hardware. In fact, two implementations were made: one using Simulink's own blocks and another using blocks from the Xilinx library. This eases the later transition of the model to an FPGA using the Xilinx ISE environment. The implementation of the acquisition stage is based on the parallel code-phase search method, which performs the cross-correlation operation by means of the fast Fourier transform (FFT). This process requires two forward transforms (for the incoming signal and the reference code) and one inverse Fourier transform (for the correlation result). To optimize the design, a single FFT block is used, since three blocks would consume a large share of an FPGA's resources. Instead of replicating the FFT block, the block is time-shared in the model through the use of buffers and switches; as a result, the amount of resources required by an FPGA implementation could be reduced considerably.
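
A minimal sketch of the parallel code-phase search in NumPy terms (a hypothetical random +/-1 code stands in for a real GPS C/A code): two forward FFTs and one inverse FFT yield the correlation for every code phase at once.

```python
import numpy as np

def acquire(signal, code):
    """Parallel code-phase search: circular cross-correlation of the incoming
    signal with the local code replica via FFT. Returns the correlation
    magnitude for every code phase; the peak gives the code delay."""
    S = np.fft.fft(signal)
    C = np.fft.fft(code)
    return np.abs(np.fft.ifft(S * np.conj(C)))

# Toy example with a hypothetical +/-1 spreading code (not a real C/A code)
rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=1023)
delay = 417
signal = np.roll(code, delay) + 0.5 * rng.standard_normal(1023)
corr = acquire(signal, code)
print("detected code phase:", int(np.argmax(corr)))  # -> 417
```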

Relevance:

10.00%

Publisher:

Abstract:

The study was performed at OCAS, the Steel Research Centre of ArcelorMittal for the Industry market. The major aim of this research was to obtain an optimized tensile-testing methodology with in-situ H-charging to reveal hydrogen embrittlement in various high strength steels. The second aim was the mechanical characterization of the hydrogen effect on high strength carbon steels with varying microstructure, i.e. ferrite-martensite and ferrite-bainite grades. The optimal parameters for H-charging which influence the tensile test results (sample geometry, type of electrolyte, charging method, steel type, etc.) were defined and applied to Slow Strain Rate testing, Incremental Step Loading and Constant Load Testing. To better understand the initiation and propagation of cracks during tensile testing with in-situ H-charging, and to correlate them with crystallographic orientation, some materials were analyzed in the SEM in combination with the EBSD technique. The introduction of a notch on the tensile samples permits significantly improved reproducibility of the results. Comparing the various steel grades reveals that Dual Phase (ferrite-martensite) steels are more sensitive to hydrogen-induced cracking than the FB (ferritic-bainitic) ones. This higher sensitivity to hydrogen was reflected in the reduced failure times, increased creep rates and enhanced crack initiation (SEM) of the Dual Phase steels in comparison with the FB steels.

Relevance:

10.00%

Publisher:

Abstract:

To describe the collective behavior of large ensembles of neurons in neuronal networks, a kinetic theory description was developed in [13, 12], where a macroscopic representation of the network dynamics was derived directly from the microscopic dynamics of individual neurons, modeled as conductance-based, linear, integrate-and-fire point neurons. A diffusion approximation then led to a nonlinear Fokker-Planck equation for the probability density function of neuronal membrane potentials and synaptic conductances. In this work, we propose a deterministic numerical scheme for a Fokker-Planck model of an excitatory-only network. Our numerical solver allows us to obtain the time evolution of probability distribution functions, and thus the evolution of all macroscopic quantities that are given by suitable moments of the probability density function. We show that this deterministic scheme is capable of capturing the bistability of stationary states observed in Monte Carlo simulations. Moreover, the transient behavior of the firing rates computed from the Fokker-Planck equation is analyzed in this bistable situation, where a bifurcation scenario (asynchronous convergence towards stationary states, periodic synchronous solutions, or damped oscillatory convergence towards stationary states) can be uncovered by increasing the strength of the excitatory coupling. Finally, the computation of moments of the probability distribution allows us to validate the applicability of a moment closure assumption used in [13] to further simplify the kinetic theory.
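
A much-reduced, hedged sketch of such a deterministic solver: a 1D drift-diffusion Fokker-Planck equation on the membrane potential alone, with upwind advection, centered diffusion and an absorbing threshold; the actual model couples potential and conductance and is not reproduced here.

```python
import numpy as np

# dp/dt = -d/dv[mu(v) p] + D d2p/dv2 on v in [0, 1], absorbing at threshold v = 1
nv, D = 200, 0.02
v = np.linspace(0.0, 1.0, nv)
dv = v[1] - v[0]
mu = 0.8 * (1.0 - v)                  # assumed drift, positive on [0, 1)
p = np.exp(-(v - 0.3) ** 2 / 0.005)
p /= p.sum() * dv                     # normalized initial density

dt = 0.4 * dv**2 / D                  # explicit stability limit
for _ in range(2000):
    flux = mu * p
    adv = np.zeros(nv)
    adv[1:] = (flux[1:] - flux[:-1]) / dv        # upwind advection (mu >= 0)
    diff = np.zeros(nv)
    diff[1:-1] = D * (p[2:] - 2 * p[1:-1] + p[:-2]) / dv**2
    p = p + dt * (-adv + diff)
    p[-1] = 0.0                                  # absorbing boundary at threshold

# firing rate estimated as the diffusive probability flux through the threshold
rate = D * (p[-2] - p[-1]) / dv
print("probability flux at threshold:", round(rate, 4))
```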

Relevance:

10.00%

Publisher:

Abstract:

The analysis of classical profitability criteria, such as the Internal Rate of Return or the Benefit/Cost Ratio, reveals that, contrary to what was assumed, they agree with the Net Present Value criterion if applied correctly. The same holds for the old criteria Net Final Value and Equivalent Annuity, and for the new criteria Maximum Benefit Delay and Cost Recovery Period. It is further shown that, when choosing between two mutually exclusive projects, applying the cited criteria to the difference (incremental) project is a sufficient condition for agreement with the Net Present Value criterion.
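
A hedged numerical illustration with invented cash flows: ranking two mutually exclusive projects by their own IRRs can disagree with NPV, while applying IRR to the incremental project B - A restores agreement with the NPV ranking.

```python
import numpy as np

def npv(rate, cf):
    return sum(c / (1 + rate) ** t for t, c in enumerate(cf))

def irr(cf):
    """IRR via the real positive roots of the NPV polynomial in x = 1/(1+r)."""
    roots = np.roots(cf[::-1])
    rates = [1 / r.real - 1 for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
    return min(r for r in rates if r > -1)

A = [-100, 60, 60]            # small project: higher IRR, lower NPV
B = [-500, 280, 280]          # large project: lower IRR, higher NPV
inc = [b - a for a, b in zip(A, B)]

r = 0.05                      # discount rate
print("NPV  A:", round(npv(r, A), 1), " B:", round(npv(r, B), 1))
print("IRR  A:", round(irr(A), 3), " B:", round(irr(B), 3))
print("incremental IRR (B-A):", round(irr(inc), 3), "> r, so choose B as NPV does")
```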

Relevance:

10.00%

Publisher:

Abstract:

Resource management in multi-core processors has grown in importance with the evolution of applications and architectures, but this management is very complex. For example, the same parallel application executed multiple times with the same input data on a single multi-core node can show highly variable execution times. Multiple hardware and software factors affect performance. The way hardware resources (compute and memory) are assigned to the processes or threads, possibly belonging to several competing applications, is fundamental in determining this performance. The gap between assigning resources without knowing the application's true needs and assigning them towards a specific goal keeps growing. The best way to perform this assignment is automatically, with minimal programmer intervention. It is worth stressing that the way an application executes on an architecture is not necessarily the most suitable one, and this situation can be improved through proper management of the available resources. Appropriate resource management can benefit both the application developer and the computing environment where the application runs, allowing a larger number of applications to execute with the same amount of resources, without requiring changes to the application or to its operating strategy. To propose resource-management policies, the behaviour of compute-intensive and memory-intensive applications was analyzed. This analysis was carried out by studying placement across cores, the need to use shared memory, the input workload size, the distribution of data within the processor, and the granularity of work. Our goal is to identify how these parameters influence execution efficiency, to identify bottlenecks, and to propose possible improvements. A further proposal is to adapt the strategies already used by the scheduler in order to obtain better results.
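
As an illustration of one placement parameter studied (assignment of work to cores), the sketch below times the same kernel under two hypothetical CPU affinity masks using the Linux-only os.sched_setaffinity; the core numbers are placeholders whose effect depends entirely on the machine's topology.

```python
import os
import time
import numpy as np

def kernel():
    """A small memory-heavy kernel whose runtime is sensitive to placement."""
    a = np.random.rand(2_000_000)
    for _ in range(20):
        a = a * 1.0001 + 0.5
    return a.sum()

# Placeholder masks: e.g. two cores sharing a cache vs. more distant cores.
for label, mask in (("cores {0,1}", {0, 1}), ("cores {0,2}", {0, 2})):
    os.sched_setaffinity(0, mask)        # Linux-only: pin this process
    t0 = time.perf_counter()
    kernel()
    print(label, "->", round(time.perf_counter() - t0, 3), "s")
```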