887 results for Evolutionary particle swarm optimizations


Relevance: 30.00%

Abstract:

Strut-and-tie models are widely used for certain types of structural elements in reinforced concrete and in regions with a complex stress state, called D regions, where the distribution of strains in the cross section is not linear. This paper introduces a numerical technique to determine strut-and-tie models using a variant of classical Evolutionary Structural Optimization called Smooth Evolutionary Structural Optimization. The basic idea of this technique is to identify the numerical flow of stresses generated in the structure, defining the strut-and-tie members in a more technical and rational way, and to quantify their values for subsequent structural design. The paper presents a performance index based on the evolutionary topology optimization method for automatically generating optimal strut-and-tie models in reinforced concrete structures with stress constraints. In the proposed approach, the element with the lowest von Mises stress is selected for removal, while the performance index is used to monitor the evolutionary optimization process. A comparative analysis of strut-and-tie models for beams is then presented, with examples from the literature that demonstrate the efficiency of this formulation. © 2013 Elsevier Ltd.
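The evolutionary removal loop described above can be sketched in a few lines. This is a toy sketch only: the stress values, removal ratio, and the particular performance-index definition are illustrative stand-ins, since a real implementation would recompute the stress field by finite elements after every removal step.

```python
import numpy as np

def evolutionary_removal(stresses, removal_ratio=0.1, min_fraction=0.4):
    """Toy evolutionary structural optimization: iteratively discard the
    elements carrying the lowest von Mises stress, monitoring a simple
    performance index. `stresses` stands in for a per-element stress
    field that a real implementation would recompute by FEM each step."""
    active = np.ones(len(stresses), dtype=bool)
    history = []
    while active.sum() > min_fraction * len(stresses):
        idx = np.where(active)[0]
        s = stresses[idx]
        # illustrative performance index: mean stress of surviving
        # elements normalised by the remaining volume fraction
        history.append(s.mean() / (active.sum() / len(stresses)))
        n_remove = max(1, int(removal_ratio * active.sum()))
        # drop the least-stressed elements: they contribute least to the
        # load path, i.e. to the emerging strut-and-tie model
        active[idx[np.argsort(s)[:n_remove]]] = False
    return active, history
```

The surviving `active` mask is the skeleton from which strut-and-tie members would be read off; the monotonically rising index signals that each removal step concentrates the load path.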

Relevance: 30.00%

Abstract:

Background: Swarm-founding epiponine wasps are an intriguing group of social insects in which colonies are polygynic (several queens share reproduction) and differentiation between castes is often not obvious. However, caste differences in some species may be more pronounced in later phases of the colony cycle. Results: Using morphometric analyses and multivariate statistics, it was found that caste differences in Metapolybia docilis are slight but more distinct in later stages of the colony cycle. Conclusions: Because differences in body parts are so slight, it is proposed that such variation may be due to differential growth rates of body parts rather than to queens being larger in size, similar to other previously observed epiponines.

Relevance: 30.00%

Abstract:

[EN] This Ph.D. thesis presents a general, robust methodology that may cover any type of 2D acoustic optimization problem. A procedure involving the coupling of Boundary Elements (BE) and Evolutionary Algorithms is proposed for systematic geometric modifications of road barriers that lead to designs with ever-increasing screening performance. Numerical simulations involving single- and multi-objective optimizations of noise barriers of varied nature are included in this document. The results disclosed justify the implementation of this methodology by leading to optimal solutions of previously defined topologies that, in general, greatly outperform the acoustic efficiency of the classical, widely used barrier designs normally erected near roads.
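The BE-plus-evolutionary-algorithm coupling reduces, on the optimizer's side, to an evolutionary loop over a black-box cost. A minimal sketch follows; the selection scheme, population sizes, and the stand-in cost function are assumptions, since in the thesis the objective would be the acoustic performance of a candidate barrier profile computed by a BEM solve.

```python
import random

def evolve_barrier(cost, bounds, pop_size=20, generations=50, sigma=0.1, seed=0):
    """Minimal elitist evolutionary loop of the kind coupled to a BEM
    solver in the text. `cost` is a black box: in the real method it
    would be the (negative) insertion loss computed by BEM for a
    candidate barrier geometry; here any callable works."""
    rng = random.Random(seed)

    def clip(v, lo, hi):
        return min(max(v, lo), hi)

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=cost)
        parents = scored[:pop_size // 2]          # truncation selection
        children = [[clip(g + rng.gauss(0, sigma), lo, hi)
                     for g, (lo, hi) in zip(rng.choice(parents), bounds)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                   # elitist survival
    return min(pop, key=cost)
```

Because every fitness evaluation is a full BEM solve in the real setting, keeping the population small and the survival elitist, as above, is the usual way to contain the computational budget.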

Relevance: 30.00%

Abstract:

We consider stochastic individual-based models for the social behaviour of groups of animals. In these models the trajectory of each animal is given by a stochastic differential equation with interaction. The social interaction is contained in the drift term of the SDE. We consider a global aggregation force and a short-range repulsion force. The repulsion range and strength are rescaled with the number of animals N. We show that as N tends to infinity the stochastic fluctuations disappear and a smoothed version of the empirical process converges uniformly towards the solution of a nonlinear, nonlocal partial differential equation of advection-reaction-diffusion type. The rescaling of the repulsion in the individual-based model implies that the corresponding term in the limit equation is local, while the aggregation term is non-local. Moreover, we discuss the effect of a predator on the system and derive an analogous convergence result. The predator acts as a repulsive force. Different laws of motion for the predator are considered.
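The individual-based model above can be simulated directly with an Euler-Maruyama scheme. The sketch below uses illustrative parameter values (in the paper the repulsion range and strength scale with N) and a simple linear attraction towards the group centroid, which is one concrete choice of global aggregation force, not necessarily the paper's.

```python
import numpy as np

def simulate_swarm(n=50, steps=200, dt=0.01, noise=0.1,
                   attract=1.0, repel=2.0, repel_range=0.2, seed=0):
    """Euler-Maruyama sketch of the interacting-SDE model: each animal
    feels a global aggregation drift towards the empirical mean and a
    short-range repulsion from close neighbours, plus Brownian noise."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, size=(n, 2))            # 2-D positions
    for _ in range(steps):
        diff = x[None, :, :] - x[:, None, :]       # diff[i, j] = x_j - x_i
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)             # no self-interaction
        # global aggregation: drift towards the empirical mean
        drift = attract * (x.mean(axis=0) - x)
        # short-range repulsion: push away from neighbours within range
        close = dist < repel_range
        push = np.where(close[..., None], diff / dist[..., None], 0.0)
        drift -= repel * push.sum(axis=1) / n
        x = x + drift * dt + noise * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x
```

As N grows, histograms of such simulated positions approximate the density solving the limiting advection-reaction-diffusion PDE.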

Relevance: 30.00%

Abstract:

We present ARGoS, a novel open source multi-robot simulator. The main design focus of ARGoS is the real-time simulation of large heterogeneous swarms of robots. Existing robot simulators obtain scalability by imposing limitations on their extensibility and on the accuracy of the robot models. By contrast, in ARGoS we pursue a deeply modular approach that allows the user both to easily add custom features and to allocate computational resources where needed by the experiment. A unique feature of ARGoS is the possibility to use multiple physics engines of different types and to assign them to different parts of the environment. Robots can migrate from one engine to another transparently. This feature enables entirely novel classes of optimizations to improve scalability and paves the way for a new approach to parallelism in robotics simulation. Results show that ARGoS can simulate about 10,000 simple wheeled robots 40% faster than real-time.

Relevance: 30.00%

Abstract:

A Network of Evolutionary Processors, or NEP, is a computational model inspired by the evolutionary model of cells, specifically by their multiplication rules. This inspiration makes the model a syntactic abstraction of the way cells manipulate information. In particular, a NEP defines a theoretical computing machine able to solve NP-complete problems efficiently in terms of time. In practice, NEPs simulated on conventional computing machines are expected to solve complex real-world problems (which require high scalability) at the cost of high spatial complexity. In the NEP model, cells are represented by words that encode their DNA sequences. Informally, at any moment of the system's computation, its evolutionary state is described as a collection of words, each of which represents a cell. These fixed moments of evolution are called configurations. As in the biological model, the words (cells) mutate and divide through simple bio-operations, but only the fit words (much as in natural selection) are kept for the next configuration. As a computational tool, a NEP defines a parallel and distributed architecture for symbolic processing, in other words, a network of language processors. Since the model was proposed to the scientific community in 2001, multiple variants have been developed, and their properties regarding computational completeness, efficiency, and universality have been widely studied and proved. Today, therefore, the theoretical NEP model can be considered mature.
The main motivation of this Bachelor's Thesis is to propose a practical approach that bridges the gap between the theoretical NEP model and a real implementation able to run on high-performance computing platforms, in order to solve the complex problems that today's society demands. Until now, the tools developed to simulate the NEP model, although correct and with satisfactory results, have normally been tied to their execution environment, whether through specific hardware or through problem-specific implementations. In this context, the fundamental purpose of this work is the development of Nepfix, a generic and extensible tool for executing any algorithm of a NEP model (or any of its variants), either locally, as a traditional application, or distributed using cloud services. Nepfix is a software application developed over 7 months and currently in its second iteration, the prototype phase having been left behind. Nepfix has been designed as a modular, self-contained application written in Java 8; that is, it does not require a specific execution environment (any Java virtual machine is a valid container). Nepfix consists of two components or modules. The first module corresponds to the execution of a NEP and is therefore the simulator. Its development took into account the current state of the model, that is, the definitions of the most common processors and filters that make up the NEP model family. In addition, this component offers flexibility in execution: the simulator's capabilities can be extended without modifying Nepfix by means of a scripting language.
As part of this component, a standard representation of the NEP model based on the JSON format has also been defined, together with a proposed representation and encoding of words, which is needed for communication between servers. Another important characteristic of this component is that it can be considered a standalone application, so the distribution and execution strategies are entirely independent. The second module corresponds to the distribution of Nepfix in the cloud. This development is the result of an R&D process with a considerable scientific component. The development of this module is worth highlighting not only for the expected practical results, but also for the research process that must be undertaken with this new perspective on executing natural computing systems. The main characteristic of applications that run in the cloud is that they are managed by the platform and are normally encapsulated in a container. In the case of Nepfix, this container is a Spring application that uses the HTTP or AMQP protocol to communicate with the other instances. As added value, Nepfix addresses two different implementation perspectives (developed in two different iterations) of the distribution and execution model, which have a very significant impact on the simulator's capabilities and restrictions. Specifically, the first iteration uses an asynchronous execution model. In this asynchronous perspective, the components of the NEP network (processors and filters) are treated as elements that react to the need to process a word.
This implementation is an optimization of a common topology in the NEP model that allows cloud tools to be used to achieve transparent scaling (with respect to load balancing between processors), but it produces undesired effects such as nondeterminism in the order of the results or the impossibility of efficiently distributing strongly interconnected networks. The second iteration, on the other hand, corresponds to the synchronous execution model. The elements of a NEP network follow a start-compute-synchronize cycle until the problem is solved. This synchronous perspective faithfully represents the theoretical NEP model, but the synchronization process is costly and requires additional infrastructure; specifically, a RabbitMQ message queue server is required. However, in this perspective the benefits outweigh the drawbacks for sufficiently large problems, since distribution is immediate (there are no restrictions), although the scaling process is not trivial. In short, the concept of Nepfix as a computational framework can be considered a success: the technology is viable, and the first results confirm that the characteristics originally sought have been achieved. Many fronts remain open for future research. This document proposes some approaches to solving the problems identified, such as error recovery and the dynamic division of a NEP into different subdomains. Other problems beyond the scope of this project remain open to future development, for example the standardization of word representation and optimizations in the execution of the synchronous model.
Finally, some preliminary results of this Bachelor's Thesis were recently presented as a scientific paper at the "International Work-Conference on Artificial Neural Networks (IWANN) 2015" and published in "Advances in Computational Intelligence", volume 9094 of Springer International Publishing's "Lecture Notes in Computer Science". This confirms that this work is more than a Bachelor's Thesis: it is only the beginning of work that may have a greater impact on the scientific community. Abstract: A Network of Evolutionary Processors (NEP) is a computational model inspired by the evolution of cell populations, which might model some properties of evolving cell communities at the syntactical level. NEP defines theoretical computing devices able to solve NP-complete problems in an efficient manner. In this model, cells are represented by words which encode their DNA sequences. Informally, at any moment of time, the evolutionary system is described by a collection of words, where each word represents one cell. Cells belong to species, and their community evolves according to mutations and division, which are defined by operations on words. Only those cells are accepted as surviving (correct) ones which are represented by a word in a given set of words, called the genotype space of the species. This feature is analogous to the natural process of evolution. Formally, NEP is based on an architecture for parallel and distributed processing, in other words, a network of language processors. Since the date when NEP was proposed, several extensions and variants have appeared, engendering a new set of models named Networks of Bio-inspired Processors (NBP). During this time, several works have proved the computational power of NBP; specifically, their efficiency, universality, and computational completeness have been thoroughly investigated. Therefore, we can say that the NEP model has reached its maturity.
The main motivation for this End of Grade project (EOG project, in short) is to propose a practical approximation that closes the gap between the theoretical NEP model and a practical implementation on high-performance computational platforms, in order to solve some of the high-complexity problems society requires today. Up until now, tools developed to simulate NEPs, while correct and successful, have usually been tightly coupled to the execution environment, using specific software frameworks (Hadoop) or direct hardware usage (GPUs). Within this context, the main purpose of this work is the development of Nepfix, a generic and extensible tool that aims to execute algorithms based on the NEP model and compatible variants, either locally, like a traditional application, or in a distributed cloud environment. Nepfix as an application was developed during a 7-month cycle and is undergoing its second iteration now that the prototype period is over. Nepfix is designed as a modular, self-contained application written in Java 8; that is, no additional external dependencies are required, and it does not rely on a specific execution environment: any JVM is a valid container. Nepfix is made of two components or modules. The first module corresponds to NEP execution and therefore simulation. During development, the current state of the theoretical model was used as a reference, including the most common filters and processors. Additionally, extensibility is provided by the use of Python as a scripting language to run custom logic. Along with the simulator, a definition language for NEPs has been specified, based on JSON, as well as a mechanism to represent words and their possible manipulations. The NEP simulator is isolated from distribution concerns; as mentioned before, different applications can include it as a dependency, and the distribution of NEPs is an example of this. The second module corresponds to executing Nepfix in the cloud.
The development carried a heavy R&D component, since this front had not been explored by other research groups until now. It is important to point out that the development of this module is not focused on results at this point in time; instead, we focus on the feasibility and discovery of this new perspective for executing natural computing systems, and NEPs specifically. The main property of cloud applications is that they are managed by the platform and encapsulated in a container. For Nepfix, a Spring application becomes the container, and the HTTP or AMQP protocols are used for communication with the rest of the instances. Different execution perspectives were studied; namely, asynchronous and synchronous models were developed for solving different kinds of problems using NEPs. Different limitations and restrictions manifest in both models and are explored in detail in the respective chapters. In conclusion, we can consider Nepfix as a computational framework successful: cloud technology is ready for the challenge, and the first results reassure us that the properties the Nepfix project pursued were met. Many investigation branches are left open for future work. In this EOG, implementation guidelines are proposed for some of them, like error recovery and dynamic NEP splitting. On the other hand, other interesting problems that were not in the scope of this project were identified during development, like word representation standardization and NEP model optimizations. As a confirmation that the results of this work can be useful to the scientific community, a preliminary version of this project was published at the International Work-Conference on Artificial Neural Networks (IWANN) in May 2015.
Development has not stopped since that point, and while Nepfix in its current state cannot be considered a final product, the most relevant ideas, possible problems, and solutions produced during the seven-month development cycle are worth gathering and presenting, giving meaning to this EOG work.
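The synchronous NEP cycle described above (evolution step, then a filtered communication step) can be sketched as follows. The rule and filter representations are deliberate simplifications of the model's bio-operations, and the tiny two-node network is a hypothetical example, not Nepfix's JSON format or API.

```python
def nep_step(processors, graph):
    """One synchronous NEP cycle: every processor applies its rewriting
    rules to its words (evolution), then words that pass a node's output
    filter travel along the graph to neighbours whose input filter
    accepts them (communication). For simplicity, words are copied, not
    removed, when they travel."""
    # evolution: apply every rule to every word, keeping all variants
    for p in processors:
        p["words"] |= {rule(w) for w in p["words"] for rule in p["rules"]}
    # communication: collect moves first, then apply them
    moved = [(dst, w)
             for src, dst in graph
             for w in processors[src]["words"]
             if processors[src]["out"](w) and processors[dst]["inp"](w)]
    for dst, w in moved:
        processors[dst]["words"].add(w)
    return processors

# toy network: node 0 rewrites 'a' -> 'b' one position at a time and
# exports fully rewritten words to node 1, which collects them
processors = [
    {"words": {"aaa"}, "rules": [lambda w: w.replace("a", "b", 1)],
     "out": lambda w: "a" not in w, "inp": lambda w: True},
    {"words": set(), "rules": [], "out": lambda w: False, "inp": lambda w: True},
]
graph = [(0, 1)]
for _ in range(3):
    nep_step(processors, graph)
```

In the synchronous cloud deployment the text describes, the barrier between the two phases of `nep_step` is exactly what the RabbitMQ-based synchronization has to enforce across machines.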

Relevance: 30.00%

Abstract:

The mechanisms involved in the integration of proteins into the thylakoid membrane are largely unknown. However, many of the steps of this process for the light-harvesting chlorophyll a/b protein (LHCP) have been described and reconstituted in vitro. LHCP is synthesized as a precursor in the cytosol and posttranslationally imported into chloroplasts. Upon translocation across the envelope membranes, the N-terminal transit peptide is cleaved, and the apoprotein is assembled into a soluble "transit complex" and then integrated into the thylakoid membrane via three transmembrane helices. Here we show that 54CP, a chloroplast homologue of the 54-kDa subunit of the mammalian signal recognition particle (SRP54), is essential for transit complex formation, is present in the complex, and is required for LHCP integration into the thylakoid membrane. Our data indicate that 54CP functions posttranslationally as a molecular chaperone and potentially pilots LHCP to the thylakoids. These results demonstrate that one of several pathways for protein routing to the thylakoids is homologous to the SRP pathway and point to a common evolutionary origin for the protein transport systems of the endoplasmic reticulum and the thylakoid membrane.

Relevance: 30.00%

Abstract:

A 17 month record of vertical particle flux was obtained; annual fluxes of dry weight, carbonate and organic carbon were 25.8, 9.4 and 2.4 g/m**2/yr, respectively. Parallel to the trap deployments, pelagic system structure was recorded with high vertical and temporal resolution. Within a distinct seasonal cycle of vertical particle flux, zooplankton faecal pellets of various sizes, shapes and contents were collected by the traps in different proportions and quantities throughout the year (range: 0-4,500 10**3/m**2/d). The remains of different groups of organisms showed distinct seasonal variations in abundance. In early summer there was a small maximum in the diatom flux, followed by pulses of tintinnids, radiolarians, foraminiferans and pteropods between July and November. Food web interactions in the water column were important in controlling the quality and quantity of sinking material. For example, changes in the population structure of dominant herbivores, the breakdown of regenerating summer populations of microflagellates and protozooplankton, and the collapse of a pteropod-dominated community each resulted in marked sedimentation pulses. These data from the Norwegian Sea indicate the mechanisms that either accelerate or counteract the loss of material via sedimentation. They involve variations in the structure of the pelagic system and operate on long (e.g. annual plankton succession) and short (e.g. the end of new production, sporadic grazing by swarm feeders) time scales. Investigating the water column with high temporal resolution, in parallel with drifting sediment trap deployments and shipboard experiments with the dominant zooplankters, is a promising approach for better understanding both the origin and the fate of material sinking to the sea floor.

Relevance: 20.00%

Abstract:

Current data indicate that the size of high-density lipoprotein (HDL) may be considered an important marker for cardiovascular disease risk. We established reference values of mean HDL size and volume in an asymptomatic representative Brazilian population sample (n=590) and their associations with metabolic parameters by gender. Size and volume were determined in HDL isolated from plasma by polyethyleneglycol precipitation of apoB-containing lipoproteins and measured using the dynamic light scattering (DLS) technique. Although the gender and age distributions agreed with other studies, the mean HDL size reference value was slightly lower than in some other populations. Both HDL size and volume were influenced by gender and varied according to age. HDL size was associated with age and HDL-C (total population); non-white ethnicity and CETP inversely (females); HDL-C and PLTP mass (males). On the other hand, HDL volume was determined only by HDL-C (total population and in both genders) and by PLTP mass (males). The reference values for mean HDL size and volume using the DLS technique were established in an asymptomatic and representative Brazilian population sample, as well as their related metabolic factors. HDL-C was a major determinant of HDL size and volume, which were differently modulated in females and in males.
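DLS sizing of particles such as HDL rests on the Stokes-Einstein relation: the instrument measures a diffusion coefficient from scattered-light fluctuations, and the hydrodynamic diameter follows. The sketch below shows that conversion; the numeric inputs (water viscosity at 25 °C, a diffusion coefficient in the range of ~10 nm particles) are illustrative, not values from the study.

```python
import math

def hydrodynamic_diameter(diff_coeff, temp_k=298.15, viscosity=8.9e-4):
    """Stokes-Einstein relation underlying DLS sizing:
    d_h = k_B * T / (3 * pi * eta * D),
    with D the measured translational diffusion coefficient (m^2/s),
    eta the solvent viscosity (Pa s), and T in kelvin."""
    k_b = 1.380649e-23                      # Boltzmann constant, J/K
    return k_b * temp_k / (3 * math.pi * viscosity * diff_coeff)

# a particle diffusing at 4.3e-11 m^2/s in water at 25 degC
d_m = hydrodynamic_diameter(4.3e-11)        # hydrodynamic diameter, metres
```

The inverse proportionality between diameter and diffusion coefficient is why slower intensity fluctuations in DLS indicate larger particles.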

Relevance: 20.00%

Abstract:

Evolving interfaces were initially focused on solutions to scientific problems in Fluid Dynamics. With the advent of the more robust modeling provided by the Level Set method, their original boundaries of applicability were extended. Specifically in the Geometric Modeling area, works published until then relating Level Set to three-dimensional surface reconstruction centered on reconstruction from a data cloud dispersed in space; the approach based on parallel planar slices transversal to the object to be reconstructed is still incipient. Based on this fact, the present work analyses the feasibility of Level Set for three-dimensional reconstruction, offering a methodology that integrates ideas already proved efficient in the literature with proposals to handle the inherent limitations of the method not yet satisfactorily treated, in particular the excessive smoothing of fine contour features evolving under Level Set. For this, the application of the Particle Level Set variant is suggested as a solution, for its proven intrinsic capability to preserve the mass of dynamic fronts. Finally, synthetic and real data sets are used to evaluate the presented three-dimensional surface reconstruction methodology qualitatively.
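The core level set idea the abstract builds on is that the interface is never tracked explicitly: it is the zero crossing of a function phi evolved on a grid. A minimal 1-D advection sketch follows (first-order upwind, periodic grid, illustrative values). The Particle Level Set variant recommended in the text would additionally seed marker particles near the front and use them to correct phi where the grid solution loses mass; that correction step is omitted here.

```python
import numpy as np

def advect_level_set(phi, speed, dx, dt, steps):
    """Move the interface implicitly by advecting phi with a constant
    speed, using a first-order upwind difference on a periodic grid.
    The interface position is recovered as the zero crossing of phi."""
    phi = phi.copy()
    for _ in range(steps):
        if speed > 0:                              # upwind difference
            dphi = (phi - np.roll(phi, 1)) / dx
        else:
            dphi = (np.roll(phi, -1) - phi) / dx
        phi = phi - dt * speed * dphi
    return phi

x = np.linspace(0.0, 1.0, 101)
phi0 = x - 0.3                    # interface (zero crossing) at x = 0.3
# transport the front at speed 0.5 for time 0.4: it should reach x = 0.5
phi = advect_level_set(phi0, speed=0.5, dx=0.01, dt=0.01, steps=40)
```

The excessive smoothing the abstract criticizes appears when curvature-dependent terms are added to this evolution; the particle correction is what restores the fine features.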


Relevance: 20.00%

Abstract:

Aims. We determine the age and mass of the three best solar twin candidates in the open cluster M 67 through lithium evolutionary models. Methods. We computed a grid of evolutionary models with non-standard mixing at metallicity [Fe/H] = 0.01 with the Toulouse-Geneva evolution code for a range of stellar masses. We estimated the mass and age of 10 solar analogs belonging to the open cluster M 67, and made a detailed study of the three solar twins of the sample: YPB637, YPB1194, and YPB1787. Results. We obtained a very accurate estimate of the mass of our solar analogs in M 67 by interpolating in the grid of evolutionary models. The three solar twins allowed us to estimate the age of the open cluster as 3.87 +0.55/-0.66 Gyr, better constrained than former estimates. Conclusions. Our results show that the three solar twin candidates have one solar mass within the errors and that M 67 has a solar age within the errors, validating its use as a solar proxy. M 67 is an important cluster when searching for solar twins.
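Placing a star in a grid of evolutionary models, as done above, amounts to finding the (mass, age) node whose predicted observables best match the measurements. A toy version follows; the grid values are synthetic linear trends, not output of the Toulouse-Geneva code, and the real analysis interpolates smoothly between tracks and also uses the Li abundance.

```python
import numpy as np

def locate_in_grid(masses, ages, grid_teff, grid_lum, teff_obs, lum_obs):
    """Pick the (mass, age) grid node whose predicted Teff and
    luminosity lie closest to the observed values in a chi-square
    sense, with illustrative measurement scales of 50 K and 0.05 L."""
    chi2 = ((grid_teff - teff_obs) / 50.0) ** 2 + \
           ((grid_lum - lum_obs) / 0.05) ** 2
    i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
    return masses[i], ages[j]

# synthetic grid: Teff rises with mass and falls with age (illustrative)
masses = np.linspace(0.9, 1.1, 21)            # solar masses
ages = np.linspace(1.0, 6.0, 51)              # Gyr
M, A = np.meshgrid(masses, ages, indexing="ij")
grid_teff = 5777 + 4000 * (M - 1.0) - 20 * (A - 4.6)
grid_lum = 1.0 + 4.0 * (M - 1.0) + 0.05 * (A - 4.6)
```

With several cluster members located this way, the common-age constraint of the cluster is what sharpens the age estimate beyond what a single star allows.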

Relevance: 20.00%

Abstract:

Context. Compact groups of galaxies are entities that have high densities of galaxies and serve as laboratories to study galaxy interactions, intergalactic star formation and galaxy evolution. Aims. The main goal of this study is to search for young objects in the intragroup medium of seven compact groups of galaxies: HCG 2, 7, 22, 23, 92, 100 and NGC 92 as well as to evaluate the stage of interaction of each group. Methods. We used Fabry-Perot velocity fields and rotation curves together with GALEX NUV and FUV images and optical R-band and HI maps. Results. (i) HCG 7 and HCG 23 are in early stages of interaction; (ii) HCG 2 and HCG 22 are mildly interacting; and (iii) HCG 92, HCG 100 and NGC 92 are in late stages of evolution. We find that all three evolved groups contain populations of young blue objects in the intragroup medium, consistent with ages < 100 Myr, of which several are younger than < 10 Myr. We also report the discovery of a tidal dwarf galaxy candidate in the tail of NGC 92. These three groups, besides containing galaxies that have peculiar velocity fields, also show extended HI tails. Conclusions. Our results indicate that the advanced stage of evolution of a group, together with the presence of intragroup HI clouds, may lead to star formation in the intragroup medium. A table containing all intergalactic HII regions and tidal dwarf galaxies confirmed to date is appended.

Relevance: 20.00%

Abstract:

Context. Tight binaries discovered in young, nearby associations are ideal targets for providing dynamical mass measurements to test the physics of evolutionary models at young ages and very low masses. Aims. We report the binarity of TWA22 for the first time. We aim at monitoring the orbit of this young and tight system to determine its total dynamical mass using an accurate distance determination. We also intend to characterize the physical properties (luminosity, effective temperature, and surface gravity) of each component based on near-infrared photometric and spectroscopic observations. Methods. We used the adaptive-optics assisted imager NACO to resolve the components, to monitor the complete orbit, and to obtain the relative near-infrared photometry of TWA22 AB. The adaptive-optics assisted integral field spectrometer SINFONI was also used to obtain medium-resolution (R(lambda) = 1500-2000) spectra in the JHK bands. Comparison with empirical and synthetic libraries was necessary for deriving the spectral type, the effective temperature, and the surface gravity of each component of the system. Results. Based on an accurate trigonometric distance (17.5 +/- 0.2 pc) determination, we infer a total dynamical mass of 220 +/- 21 M(Jup) for the system. From the complete set of spectra, we find an effective temperature T(eff) = 2900(-200)(+200) K for TWA22 A and T(eff) = 2900(-100)(+200) K for TWA22 B, and surface gravities between 4.0 and 5.5 dex. From our photometry and an M6 +/- 1 spectral type for both components, we find luminosities of log(L/L(circle dot)) = -2.11 +/- 0.13 dex and log(L/L(circle dot)) = -2.30 +/- 0.16 dex for TWA22 A and B, respectively. By comparing these parameters with evolutionary models, we question the age and the multiplicity of this system. We also discuss a possible underestimation of the mass predicted by evolutionary models for young stars close to the substellar boundary.
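The dynamical mass of a visual binary like this comes from Kepler's third law: the angular semi-major axis plus a trigonometric distance give the physical semi-major axis, and with the orbital period the total system mass follows. The sketch below shows the arithmetic; the input numbers are round illustrative values, not the paper's fitted orbital elements.

```python
def total_dynamical_mass(a_arcsec, period_yr, distance_pc):
    """Kepler's third law for a resolved binary:
    M_tot [M_sun] = a[AU]^3 / P[yr]^2, with the small-angle relation
    a[AU] = a[arcsec] * d[pc] converting the measured angular
    semi-major axis to a physical one. Result returned in M_Jup."""
    a_au = a_arcsec * distance_pc           # small-angle: arcsec * pc = AU
    m_sun = a_au ** 3 / period_yr ** 2      # Kepler III in solar units
    return m_sun * 1047.57                  # 1 M_sun ~ 1047.57 M_Jup

# e.g. a 0.1 arcsec orbit at 17.5 pc with a 5-year period:
m_jup = total_dynamical_mass(0.1, 5.0, 17.5)
```

Because the distance enters cubed, the tight 17.5 +/- 0.2 pc parallax quoted above is what makes the ~10% mass precision possible.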

Relevance: 20.00%

Abstract:

Context. Previous analyses of lithium abundances in main sequence and red giant stars have revealed the action of mixing mechanisms other than convection in stellar interiors. Beryllium abundances in stars with Li abundance determinations can offer valuable complementary information on the nature of these mechanisms. Aims. Our aim is to derive Be abundances along the whole evolutionary sequence of an open cluster. We focus on the well-studied open cluster IC 4651. These Be abundances are used with previously determined Li abundances, in the same sample stars, to investigate the mixing mechanisms in a range of stellar masses and evolutionary stages. Methods. Atmospheric parameters were adopted from a previous abundance analysis by the same authors. New Be abundances have been determined from high-resolution, high signal-to-noise UVES spectra using spectrum synthesis and model atmospheres. The careful synthetic modeling of the Be lines region is used to calculate reliable abundances in rapidly rotating stars. The observed behavior of Be and Li is compared to theoretical predictions from stellar models including rotation-induced mixing, internal gravity waves, atomic diffusion, and thermohaline mixing. Results. Beryllium is detected in all the main sequence and turn-off sample stars, both slow- and fast-rotating stars, including the Li-dip stars, but is not detected in the red giants. Confirming previous results, we find that the Li dip is also a Be dip, although the depletion of Be is more modest than for Li in the corresponding effective temperature range. For post-main-sequence stars, the Be dilution starts earlier within the Hertzsprung gap than expected from classical predictions, as does the Li dilution. A clear dispersion in the Be abundances is also observed. Theoretical stellar models including the hydrodynamical transport processes mentioned above are able to reproduce all the observed features well. 
These results show a good theoretical understanding of the Li and Be behavior along the color-magnitude diagram of this intermediate-age cluster for stars more massive than 1.2 M(circle dot).