913 results for Variable design parameters
Abstract:
The construction of passive methane oxidation biosystems (PMOBs) is an economical and sustainable option for reducing methane emissions from landfills and the subsequent effects of global warming. PMOBs consist of two main layers: the methane oxidation layer (MOL) and the gas distribution layer (GDL). Methane oxidation takes place in the MOL through the biochemical reactions of methanotrophic bacteria, while the GDL is built beneath the MOL to intercept fugitive biogas emissions and distribute them at the base of the MOL. Fundamentally, the efficiency of a PMOB is defined by the methane oxidation efficiency in the MOL. It is therefore essential to provide adequate conditions for the bacterial activity of the methanotrophs. Beyond environmental parameters, the intensity and distribution of the biogas influence PMOB efficiency, and they can render a MOL material with a high capacity to host bacterial activity unusable for methane oxidation in the field. The capillary barrier effect along the interface between the GDL and the MOL can cause localized methane emissions, owing to the restriction or non-uniform distribution of the upward biogas flow at the base of the MOL. The main objective of this study is to incorporate the unsaturated hydraulic behavior of PMOBs into their design, in order to ensure adequate ease and distribution of biogas flow at the base of the MOL. The air permeability functions of the materials used to build the MOL of the experimental PMOBs at the St-Nicéphore landfill (Quebec, Canada), together with others from the technical literature, were studied to evaluate the unsaturated gas flow behavior of the materials and to identify the threshold of unrestricted gas migration.
This threshold was introduced as a design parameter with which the design criterion recommended here, namely the length of unrestricted gas migration (LUGM), was defined. The LUGM is the length along the GDL-MOL interface over which biogas can migrate through the MOL without restriction. Numerical simulations with SEEP/W were carried out to evaluate the effects of the interface slope, the parameters defining the water retention curve, and the hydraulic conductivity function of the MOL material on the LUGM value (representing the ease of biogas flow at the interface) and on the moisture distribution (and consequently that of the biogas). According to the simulation results, the saturated hydraulic conductivity and the pore-size distribution of the MOL material are the parameters with the greatest influence on the moisture distribution along the interface. The latter parameter also affects the degree of saturation and hence the ease of biogas flow at the base of the MOL. The dry density of the MOL material is another parameter controlling the ease of upward biogas flow. The main limitations of this study relate to the number of MOL materials tested and to the inability of SEEP/W to account for evapotranspiration. These limitations were mitigated, however, by making reasonable assumptions in the simulations and by drawing on data from the literature. Using the results of the experiments and the numerical simulations, design steps and considerations for selecting the MOL material and the interface slope were proposed. In this way, the unsaturated hydraulic behavior of the materials would be integrated into the design requirements for an efficient PMOB, so that the maximum possible methane oxidation capacity of the MOL material is exploited.
Abstract:
Crystallization is employed in different industrial processes. The method and operation can differ depending on the nature of the substances involved. The aim of this study is to examine the effect of various operating conditions on crystal properties within a chemical engineering design window, with a focus on ultrasound-assisted cooling crystallization. Continuous crystallization seeks to resolve batch-to-batch variation while offering fewer manufacturing steps and faster production times. Scale-up of continuous processes is considered straightforward compared with batch processes, since capacity is increased by extending the processing time in the same reactor. In cooling crystallization, ultrasound can be used to control the crystal properties. Different model compounds were used to define suitable process parameters for the modular crystallizer, using equal operating conditions in each module. A final temperature of 20 °C was employed in all experiments while the other operating conditions differed. The studied process parameters and the configuration of the crystallizer were manipulated to achieve continuous operation without crystal clogging along the crystallization path. The results of the continuous experiments were compared with the batch crystallization results, and the crystals were analysed with a Malvern Morphologi G3 instrument to determine crystal morphology and crystal size distribution (CSD). The modular crystallizer was operated successfully at three different residence times. At optimal process conditions, a longer residence time gives smaller crystals and a narrower CSD. Based on the findings, at a constant initial solution concentration, the residence time had a clear influence on crystal properties. The criterion of equal supersaturation in each module offered better results than the other cooling profiles.
The combination of continuous crystallization and ultrasound has great potential, compared with batch processes, to overcome clogging, deliver reproducible and narrow CSDs, specific crystal morphologies, and uniform particle sizes, and to eliminate milling stages.
Abstract:
The objective was to evaluate the effect of including additives in the ensiling of sugarcane (Saccharum officinarum L.) on the degradation of DM and cell-wall components and on ruminal fermentation parameters in cattle fed diets containing these silages. Five rumen-cannulated Nellore steers were used, allotted in a 5 × 5 Latin square design and fed diets with 65% forage (% DM). Five silages were evaluated (wet basis): control - sugarcane without additives; urea - sugarcane + 0.5% urea; benzoate - sugarcane + 0.1% sodium benzoate; LP - sugarcane inoculated with Lactobacillus plantarum (1 × 10^6 cfu/g fresh matter); LB - sugarcane inoculated with L. buchneri (3.6 × 10^5 cfu/g forage). The forage was stored in pit silos for 90 days before being fed to the animals. Ruminal parameters were moderately affected by the silages and strongly affected by sampling time. The mean molar concentrations of acetic, propionic, and butyric acids were 60.9, 19.3, and 10.2 mM, respectively. The ruminal environment provided by diets formulated with sugarcane silages was satisfactory and similar to that traditionally observed in sugarcane-based diets. The use of additives at ensiling had a non-significant influence on the ruminal degradability of DM and OM, and did not alter the ruminal degradability of the fibrous fraction. The additives applied to sugarcane resulted in small changes in most of the variables evaluated. Although the ruminal degradability of the silages was little affected by the additives, the values observed were close to those observed for fresh sugarcane.
Abstract:
Cells adapt to their changing world by sensing environmental cues and responding appropriately. This is made possible by complex cascades of biochemical signals that originate at the cell membrane. In the last decade it has become apparent that these signals can also arise from physical cues in the environment. Our motivation is to investigate the role of physical factors in the cellular response of the B lymphocyte. B cells patrol the body for signs of invading pathogens in the form of antigen on the surface of antigen presenting cells. Binding of antigen to surface proteins initiates biochemical signaling essential to the immune response. Once contact is made, the B cell spreads on the surface of the antigen presenting cell in order to gather as much antigen as possible. The physical mechanisms that govern this process are unexplored. In this research, we examine the roles of two physical parameters, antigen mobility and cell surface topography, in B cell spreading and activation. Both parameters are biologically relevant to immunogen design for vaccines, which can provide laterally mobile or immobile antigens and topographical surfaces. Surface topography also influences the formation of the cell-cell junction, and is biologically relevant because antigen presenting cells have highly convoluted membranes, resulting in variable topography. We found that B cell activation required the formation of antigen-receptor clusters and their translocation within the attachment plane. We showed that cells which failed to achieve these mobile clusters, owing to prohibited ligand mobility, were much less activation competent. To investigate the effect of topography, we used nano- and micro-patterned substrates on which B cells were allowed to spread and become activated.
We found that B cell spreading, actin dynamics, B cell receptor distribution and calcium signaling are dependent on the topographical patterning of the substrate. A quantitative understanding of cellular response to physical parameters is essential to uncover the fundamental mechanisms that drive B cell activation. The results of this research are highly applicable to the field of vaccine development and therapies for autoimmune diseases. Our studies of the physical aspects of lymphocyte activation will reveal the role these factors play in immunity, thus enabling their optimization for biological function and potentially enabling the production of more effective vaccines.
Abstract:
Biogeochemical-Argo is the extension of the Argo array of profiling floats to include floats that are equipped with biogeochemical sensors for pH, oxygen, nitrate, chlorophyll, suspended particles, and downwelling irradiance. Argo is a highly regarded, international program that measures the changing ocean temperature (heat content) and salinity with profiling floats distributed throughout the ocean. Newly developed sensors now allow profiling floats to also observe biogeochemical properties with sufficient accuracy for climate studies. This extension of Argo will enable an observing system that can determine the seasonal to decadal-scale variability in biological productivity, the supply of essential plant nutrients from deep waters to the sunlit surface layer, ocean acidification, hypoxia, and ocean uptake of CO2. Biogeochemical-Argo will drive a transformative shift in our ability to observe and predict the effects of climate change on ocean metabolism, carbon uptake, and living marine resource management. Presently, vast areas of the open ocean are sampled only once per decade or less, with sampling occurring mainly in summer. Our ability to detect changes in biogeochemical processes that may occur due to the warming and acidification driven by increasing atmospheric CO2, as well as by natural climate variability, is greatly hindered by this undersampling. In close synergy with satellite systems (which are effective at detecting global patterns for a few biogeochemical parameters, but only very close to the sea surface and in the absence of clouds), a global array of biogeochemical sensors would revolutionize our understanding of ocean carbon uptake, productivity, and deoxygenation. The array would reveal the biological, chemical, and physical events that control these processes.
Such a system would enable a new generation of global ocean prediction systems in support of carbon cycling, acidification, hypoxia, and harmful algal bloom studies, as well as the management of living marine resources. In order to prepare for a global Biogeochemical-Argo array, several prototype profiling float arrays have been developed at the regional scale by various countries and are now operating. Examples include regional arrays in the Southern Ocean (SOCCOM), the North Atlantic Sub-polar Gyre (remOcean), the Mediterranean Sea (NAOS), the Kuroshio region of the North Pacific (INBOX), and the Indian Ocean (IOBioArgo). For example, the SOCCOM program is deploying 200 profiling floats with biogeochemical sensors throughout the Southern Ocean, including areas covered seasonally with ice. The resulting data, which are publicly available in real time, are being linked with computer models to better understand the role of the Southern Ocean in influencing CO2 uptake, biological productivity, and nutrient supply to distant regions of the world ocean. The success of these regional projects has motivated a planning meeting to discuss the requirements for and applications of a global-scale Biogeochemical-Argo program. The meeting was held 11-13 January 2016 in Villefranche-sur-Mer, France, with attendees present from the eight nations now deploying Argo floats with biogeochemical sensors. In preparation, computer simulations and a variety of analyses were conducted to assess the resources required for the transition to a global-scale array. Based on these analyses and simulations, it was concluded that an array of about 1000 biogeochemical profiling floats would provide the needed resolution to greatly improve our understanding of biogeochemical processes and to enable significant improvement in ecosystem models.
With an endurance of four years for a Biogeochemical-Argo float, this system would require the procurement and deployment of 250 new floats per year to maintain a 1000-float array. The lifetime cost for a Biogeochemical-Argo float, including capital expense, calibration, data management, and data transmission, is about $100,000. A global Biogeochemical-Argo system would thus cost about $25,000,000 annually. In the present Argo paradigm, the US provides half of the profiling floats in the array, while the EU, Austral/Asia, and Canada share most of the remaining half. If this approach is adopted, the US cost for the Biogeochemical-Argo system would be ~$12,500,000 annually, and ~$6,250,000 each for the EU and for the Austral/Asia and Canada group. This includes no direct costs for ship time and presumes that float deployments can be carried out from future research cruises of opportunity, including, for example, the international GO-SHIP program (http://www.go-ship.org). The full-scale implementation of a global Biogeochemical-Argo system with 1000 floats is feasible within a decade. The successful, ongoing pilot projects have provided the foundation and start for such a system.
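The budget figures above follow from simple arithmetic; a quick sketch using only the numbers stated in the text makes the relationships explicit:

```python
# Budget arithmetic for a global Biogeochemical-Argo array, using only
# the figures stated in the text above (all amounts in USD).

array_size = 1000        # target number of active floats
endurance_years = 4      # expected lifetime of one float
lifetime_cost = 100_000  # per-float cost: capital, calibration, data handling

# Replacements needed each year to keep the array at full strength.
floats_per_year = array_size // endurance_years

# Total annual cost of the global system.
annual_cost = floats_per_year * lifetime_cost

# US share under the present Argo paradigm (half of the array).
us_share = annual_cost // 2

print(floats_per_year)  # 250
print(annual_cost)      # 25000000
print(us_share)         # 12500000
```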
Abstract:
Design aspects of a novel beam-reconfigurable planar series-fed array are addressed to achieve beam steering with frequency tunability over a relatively broad bandwidth. The design is possible thanks to the use of the complementary strip-slot, which is an innovative broadly matched microstrip radiator, and the careful selection of the phase shifter parameters.
Abstract:
Additive manufacturing, including fused deposition modeling (FDM), is transforming the built world and engineering education. Deep understanding of parts created through FDM technology has lagged behind its adoption in home, work, and academic environments. Properties of parts created from bulk materials through traditional manufacturing are understood well enough to predict their behavior accurately with analytical models. Unfortunately, additive manufacturing (AM) process parameters create anisotropy on a scale that fundamentally affects part properties. Understanding AM process parameters, implemented by programs called slicers, is necessary to predict part behavior. Investigating the slicer algorithms that control print parameters revealed stark differences in how part layers are generated. In this work, tensile testing experiments, including a full factorial design, determined that three key factors (width, thickness, and infill density) and their interactions significantly affect the tensile properties of 3D-printed test samples.
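As an illustration of the experimental design named above, a two-level full factorial enumerates every combination of factor levels; the numeric levels shown here are hypothetical placeholders, since the abstract does not state the actual values tested:

```python
from itertools import product

# Hypothetical two-level full factorial design over the three factors
# identified in the study; the numeric levels are illustrative only.
levels = {
    "width_mm": [10, 20],
    "thickness_mm": [2, 4],
    "infill_pct": [50, 100],
}

# One run per combination of factor levels: 2 x 2 x 2 = 8 runs.
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]

print(len(runs))  # 8
```

A full factorial is what allows the interaction effects mentioned in the abstract to be estimated, since every pairing of factor levels appears in the run list.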
Abstract:
This thesis details the design and applications of a terahertz (THz) frequency comb spectrometer. The spectrometer employs two offset locked Ti:Sapphire femtosecond oscillators with repetition rates of approximately 80 MHz, offset locked at 100 Hz to continuously sample a time delay of 12.5 ns at a maximum time delay resolution of 15.6 fs. These oscillators emit continuous pulse trains, allowing the generation of a THz pulse train by the master, or pump, oscillator and the sampling of this THz pulse train by the slave, or probe, oscillator via the electro-optic effect. Collecting a train of 16 consecutive THz pulses and taking the Fourier transform of this pulse train produces a decade-spanning frequency comb, from 0.25 to 2.5 THz, with a comb tooth width of 5 MHz and a comb tooth spacing of ~80 MHz. This frequency comb is suitable for Doppler-limited rotational spectroscopy of small molecules. Here, the data from 68 individual scans at slightly different pump oscillator repetition rates were combined, producing an interleaved THz frequency comb spectrum, with a maximum interval between comb teeth of 1.4 MHz, enabling THz frequency comb spectroscopy.
The accuracy of the THz frequency comb spectrometer was tested, achieving a root mean square error of 92 kHz when measuring selected absorption center frequencies of water vapor at 10 mTorr, and a root mean square error of 150 kHz in measurements of a K-stack of acetonitrile. This accuracy is sufficient for fitting measured transitions to a model Hamiltonian to generate a predicted spectrum for molecules of interest in astronomy and physical chemistry. As such, the rotational spectra of methanol and methanol-OD were acquired with the spectrometer. Absorptions from 1.3 THz to 2.0 THz were compared to JPL catalog data for methanol, and the spectrometer achieved an RMS error of 402 kHz, improving to 303 kHz when low signal-to-noise absorptions were excluded. This level of accuracy compares favorably with the ~100 kHz accuracy achieved by JPL frequency-multiplier submillimeter spectrometers. Additionally, the relative intensity response of the THz frequency comb spectrometer is linear across the entire decade-spanning bandwidth, making it the preferred instrument for recovering lineshapes and taking absolute intensity measurements in the THz region. The data acquired for methanol-OD are of comparable accuracy to the methanol data and may be used to refine the fit parameters for the predicted spectrum of methanol-OD.
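The interleaving figures quoted above are consistent with simple arithmetic; the sketch below assumes only the ~80 MHz repetition rate and the 68 scans given in the text, and says nothing about the exact repetition-rate steps used in the experiment:

```python
# Back-of-the-envelope check of the interleaved comb spacing.
rep_rate_hz = 80e6   # comb tooth spacing ~= repetition rate
n_scans = 68         # scans at slightly different repetition rates

# If the 68 scans divided each 80 MHz gap perfectly evenly, adjacent
# interleaved comb teeth would be separated by:
ideal_spacing_hz = rep_rate_hz / n_scans

print(round(ideal_spacing_hz / 1e6, 2))  # 1.18 (MHz)

# The reported maximum interval of 1.4 MHz is slightly larger than
# this ideal, consistent with nearly (but not perfectly) even coverage.
```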
Abstract:
This paper proposes and investigates a metaheuristic tabu search algorithm (TSA) that generates optimal or near-optimal solution sequences for the feedback length minimization problem (FLMP) associated with a design structure matrix (DSM). The FLMP is a non-linear combinatorial optimization problem in the NP-hard class, so finding an exact optimal solution is very hard and time-consuming, especially on medium and large problem instances. First, we introduce the subject and review the related literature and problem definitions. Using the tabu search method (TSM) paradigm, this paper presents a new tabu search algorithm that generates optimal or sub-optimal solutions for the feedback length minimization problem, using two neighborhoods: swapping two activities and shifting an activity to a different position. Furthermore, this paper includes numerical results for analyzing the performance of the proposed TSA and for setting proper values of its parameters. We then compare our results on benchmark problems with those already published in the literature. We conclude that the proposed tabu search algorithm is very promising: it outperforms the existing methods, and no other tabu search method for the FLMP has been reported in the literature. Applied to the process layer of multidimensional design structure matrices, the proposed tabu search algorithm proves to be a key optimization method for optimal product development.
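A minimal sketch of the kind of tabu search the abstract describes is given below. This is not the authors' implementation: the DSM encoding, the feedback-length objective, and the position-based tabu attributes are illustrative assumptions, chosen only to show the swap and shift neighborhoods and the basic tabu mechanics.

```python
import random

def feedback_length(seq, dsm):
    """Illustrative objective: sum of feedback coupling lengths.
    dsm[i][j] = 1 means activity i needs input from activity j.
    A coupling is 'feedback' when j appears after i in the sequence;
    its length is the distance between the two positions."""
    pos = {a: k for k, a in enumerate(seq)}
    n = len(seq)
    return sum(pos[j] - pos[i]
               for i in range(n) for j in range(n)
               if dsm[i][j] and pos[j] > pos[i])

def _neighbours(seq):
    """Swap neighborhood (exchange two activities) and shift
    neighborhood (move one activity to another position)."""
    n = len(seq)
    for a in range(n):
        for b in range(a + 1, n):
            nb = seq[:]
            nb[a], nb[b] = nb[b], nb[a]
            yield ('swap', a, b), nb
    for a in range(n):
        for b in range(n):
            if a != b:
                nb = seq[:]
                nb.insert(b, nb.pop(a))
                yield ('shift', a, b), nb

def tabu_search(dsm, iters=300, tenure=7, seed=0):
    rng = random.Random(seed)
    n = len(dsm)
    current = list(range(n))
    rng.shuffle(current)
    best, best_cost = current[:], feedback_length(current, dsm)
    tabu = {}  # move attribute -> iteration until which it is tabu
    for it in range(iters):
        best_move, best_nb, best_nb_cost = None, None, None
        for move, nb in _neighbours(current):
            c = feedback_length(nb, dsm)
            # Skip tabu moves unless they beat the best-known solution
            # (a standard aspiration criterion).
            if tabu.get(move, -1) >= it and c >= best_cost:
                continue
            if best_nb_cost is None or c < best_nb_cost:
                best_move, best_nb, best_nb_cost = move, nb, c
        if best_move is None:
            break  # every move tabu and none aspirational
        current = best_nb
        tabu[best_move] = it + tenure
        if best_nb_cost < best_cost:
            best, best_cost = current[:], best_nb_cost
    return best, best_cost
```

On a small chain-structured DSM (each activity needing input from its predecessor), the search recovers the natural order with zero feedback length; real FLMP instances would require incremental cost evaluation rather than recomputing the objective for every neighbor.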
Abstract:
The objective of this study is to identify optimal designs of converging-diverging supersonic and hypersonic nozzles that perform at maximum uniformity of thermodynamic and flow-field properties with respect to their average values at the nozzle exit. Since this is a multi-objective design optimization problem, the design variables used are parameters defining the shape of the nozzle. This work presents how variation of these parameters influences the nozzle exit flow non-uniformities. A Computational Fluid Dynamics (CFD) software package, ANSYS FLUENT, was used to simulate the compressible, viscous gas flow-field in forty nozzle shapes, including the heat transfer analysis. The results of two turbulence models, k-ε and k-ω, were computed and compared. With the analysis results obtained, the Response Surface Methodology (RSM) was applied to perform a multi-objective optimization. The optimization was performed with the ModeFrontier software package using Kriging and Radial Basis Function (RBF) response surfaces. Final Pareto-optimal nozzle shapes were then analyzed with ANSYS FLUENT to confirm the accuracy of the optimization process.
Abstract:
Career Academy instructors’ technical literacy is vital to the academic success of students. This nonexperimental ex post facto study examined the relationships between the level of technical literacy of instructors in career academies and student academic performance. It was also undertaken to explore the relationship between the pedagogical training of instructors and the academic performance of students. Out of a heterogeneous population of 564 teachers in six targeted schools, 136 teachers (26.0 %) responded to an online survey. The survey was designed to gather demographic and teaching experience data. Each demographic item was linked by researchers to teachers’ technology use in the classroom. Student achievement was measured by student learning gains as assessed by the reading section of the FCAT from the previous to the present school year. Linear and hierarchical regressions were conducted to examine the research questions. To clarify the possibility of teacher gender and teacher race/ethnic group differences by research variable, a series of one-way ANOVAs were conducted. As revealed by the ANOVA results, there were not statistically significant group differences in any of the research variables by teacher gender or teacher race/ethnicity. Greater student learning gains were associated with greater teacher technical expertise integrating computers and technology into the classroom, even after controlling for teacher attitude towards computers. Neither teacher attitude toward technology integration nor years of experience in integrating computers into the curriculum significantly predicted student learning gains in the regression models. Implications for HRD theory, research, and practice suggest that identifying teacher levels of technical literacy may help improve student academic performance by facilitating professional development strategies and new parameters for defining highly qualified instructors with 21st century skills. 
District professional development programs can benefit by increasing their offerings to include more computer and information communication technology courses. Teacher preparation programs can benefit by including technical literacy as part of their curriculum. State certification requirements could be expanded to include formal surveys to assess teacher use of technology.
Abstract:
Shearing is the process in which sheet metal is mechanically cut between two tools. Various shearing technologies are commonly used in the sheet metal industry, for example in cut-to-length lines, slitting lines, and end cropping. Shearing has speed and cost advantages over competing cutting methods like laser and plasma cutting, but involves large forces on the equipment and large strains in the sheet material. The constant development of sheet metals toward higher strength and formability leads to increased forces on the shearing equipment and tools. Shearing of new sheet materials implies that new suitable shearing parameters must be found. Investigating shearing parameters through live tests in production is expensive, and separate experiments are time-consuming and require specialized equipment. Studies involving a large number of parameters and coupled effects are therefore preferably performed by finite element simulations; accurate experimental data remain a prerequisite to validate such simulations, yet there is a shortage of such data. In industrial shearing processes, measured forces are always larger than the actual forces acting on the sheet, due to friction losses. Shearing also generates a force that attempts to separate the two tools, changing the shearing conditions through increased clearance between the tools. Tool clearance is also the most common shearing parameter to adjust, depending on material grade and sheet thickness, to moderate the required force and to control the final sheared edge geometry. In this work, an experimental procedure that provides a stable tool clearance together with accurate measurements of tool forces and tool displacements was designed, built, and evaluated. Important shearing parameters and demands on the experimental set-up were identified in a sensitivity analysis performed with finite element simulations under the assumption of plane strain.
With respect to tool clearance stability and accurate force measurements, a symmetric experiment with two simultaneous shears and internal balancing of the forces attempting to separate the tools was constructed. Steel sheets of different strength levels were sheared using the above-mentioned experimental set-up, with various tool clearances, sheet clamping configurations, and rake angles. Results showed that tool penetration before fracture decreased with increased material strength. When one side of the sheet was left unclamped and free to move, the required shearing force decreased but the force attempting to separate the two tools increased. Further, the maximum shearing force decreased and the rollover increased with increased tool clearance. Digital image correlation was applied to measure strains on the sheet surface. The obtained strain fields, together with a material model, were used to compute the stress state in the sheet. A comparison, up to crack initiation, of these experimental results with corresponding results from finite element simulations in three dimensions and at a plane strain approximation showed that effective strains on the surface are representative also for the bulk material. A simple model was successfully applied to calculate the tool forces in shearing with angled tools from forces measured with parallel tools. These results suggest that, with respect to tool forces, a plane strain approximation is valid also for angled tools, at least for small rake angles. In general terms, this study provides a stable symmetric experimental set-up with internal balancing of lateral forces, for accurate measurements of tool forces, tool displacements, and sheet deformations, to study the effects of important shearing parameters. The results give further insight into the strain and stress conditions at crack initiation during shearing, and can also be used to validate models of the shearing process.
Abstract:
Organizations today face strong pressure to adapt to a competitive world, with declining profits and constant uncertainty in their cash flow. These circumstances force organizations into continuous improvement, seeking new ways to manage their processes and resources. For service organizations in the telecommunications sector, one of the most important competitive advantages to obtain is productivity, since their earnings depend directly on the number of activities each employee can carry out. The challenge is to do more with less and with better quality. To achieve this, the need to manage human resources effectively arises, and this is where compensation systems take on an important role. The objective of this work is to design and apply a variable compensation model for a professional services company in the telecommunications sector and thereby contribute to the study of performance management and human talent in Colombia. Carrying out the work allowed the design and application of the variable compensation model to be documented in a telecommunications project in Colombia. The design drew on current trends in compensation programs and performance management theories to achieve a comprehensive model that enables sustained long-term growth and motivates the organization's most important resource, its human talent. The application also allowed problems and successes in the implementation of such models to be documented.
Abstract:
The aim of this study was to estimate genetic parameters to support the selection of bacuri progenies for a first cycle of recurrent selection, using the REML/BLUP (restricted maximum likelihood / best linear unbiased prediction) procedure to estimate variance components and genotypic values. Twelve variables were evaluated in a total of 210 fruits from 39 different seed trees, in a field trial laid out in an incomplete block design with clonal replicates within subplots. The three variables related to fruit development (weight, diameter, length) were strongly correlated, with fruit length showing the highest heritability and potential for use in indirect selection. Among the 39 progenies evaluated in this study, five show potential to compose the next cycle of recurrent selection, since they hold good selection differentials both for agrotechnological variables and for bacuri fruit development.