982 results for Applied Computing
Abstract:
Leading organizations across several different sectors characteristically measure their own performance in a systematic way. However, this practice is still unusual in agricultural enterprises, including the mechanization sector. Mechanization plays an important role in production costs, and knowing its performance is a key factor for the success of an agricultural enterprise. This work was motivated by the importance of performance measurement for management and by the impact of mechanization on production costs. Its aim is to propose an integrated performance measurement system to support agricultural management. The methodology was divided into two steps: adjustment of a conceptual model based on the Balanced Scorecard (BSC), and application of the model in a case study at a sugar cane mill. The adjustment and application of the conceptual model made it possible to obtain performance indices in a systematic way, associated with: costs and deadlines (traditionally used); control and improvement of the quality of operations and support processes; environmental preservation; safety; health; employee satisfaction; and development of information systems. The adjusted model supported the development of the performance measurement system for mechanized management systems, and the indices give an integrated view of the enterprise in relation to its strategic objectives.
Abstract:
The mechanisms involved in the control of growth in chickens are too complex to be explained by univariate analysis alone, because all related traits are biologically correlated. Therefore, we evaluated broiler chicken performance under a multivariate approach, using canonical discriminant analysis. A total of 1920 chicks from eight treatments, defined as the combinations of four broiler chicken strains (Arbor Acres, AgRoss 308, Cobb 500 and RX) and both sexes, were housed in 48 pens. Average feed intake, average live weight, feed conversion and carcass, breast and leg weights were obtained for days 1 to 42. Canonical discriminant analysis was implemented with the SAS® CANDISC procedure, and differences between treatments were assessed by the F-test (P < 0.05) on the squared Mahalanobis distances. The multivariate performance of all treatments could be easily visualised in a single graph of the first two canonical variables, which explained 96.49% of the total variation, produced with the SAS® CONELIP macro. A clear distinction between sexes was found, with males performing better than females. Among strains, Arbor Acres, AgRoss 308 and Cobb 500 (commercial) were better than RX (experimental). Evaluation of broiler chicken performance was facilitated by the reduction of the six original traits to only two canonical variables. Average live weight and carcass weight (first canonical variable) were the most important traits for discriminating treatments. The contrast between average feed intake and average live weight plus feed conversion (second canonical variable) was used to classify them. We suggest analysing performance data sets using canonical discriminant analysis.
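As a rough illustration of the kind of analysis described above, the sketch below performs a canonical discriminant analysis in Python with scikit-learn rather than the SAS® CANDISC procedure used in the paper; the input file, column names and plotting details are hypothetical placeholders.

import pandas as pd
import matplotlib.pyplot as plt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Six performance traits per pen; 'treatment' codes the strain-by-sex combination.
traits = ["feed_intake", "live_weight", "feed_conversion",
          "carcass_weight", "breast_weight", "leg_weight"]
data = pd.read_csv("broiler_performance.csv")      # hypothetical input file

lda = LinearDiscriminantAnalysis(solver="eigen")   # canonical discriminant analysis
scores = lda.fit_transform(data[traits], data["treatment"])

# Share of between-treatment variation explained by each canonical variable;
# in the paper the first two canonical variables explained 96.49%.
print(lda.explained_variance_ratio_[:2])

# Plot the treatments in the plane of the first two canonical variables.
plot_df = pd.DataFrame(scores[:, :2], columns=["can1", "can2"])
plot_df["treatment"] = data["treatment"].values
for trt, grp in plot_df.groupby("treatment"):
    plt.scatter(grp["can1"], grp["can2"], label=str(trt), s=10)
plt.xlabel("Canonical variable 1")
plt.ylabel("Canonical variable 2")
plt.legend()
plt.show()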
Abstract:
MDF panels made from eucalyptus wood fibers were manufactured both in the laboratory and on an industrial production line, and their apparent density profiles were determined by X-ray densitometry. The apparent density parameters of the MDF panels (maximum density of the top and bottom faces; mean and minimum density) were determined and compared. The results indicated that the density values of the MDF panels made in the laboratory and on the industrial line did not show statistically significant differences, indicating similarity in the fiber pressing phase of both panel types. Moreover, for the laboratory and production-line MDF panels, the maximum, mean and minimum density values showed statistically significant correlations. The determination of the density profile of MDF panels by X-ray densitometry is important for evaluating the pressing phase and other variables of the industrial production process, as well as for determining the panels' technological properties.
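For concreteness, the snippet below is a minimal sketch (not the authors' code) of how the apparent density parameters named in the abstract could be extracted from a through-thickness density profile; the profile array, units and face-region fraction are assumptions.

import numpy as np

def density_parameters(profile: np.ndarray, face_fraction: float = 0.2):
    """profile: apparent density (kg/m^3) sampled through the panel thickness."""
    n = len(profile)
    k = max(1, int(n * face_fraction))              # samples treated as each face region
    return {
        "max_top_face": float(profile[:k].max()),       # maximum density, top face
        "max_bottom_face": float(profile[-k:].max()),   # maximum density, bottom face
        "mean": float(profile.mean()),                  # mean (medium) density
        "min": float(profile.min()),                    # minimum (core) density
    }

# Example with a synthetic U-shaped profile typical of MDF (denser faces, lighter core).
x = np.linspace(-1, 1, 200)
profile = 650 + 250 * x**4                          # illustrative values only
print(density_parameters(profile))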
Abstract:
The purposes of this work were: (1) to comparatively evaluate the effects of hypromellose viscosity grade and content on ketoprofen release from matrix tablets, using the Bio-Dis and paddle apparatuses, and (2) to investigate the influence of the pH of the dissolution medium on drug release. Furthermore, since direct compression had not proven appropriate for obtaining the matrices under study, a further objective was (3) to evaluate the impact of granulation on the drug release process. Six formulations of ketoprofen matrix tablets were obtained by compression, with or without previous granulation, varying the content and viscosity grade of hypromellose. Dissolution tests were carried out at a fixed pH in each experiment with the paddle method (pH 4.5, 6.0, 6.8, or 7.2), while a pH gradient was used in the Bio-Dis (pH 1.2 to 7.2). The higher the hypromellose viscosity grade and content, the lower the amount of ketoprofen released in both apparatuses, the content effect being more pronounced. Drug dissolution increased with the pH of the medium owing to the drug's pH-dependent solubility. Granulation increased drug dissolution and modified the mechanism of the release process.
Abstract:
A partial pseudo-ternary phase diagram was studied for the cetyltrimethylammonium bromide/isooctane:hexanol:butanol/potassium phosphate buffer system, in which the two-phase region consisting of the reverse micelle phase (L-2) in equilibrium with the solvent is indicated. Based on these diagrams, two-phase reverse micelle systems with different compositions were prepared and used for the extraction and recovery of two enzymes, and the enzyme recovery yield was monitored. The enzymes glucose-6-phosphate dehydrogenase (G6PD) and xylose reductase (XR), obtained from the yeast Candida guilliermondii, were used in the extraction procedures. The recovery yield data indicate that micelles of different compositions give selective extraction of the enzymes. The method can thus be used to optimize enzyme extraction processes.
Abstract:
The aim of this study was to compare the effects of low-intensity laser therapy (LILT) and low-intensity light-emitting diode therapy (LEDT) on the treatment of lesioned Achilles tendons in rats. The experimental model consisted of a partial mechanical lesion of the deep portion of the right Achilles tendon in 90 rats. One hour after the lesion, the injured animals received applications of laser (685 or 830 nm) or LED (630 or 880 nm) light, and the same procedure was repeated at 24-h intervals for 10 days. The healing process and the deposition of collagen were evaluated by polarization microscopy analysis of the alignment and organization of the collagen bundles, through birefringence (optical retardation, OR). The results showed that LEDT-based treatments were effective and confirmed that LILT appears to be effective in the healing process. Despite the absence of coherence in LED light, tendon healing treatment with this light source was satisfactory and can replace treatments based on laser light applications. Applications of infrared laser at 830 nm and LED at 880 nm were more efficient when the aim was good organization, aggregation, and alignment of the collagen bundles during tendon healing. However, more research is needed to determine a safe and more efficient LED protocol.
Abstract:
Genetic recombination can produce heterogeneous phylogenetic histories within a set of homologous genes. Delineating recombination events is important in the study of molecular evolution, as inference of such events provides a clearer picture of the phylogenetic relationships among different gene sequences or genomes. Nevertheless, detecting recombination events can be a daunting task, as the performance of different recombination-detecting approaches can vary, depending on evolutionary events that take place after recombination. We recently evaluated the effects of post-recombination events on the prediction accuracy of recombination-detecting approaches using simulated nucleotide sequence data. The main conclusion, supported by other studies, is that one should not depend on a single method when searching for recombination events. In this paper, we introduce a two-phase strategy, applying three statistical measures to detect the occurrence of recombination events, and a Bayesian phylogenetic approach in delineating breakpoints of such events in nucleotide sequences. We evaluate the performance of these approaches using simulated data, and demonstrate the applicability of this strategy to empirical data. The two-phase strategy proves to be time-efficient when applied to large datasets, and yields high-confidence results.
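The screening stage of such a strategy can be pictured with a very simple example. The sketch below is not the method proposed in the paper; it is a plain sliding-window similarity scan over an alignment, of the kind often used to flag candidate breakpoints before they are delineated with explicit statistical and Bayesian phylogenetic analyses. Sequence names, window size and step are arbitrary assumptions.

def window_identity(a: str, b: str, start: int, size: int) -> float:
    """Fraction of identical sites between two aligned sequences in one window."""
    pairs = list(zip(a[start:start + size], b[start:start + size]))
    return sum(x == y for x, y in pairs) / len(pairs)

def similarity_scan(query: str, parent1: str, parent2: str,
                    window: int = 200, step: int = 50):
    """For each window, report which putative parent the query is closer to.
    A switch in the closer parent along the alignment suggests a breakpoint."""
    signal = []
    for start in range(0, len(query) - window + 1, step):
        id1 = window_identity(query, parent1, start, window)
        id2 = window_identity(query, parent2, start, window)
        signal.append((start, id1 - id2))   # >0: closer to parent1, <0: closer to parent2
    return signal

# Usage: positions where the sign of (id1 - id2) flips are candidate breakpoints,
# which would then be examined with explicit phylogenetic and statistical tests.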
Abstract:
The one-way quantum computing model introduced by Raussendorf and Briegel [Phys. Rev. Lett. 86, 5188 (2001)] shows that it is possible to quantum compute using only a fixed entangled resource known as a cluster state, and adaptive single-qubit measurements. This model is the basis for several practical proposals for quantum computation, including a promising proposal for optical quantum computation based on cluster states [M. A. Nielsen, Phys. Rev. Lett. (to be published), quant-ph/0402005]. A significant open question is whether such proposals are scalable in the presence of physically realistic noise. In this paper we prove two threshold theorems which show that scalable fault-tolerant quantum computation may be achieved in implementations based on cluster states, provided the noise in the implementations is below some constant threshold value. Our first threshold theorem applies to a class of implementations in which entangling gates are applied deterministically, but with a small amount of noise. We expect this threshold to be applicable in a wide variety of physical systems. Our second threshold theorem is specifically adapted to proposals such as the optical cluster-state proposal, in which nondeterministic entangling gates are used. A critical technical component of our proofs is two powerful theorems which relate the properties of noisy unitary operations restricted to act on a subspace of state space to extensions of those operations acting on the entire state space. We expect these theorems to have a variety of applications in other areas of quantum-information science.
Abstract:
One of the challenges in scientific visualization is to generate software libraries suitable for the large-scale data emerging from tera-scale simulations and instruments. We describe the efforts currently under way at SDSC and NPACI to address these challenges. The scope of the SDSC project spans data handling, graphics, visualization, and scientific application domains. Components of the research focus on the following areas: intelligent data storage, layout and handling, using an associated “Floor-Plan” (meta data); performance optimization on parallel architectures; extension of SDSC’s scalable, parallel, direct volume renderer to allow perspective viewing; and interactive rendering of fractional images (“imagelets”), which facilitates the examination of large datasets. These concepts are coordinated within a data-visualization pipeline, which operates on component data blocks sized to fit within the available computing resources. A key feature of the scheme is that the meta data, which tag the data blocks, can be propagated and applied consistently. This is possible at the disk level, in distributing the computations across parallel processors; in “imagelet” composition; and in feature tagging. The work reflects the emerging challenges and opportunities presented by the ongoing progress in high-performance computing (HPC) and the deployment of the data, computational, and visualization Grids.
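As a loose illustration of the block-plus-metadata idea described above (not SDSC's actual software), the following sketch tags each component data block with metadata that travels with it through the pipeline stages; all class, field and function names are hypothetical.

from dataclasses import dataclass, field
import numpy as np

@dataclass
class DataBlock:
    data: np.ndarray
    meta: dict = field(default_factory=dict)    # e.g. origin, spacing, feature tags

def load_block(origin, shape, spacing=1.0) -> DataBlock:
    """Stand-in for reading one block laid out on disk according to the floor plan."""
    block = np.random.rand(*shape).astype(np.float32)   # placeholder for real data
    return DataBlock(block, {"origin": origin, "spacing": spacing, "features": []})

def tag_features(block: DataBlock, threshold: float) -> DataBlock:
    """Example pipeline stage: add a feature tag without discarding earlier metadata."""
    if (block.data > threshold).any():
        block.meta["features"].append(f"values>{threshold}")
    return block

def render_imagelet(block: DataBlock) -> np.ndarray:
    """Stand-in renderer: a maximum-intensity projection of one block ('imagelet')."""
    return block.data.max(axis=0)

# The pipeline operates block by block, sized to fit available resources; metadata
# travels with each block so it can be applied consistently at every stage.
blocks = [load_block(origin=(0, 0, z), shape=(16, 64, 64)) for z in range(4)]
imagelets = [render_imagelet(tag_features(b, 0.99)) for b in blocks]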
Abstract:
The restructuring of the power industry has brought fundamental changes to both power system operation and planning. This paper presents a new planning method that uses a multi-objective optimization (MOOP) technique, as well as human knowledge, to expand the transmission network under open access schemes. The method starts with a candidate pool of feasible expansion plans. Subsequent selection of the best candidates is carried out through a MOOP approach, in which multiple objectives are tackled simultaneously, aiming to integrate market operation and planning as one unified process in the context of a deregulated system. Human knowledge is applied in both stages so that the selection reflects practical engineering and management concerns. The expansion plan from MOOP is assessed against reliability criteria before it is finalized. The proposed method has been tested on the IEEE 14-bus system, and relevant analyses and discussions are presented.
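The candidate-selection step can be illustrated with a small Pareto-dominance filter. This is only a sketch of the general MOOP idea, not the paper's algorithm; the objectives (investment cost, congestion cost, expected energy not supplied) and the candidate data are hypothetical.

from typing import Dict, List

Candidate = Dict[str, float]

def dominates(a: Candidate, b: Candidate, objectives: List[str]) -> bool:
    """True if plan a is at least as good as b on all objectives (minimization)
    and strictly better on at least one."""
    at_least_as_good = all(a[o] <= b[o] for o in objectives)
    strictly_better = any(a[o] < b[o] for o in objectives)
    return at_least_as_good and strictly_better

def pareto_front(pool: List[Candidate], objectives: List[str]) -> List[Candidate]:
    """Keep candidates not dominated by any other plan in the pool."""
    return [p for p in pool
            if not any(dominates(q, p, objectives) for q in pool if q is not p)]

# Hypothetical candidate expansion plans (costs in arbitrary units).
pool = [
    {"plan": 1, "investment": 120.0, "congestion": 35.0, "eens": 4.0},
    {"plan": 2, "investment": 150.0, "congestion": 20.0, "eens": 3.5},
    {"plan": 3, "investment": 160.0, "congestion": 38.0, "eens": 4.2},  # dominated by plan 1
]
front = pareto_front(pool, ["investment", "congestion", "eens"])
# The non-dominated plans would then be screened with human knowledge and reliability criteria.
print([c["plan"] for c in front])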
Abstract:
The BR algorithm is a novel and efficient method for finding all eigenvalues of upper Hessenberg matrices, and it has not previously been applied to eigenanalysis for power system small-signal stability. This paper analyzes the differences between the BR and QR algorithms, comparing their performance in terms of CPU time (based on stopping criteria) and storage requirements. The BR algorithm uses accelerating strategies to improve its performance when computing eigenvalues of narrowly banded, nearly tridiagonal upper Hessenberg matrices. These strategies significantly reduce the computation time at a reasonable level of precision. Compared with the QR algorithm, the BR algorithm requires fewer iteration steps and less storage space without sacrificing appropriate precision when solving eigenvalue problems of large-scale power systems. Numerical examples demonstrate the efficiency of the BR algorithm in eigenanalysis tasks for 39-, 68-, 115-, 300-, and 600-bus systems. The experimental results suggest that the BR algorithm is a more efficient algorithm for large-scale power system small-signal stability eigenanalysis.
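Since the BR algorithm is not available in standard numerical libraries, the sketch below only illustrates the problem setting: a placeholder state matrix is reduced to upper Hessenberg form, its eigenvalues are computed with the conventional QR-based LAPACK solver, and the spectrum is screened for small-signal stability. The matrix, its size and the thresholds are assumptions.

import numpy as np
from scipy.linalg import hessenberg, eigvals

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))    # placeholder for the linearized system state matrix

H = hessenberg(A)                      # upper Hessenberg form, the input to BR/QR iterations
lam = eigvals(H)                       # all eigenvalues via the QR-based solver
                                       # (the BR algorithm would operate on H as well)

# Small-signal stability screen: any eigenvalue with a positive real part is unstable;
# damping ratios of the oscillatory modes flag poorly damped behaviour.
unstable = lam[lam.real > 0]
osc = lam[np.abs(lam.imag) > 1e-8]
damping = -osc.real / np.abs(osc)
print(f"{len(unstable)} unstable modes, worst damping ratio {damping.min():.3f}")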