835 results for sampling methodology
Abstract:
Fifty bursae of Fabricius (BF) were examined by conventional optical microscopy, and digital images were acquired and processed using Matlab® 6.5 software. An Artificial Neural Network (ANN) was generated using Neuroshell® Classifier software, and the optical and digital data were compared. The ANN produced a classification comparable to the optical scores and correctly classified the majority of the follicles, reaching a sensitivity of 89% and a specificity of 96%. When the follicles were scored and grouped in a binary fashion, the sensitivity increased to 90% and the specificity reached its maximum value of 92%. These results demonstrate that digital image analysis combined with an ANN is a useful tool for the pathological classification of BF lymphoid depletion. In addition, it provides objective results that allow the magnitude of diagnostic and classification error to be measured, making comparisons between databases feasible.
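The sensitivity and specificity figures above come from a standard confusion-matrix calculation. A minimal sketch, using invented toy labels rather than the study's follicle data:

```python
# Sensitivity and specificity of a binary classifier from predicted vs.
# reference labels. The label lists below are illustrative toy data.

def sensitivity_specificity(y_true, y_pred, positive=1):
    """Return (sensitivity, specificity) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    return tp / (tp + fn), tn / (tn + fp)

# toy example: 1 = depleted follicle, 0 = normal
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(round(sens, 2), round(spec, 2))  # 0.75 0.83
```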
Abstract:
The aim of this study was to compare two methods of tear sampling for protein quantification. Tear samples were collected from 29 healthy dogs (58 eyes) using Schirmer tear test (STT) strips and microcapillary tubes. The samples were frozen at -80°C and analyzed by the Bradford method. Results were compared by Student's t test. The mean protein concentration and standard deviation of tears collected with microcapillary tubes were 4.45 mg/mL ±0.35 and 4.52 mg/mL ±0.29 for the right and left eyes, respectively. The mean protein concentration and standard deviation of tears collected with STT strips were 54.5 mg/mL ±0.63 and 54.15 mg/mL ±0.65 for the right and left eyes, respectively. A statistically significant difference (p<0.001) was found between the methods. Under the conditions of this study, the mean protein concentration measured by the Bradford test in tear samples obtained with STT strips was higher than that obtained with microcapillary tubes. Reference values for tear protein concentration should therefore be interpreted according to the method used to collect the samples.
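The comparison above rests on a pooled two-sample Student's t statistic. A generic sketch with invented values on roughly the study's scale (not the reported measurements):

```python
# Pooled-variance two-sample t statistic; the concentration values below
# are synthetic stand-ins, not the study's data.
from statistics import mean, stdev
from math import sqrt

def t_statistic(a, b):
    """Student's t for two independent samples with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

microcap = [4.1, 4.5, 4.3, 4.6, 4.4]      # mg/mL, hypothetical
strip = [53.9, 54.6, 54.2, 55.0, 54.3]    # mg/mL, hypothetical
t = t_statistic(microcap, strip)
print(round(t, 1))  # strongly negative: strip values far exceed microcapillary
```

A magnitude this large would correspond to p < 0.001 for any reasonable degrees of freedom, matching the direction of the reported result.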
Abstract:
This work describes the methodology, basic procedures, and instrumentation employed by the Solar Energy Laboratory at Universidade Federal do Rio Grande do Sul for the determination of current-voltage (I-V) characteristic curves of photovoltaic modules. Following this methodology, I-V characteristic curves were acquired for several modules under diverse conditions. The main electrical parameters were determined, and the influence of temperature and irradiance on module performance was quantified. Most of the tested modules presented output power values considerably lower than those specified by the manufacturers. The described hardware allows the testing of modules with open-circuit voltage up to 50 V and short-circuit current up to 8 A.
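I-V curves like those acquired here are commonly fitted with the single-diode model. A sketch that solves the implicit equation by bisection; all parameter values are illustrative, not the laboratory's:

```python
# Single-diode PV model:
#   I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh
# Solved for I at a given V by bisection (f is monotonically decreasing in I).
# Parameters are invented for a ~36-cell module, not measured values.
from math import exp

def iv_current(V, Iph=8.0, I0=1e-9, Rs=0.3, Rsh=300.0, n=1.3, Ns=36, Vt=0.0257):
    def f(I):
        return Iph - I0 * (exp((V + I * Rs) / (n * Ns * Vt)) - 1) \
               - (V + I * Rs) / Rsh - I
    lo, hi = -2.0, Iph + 1.0
    for _ in range(80):                 # bisection to machine precision
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

isc = iv_current(0.0)                   # short-circuit current
print(round(isc, 2))  # ≈ 7.99, close to Iph
```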
Abstract:
This work presents recent results of a design methodology used to estimate the positioning deviation of a gantry (Cartesian) manipulator, arising mainly from structural elastic deformation of components under operational conditions. The case-study manipulator is of the gantry type, with basic dimensions of 1.53 m x 0.97 m x 1.38 m. The dimensions used for calculating the effective workspace swept by the end-effector path are 1 m x 0.5 m x 0.5 m. The manipulator is composed of four basic modules, defined as module X, module Y, module Z, and the terminal arm, to which the end-effector is connected. Each controlled axis performs a linear-parabolic positioning movement. The path-planning algorithm takes the maximum velocity and the total distance as input parameters for a given task; the acceleration and deceleration times are equal. The Denavit-Hartenberg parameterization method is used in the kinematic model of the manipulator. The gantry manipulator can be modeled as four rigid bodies with three translational degrees of freedom, connected as an open kinematic chain. Dynamic analyses were performed using inertial parameters such as the mass, inertia, and center-of-gravity position of each module. These parameters are essential for correct dynamic modelling of the manipulator, given the multiple possibilities of motion and the manipulation of objects with different masses. The dynamic analysis consists of a mathematical model of the static and dynamic interactions among the modules. The structural deformations are computed with the finite element method (FEM).
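A linear-parabolic profile with equal acceleration and deceleration times is a trapezoidal velocity profile: parabolic position blends at both ends, a linear segment in between. A sketch per axis, with illustrative numbers (not the article's):

```python
# Linear-parabolic (trapezoidal-velocity) point-to-point profile with equal
# blend times t_a at both ends. Stroke and velocity values are illustrative.

def linear_parabolic_position(t, distance, v_max, t_a):
    """Position along a 1-D path at time t; requires distance >= v_max * t_a."""
    a = v_max / t_a                       # constant acceleration in the blends
    t_total = distance / v_max + t_a      # total travel time
    if t <= t_a:                          # acceleration blend (parabolic)
        return 0.5 * a * t ** 2
    if t <= t_total - t_a:                # constant-velocity segment (linear)
        return 0.5 * a * t_a ** 2 + v_max * (t - t_a)
    if t <= t_total:                      # deceleration blend (parabolic)
        td = t_total - t
        return distance - 0.5 * a * td ** 2
    return distance                       # hold final position

# 1 m stroke at 0.5 m/s with 0.2 s blends -> 2.2 s total
print(linear_parabolic_position(2.2, 1.0, 0.5, 0.2))  # 1.0 (full stroke)
```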
Abstract:
This work presents a methodology for the development of teleoperated robotic systems over the Internet. First, a bibliographical review of telerobotic systems that use the Internet as the control channel is presented. The methodology is implemented and tested through the development of two systems. The first is a manipulator with two degrees of freedom commanded remotely through the Internet, called RobWebCam (http://www.graco.unb.br/robwebcam). The second is a system that teleoperates an ABB (Asea Brown Boveri) industrial robot with six degrees of freedom, called RobWebLink (http://webrobot.graco.unb.br). RobWebCam is composed of a two-degree-of-freedom manipulator, a video camera, the Internet, computers, and a communication driver between the manipulator and the Unix system; RobWebLink is composed of the same components plus the industrial robot. With this technology it is possible to manipulate distant objects, reducing the cost of moving materials and people, and to act in real time on the process to be controlled. This work demonstrates that teleoperation of robotic systems and other equipment via the Internet is viable, even at low transmission bandwidth. Possible applications include remote surveillance, remote control, and remote diagnosis and maintenance of machines and equipment.
Abstract:
Industrial applications demand that robots operate according to the position and orientation of their end effector, which requires solving the inverse kinematics problem. Its solution determines the joint displacements of the manipulator needed to accomplish a given objective. Complete studies of the dynamic control of robotic joints are also necessary. Initially, this article focuses on the implementation of numerical algorithms for the solution of the inverse kinematics problem and on the modeling and simulation of dynamic systems in real time. The modeling and simulation of dynamic systems emphasize off-line programming. Next, a complete study of control strategies is carried out through the analysis of the elements of a robotic joint, such as the DC motor, inertia, and gearbox. Finally, a trajectory generator, used as input for a generic group of joints, is developed, and a proposal for implementing the joint controllers on an EPLD development system is presented.
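A common numerical approach to the inverse kinematics problem is Newton iteration on the forward map. A sketch for a two-link planar arm, which stands in for the article's (unspecified) manipulator; link lengths and the target are assumptions:

```python
# Numerical inverse kinematics for a 2-link planar arm via Newton iteration
# on the forward-kinematics residual. Link lengths and target are invented.
from math import sin, cos

L1, L2 = 1.0, 1.0  # link lengths (assumed)

def fk(q1, q2):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    return (L1 * cos(q1) + L2 * cos(q1 + q2),
            L1 * sin(q1) + L2 * sin(q1 + q2))

def ik(x_t, y_t, q1=0.0, q2=1.0, iters=50):
    """Newton iteration: solve fk(q) = (x_t, y_t) from an initial guess."""
    for _ in range(iters):
        x, y = fk(q1, q2)
        ex, ey = x_t - x, y_t - y
        # analytic Jacobian of fk
        j11 = -L1 * sin(q1) - L2 * sin(q1 + q2); j12 = -L2 * sin(q1 + q2)
        j21 = L1 * cos(q1) + L2 * cos(q1 + q2);  j22 = L2 * cos(q1 + q2)
        det = j11 * j22 - j12 * j21
        q1 += (j22 * ex - j12 * ey) / det      # dq = J^-1 * error
        q2 += (-j21 * ex + j11 * ey) / det
    return q1, q2

q1, q2 = ik(1.2, 0.8)          # reachable target inside the workspace
x, y = fk(q1, q2)
print(round(x, 6), round(y, 6))  # 1.2 0.8
```

Real solvers add damping and singularity handling; this bare Newton step suffices for a well-conditioned target.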
Abstract:
Nowadays, the upwind three-bladed horizontal-axis wind turbine is the leading player on the market, having proved to be the best industrial compromise among different turbine constructions. Current wind industry innovation focuses on the development of individual turbine components. The blade constitutes 20-25% of the overall turbine budget, so its optimal operation under particular local economic and wind conditions is worth investigating. The blade geometry, namely the chord, twist, and airfoil-type distributions along the span, determines the output measures of blade performance. Therefore, optimizing the wind blade geometry can improve the overall turbine performance. The objectives of the dissertation are the development of a methodology and a specific tool for investigating possible adjustments to existing wind blade geometries. The novelty of the methodology presented in the thesis is its multiobjective perspective on wind blade geometry optimization, simultaneously taking into account the local wind conditions and aerodynamic noise emissions. This optimization approach has not previously been applied to wind blade design. The possibilities of using different theories for the analysis and search procedures are investigated, and sufficient arguments are derived for the use of the proposed theories. The tool is applied to the test optimization of a particular wind turbine blade. The sensitivity analysis shows the dependence of the outputs on the provided inputs, as well as their relative and absolute divergences and instabilities. The pros and cons of the proposed technique emerge from its practical implementation, which is documented in the results, analysis, and conclusion sections.
Abstract:
More discussion is required on how, and which types of, biomass should be used to achieve a significant short-term reduction in the carbon load released into the atmosphere. The energy sector is one of the largest greenhouse gas (GHG) emitters, and thus its role in climate change mitigation is important. Replacing fossil fuels with biomass has been a simple way to reduce carbon emissions because the carbon bound in biomass is considered carbon neutral. With this in mind, this thesis has the following objectives: (1) to study the significance of the different GHG emission sources related to energy production from peat and biomass, (2) to explore opportunities to develop more climate-friendly biomass energy options, and (3) to discuss the importance of the biogenic emissions of biomass systems. The discussion of biogenic carbon and other GHG emissions comprises four case studies, of which two consider peat utilization, one forest biomass, and one cultivated biomasses. Various biomass types (peat, pine logs and forest residues, palm oil, rapeseed oil, and jatropha oil) are used as examples to demonstrate the importance of biogenic carbon to life cycle GHG emissions. The biogenic carbon emissions of biomass are defined as the difference in carbon stock between the utilization and non-utilization scenarios of the biomass. Forestry-drained peatlands were studied using the high emission values of the peatland types in question to assess the emission reduction potential of the peatlands. The results are presented as global warming potential (GWP) values. Based on the results, the climate impact of peat production can be reduced by selecting high-emission peatlands for peat production. The comparison of two different types of forest biomass in ethanol production integrated into a pulp mill shows that the type of forest biomass affects the biogenic carbon emissions of biofuel production.
The assessment of cultivated biomasses demonstrates that several choices made in the production chain significantly affect the GHG emissions of biofuels. The emissions caused by a biofuel can exceed those of fossil fuels in the short term if part of the biomass is consumed in the process itself and does not end up in the final product. Including biogenic carbon and other land-use carbon emissions in the carbon footprint calculations of biofuels reveals the importance of the time frame and of the efficiency with which the carbon content of the biomass is utilized. As regards the climate impact of biomass energy use, the key issue is the net impact on carbon stocks (in biomass and in the organic matter of soils) compared to the impact of the replaced energy source. Promoting renewable biomass regardless of biogenic GHG emissions can increase GHG emissions in the short term, and possibly also in the long term.
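GWP-based results combine the different gases into CO2-equivalents with standard weighting factors. A minimal sketch; the 100-year factors shown are the IPCC AR4 values (CH4 = 25, N2O = 298), and the per-MWh emission figures are invented:

```python
# CO2-equivalent aggregation with 100-year GWP factors (IPCC AR4 values).
# The example emission masses are hypothetical, not the thesis's results.
GWP100 = {"CO2": 1, "CH4": 25, "N2O": 298}

def co2_equivalent(emissions_kg):
    """Sum gas masses (kg) weighted by their GWP100 factors -> kg CO2-eq."""
    return sum(mass * GWP100[gas] for gas, mass in emissions_kg.items())

# hypothetical life-cycle emissions per MWh for one biomass chain
total = co2_equivalent({"CO2": 90.0, "CH4": 0.2, "N2O": 0.01})
print(round(total, 2))
```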
Abstract:
The suitability of quantitative variables for phenological studies was evaluated in a population of the brown seaweed Sargassum vulgare from "Praia das Gordas", Angra dos Reis, Ilha Grande Bay, state of Rio de Janeiro. From June 1998 to May 1999, twenty adult individuals were randomly sampled at bimonthly intervals. Fifteen variables related to the vegetative and reproductive development of the perennial and non-perennial parts of the individuals were quantified. Variables related to the non-perennial parts were more useful than those related to the perennial parts, because they showed a clear variation over the year. Vegetative development declined from June to October and increased from October to February, when maximum median values of thallus height, total dry mass, non-perennial-part dry mass, and degree of branching were reached. This pattern coincides with those described for other species of the genus from warm temperate regions. Thallus height, a character usually employed in phenological studies of Sargassum, showed a lower coefficient of variation (53.2%) than the dry-mass variables (72.0% to 182.3%). The peak of reproduction occurred from June to August, according to the following variables: number and dry mass of fertile primary lateral branches, and receptacle dry mass. Non-perennial-part dry mass and receptacle dry mass are recommended for phenological studies of S. vulgare. This procedure avoids sampling the whole individual and allows its regeneration from the perennial parts.
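The coefficients of variation used above to compare variables follow the usual definition CV = 100 · s / mean. A sketch with invented sample values (not the study's measurements):

```python
# Coefficient of variation (CV = 100 * sample stdev / mean), used to compare
# how variable different morphometric characters are. Values are invented.
from statistics import mean, stdev

def cv_percent(values):
    return 100 * stdev(values) / mean(values)

thallus_height = [12.0, 15.5, 9.8, 14.2, 11.1]  # cm, hypothetical
dry_mass = [1.2, 4.8, 0.6, 3.9, 0.9]            # g, hypothetical
print(round(cv_percent(thallus_height), 1),
      round(cv_percent(dry_mass), 1))  # 18.5 84.6
```

As in the abstract, the dry-mass character is far more variable than height, which is the pattern the CV comparison is meant to expose.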
Abstract:
Bioanalytical data from a bioequivalence study were used to develop limited-sampling strategy (LSS) models for estimating the area under the plasma concentration versus time curve (AUC) and the peak plasma concentration (Cmax) of 4-methylaminoantipyrine (MAA), an active metabolite of dipyrone. Twelve healthy adult male volunteers received single 600 mg oral doses of dipyrone in two formulations at a 7-day interval in a randomized, crossover protocol. Plasma concentrations of MAA (N = 336), measured by HPLC, were used to develop LSS models. Linear regression analysis and a "jack-knife" validation procedure revealed that the AUC0-∞ and the Cmax of MAA can be accurately predicted (R²>0.95, bias <1.5%, precision between 3.1 and 8.3%) by LSS models based on two sampling times. Validation tests indicate that the most informative 2-point LSS models developed for one formulation provide good estimates (R²>0.85) of the AUC0-∞ or Cmax for the other formulation. LSS models based on three sampling points (1.5, 4 and 24 h), but using different coefficients for AUC0-∞ and Cmax, predicted the individual values of both parameters for the enrolled volunteers (R²>0.88, bias = -0.65 and -0.37%, precision = 4.3 and 7.4%) as well as for plasma concentration data sets generated by simulation (R²>0.88, bias = -1.9 and 8.5%, precision = 5.2 and 8.7%). Bioequivalence assessment of the dipyrone formulations based on the 90% confidence interval of log-transformed AUC0-∞ and Cmax provided similar results when either the best-estimated or the LSS-derived metrics were used.
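The limited-sampling idea is ordinary linear regression: regress the fully sampled AUC on the concentrations at a few timepoints, then predict AUC for new subjects from those samples alone. A one-timepoint miniature with synthetic data (the study's models used two or three timepoints and its own coefficients):

```python
# Limited-sampling strategy in miniature: OLS of "true" AUC on the
# concentration at one informative timepoint. All data are synthetic.

def fit_simple(x, y):
    """Ordinary least squares for y = b0 + b1*x; returns (b0, b1)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
         / sum((xi - mx) ** 2 for xi in x)
    return my - b1 * mx, b1

# hypothetical 4-h concentrations vs. full AUC for six subjects
c4h = [2.0, 2.5, 3.1, 3.8, 4.4, 5.0]
auc = [20.5, 25.1, 31.0, 37.9, 44.2, 49.8]
b0, b1 = fit_simple(c4h, auc)
pred = b0 + b1 * 3.0          # predicted AUC for a new subject with C(4h) = 3.0
print(round(pred, 1))  # ≈ 30.2
```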
Abstract:
In this work, the separation of multicomponent mixtures in counter-current columns with supercritical carbon dioxide was investigated using a process design methodology. First the separation task is defined; then phase equilibrium experiments are carried out, and the data obtained are correlated with thermodynamic models or empirical functions. Mutual solubilities, Ki-values, and separation factors αij are determined, and from these data possible operating conditions for further extraction experiments can be established. Separation analyses using graphical methods are performed to optimize the process parameters. Hydrodynamic experiments are carried out to determine the flow capacity diagram. Laboratory-scale extraction experiments are planned and carried out to determine HETP values, validate the simulation results, and provide new materials for the additional phase equilibrium experiments needed to determine the dependence of the separation factors on concentration. Numerical simulation of the separation process and auxiliary systems is carried out to optimize the number of stages, solvent-to-feed ratio, product purity, yield, and energy consumption. Scale-up and cost analysis close the process design. The separation of palmitic acid and (oleic + linoleic) acids from PFAD (Palm Fatty Acid Distillate) was used as a case study.
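The Ki-values and separation factors mentioned above follow directly from equilibrium compositions: Ki = yi/xi (solvent-phase over liquid-phase fraction) and αij = Ki/Kj. A sketch with invented compositions:

```python
# Ki-values and separation factors from phase-equilibrium compositions:
#   Ki = yi / xi,   alpha_ij = Ki / Kj
# The mole fractions below are invented for illustration.

def k_value(y_i, x_i):
    """Distribution coefficient: solvent-phase over liquid-phase fraction."""
    return y_i / x_i

def separation_factor(y_i, x_i, y_j, x_j):
    return k_value(y_i, x_i) / k_value(y_j, x_j)

# hypothetical palmitic (i) vs. oleic (j) distribution between CO2 and liquid
alpha = separation_factor(0.012, 0.30, 0.006, 0.45)
print(round(alpha, 2))  # 3.0
```

An αij well above 1 is what makes a counter-current separation of the pair feasible.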
Abstract:
Graphite furnace atomic absorption spectrometry (GF AAS) was the technique chosen by the inorganic contamination laboratory (INCQ/FIOCRUZ) to be validated and applied in routine analysis for arsenic detection and quantification. Selectivity, linearity, sensitivity, the limits of detection and quantification, and the accuracy and precision parameters were studied and optimized under Stabilized Temperature Platform Furnace (STPF) conditions. The limit of detection obtained was 0.13 µg.L-1 and the limit of quantification was 1.04 µg.L-1, with an average precision for total arsenic of less than 15% and an accuracy of 96%. To quantify the chemical species As(III) and As(V), an ion-exchange resin (Dowex 1X8, Cl- form) was used, and the physical-chemical parameters were optimized, resulting in recoveries of 98% for As(III) and 90% for As(V). The method was applied to groundwater, mineral water, and hemodialysis purified water samples. All results were lower than the maximum limits established by the Brazilian regulations in effect: 50, 10, and 5 µg.L-1 for total As, As(III), and As(V), respectively. All results were statistically evaluated.
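Detection and quantification limits of this kind are conventionally estimated from the standard deviation of replicate blank signals and the calibration slope (LOD = 3s/slope, LOQ = 10s/slope). A sketch with invented blank readings, not the laboratory's data:

```python
# LOD = 3*s_blank/slope and LOQ = 10*s_blank/slope from replicate blanks
# and a calibration slope. Signals and slope are illustrative only.
from statistics import stdev

blank_signals = [0.0021, 0.0019, 0.0023, 0.0020,
                 0.0018, 0.0022, 0.0021]   # absorbance, hypothetical blanks
slope = 0.0045                             # absorbance per µg/L, hypothetical

s = stdev(blank_signals)
lod = 3 * s / slope
loq = 10 * s / slope
print(round(lod, 3), round(loq, 3))  # 0.115 0.382 (µg/L)
```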
Abstract:
Sodium alginate requires calcium ions to form a gel. Adding a calcium source to fish muscle mince containing sodium alginate therefore makes gelation possible, resulting in a restructured fish product. Three calcium sources were considered: calcium chloride (CC), calcium caseinate (CCa), and calcium lactate (CLa). Several physical properties were analyzed, including mechanical properties, colour, and cooking loss. Response Surface Methodology (RSM) was used to determine the contribution of the different calcium sources to the restructured fish muscle. The calcium source that modifies the system the most is CC: combined with sodium alginate it weakened the mechanical properties, as reflected in the negative linear contribution of sodium alginate, and by itself it increased lightness and cooking loss. The mechanical properties of the restructured fish muscle were enhanced by using CCa and sodium alginate, as reflected in the negative linear contribution of sodium alginate; CCa also increased cooking loss. The role of CLa combined with sodium alginate was less pronounced in the system discussed here.
Abstract:
During postharvest handling, lettuce is usually exposed to adverse conditions (e.g. low relative humidity) that reduce vegetable quality. Evaluating its shelf life requires analyzing a large number of quality attributes, which demands careful experimental design and is time-consuming. In this study, the modified Global Stability Index method was applied to estimate the quality of butter lettuce stored at low relative humidity, discriminating three lettuce zones (internal, middle, and external). The most relevant attributes were: for the external zone, relative water content, water content, ascorbic acid, and total mesophilic counts; for the middle zone, relative water content, water content, total chlorophyll, and ascorbic acid; and for the internal zone, relative water content, bound water, water content, and total mesophilic counts. A mathematical model relating the Global Stability Index to the overall visual quality of each lettuce zone was proposed. Moreover, the Weibull distribution was applied to estimate the maximum storage time, which was 5, 4, and 3 days for the internal, middle, and external zones, respectively. When the effect of storage time was analyzed for each lettuce zone, all the indices evaluated in the external zone presented significant differences (p < 0.05). For both the internal and middle zones, all attributes presented significant differences (p < 0.05) except water content and total chlorophyll.
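Weibull-based shelf-life estimates of this kind invert the Weibull CDF: the probability that a sample is rejected by time t is F(t) = 1 − exp(−(t/scale)^shape), and the storage limit is the time at which a chosen rejection fraction is reached. A sketch with assumed parameters (the study's fitted values are not given in the abstract):

```python
# Weibull shelf-life model: F(t) = 1 - exp(-(t/scale)**shape).
# shelf_life() inverts the CDF for a chosen rejection fraction.
# The shape/scale values are hypothetical, not the study's fit.
from math import exp, log

def rejection_prob(t, shape, scale):
    """Fraction of product rejected by time t (Weibull CDF)."""
    return 1 - exp(-((t / scale) ** shape))

def shelf_life(p_reject, shape, scale):
    """Time at which the rejection fraction reaches p_reject."""
    return scale * (-log(1 - p_reject)) ** (1 / shape)

shape, scale = 2.5, 5.0          # hypothetical fit for one lettuce zone (days)
t50 = shelf_life(0.5, shape, scale)
print(round(t50, 2))  # 4.32 days
```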