980 results for Instrumentation and Applied Physics (formerly ISU)
Abstract:
In the field of observational methodology the observer is obviously a central figure, and close attention should be paid to the process through which he or she acquires, applies, and maintains the required skills. Basic training in how to apply the operational definitions of categories and the coding rules, coupled with the opportunity to use the observation instrument in real-life situations, can improve the degree of agreement achieved when intra- and inter-observer reliability is evaluated. Several authors, including Arias, Argudo, and Alonso (2009) and Medina and Delgado (1999), have put forward proposals for basic and applied training in this context. Reid and De Master (1982) focus on the observer's performance and on how to maintain the acquired skills, arguing that periodic checks are needed after initial training because an observer may, over time, become less reliable due to the inherent complexity of category systems. The purpose of this subsequent training is to maintain acceptable levels of observer reliability. Various strategies can be used to this end, including providing feedback on the categories associated with a good reliability index, or offering re-training in how to apply those that yield lower indices. The aim of this study is to develop a performance-based index capable of assessing an observer's ability to produce reliable observations in conjunction with other observers.
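Indices of inter-observer agreement are commonly built on chance-corrected statistics. As background, a minimal sketch of Cohen's kappa for two observers coding the same event stream (the category labels are illustrative; the performance-based index proposed in the study itself is not reproduced here):

```python
from collections import Counter

def cohen_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two observers' category codes."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    fa, fb = Counter(codes_a), Counter(codes_b)
    # agreement expected if both observers coded independently at random
    expected = sum(fa[c] * fb.get(c, 0) for c in fa) / n ** 2
    return (observed - expected) / (1 - expected)

obs_1 = ["run", "walk", "walk", "run", "jump", "run"]
obs_2 = ["run", "walk", "run",  "run", "jump", "walk"]
print(round(cohen_kappa(obs_1, obs_2), 3))  # 0.455
```

Raw percentage agreement here is 4/6; kappa discounts the portion of that agreement expected by chance alone.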
Abstract:
High-temperature liquid chromatography (HTLC) is a technique that offers a series of advantages in liquid-phase separations: reduced analysis time, reduced pressure drop, reduced asymmetry factors, modified retention, controlled selectivity, better efficiency and improved detectability, as well as enabling green chromatography. The practical limitations relating to instrumentation and to stationary-phase instability are being resolved, and the technique is now ready to be applied to routine determinations.
Abstract:
The simultaneous determination of two or more active components in pharmaceutical preparations, without prior chemical separation, is a common analytical problem. Published works describe the determination of zidovudine (AZT) and lamivudine (3TC) separately, as raw material or in different pharmaceutical preparations. In this work, a method using UV spectroscopy and multivariate calibration is described for the simultaneous measurement of 3TC and AZT in fixed-dose combinations. The methodology was validated and applied to determine the AZT+3TC content of tablets from five different manufacturers, as well as their dissolution profiles. The results obtained with the proposed methodology were similar to those obtained by the first-derivative technique and by HPLC.
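The idea behind such multivariate calibration can be illustrated with classical least squares, where a mixture spectrum is modeled as a linear combination of pure-component spectra. A minimal sketch with hypothetical absorbance values (the validated method in the work is built on real calibration spectra, not these numbers):

```python
def predict_concentrations(spectrum, pure_a, pure_b):
    """Classical least squares for a two-component mixture:
    spectrum ~ c_a * pure_a + c_b * pure_b.
    Solves the 2x2 normal equations in closed form."""
    a11 = sum(x * x for x in pure_a)
    a12 = sum(x * y for x, y in zip(pure_a, pure_b))
    a22 = sum(y * y for y in pure_b)
    b1 = sum(x * s for x, s in zip(pure_a, spectrum))
    b2 = sum(y * s for y, s in zip(pure_b, spectrum))
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det)

# Hypothetical unit-concentration absorbances at four wavelengths
pure_azt = [0.9, 0.5, 0.1, 0.05]
pure_3tc = [0.2, 0.6, 0.8, 0.3]
mixture = [2 * a + 3 * b for a, b in zip(pure_azt, pure_3tc)]  # a 2:3 mix
c_azt, c_3tc = predict_concentrations(mixture, pure_azt, pure_3tc)
print(round(c_azt, 3), round(c_3tc, 3))  # recovers the 2:3 composition
```

With noise-free synthetic data the fit is exact; real spectra add noise, which is why validation against HPLC matters.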
Abstract:
Adsorbents functionalized with chelating agents are effective in removing heavy metals from aqueous solutions. Important properties of such adsorbents are high binding affinity as well as regenerability. In this study, the aminopolycarboxylic acids EDTA and DTPA were immobilized on the surface of silica gel, chitosan, and their hybrid materials to obtain chelating adsorbents for heavy metals such as Co(II), Ni(II), Cd(II), and Pb(II). New knowledge about the adsorption properties of EDTA- and DTPA-functionalized adsorbents was obtained. Experimental work showed the effectiveness, regenerability, and stability of the studied adsorbents. Both advantages and disadvantages of the adsorbents were evaluated. For example, the EDTA-functionalized chitosan-silica hybrid materials combined the benefits of silica gel and chitosan while at the same time diminishing their observed drawbacks. Modeling of adsorption kinetics and isotherms is an important step in the design process. Therefore, several kinetic and isotherm models were introduced and applied in this work. Important aspects such as the effect of the error function, data range, initial guess values, and linearization were discussed and investigated. The most suitable model was selected by comparing the experimental and simulated data and by evaluating the correspondence between the theory behind the model and the properties of the adsorbent. In addition, modeling of two-component data was conducted using various extended isotherms. Modeling results for the one- and two-component systems supported each other. Finally, application testing of the EDTA- and DTPA-functionalized adsorbents was conducted. The most important result was the applicability of DTPA-functionalized silica gel and chitosan for capturing Co(II) from its aqueous EDTA chelate. Moreover, these adsorbents were efficient in various solution matrices.
In addition, separation of Ni(II) from Co(II), and of Ni(II) and Pb(II) from Co(II) and Cd(II), was observed in two- and multi-metal systems. Lastly, EDTA- and DTPA-functionalized silica gels were successfully used to preconcentrate metal ions from both pure and salty waters prior to analysis.
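The linearization caveat raised in the abstract can be made concrete with the classical linearized Langmuir fit, where Ce/qe is regressed on Ce. A sketch with synthetic, noise-free data (the parameter values are illustrative, not results from the thesis):

```python
def langmuir_fit_linearized(ce, qe):
    """Fit q = qmax*K*c / (1 + K*c) via the linearized form
    Ce/qe = Ce/qmax + 1/(K*qmax), using ordinary least squares on (Ce, Ce/qe).
    With noisy data this transformation can bias the estimates, which is
    one reason to compare linear and nonlinear fitting."""
    y = [c / q for c, q in zip(ce, qe)]
    n = len(ce)
    mx, my = sum(ce) / n, sum(y) / n
    slope = (sum((c - mx) * (yi - my) for c, yi in zip(ce, y))
             / sum((c - mx) ** 2 for c in ce))
    intercept = my - slope * mx
    return 1 / slope, slope / intercept   # qmax, K

# Synthetic noise-free data generated with qmax = 2.0, K = 0.5 (illustrative)
ce = [1.0, 2.0, 4.0, 8.0]
qe = [2.0 * 0.5 * c / (1 + 0.5 * c) for c in ce]
qmax, K = langmuir_fit_linearized(ce, qe)
print(round(qmax, 3), round(K, 3))  # recovers 2.0 and 0.5
```

For noise-free data the linearized fit recovers the generating parameters exactly; the differences between error functions only appear once experimental scatter enters.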
Abstract:
In this work, mathematical programming models for the structural and operational optimisation of energy systems are developed and applied to a selection of energy technology problems. The studied cases are taken from industrial processes and from large regional energy distribution systems. The models are based on Mixed Integer Linear Programming (MILP), Mixed Integer Non-Linear Programming (MINLP), and on a hybrid combination of Non-Linear Programming (NLP) and Genetic Algorithms (GA). The optimisation of the structure and operation of energy systems in urban regions is treated in the work. Firstly, distributed energy systems (DES) with different energy conversion units and annual variations in consumer heating and electricity demands are considered. Secondly, district cooling systems (DCS) with cooling demands for a large number of consumers are studied from a long-term planning perspective, based on given predictions of consumer cooling demand development in a region. The work also comprises the development of applications for heat recovery systems (HRS), with the dryer section HRS of a paper machine taken as an illustrative example. The heat sources in these systems are moist air streams. Models are developed for different types of equipment price functions. The approach is based on partitioning the overall temperature range of the system into a number of temperature intervals in order to capture the strong nonlinearities caused by condensation in the heat recovery exchangers. The influence of parameter variations on the solutions for heat recovery systems is analysed, first by varying cost factors and second by varying process parameters. Point-optimal solutions obtained with a fixed-parameter approach are compared to robust solutions with given parameter variation ranges.
The work also studies the enhanced utilisation of excess heat in heat recovery systems with impingement drying, electricity generation with low-grade excess heat, and the use of absorption heat transformers to lift a stream temperature above the excess heat temperature.
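The binary structural decisions at the heart of such MILP formulations can be shown in miniature: enumerate the on/off status of a few conversion units and dispatch the running ones in merit order to meet a heat demand. A toy sketch with made-up unit data, not one of the thesis's models (a real MILP would hand these binaries to a solver rather than enumerate them):

```python
from itertools import product

def dispatch(units, demand):
    """Enumerate on/off decisions (the binary variables of a MILP) and
    dispatch the committed units in merit order; returns (cost, statuses)."""
    best = None
    for on in product([0, 1], repeat=len(units)):
        cap = sum(u["cap"] for u, o in zip(units, on) if o)
        if cap < demand:
            continue                       # infeasible commitment
        remaining = demand
        cost = float(sum(u["fixed"] for u, o in zip(units, on) if o))
        # greedy dispatch of committed units by variable cost (merit order)
        for u, o in sorted(zip(units, on), key=lambda t: t[0]["var"]):
            if not o or remaining <= 0:
                continue
            q = min(u["cap"], remaining)
            cost += q * u["var"]
            remaining -= q
        if best is None or cost < best[0]:
            best = (cost, on)
    return best

units = [
    {"cap": 50, "fixed": 100, "var": 2.0},   # e.g. a CHP unit
    {"cap": 30, "fixed": 20,  "var": 5.0},   # e.g. a peak boiler
    {"cap": 80, "fixed": 150, "var": 3.0},   # e.g. a heat pump
]
print(dispatch(units, 60))  # cheapest feasible commitment and its cost
```

Exhaustive enumeration is exponential in the number of units, which is exactly why the thesis's real problems need MILP/MINLP machinery instead.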
Abstract:
Mass-produced paper electronics (large-area organic printed electronics on paper-based substrates, "throw-away electronics") has the potential to bring flexible electronic applications into everyday life. While paper manufacturing and printing have a long history, they were not developed with electronic applications in mind. Modifications to paper substrates and printing processes are required in order to obtain working electronic devices. This should be done while maintaining the high throughput of conventional printing techniques and the low cost and recyclability of paper. An understanding of the interactions between the functional materials, the printing process and the substrate is required for successful manufacturing of advanced devices on paper. Based on this understanding, a recyclable, multilayer-coated paper-based substrate combining adequate barrier and printability properties for printed electronics and sensor applications was developed in this work. In this multilayer structure, a thin top-coating consisting of mineral pigments is applied on top of a dispersion-coated barrier layer. The top-coating provides well-controlled sorption properties through controlled thickness and porosity, thus making it possible to optimize the printability of functional materials. The penetration of ink solvents and functional materials stops at the barrier layer, which not only improves the performance of the functional material but also eliminates the fiber swelling and de-bonding that can occur when solvents are allowed to penetrate into the base paper. The multilayer-coated paper considered in the current work consists of a pre-coating and a smoothing layer on which the barrier layer is deposited. Coated fine paper may also be used directly as the base paper, ensuring a smooth base for the barrier layer. The top layer is thin and smooth, consisting of mineral pigments such as kaolin, precipitated calcium carbonate, silica, or blends of these.
All the materials in the coating structure have been chosen to maintain the recyclability and sustainability of the substrate. The substrate can be coated in steps, sequentially layer by layer, which requires a detailed understanding and tuning of the wetting properties and topography of the barrier layer versus the surface tension of the top-coating. A cost-competitive method for industrial-scale production is the curtain coating technique, which allows extremely thin top-coatings to be applied simultaneously with a closed and sealed barrier layer. Understanding the interactions of functional materials formulated and applied on paper as inks makes it possible to create a paper-based substrate on which printed electronics-based devices and sensors can be manufactured. The multitude of functional materials and their complex interactions make it challenging to draw general conclusions in this topic area; inevitably, the results are partially specific to the chosen device and the materials needed for its manufacture. Based on the results, it is clear that for inks based on dissolved or small-sized functional materials, a barrier layer is beneficial and ensures the functionality of the printed material in a device. The required active barrier lifetime depends on the solvents or analytes used and their volatility. High-aspect-ratio mineral pigments, which create tortuous pathways and physical barriers within the barrier layer, limit the penetration of the solvents used in functional inks. The surface pore volume and pore size can be optimized for a given printing process and ink through the choice of pigment type and coating layer thickness. However, when manufacturing multilayer functional devices, such as transistors, which consist of several printed layers, compromises have to be made.
For example, while a thick and porous top-coating is preferable for printing source and drain electrodes with a silver particle ink, a thinner and less absorbing surface is required to form a functional semiconducting layer. With the multilayer coating structure concept developed in this work, it was possible to make the paper substrate suitable for printed functionality. The possibility of printing functional devices, such as transistors, sensors and pixels, in a roll-to-roll process on paper is demonstrated, which may enable the use of paper in disposable "one-time use" or "throwaway" electronics and sensors, such as lab-on-strip devices for various analyses, consumer packages equipped with product quality sensors, or remote tracking devices.
Abstract:
This thesis presents a set of methods and models for the estimation of iron and slag flows in the blast furnace hearth and taphole. The main focus was on predicting taphole flow patterns and estimating the effects of various taphole conditions on the drainage behavior of the blast furnace hearth. All models were based on a general understanding of the typical tap cycle of an industrial blast furnace. Some of the models were evaluated on short-term process data from the reference furnace. A computational fluid dynamics (CFD) model was built and applied to simulate the complicated hearth flows and thus to predict the regions of the hearth exposed to erosion under various operating conditions. Key boundary variables of the CFD model were provided by a simplified drainage model based on first principles. By examining the evolution of the liquid outflow rates measured from the furnace studied, the drainage model was improved to include the effects of taphole diameter and length. The estimated slag delays showed good agreement with the observed ones. The liquid flows in the taphole were further studied using two different models, and the results of both indicated that separated flow of iron and slag in the taphole is more likely when the liquid outflow rates are comparable during tapping. The drainage process was simulated with an integrated model based on an overall balance analysis: the high in-furnace overpressure can compensate for the resistances induced by the liquid flows in the hearth and through the taphole. Finally, a multiphase CFD model including interfacial forces between immiscible liquids was developed, and both the actual iron-slag system and a laboratory-scale water-oil system were simulated. The model was demonstrated to be a useful tool for simulating hearth flows and for gaining understanding of the complex phenomena in the drainage of the blast furnace.
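A first-principles drainage estimate of the general kind described treats the taphole as a short pipe driven by the furnace overpressure plus the liquid head. A rough sketch with illustrative loss coefficients and fluid properties (these numbers and the loss model are assumptions for illustration, not the thesis's calibrated drainage model):

```python
import math

def taphole_outflow(dp_pa, rho, d_m, length_m, friction=0.03, k_entry=0.5):
    """Estimate taphole exit velocity (m/s) and volumetric flow (m^3/s)
    from the total driving pressure dp_pa, treating the taphole as a short
    pipe with entry and wall-friction losses:
        dp = 0.5 * rho * v^2 * (1 + k_entry + f*L/d)."""
    loss = 1 + k_entry + friction * length_m / d_m
    v = math.sqrt(2 * dp_pa / (rho * loss))
    area = math.pi * d_m ** 2 / 4
    return v, v * area

# Illustrative values: ~150 kPa driving pressure, liquid iron, 60 mm taphole
v, q = taphole_outflow(dp_pa=150e3, rho=7000, d_m=0.06, length_m=3.0)
print(round(v, 2), "m/s,", round(q * 1000, 1), "L/s")
```

The dependence on taphole diameter and length enters through the area and friction terms, which is the qualitative effect the improved drainage model captures.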
Abstract:
A non-isotropic turbulence model is extended and applied to three-dimensional stably stratified flows and dispersion calculations. The model is derived from the algebraic stress model (including wall proximity effects), but retains the simplicity of the "eddy viscosity" concept of first-order models. The "modified k-epsilon" model is implemented in a three-dimensional numerical code. Once the flow is resolved, the predicted velocity and turbulence fields are interpolated onto a second grid and used to solve the concentration equation. To evaluate the model, various steady-state numerical solutions are compared with small-scale dispersion experiments conducted in the wind tunnel of Mitsubishi Heavy Industries, in Japan. Stably stratified flows and plume dispersion over three distinct idealized topographies (flat and hilly terrain) are studied. Vertical profiles of velocity and pollutant concentration are shown and discussed. Comparisons are also made against the results obtained with the standard k-epsilon model.
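The "eddy viscosity" closure the model retains is the standard k-epsilon relation nu_t = C_mu k^2/eps, with a turbulent Schmidt number linking it to the concentration equation. A minimal reference sketch (the paper's modification replaces the scalar coefficient with direction-dependent ones; the Schmidt number 0.7 is a common illustrative choice, not taken from the paper):

```python
def eddy_viscosity(k, eps, c_mu=0.09):
    """Standard k-epsilon first-order closure: nu_t = C_mu * k^2 / eps,
    with k the turbulent kinetic energy and eps its dissipation rate."""
    return c_mu * k ** 2 / eps

def eddy_diffusivity(k, eps, sc_t=0.7):
    """Turbulent diffusivity used in the concentration equation,
    obtained from nu_t via a turbulent Schmidt number."""
    return eddy_viscosity(k, eps) / sc_t

print(round(eddy_viscosity(0.5, 0.1), 3))  # 0.225 m^2/s for k=0.5, eps=0.1
```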
Abstract:
This paper presents an HP-adaptive procedure with hierarchical formulation for the Boundary Element Method in 2-D elasticity problems. First, the H, P and HP formulations are defined. Then the hierarchical concept, which allows a substantial reduction in the dimension of the equation system, is introduced. The error estimator used is based on the residual computed at each node inside an element. Finally, the HP strategy is defined and applied to two examples.
Abstract:
This paper gives a detailed presentation of the Substitution-Newton-Raphson method, suitable for large sparse non-linear systems. It combines the Successive Substitution method and the Newton-Raphson method in such a way as to take the best advantage of both, keeping the convergence features of Newton-Raphson with the low memory and time requirements of Successive Substitution schemes. The large system is solved using a few effective variables, with the greatest possible part of the model equations used in substitution fashion to fix the remaining variables, while maintaining the convergence characteristics of Newton-Raphson. The methodology is exemplified with a simple algebraic system and then applied to a simple thermodynamic, mechanical and heat transfer model of a single-stage vapor compression refrigeration system. Three distinct approaches for reproducing the thermodynamic properties of the refrigerant R-134a are compared: linear interpolation from tabulated data, polynomial fitted curves, and functions derived from the Helmholtz free energy.
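The idea of the method can be sketched in one dimension: explicit model equations fix the auxiliary variables by substitution, while Newton-Raphson iterates only on the effective variable. A toy illustration (the paper applies this to a full refrigeration model with several effective variables, not to this scalar example):

```python
import math

def solve_hybrid(residual, substitute, x0, tol=1e-12, h=1e-7):
    """Substitution-Newton-Raphson sketch: one 'effective' variable is
    iterated with Newton (numerical derivative); the remaining model
    variables are recovered by successive substitution inside the residual."""
    x = x0
    for _ in range(100):
        f = residual(x, substitute(x))
        df = (residual(x + h, substitute(x + h)) - f) / h
        step = f / df
        x -= step
        if abs(step) < tol:
            break
    return x, substitute(x)

# Toy model: auxiliary equation y = exp(-x) handled by substitution,
# closure residual x - y = 0, i.e. the root of x = exp(-x)
x, y = solve_hybrid(lambda x, y: x - y, lambda x: math.exp(-x), 1.0)
print(round(x, 6))  # 0.567143
```

Only the closure residual sees Newton's method; everything expressible explicitly stays out of the Jacobian, which is what keeps the memory and time requirements low for large systems.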
Abstract:
A growing concern for organisations is how to deal with increasing amounts of collected data. With fierce competition and smaller margins, organisations that are able to fully realize the potential of the data they collect can gain an advantage over their competitors. It is almost impossible to avoid imprecision when processing large amounts of data. Still, many of the available information systems are not capable of handling imprecise data, even though doing so can offer various advantages. Expert knowledge stored as linguistic expressions is a good example of imprecise but valuable data, i.e. data that is hard to pin down to a definitive value. There is an obvious concern among organisations about how this problem should be handled; finding new methods for processing and storing imprecise data is therefore a key issue. It is equally important to show that tacit knowledge and imprecise data can be used with success, which encourages organisations to analyse their imprecise data. The objective of the research was therefore to explore how fuzzy ontologies could facilitate the exploitation and mobilisation of tacit knowledge and imprecise data in organisational and operational decision-making processes. The thesis introduces both practical and theoretical advances in how fuzzy logic, ontologies (fuzzy ontologies), and OWA operators can be utilized for different decision-making problems. It is demonstrated how a fuzzy ontology can model tacit knowledge collected from wine connoisseurs. The approach can be generalised and applied to other practically important problems, such as intrusion detection. Additionally, a fuzzy ontology is applied in a novel consensus model for group decision making. By combining the fuzzy ontology with Semantic Web-affiliated techniques, novel applications have been designed. These applications show how the mobilisation of knowledge can also make successful use of imprecise data.
An important part of decision-making processes is undeniably aggregation, which in combination with a fuzzy ontology provides a promising basis for demonstrating the benefits of handling imprecise data. The new aggregation operators defined in the thesis often provide new possibilities for handling imprecision and expert opinions. This is demonstrated through both theoretical examples and practical implementations. The thesis shows the benefits of utilizing all the data one possesses, including imprecise data. By combining the concept of fuzzy ontology with the Semantic Web movement, it aspires to show the corporate world and industry the benefits of embracing fuzzy ontologies and imprecision.
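The OWA (Ordered Weighted Averaging) operators referred to above attach weights to rank positions rather than to particular arguments, which lets a single operator slide between min, mean, and max behavior. A minimal sketch of the classical operator (the thesis defines further operators beyond this basic form):

```python
def owa(values, weights):
    """Ordered Weighted Averaging: sort the arguments in descending order,
    then take the weighted sum; weights must sum to 1."""
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1) < 1e-9
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

scores = [0.7, 0.3, 0.9, 0.5]          # e.g. expert evaluations
print(owa(scores, [1, 0, 0, 0]))       # optimistic: the maximum
print(owa(scores, [0.25] * 4))         # neutral: the arithmetic mean
print(owa(scores, [0, 0, 0, 1]))       # pessimistic: the minimum
```

Choosing the weight vector is exactly the "attitude" knob: concentrating weight at the top of the ranking rewards the best opinions, concentrating it at the bottom demands consensus.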
Abstract:
Effective control and limiting of carbon dioxide (CO₂) emissions in energy production are major challenges of science today. Current research activities include the development of new low-cost carbon capture technologies, and among the proposed concepts, chemical looping combustion (CLC) and chemical looping with oxygen uncoupling (CLOU) have attracted significant attention, allowing intrinsic separation of pure CO₂ from a hydrocarbon fuel combustion process with a comparatively small energy penalty. Both CLC and CLOU utilize the well-established fluidized bed technology, but several technical challenges need to be overcome in order to commercialize the processes. Development of proper modelling and simulation tools is therefore essential for the design, optimization, and scale-up of chemical looping-based combustion systems. The main objective of this work was to analyze the technological feasibility of CLC and CLOU processes at different scales using a computational modelling approach. A one-dimensional fluidized bed model frame was constructed and applied to simulations of CLC and CLOU systems consisting of interconnected fluidized bed reactors. The model is based on the conservation of mass and energy, and semi-empirical correlations are used to describe the hydrodynamics, chemical reactions, and heat transfer in the reactors. Another objective was to evaluate the viability of chemical looping-based energy production, and a flow sheet model representing a CLC-integrated steam power plant was developed. The 1D model frame was successfully validated against the operation of a 150 kWth laboratory-scale CLC unit fed by methane. By following certain scale-up criteria, a conceptual design for a CLC reactor system at a pre-commercial scale of 100 MWth was created, after which the validated model was used to predict the performance of the system.
As a result, further understanding was gained of the parameters affecting the operation of a large-scale CLC process, which will be useful for practical design work in the future. The integration of the reactor system and the steam turbine cycle for power production was studied, resulting in a suggested plant layout including a CLC boiler system, a simple heat recovery setup, and an integrated steam cycle with a three-pressure-level steam turbine. Possible operational regions of a CLOU reactor system fed by bituminous coal were determined via mass, energy, and exergy balance analysis. Finally, the 1D fluidized bed model was adapted to CLOU, and the performance of a hypothetical 500 MWth CLOU fuel reactor was evaluated through extensive case simulations.
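A basic mass-balance step in sizing such chemical looping systems is estimating the oxygen carrier circulation between the air and fuel reactors. A back-of-the-envelope sketch with illustrative figures (the numbers and the simple balance are assumptions for illustration, not the thesis's validated correlations):

```python
def carrier_circulation(fuel_power_mw, lhv_mj_per_kg, o2_kg_per_kg_fuel,
                        ro, conversion=1.0):
    """Steady-state mass balance: solids flow (kg/s) needed to deliver the
    combustion oxygen, where ro is the carrier's oxygen transport capacity
    (mass fraction) and `conversion` the fraction of that capacity actually
    cycled between the reactors."""
    fuel_flow = fuel_power_mw / lhv_mj_per_kg          # kg fuel / s
    o2_demand = fuel_flow * o2_kg_per_kg_fuel          # kg O2 / s
    return o2_demand / (ro * conversion)

# Illustration: a methane-fired 150 kWth unit (LHV ~50 MJ/kg,
# ~4 kg O2 per kg CH4) with a carrier of 2 % oxygen transport capacity
print(round(carrier_circulation(0.15, 50.0, 4.0, 0.02), 3), "kg/s")
```

Halving the transport capacity or the conversion swing doubles the required solids circulation, which is why carrier properties dominate the hydrodynamic design of the interconnected reactors.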
Abstract:
In the present paper we discuss the development of "wave-front", an instrument for determining the lower- and higher-order optical aberrations of the human eye. We also discuss the advantages that such instrumentation and techniques might bring to the ophthalmology professional of the 21st century. By shining a small light spot on the retina of subjects and observing the light reflected back from within the eye, we can quantitatively determine the amount of lower-order aberrations (astigmatism, myopia, hyperopia) and higher-order aberrations (coma, spherical aberration, etc.). We measured artificial eyes with calibrated ametropia ranging from +5 to -5 D, with and without 2 D of astigmatism with axis at 45º and 90º. We used a device known as the Hartmann-Shack (HS) sensor, originally developed for measuring the optical aberrations of optical instruments and general refracting surfaces in astronomical telescopes. The HS sensor sends information to computer software that decomposes the wave-front aberrations into a set of Zernike polynomials. These polynomials have special mathematical properties and are more suitable in this case than the traditional Seidel polynomials. We demonstrated that this technique is more precise than conventional autorefraction, with a root mean square error (RMSE) of less than 0.1 µm for a 4-mm diameter pupil. In terms of dioptric power, this represents an RMSE of less than 0.04 D and 5º for the axis. This precision is sufficient for customized corneal ablations, among other applications.
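The decomposition into Zernike polynomials is, in practice, a least-squares fit of the sampled wavefront to a small polynomial basis over the pupil. A self-contained sketch with a few low-order, unnormalized modes and synthetic data (the actual software fits many more modes from the HS spot displacements; basis normalization conventions vary):

```python
def zernike_basis(x, y):
    """A few low-order Zernike terms on the unit pupil (unnormalized):
    piston, tip, tilt, defocus, oblique astigmatism."""
    r2 = x * x + y * y
    return [1.0, x, y, 2 * r2 - 1, 2 * x * y]

def fit_zernike(points, wavefront):
    """Least-squares Zernike coefficients via the normal equations,
    solved with naive Gaussian elimination (fine for a handful of modes)."""
    n = len(zernike_basis(0.0, 0.0))
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for (x, y), w in zip(points, wavefront):
        z = zernike_basis(x, y)
        for i in range(n):
            b[i] += z[i] * w
            for j in range(n):
                A[i][j] += z[i] * z[j]
    for c in range(n):                      # elimination with partial pivoting
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for j in range(c, n):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][j] * coeffs[j] for j in range(r + 1, n))
        coeffs[r] = (b[r] - s) / A[r][r]
    return coeffs

# Synthetic wavefront: 0.3 um defocus + 0.1 um astigmatism on a pupil grid
pts = [(i / 5, j / 5) for i in range(-5, 6) for j in range(-5, 6)
       if i * i + j * j <= 25]
w = [0.3 * (2 * (x * x + y * y) - 1) + 0.1 * (2 * x * y) for x, y in pts]
c = fit_zernike(pts, w)  # recovers defocus 0.3 and astigmatism 0.1
```

Because the synthetic wavefront lies exactly in the span of the basis, the fit recovers the generating coefficients; measured data adds a residual whose RMS is the figure quoted in the abstract.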
Abstract:
This thesis considers optimization problems arising in printed circuit board (PCB) assembly. In particular, the case in which the electronic components of a single circuit board are placed using a single placement machine is studied. Although there is a large number of different placement machines, collect-and-place-type gantry machines are discussed because of their flexibility and increasing popularity in the industry. Instead of solving the entire control optimization problem of a collect-and-place machine with a single application, the problem is divided into multiple subproblems because of its hard combinatorial nature. This dividing technique is called hierarchical decomposition. All the subproblems of the one-PCB, one-machine context are described, classified and reviewed. The derived subproblems are then either solved with exact methods or addressed with newly developed heuristic algorithms. The methods include, for example, a greedy algorithm and a solution based on dynamic programming. Some of the proposed heuristics contain constructive parts, while others utilize local search or are based on frequency calculations. Comprehensive experimental tests confirm that the heuristics are applicable and feasible. A number of quality functions are proposed for evaluation and applied to the subproblems. In the experimental tests, artificially generated data from Markov models and data from real-world PCB production are used. The thesis consists of an introduction and five publications in which the developed and applied solution methods are described in full detail. For all the problems stated in this thesis, the proposed methods are efficient enough to be used in practical PCB assembly production and are readily applicable in the PCB manufacturing industry.
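A constructive heuristic of the kind mentioned can be sketched as nearest-neighbour ordering of placement positions, a toy version of the placement-sequencing subproblem (real gantry machines add feeder, nozzle, and tour-grouping constraints that this sketch ignores):

```python
import math

def greedy_placement_order(points, start=(0.0, 0.0)):
    """Nearest-neighbour construction: repeatedly visit the closest
    remaining placement position, starting from the gantry's home point."""
    order, pos, todo = [], start, list(points)
    while todo:
        nxt = min(todo, key=lambda p: math.dist(pos, p))
        todo.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

pads = [(3, 4), (0, 1), (5, 5), (1, 0)]   # hypothetical pad coordinates
print(greedy_placement_order(pads))
```

Greedy construction gives a feasible sequence quickly; local search (e.g. 2-opt-style exchanges) can then shorten the resulting tour, mirroring the constructive/improvement split described in the thesis.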
Abstract:
The purpose of the present study was to examine the factor structure and psychometric properties of the Social Phobia and Anxiety Inventory for Children (SPAI-C), an instrument developed in the United States, when applied to a sample of Brazilian schoolchildren. The process included the translation of the original material from English into Portuguese by two bilingual psychiatrists and a back-translation by a bilingual physician. Both the forward and back translations were revised by a bilingual child psychiatrist. The study used a cross-sectional design, and the Portuguese version of the SPAI-C was applied to a sample of 1954 children enrolled in the 3rd to 8th grades of 2 private and 11 public schools. Eighty-one subjects were excluded because of incomplete questionnaires and 2 children refused to participate. The final sample consisted of 1871 children, 938 girls (50.1%) and 933 boys (49.8%), ranging in age from 9 to 14 years. The majority of the students were Caucasian (89.0%) and the remainder were African-Brazilian (11.0%). The Pearson product-moment correlation showed a two-week test-retest reliability coefficient of r = 0.780, and Cronbach's alpha was 0.946. The factor structure was similar to that reported in previous studies. The results regarding internal consistency, test-retest reliability and factor structure were similar to the findings of studies performed with English-speaking children. The present study showed that the Portuguese-language version of the SPAI-C is a reliable and valid measure of social anxiety for Brazilian children.
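The internal-consistency statistic reported above, Cronbach's alpha, can be computed directly from per-item scores. A minimal sketch with made-up data (not the study's scores):

```python
def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances)/var(total score)).
    `items` is a list of per-item score lists, all over the same respondents;
    sample variances (n-1 denominator) are used throughout."""
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

items = [[2, 4, 3, 5], [3, 4, 2, 5], [2, 5, 3, 4]]   # 3 items x 4 respondents
print(round(cronbach_alpha(items), 3))  # 0.892
```

Values approaching 1 indicate that the items vary together across respondents; the 0.946 reported for the SPAI-C is high by this criterion.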