914 results for "analytical approach"


Relevance:

60.00%

Publisher:

Abstract:

This study used a multi-analytical approach that combined traditional microbiological methods for the cultivation and isolation of heterotrophic bacteria in the laboratory with molecular identification of the isolates and physicochemical analysis of environmental samples. The model chosen for data integration was supported by knowledge from computational neuroscience and composed of three modules: (i) microbiological parameters, comprising taxonomic data obtained from partial sequencing of the 16S rRNA gene of 80 heterotrophic bacterial colonies isolated by plating on PCA media. For colony isolation, water samples were collected from the Atibaia and Jaguarí rivers at the water intake used for effluent treatment, upstream of the discharge of treated effluent from the Paulínia refinery (REPLAN/Petrobras) in the municipality of Paulínia-SP, as well as from the outlet of the biological treatment plant with stabilization pond and from the raw refinery wastewater; (ii) chemical parameters, comprising measurements of dissolved oxygen (DO), chemical oxygen demand (COD), biochemical oxygen demand (BOD), chloride, acidity (as CaCO3), alkalinity, ammonia, nitrite, nitrate, dissolved ions, sulfides, and oils and greases; and (iii) physical parameters, comprising pH, conductivity, temperature, transparency, settleable solids, suspended and soluble solids, volatile material, remaining fixing material (RFM), apparent color, and turbidity. The results revealed interesting theoretical relationships involving two families of bacteria (Carnobacteriaceae and Aeromonadaceae). Carnobacteriaceae showed positive theoretical relationships with COD, BOD, nitrate, chloride, temperature, conductivity, and apparent color, and a negative theoretical relationship with DO. Positive theoretical relationships were found between Aeromonadaceae and both DO and nitrate, while this bacterial family showed negative theoretical...
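The data-integration step above correlates bacterial family abundances with physicochemical parameters. A minimal sketch of that kind of analysis is shown below; all numbers are hypothetical illustration data, not the study's measurements, and the plain Pearson correlation stands in for whatever statistic the study's neuroscience-inspired model actually used.

```python
# Sketch: correlating bacterial family abundance with physicochemical
# parameters, in the spirit of the study's data-integration step.
# All numeric data here are hypothetical.

def pearson(x, y):
    """Plain Pearson correlation coefficient for two equal-length lists."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-sample data: relative abundance of a family vs. COD and DO.
carnobacteriaceae = [0.10, 0.25, 0.40, 0.55, 0.70]
cod_mg_per_l      = [120,  180,  260,  310,  400]   # rises with abundance
do_mg_per_l       = [7.8,  6.5,  5.1,  3.9,  2.6]   # falls with abundance

r_cod = pearson(carnobacteriaceae, cod_mg_per_l)   # strongly positive
r_do  = pearson(carnobacteriaceae, do_mg_per_l)    # strongly negative
```

A positive r with COD and a negative r with DO would mirror the sign pattern the abstract reports for Carnobacteriaceae.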

Relevance:

60.00%

Publisher:

Abstract:

The Guiana Shield (GS) is one of the most pristine regions of Amazonia and biologically one of the richest areas on Earth. How and when this massive diversity arose remains the subject of considerable debate. The prevailing hypothesis of Quaternary glacial refugia suggests that part of the eastern GS, among other areas in Amazonia, served as a stable forested refugium during periods of aridity. However, the more recently proposed disturbance-vicariance hypothesis holds that fluctuations in temperature on orbital timescales, with some associated aridity, have driven Neotropical diversification. The two hypotheses differ in their expectations for the temporal and spatial organization of biodiversity. Here, we compare the genetic structure of 12 leaf-litter-inhabiting frog species from the GS lowlands using a combination of mitochondrial and nuclear sequences in an integrative analytical approach that includes phylogenetic reconstructions, molecular dating, and Geographic Information System methods. This comparative and integrated approach overcomes the well-known limitations of phylogeographic inference based on single species and single loci. All of the focal species exhibit distinct phylogeographic patterns highlighting taxon-specific historical distributions, ecological tolerances to climatic disturbance, and dispersal abilities. Nevertheless, all but one species exhibit a history of fragmentation/isolation within the eastern GS during the Quaternary, with spatial and temporal concordance among species. The signature of isolation in northern French Guiana (FG) during the early Pleistocene is particularly clear. Approximate Bayesian Computation supports the synchrony of the divergence between northern FG and other GS lineages. Substructure observed throughout the GS suggests further Quaternary fragmentation and a role for rivers.
Our findings support fragmentation of moist tropical forest in the eastern GS during a period when the refuge hypothesis would have had the region serving as a contiguous wet-forest refuge.
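The synchrony test mentioned above uses Approximate Bayesian Computation. The toy rejection-ABC sketch below illustrates the mechanic on a synthetic shared-divergence-time model; the prior, noise level, tolerance, and "observed" data are all hypothetical stand-ins, not the study's sequences or model.

```python
# Minimal rejection-ABC sketch for a shared divergence time across
# lineages. Everything numeric here is a hypothetical illustration.
import random

random.seed(42)

TRUE_T = 1.8          # hypothetical "true" shared divergence time (Myr)
N_SPECIES = 5
NOISE = 0.05

# Pretend observed summary statistic: per-species divergence estimates.
observed = [TRUE_T + random.gauss(0, NOISE) for _ in range(N_SPECIES)]

def simulate(t):
    """Simulate per-species divergence estimates given a shared time t."""
    return [t + random.gauss(0, NOISE) for _ in range(N_SPECIES)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Rejection ABC: draw t from a uniform prior, keep draws whose simulated
# summaries fall within a tolerance of the observed ones.
accepted = []
for _ in range(20000):
    t = random.uniform(0.0, 5.0)
    if distance(simulate(t), observed) < 0.3:
        accepted.append(t)

posterior_mean = sum(accepted) / len(accepted)
```

The accepted draws approximate the posterior for the shared time; a tight posterior around one value is the kind of evidence read as "synchronous divergence".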

Relevance:

60.00%

Publisher:

Abstract:

Companies are currently choosing to integrate logics and systems to achieve better solutions, including efforts to combine the logic of material requirements planning (MRP) systems with lean production systems. The purpose of this article was to design an MRP module as part of the implementation of an enterprise resource planning (ERP) system in a company that produces agricultural implements and has used the lean production system since 1998. The proposal draws on innovation theory, network theory, lean production systems, ERP systems, and hybrid production systems, which combine MRP components with lean production concepts. The analytical approach of innovation networks enables verification of the links and relationships among companies and among departments of the same corporation. The analysis begins with the MRP implementation project carried out in a Brazilian metallurgical company and follows the operationalisation of the MRP project through to the stabilisation of production. The main point is that the MRP system should support the company's operations by responding in time to demand fluctuations, facilitating the creation process and coordinating branch offices in other countries that use components produced at headquarters, thereby ensuring more accurate stock estimates. Accordingly, the article presents the enterprise knowledge development organisational modelling methodology to represent the models (goals, actors and resources, business rules, business processes, and concepts) that should be included in this MRP implementation process for the new configuration of the production system.
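The MRP logic the article builds on can be sketched with the standard netting calculation: per-period gross requirements are netted against projected on-hand stock, and shortfalls become planned order releases shifted back by the lead time. The quantities, lead time, and lot-for-lot policy below are hypothetical, not the company's data.

```python
# Minimal sketch of core MRP netting logic (lot-for-lot policy).
# Demand figures, on-hand stock, and lead time are hypothetical.

def mrp_netting(gross_requirements, on_hand, lead_time):
    """Return planned order releases per period."""
    periods = len(gross_requirements)
    planned_releases = [0] * periods
    stock = on_hand
    for t, demand in enumerate(gross_requirements):
        if stock >= demand:
            stock -= demand                      # cover demand from stock
        else:
            shortfall = demand - stock           # net requirement
            stock = 0
            release_period = max(0, t - lead_time)
            planned_releases[release_period] += shortfall
    return planned_releases

releases = mrp_netting(gross_requirements=[40, 60, 30, 80],
                       on_hand=70, lead_time=1)
# 70 on hand covers period 0 and part of period 1; the rest becomes
# planned releases one period ahead of each shortfall.
```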

Relevance:

60.00%

Publisher:

Abstract:

Aims. We construct a theoretical model to predict the number of orphan afterglows (OAs) from gamma-ray bursts (GRBs) triggered by primordial metal-free (Pop III) stars that are expected to be observed by the Gaia mission. In particular, we consider primordial metal-free stars that were affected by radiation from other stars (Pop III.2) as a possible target. Methods. We use a semi-analytical approach that includes all relevant feedback effects to construct the cosmic star formation history and its connection with the cumulative number of GRBs. The OA events are generated using a Monte Carlo method, and realistic simulations of Gaia's scanning law are performed to derive the expected observation probability. Results. We show that Gaia can observe up to 2.28 +/- 0.88 off-axis and 2.78 +/- 1.41 on-axis afterglows during the five-year nominal mission. This implies that a non-negligible percentage (~10%) of the afterglows that may be observed by Gaia could have Pop III stars as progenitors.
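The observability estimate above combines Monte Carlo event generation with the scanning law. The toy sketch below shows the shape of such a calculation: random events are tested against a drastically simplified periodic revisit pattern. The cadence, event rate, and afterglow durations are hypothetical illustration values, not the paper's scanning-law simulation.

```python
# Toy Monte Carlo observability sketch: random afterglow events vs. a
# simplified periodic sky-revisit pattern. All rates, durations, and
# the 30-day cadence are hypothetical stand-ins for the real scanning law.
import math
import random

random.seed(7)

MISSION_DAYS = 5 * 365.25   # five-year nominal mission
REVISIT_DAYS = 30.0         # toy stand-in for the scanning-law cadence
N_EVENTS = 100000           # simulated afterglow events

def detected(event_start, duration):
    """Event is seen if the next periodic visit falls inside its light curve."""
    next_visit = math.ceil(event_start / REVISIT_DAYS) * REVISIT_DAYS
    return next_visit <= event_start + duration and next_visit <= MISSION_DAYS

hits = 0
for _ in range(N_EVENTS):
    start = random.uniform(0, MISSION_DAYS)
    duration = random.uniform(1.0, 20.0)   # afterglow visibility window, days
    if detected(start, duration):
        hits += 1

p_detect = hits / N_EVENTS   # analytically ~ mean(duration)/REVISIT = 0.35
```

Multiplying such a detection probability by a predicted event rate gives an expected count of the kind quoted in the abstract.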

Relevance:

60.00%

Publisher:

Abstract:

Background. The genetic mechanisms underlying interindividual blood pressure variation reflect the complex interplay of genetic and environmental variables. The current standard statistical methods for detecting genes involved in the regulation of complex traits are based on univariate analysis; few studies have focused on the search for, and understanding of, quantitative trait loci responsible for gene × environment interactions or on multiple-trait analysis. Composite interval mapping has been extended to multiple traits and may be an interesting approach to such a problem. Methods. We used multiple-trait analysis for quantitative trait locus mapping of loci having different effects on systolic blood pressure under NaCl exposure. The animals studied were 188 rats, the progeny of an F2 intercross between hypertensive and normotensive strains, genotyped at 179 polymorphic markers across the rat genome. To accommodate the correlation structure of measurements taken in the same animals, we applied both univariate and multivariate strategies for analyzing the data. Results. We detected a new quantitative trait locus in a region close to marker R589 on chromosome 5 of the rat genome, not previously identified through serial analysis of the individual traits. In addition, we were able to justify analytically the parametric restrictions, in terms of regression coefficients, responsible for the gain in precision with the adopted analytical approach. Conclusion. Future work should focus on fine mapping and on identifying the causative variant responsible for this quantitative trait locus signal. The multivariate strategy might be valuable in the study of genetic determinants of interindividual variation in antihypertensive drug effectiveness.
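The univariate building block of the mapping above is a single-marker regression of the trait on genotype dosage. The sketch below simulates that block for an F2 cross; the effect size, noise level, and allele frequencies are hypothetical, and it does not reproduce the study's composite interval mapping or multivariate extension.

```python
# Toy single-marker QTL regression for an F2 cross: trait (e.g. systolic
# blood pressure) on genotype dosage. All parameters are hypothetical.
import random

random.seed(3)

N = 188                      # F2 animals, as in the study
EFFECT = 4.0                 # hypothetical mmHg per allele copy

# F2 genotype dosages at the marker: 0/1/2 in the expected 1:2:1 ratio.
genotypes = [random.choice([0, 1, 1, 2]) for _ in range(N)]
trait = [120 + EFFECT * g + random.gauss(0, 5) for g in genotypes]

# Ordinary least squares slope for trait ~ genotype.
mg = sum(genotypes) / N
mt = sum(trait) / N
beta = (sum((g - mg) * (t - mt) for g, t in zip(genotypes, trait))
        / sum((g - mg) ** 2 for g in genotypes))
# beta estimates the per-allele effect; a large |beta| relative to its
# standard error is the univariate evidence for a QTL at this marker.
```

The multivariate strategy in the study jointly models several such traits, borrowing strength from their correlation structure.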

Relevance:

60.00%

Publisher:

Abstract:

In this work we introduce an analytical approach to the frequency warping transform. Criteria for the design of operators based on arbitrary warping maps are provided, and an algorithm for their fast computation is defined. Such operators can be used to shape the tiling of the time-frequency plane in a flexible way. Moreover, they are designed to be inverted by the application of their adjoint operator. According to the proposed mathematical model, the frequency warping transform is computed by combining two additive operators: the first represents its nonuniform Fourier transform approximation, and the second suppresses aliasing. The first operator is known to be analytically characterizable and quickly computable by various interpolation approaches. A factorization of the second operator is found for arbitrarily shaped, non-smooth warping maps. By properly truncating the operators involved in the factorization, the computation turns out to be fast without compromising accuracy.
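The nonuniform-Fourier core of the construction above can be sketched naively: evaluate the DFT on a warped frequency grid instead of the uniform one. The O(N²) sketch below is only that core; it does not implement the paper's fast algorithm or the aliasing-suppression operator, and the warping map used is a toy choice.

```python
# Naive O(N^2) frequency-warping sketch: the signal is analyzed on a
# warped normalized-frequency grid warp(k/N) instead of the uniform k/N.
import cmath

N = 64
signal = [1.0 if n < 8 else 0.0 for n in range(N)]   # toy rectangular pulse

def warped_dft(x, warp):
    """DFT evaluated at warped normalized frequencies warp(k/N)."""
    n_len = len(x)
    out = []
    for k in range(n_len):
        f = warp(k / n_len)                 # warped normalized frequency
        out.append(sum(x[n] * cmath.exp(-2j * cmath.pi * f * n)
                       for n in range(n_len)))
    return out

identity = lambda v: v                       # no warping: ordinary DFT
quadratic = lambda v: v * v                  # toy warping map on [0, 1)

plain = warped_dft(signal, identity)         # coincides with the DFT
warped = warped_dft(signal, quadratic)       # denser sampling at low freq.

dc = plain[0]                                # DFT bin 0 = sum of samples
```

With a non-identity map the frequency samples are no longer orthogonal, which is exactly why the paper needs the aliasing-suppression term and the adjoint-based inversion.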

Relevance:

60.00%

Publisher:

Abstract:

The wheel-rail contact analysis plays a fundamental role in the multibody modeling of railway vehicles. A good contact model must provide an accurate description of the global contact phenomena (contact forces and torques, number and position of the contact points) and of the local contact phenomena (position and shape of the contact patch, stresses and displacements). The model also has to assure high numerical efficiency (so that it can be implemented directly online within multibody models) and good compatibility with commercial multibody software (Simpack Rail, Adams Rail). The wheel-rail contact problem has been discussed by several authors, and many models can be found in the literature. The contact models can be subdivided into two categories: global models and local (or differential) models. Currently, the main global approaches to the problem are the so-called rigid contact formulation and the semi-elastic contact description. The rigid approach considers the wheel and the rail as rigid bodies. The contact is imposed by means of constraint equations, and the contact points are detected during the dynamic simulation by solving the nonlinear algebraic-differential equations associated with the constrained multibody system. Indentation between the bodies is not permitted, and the normal contact forces are calculated through the Lagrange multipliers. Finally, Hertz's and Kalker's theories make it possible to evaluate the shape of the contact patch and the tangential forces, respectively. The semi-elastic approach also considers the wheel and the rail as rigid bodies; in this case, however, no kinematic constraints are imposed and indentation between the bodies is permitted. The contact points are detected by means of approximate procedures (based on look-up tables and simplifying hypotheses on the problem geometry).
The normal contact forces are calculated as a function of the indentation while, as in the rigid approach, Hertz's and Kalker's theories make it possible to evaluate the shape of the contact patch and the tangential forces. Both of the described multibody approaches are computationally very efficient, but their generality and accuracy often turn out to be insufficient because the physical hypotheses behind these theories are too restrictive and, in many circumstances, unverified. In order to obtain a complete description of the contact phenomena, local (or differential) contact models are needed: wheel and rail have to be considered elastic bodies governed by Navier's equations, and the contact has to be described by suitable analytical contact conditions. The contact between elastic bodies has been widely studied in the literature, both in the general case and in the rolling case, and many procedures based on variational inequalities, FEM techniques, and convex optimization have been developed. This kind of approach assures high generality and accuracy but still involves very large computational costs and memory consumption. Because of this computational load, and referring to the current state of the art, the integration between multibody and differential modeling is almost absent in the literature, especially in the railway field. This integration is nevertheless very important, because only differential modeling allows an accurate analysis of the contact problem (in terms of contact forces and torques, position and shape of the contact patch, stresses and displacements), while multibody modeling is the standard in the study of railway dynamics. In this thesis, some innovative wheel-rail contact models developed during the Ph.D. activity are described.
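The Hertzian normal-contact law the semi-elastic approach relies on can be sketched in its simplest textbook form, the sphere-on-plane case (the real wheel-rail problem involves two curvatures per body and Kalker's theory for the tangential part, neither reproduced here). Material and geometry values below are illustrative, roughly steel on steel.

```python
# Hedged sketch of Hertzian point contact, reduced to the textbook
# sphere-on-plane case. Material and geometry values are illustrative.
import math

E = 210e9        # Young's modulus, Pa (steel)
NU = 0.3         # Poisson's ratio
R = 0.46         # effective radius, m (toy value for a wheel tread)

# Contact modulus for two identical materials: 1/E* = 2 * (1 - nu^2) / E
E_star = E / (2.0 * (1.0 - NU ** 2))

def hertz_point_contact(delta):
    """Normal force (N) and contact radius (m) for indentation delta (m)."""
    force = (4.0 / 3.0) * E_star * math.sqrt(R) * delta ** 1.5
    radius = math.sqrt(R * delta)
    return force, radius

force, radius = hertz_point_contact(delta=50e-6)   # 50 micron indentation
```

The F ∝ δ^{3/2} nonlinearity is exactly the indentation-to-force map the semi-elastic approach evaluates at each detected contact point.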
Concerning the global models, two new models belonging to the semi-elastic approach are presented; they satisfy the following specifications:
1) the models have to be 3D and consider all six relative degrees of freedom between wheel and rail;
2) the models have to handle generic railway tracks and generic wheel and rail profiles;
3) the models have to assure a general and accurate handling of multiple contact, without simplifying hypotheses on the problem geometry; in particular, they have to evaluate the number and position of the contact points and, for each point, the contact forces and torques;
4) the models have to be implementable directly online within multibody models, without look-up tables;
5) the models have to assure computation times comparable with those of commercial multibody software (Simpack Rail, Adams Rail) and compatible with RT and HIL applications;
6) the models have to be compatible with commercial multibody software (Simpack Rail, Adams Rail).
The most innovative aspect of the new global contact models concerns the detection of the contact points: both models reduce the dimension of the algebraic problem by means of suitable analytical techniques. This reduction yields the high numerical efficiency that makes the online implementation of the new procedure possible and allows performance comparable with that of commercial multibody software, while the analytical approach assures high accuracy and generality.
Concerning the local (or differential) contact models, one new model satisfying the following specifications is presented:
1) the model has to be 3D and consider all six relative degrees of freedom between wheel and rail;
2) the model has to handle generic railway tracks and generic wheel and rail profiles;
3) the model has to assure a general and accurate handling of multiple contact without simplifying hypotheses on the problem geometry; in particular, it has to be able to calculate both the global contact variables (contact forces and torques) and the local contact variables (position and shape of the contact patch, stresses and displacements);
4) the model has to be implementable directly online within multibody models;
5) the model has to assure high numerical efficiency and reduced memory consumption in order to achieve a good integration between multibody and differential modeling (the basis for the local contact models);
6) the model has to be compatible with commercial multibody software (Simpack Rail, Adams Rail).
In this case, the most innovative aspects of the new local contact model concern the contact modeling (by means of suitable analytical conditions) and the numerical algorithms needed to solve the discrete problem arising from the discretization of the original continuum problem. Moreover, during the development of the local model, achieving a good compromise between accuracy and efficiency turned out to be very important for a good integration between multibody and differential modeling. The contact models have then been inserted within a 3D multibody model of a railway vehicle to obtain a complete model of the wagon. The railway vehicle chosen as benchmark is the Manchester Wagon, whose physical and geometrical characteristics are readily available in the literature.
The model of the whole railway vehicle (multibody model and contact model) has been implemented in the Matlab/Simulink environment. The multibody model has been implemented in SimMechanics, a Matlab toolbox specifically designed for multibody dynamics, while the contact models use C S-functions, a Matlab architecture that allows efficient coupling between the Matlab/Simulink and C/C++ environments. The 3D multibody model of the same vehicle (this time equipped with a standard contact model based on the semi-elastic approach) has also been implemented in Simpack Rail, a widely tested and validated commercial multibody software package for railway vehicles. Finally, numerical simulations of the vehicle dynamics have been carried out on many different railway tracks with the aim of evaluating the performance of the whole model. The comparison between the results obtained with the Matlab/Simulink model and those obtained with the Simpack Rail model has allowed an accurate and reliable validation of the new contact models. To conclude this brief introduction to my Ph.D. thesis, I would like to thank Trenitalia and the Regione Toscana for the support provided throughout the Ph.D. activity, as well as INTEC GmbH, the company that develops the software Simpack Rail, with which we are currently working to develop innovative toolboxes specifically designed for wheel-rail contact analysis.

Relevance:

60.00%

Publisher:

Abstract:

In territories where food production is scattered across many small or medium-size, or even domestic, farms, a large amount of heterogeneous residues is produced yearly, since farmers usually carry out several different activities on their properties. The amount and composition of farm residues therefore change widely over the year, according to the particular production process under way. Coupling high-efficiency micro-cogeneration energy units with easily handled biomass conversion equipment able to treat different materials would provide many important advantages to farmers and to the community as well; increasing the feedstock flexibility of gasification units is therefore now seen as a further paramount step towards their wide adoption in rural areas and as a real necessity for their utilization at small scale. Two main research topics were considered of principal concern for this purpose and are discussed in this work: the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. According to these two main aspects, the present work is divided in two main parts. The first is focused on the biomass gasification process, which was investigated in its theoretical aspects and then modelled analytically in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge, and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile, above all), in order to point out the main differences that prevent the use of the same conversion unit for different materials.
To this end, a kinetic-free gasification model was initially developed in Excel sheets, considering different values of the air-to-biomass ratio and taking downdraft gasification technology as the particular application examined. The differences in syngas production and working conditions (process temperatures, above all) among the considered fuels were related to biomass properties such as elementary composition and ash and water contents. The novelty of this analytical approach lies in the use of kinetic constant ratios to determine the oxygen distribution among the different oxidation reactions (regarding the volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone, through which the energy and mass balances involved in the process algorithm were also linked together. Moreover, the main advantage of this analytical tool is the ease with which the input data corresponding to particular biomass materials can be inserted into the model, so that a rapid evaluation of their thermo-chemical conversion properties can be obtained, based mainly on their chemical composition. A good agreement of the model results with literature and experimental data was found for almost all the considered materials (except for refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on an analysis of the fundamental thermo-physical and thermo-chemical mechanisms that are assumed to regulate the main solid conversion steps involved in the gasification process.
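The water-gas shift equilibrium assumed in the gasification zone can be sketched as follows. The temperature correlation used below is the common Moe-type empirical fit for the shift constant (an assumption on my part; the thesis does not state which correlation its Excel model uses), and the feed composition is hypothetical.

```python
# Sketch of the water-gas shift equilibrium step: CO + H2O <-> CO2 + H2.
# K(T) uses a Moe-type empirical fit; the feed composition is hypothetical.
import math

def k_wgs(T):
    """Equilibrium constant of the water-gas shift at temperature T (K)."""
    return math.exp(4577.8 / T - 4.33)

def wgs_equilibrium(co, h2o, co2, h2, T):
    """Extent of reaction x (mol) solving the equilibrium by bisection."""
    K = k_wgs(T)
    def residual(x):
        # (co2+x)(h2+x) grows with x, K(co-x)(h2o-x) shrinks: monotone root.
        return (co2 + x) * (h2 + x) - K * (co - x) * (h2o - x)
    lo, hi = -min(co2, h2), min(co, h2o)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Toy gasification-zone gas at 1073 K (about 800 C).
x = wgs_equilibrium(co=1.0, h2o=1.0, co2=0.2, h2=0.2, T=1073.0)
```

In the kinetic-free model this equilibrium closes the composition, linking the mass balance to the energy balance through the zone temperature.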
Gasification units were schematically subdivided into four reaction zones, corresponding respectively to biomass heating, solids drying, pyrolysis, and char gasification, and the time required for the full development of each step was correlated with the kinetic rates (for the pyrolysis and char gasification steps only) and with the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, and temperature is therefore the main working parameter controlling this step. Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is almost entirely achieved by radiative heat transfer from the hot walls of the reactor to the bed of material. For pyrolysis, instead, working temperature, particle size, and the nature of the biomass itself (through its own pyrolysis heat) all have comparable weights on the process development, so that the corresponding time can depend on any of these factors according to the particular fuel being gasified and the particular conditions established inside the gasifier. The same analysis also led to the estimation of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units was finally accomplished. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit seems to be suitable for more than one biomass species.
Nevertheless, since the reactor diameters were found to be quite similar for all the examined materials, a single unit could be designed for all of them by adopting the largest diameter and by combining the maximum heights of each reaction zone as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air-injection nozzles at different levels along the reactor, the gasification zone could be set up appropriately for the particular material being gasified. Finally, since gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate (on which the process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would be suitable for the complete development of solid conversion in each case, without noticeably changing the fluid dynamic behaviour of the unit or the air/biomass ratio. The second part of this work deals with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially if multi-fuel gasifiers are to be used, more substantial gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can simultaneously be present in the exit gas stream, and suitable gas cleaning systems have to be designed accordingly. In this work, an overall study on the assessment of gas cleaning lines is carried out.
Differently from other research efforts in the same field, the main scope is to define general arrangements for gas cleaning lines suitable to remove several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3), and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three different plant sizes, corresponding respectively to gas flows of 8 Nm3/h, 125 Nm3/h, and 350 Nm3/h. Their performances were examined on the basis of their optimal working conditions (efficiency, temperature, and pressure drops, above all) and their own consumption of energy and materials. Subsequently, the designed units were combined into different overall gas cleaning line arrangements (paths), following technical constraints mainly determined by the same performance analysis of the cleaning units and by the presumable synergistic effects of contaminants on the correct working of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues in the design of the paths was the removal of tars from the gas stream, to prevent filter plugging and/or line pipe clogging. For this purpose, a catalytic tar cracking unit was envisaged as the only viable solution, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, with a correspondingly relevant air consumption for this operation, was calculated in all cases.
Other difficulties had to be overcome in the abatement of alkali metals, which condense at lower temperatures than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Apart from these two solutions, which seem unavoidable in gas cleaning line design, high-temperature gas cleaning lines proved not to be feasible for the two larger plant sizes. Indeed, since the use of temperature-control devices was precluded in the adopted design procedure, ammonia partial oxidation units (the only methods considered for the abatement of ammonia at high temperature) were not suitable for the large-scale units, because of the large increase in reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements for each considered plant size were finally designed, so that the possibility of cleaning the gas up to the required standard degree was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of a set of operational parameters, among which total pressure drops, total energy losses, number of units, and consumption of secondary materials. On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, connected to the possibility of using activated carbon units for ammonia removal and a Nahcolite adsorber for hydrochloric acid; the very high efficiency of this latter material is also remarkable.
Finally, as an estimate of the overall energy loss pertaining to the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter obtained on the basis of the lower heating value of the gas only. This overall study on gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.

Relevance:

60.00%

Publisher:

Abstract:

3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft tissue artefacts that limit the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, slowed down the translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, fluoroscopic analysis is characterized in depth in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated through preliminary in-silico studies as: (a) geometric distortion and calibration errors; (b) 2D image and 3D model resolution; (c) incorrect contour extraction; (d) bone model symmetries; (e) optimization algorithm limitations; (f) user errors. The effect of each criticality was quantified and verified with a preliminary in-vivo study on the elbow joint. The dominant source of error was identified in the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process.
To solve this problem, two different approaches were followed. To increase the convergence basin around the optimal pose, the local approach used sequential alignment of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated in a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translations, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as in methodological research studies; the mono-planar analysis may suffice for clinical applications in which analysis time and cost are an issue. A further reduction of user interaction was obtained for prosthetic joint kinematics: a mixed region-growing and level-set segmentation method was proposed that halved the analysis time by delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semi-automatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied in a first in-vivo methodological study on foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
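The convergence-basin problem driving the global approach can be illustrated on a 1-D toy: on a multimodal "registration cost", plain local descent from a bad starting pose settles into a spurious minimum, while a coarse global scan followed by local refinement reaches the true pose. The cost function and optimizer below are synthetic stand-ins, not the thesis' projection metric or memetic algorithm.

```python
# Toy illustration of a limited convergence basin: local-only descent
# vs. coarse global scan + local refinement. The cost is synthetic.
import math

TRUE_POSE = 2.0

def cost(p):
    """Synthetic multimodal cost with its global minimum at TRUE_POSE."""
    d = p - TRUE_POSE
    return d * d + 2.0 * math.sin(4.0 * d) ** 2

def local_descent(p, step=1e-3, iters=20000):
    """Naive fixed-step gradient descent with a numerical derivative."""
    for _ in range(iters):
        grad = (cost(p + 1e-6) - cost(p - 1e-6)) / 2e-6
        p -= step * grad
    return p

# Local-only: start far from the solution and fall into a side minimum.
stuck = local_descent(-1.0)

# Global scan + local refinement: seed descent from the best coarse sample.
seed = min((s * 0.1 for s in range(-50, 51)), key=cost)
refined = local_descent(seed)
```

The coarse-scan seeding plays the role of the thesis' global exploration: it only needs to land inside the correct basin, after which local optimization does the precise alignment.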


Resumo:

The work described in this thesis had two objectives. The first was to develop a physically based computational model that could be used to predict the electronic conductivity, Seebeck coefficient, and thermal conductivity of Pb1-xSnxTe alloys over the 400 K to 700 K temperature range as a function of Sn content and doping level. The second was to determine how the secondary phase inclusions observed in Pb1-xSnxTe alloys made by consolidating mechanically alloyed elemental powders impact the ability of the material to harvest waste heat and generate electricity in the 400 K to 700 K temperature range. The motivation for this work was that, although the promise of this alloy as an unusually efficient thermoelectric power generator material in the 400 K to 700 K range had been demonstrated in the literature, methods to reproducibly control and subsequently optimize the material's thermoelectric figure of merit remain elusive. Mechanical alloying, though not typically used to fabricate these alloys, is a potentially cost-effective method for engineering these properties. Given that there are deviations from crystalline perfection in mechanically alloyed material, such as secondary phase inclusions, the question arises as to whether these defects are detrimental to thermoelectric function or, alternatively, whether they enhance the thermoelectric function of the alloy. The hypothesis formed at the outset of this work was that the small secondary phase SnO2 inclusions observed to be present in the mechanically alloyed Pb1-xSnxTe would increase the thermoelectric figure of merit of the material over the temperature range of interest. It was proposed that the increase in the figure of merit would arise because the inclusions would not reduce the electrical conductivity to as great an extent as the thermal conductivity.
If this were true, then the experimentally measured electronic conductivity in mechanically alloyed Pb1-xSnxTe alloys containing these inclusions would not be less than that expected in alloys without them, while the portion of the thermal conductivity that is not due to charge carriers (the lattice thermal conductivity) would be less than what would be expected from alloys without these inclusions. Furthermore, it would be possible to approximate the observed changes in the electrical and thermal transport properties using existing physical models for the scattering of electrons and phonons by small inclusions. The approach taken to investigate this hypothesis was to first experimentally characterize the mobile carrier concentration at room temperature, along with the extent and type of secondary phase inclusions present, in a series of three mechanically alloyed Pb1-xSnxTe alloys with different Sn content. Second, the physically based computational model was developed. This model was used to determine what the electronic conductivity, Seebeck coefficient, total thermal conductivity, and the portion of the thermal conductivity not due to mobile charge carriers would be in these particular Pb1-xSnxTe alloys if there were no secondary phase inclusions. Third, the electronic conductivity, Seebeck coefficient, and total thermal conductivity were experimentally measured for these three alloys, with inclusions present, at elevated temperatures. The model predictions for electrical conductivity and Seebeck coefficient were directly compared to the experimental elevated-temperature electrical transport measurements. The computational model was then used to extract the lattice thermal conductivity from the experimentally measured total thermal conductivity. This lattice thermal conductivity was then compared to what would be expected from the alloys in the absence of secondary phase inclusions.
Secondary phase inclusions were determined by X-ray diffraction analysis to be present in all three alloys to a varying extent. The inclusions were found not to significantly degrade electrical conductivity at temperatures above ~400 K in these alloys, though they do dramatically impact electronic mobility at room temperature. It is shown that, at temperatures above ~400 K, electrons are scattered predominantly by optical and acoustical phonons rather than by an alloy scattering mechanism or by the inclusions. The experimental electrical conductivity and Seebeck coefficient data at elevated temperatures were found to be within ~10% of what would be expected for material without inclusions. The inclusions were also not found to reduce the lattice thermal conductivity at elevated temperatures. The experimentally measured thermal conductivity data were found to be consistent with the lattice thermal conductivity that would arise from two scattering processes: phonon-phonon scattering (Umklapp scattering) and the scattering of phonons by the disorder induced by the formation of a PbTe-SnTe solid solution (alloy scattering). In contrast to the case of electrical transport, the alloy scattering mechanism in thermal transport is shown to be a significant contributor to the total thermal resistance. An estimate of the extent to which the mean free time between phonon scattering events would be reduced by the presence of the inclusions is consistent with the above analysis of the experimental data. The first important result of this work was the development of an experimentally validated, physically based computational model that can be used to predict the electronic conductivity, Seebeck coefficient, and thermal conductivity of Pb1-xSnxTe alloys over the 400 K to 700 K temperature range as a function of Sn content and doping level.
This model will be critical in future work, first as a tool to determine the highest thermoelectric figure of merit one can expect from this alloy system at a given temperature and, second, as a tool to determine the optimum Sn content and doping level to achieve this figure of merit. The second important result of this work is the determination that the secondary phase inclusions observed to be present in the Pb1-xSnxTe made by mechanical alloying do not keep the material from having the same electrical and thermal transport at elevated temperatures that would be expected from "perfect" single crystal material. The analytical approach described in this work will be critical in future investigations to predict how changing the size, type, and volume fraction of secondary phase inclusions can be used to tailor thermal and electrical transport in this materials system.
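The quantities the model predicts combine into the dimensionless thermoelectric figure of merit zT = S²σT/κ, and the charge-carrier part of the thermal conductivity can be subtracted via the Wiedemann-Franz law to extract the lattice contribution, as described above. A minimal sketch follows; the numerical values are illustrative, not measurements from this work, and the constant degenerate-limit Lorenz number is a simplifying assumption (the thesis model computes it from the band structure):

```python
SOMMERFELD_LORENZ = 2.44e-8  # W*Ohm/K^2, degenerate-limit Lorenz number

def figure_of_merit(seebeck, sigma, kappa, temp):
    """Dimensionless zT = S^2 * sigma * T / kappa."""
    return seebeck ** 2 * sigma * temp / kappa

def lattice_thermal_conductivity(kappa_total, sigma, temp, lorenz=SOMMERFELD_LORENZ):
    """Wiedemann-Franz estimate: kappa_lattice = kappa_total - L * sigma * T."""
    return kappa_total - lorenz * sigma * temp

# Illustrative values of the order reported for PbTe-based alloys near 600 K
S, sigma, kappa, T = 220e-6, 5e4, 1.5, 600.0  # V/K, S/m, W/(m*K), K
zT = figure_of_merit(S, sigma, kappa, T)
kappa_lat = lattice_thermal_conductivity(kappa, sigma, T)
print(f"zT = {zT:.2f}, lattice kappa = {kappa_lat:.2f} W/(m*K)")
```

This is the arithmetic by which the lattice thermal conductivity was extracted from the measured total thermal conductivity before comparing it with the inclusion-free expectation.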


Resumo:

Capillary zone electrophoresis (CZE) in fused-silica capillaries is an effective analytical approach for the separation and determination of the transferrin (Tf) isoforms and thus carbohydrate-deficient transferrin (CDT) in human serum. Sera of patients with progressed liver cirrhosis are prone to interferences in the beta region which prevent the proper determination of CDT by CZE without additional sample preparation. Efforts to identify, reduce or even eliminate these interferences have been undertaken. Data obtained by ultrafiltration, affinity subtraction procedures using protein A, protein L and antibodies against immunoglobulins or Tf, and immunopurification of Tf suggest that the interferences in the patient sera are caused by increased levels of IgA and IgM and are best eliminated by immunopurification. Avian IgY antibody spin column immunocapture of serum Tf followed by CZE analysis of the stripped and concentrated fraction is shown to provide an attractive approach for CDT monitoring in sera with beta region interferences.


Resumo:

Energy efficiency has become an important research topic in intralogistics. In this field the focus is placed particularly on automated storage and retrieval systems (AS/RS) utilizing stacker cranes, as these systems are widespread and consume a significant portion of the total energy demand of intralogistical systems. Numerical simulation models have been developed to calculate the energy demand rather precisely for discrete single and dual command cycles. Unfortunately, these simulation models are not suitable for fast calculations of the mean energy demand of a complete storage aisle. For this purpose analytical approaches would be more convenient, but until now analytical approaches have only delivered results for certain configurations. In particular, for commonly used stacker cranes equipped with an intermediate circuit connection within their drive configuration, there is no analytical approach available to calculate the mean energy demand. This article addresses this research gap and presents a calculation approach that enables planners to quickly calculate the energy demand of these systems.
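A mean-value calculation of the kind described can be sketched as follows: the energy of a single command cycle to each slot is modeled with hoisting work, partial regeneration of the lowering energy through the intermediate circuit, and friction losses for horizontal travel, then averaged over a uniform grid of rack positions. All masses, efficiencies, and rack dimensions below are illustrative assumptions, not values from the article:

```python
import itertools

def cycle_energy(x, y, mass=3000.0, g=9.81,
                 eta_drive=0.9, eta_regen=0.7, friction_force=500.0):
    """Energy (J) of one single command cycle to slot (x, y) and back.
    Hoisting energy is drawn through the drive (eta_drive) and partly
    recovered during lowering via the intermediate circuit (eta_regen);
    horizontal travel is charged as friction work both ways.
    All parameter values are illustrative assumptions."""
    e_hoist = mass * g * y / eta_drive            # lifting to height y
    e_lower = -mass * g * y * eta_regen           # regenerated on the way down
    e_drive = 2 * friction_force * x / eta_drive  # horizontal travel, out and back
    return e_hoist + e_lower + e_drive

def mean_energy(rack_length=100.0, rack_height=30.0, nx=50, ny=15):
    """Mean single-command-cycle energy over a uniform grid of slot centres."""
    xs = [rack_length * (i + 0.5) / nx for i in range(nx)]
    ys = [rack_height * (j + 0.5) / ny for j in range(ny)]
    slots = list(itertools.product(xs, ys))
    return sum(cycle_energy(x, y) for x, y in slots) / len(slots)

print(f"mean energy per single command cycle: {mean_energy() / 1000:.1f} kJ")
```

Because this toy energy model is linear in x and y, the grid average collapses to the energy at the rack midpoint; real drive models with velocity-dependent losses do not simplify this way, which is what makes a closed-form mean-value approach non-trivial.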


Resumo:

Most published genome-wide association studies (GWAS) in sheep have investigated recessively inherited monogenic traits. The objective here was to assess the feasibility of performing GWAS for a dominant trait whose genetic basis was already known. A total of 42 Manchega and Rasa Aragonesa sheep segregating solid black or white coat pigmentation were genotyped using the SNP50 BeadChip. Previous analysis in Manchegas demonstrated a complete association between the pigmentation trait and alleles of the MC1R gene, setting an a priori expectation for the GWAS. Multiple methods were used to identify and quantify the strength of population substructure between black and white animals before allelic association testing was performed for 49 034 SNPs. Following correction for substructure, the most strongly associated SNP identified by GWAS (s26449) was also the closest to the MC1R gene. This finding was strongly supported by a permutation tree-based random forest (RF) analysis. Importantly, the GWAS also identified unlinked SNPs with p-values only slightly higher than that of s26449. Random forest analysis indicated that these were false positives, suggesting that an interpretation based on both approaches is beneficial. The results indicate that a combined analytical approach can be successful in studies where only a modest number of animals is available and substantial population stratification exists.
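The allelic association test at the core of such a GWAS is a 1-df chi-square on a 2x2 table of allele counts per SNP. As a stand-in for the substructure corrections actually used in the study, the sketch below applies genomic control, deflating the statistic by an inflation factor; the allele counts and the inflation factor are hypothetical, not data from the paper:

```python
def allelic_chi2(case_alt, case_total, ctrl_alt, ctrl_total):
    """1-df chi-square statistic for a 2x2 allele-count table:
    alternative/reference allele counts in cases (black) vs controls (white)."""
    table = [[case_alt, case_total - case_alt],
             [ctrl_alt, ctrl_total - ctrl_alt]]
    row = [sum(r) for r in table]
    col = [table[0][c] + table[1][c] for c in range(2)]
    n = sum(row)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Hypothetical allele counts for a strongly associated SNP
raw = allelic_chi2(case_alt=38, case_total=40, ctrl_alt=6, ctrl_total=44)

# Genomic control: deflate by an inflation factor estimated from the
# genome-wide median chi-square (lambda here is hypothetical)
lambda_gc = 1.3
corrected = raw / lambda_gc
print(f"raw chi2 = {raw:.2f}, GC-corrected chi2 = {corrected:.2f}")
```

The random forest analysis complements such single-SNP tests by ranking SNPs through permutation importance, which is how the study flagged the near-significant unlinked SNPs as likely false positives.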


Resumo:

BACKGROUND Pathology studies have shown delayed arterial healing in culprit lesions of patients with acute coronary syndrome (ACS) compared with stable coronary artery disease (CAD) after placement of drug-eluting stents (DES). It is unknown whether similar differences exist in-vivo during long-term follow-up. Using optical coherence tomography (OCT), we assessed differences in arterial healing between patients with ACS and stable CAD five years after DES implantation. METHODS AND RESULTS A total of 88 patients, comprising 53 ACS lesions with 7864 struts and 35 stable lesions with 5298 struts, were suitable for final OCT analysis five years after DES implantation. The analytical approach was based on a hierarchical Bayesian random-effects model. OCT endpoints were strut coverage, malapposition, protrusion, evaginations, and cluster formation. Uncovered (1.7% vs. 0.7%, adjusted p=0.041) and protruding struts (0.50% vs. 0.13%, adjusted p=0.038) were more frequent in ACS than in stable CAD lesions. A similar trend was observed for malapposed struts (1.33% vs. 0.45%, adjusted p=0.072). Clusters of uncovered or malapposed/protruding struts were present in 34.0% of ACS and 14.1% of stable patients (adjusted p=0.041). Coronary evaginations were more frequent in patients with ST-elevation myocardial infarction than in stable CAD patients (0.16 vs. 0.13 per cross-section, p=0.027). CONCLUSION Uncovered, malapposed, and protruding stent struts, as well as clusters of delayed healing, may be more frequent in culprit lesions of ACS than in stable CAD patients late after DES implantation. Our observational findings suggest a differential in-vivo healing response attributable to the lesion characteristics of patients with ACS compared with stable CAD.
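For intuition, the raw uncovered-strut proportions from the abstract can be compared with a naive pooled two-proportion z-test. This is deliberately not the study's method: it ignores the clustering of struts within lesions that the hierarchical Bayesian random-effects model accounts for, which is exactly why it returns a far smaller p-value than the reported adjusted p=0.041:

```python
from math import erf, sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Naive pooled two-proportion z-test (normal approximation).
    Treats every strut as independent, which struts within one
    lesion are not."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

# Uncovered struts from the abstract: 1.7% of 7864 (ACS) vs 0.7% of 5298 (stable)
z, p = two_proportion_z(round(0.017 * 7864), 7864, round(0.007 * 5298), 5298)
print(f"z = {z:.2f}, naive p = {p:.1e}")
```

The gap between this naive p-value and the adjusted one illustrates why the lesion-level random effects in the study's model are essential.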


Resumo:

In this study, the development of a new sensitive method for the analysis of the alpha-dicarbonyls glyoxal (G) and methylglyoxal (MG) in environmental ice and snow is presented. Stir bar sorptive extraction with in situ derivatization and liquid desorption (SBSE-LD) was used for sample extraction, enrichment, and derivatization. Measurements were carried out using high-performance liquid chromatography coupled to electrospray ionization tandem mass spectrometry (HPLC-ESI-MS/MS). As part of the method development, SBSE-LD parameters such as extraction time, derivatization reagent, desorption time and solvent, and the effect of NaCl addition on the SBSE efficiency, as well as the measurement parameters of HPLC-ESI-MS/MS, were evaluated. Calibration was performed in the range of 1–60 ng/mL using spiked ultrapure water samples, thus incorporating the complete SBSE and derivatization process. 4-Fluorobenzaldehyde was used as internal standard. Inter-batch precision was <12% RSD. Recoveries were determined by means of spiked snow samples and were 78.9 ± 5.6% for G and 82.7 ± 7.5% for MG. Instrumental detection limits of 0.242 and 0.213 ng/mL for G and MG, respectively, were achieved using the multiple reaction monitoring mode. Relative detection limits, referred to a sample volume of 15 mL, were 0.016 ng/mL for G and 0.014 ng/mL for MG. The optimized method was applied to the analysis of snow samples from Mount Hohenpeissenberg (close to the Meteorological Observatory Hohenpeissenberg, Germany) and samples from an ice core from Upper Grenzgletscher (Monte Rosa massif, Switzerland). The resulting concentrations were 0.085–16.3 ng/mL for G and 0.126–3.6 ng/mL for MG. Concentrations of G and MG in snow were 1–2 orders of magnitude higher than in the ice core samples. The described method represents a simple, green, and sensitive analytical approach for measuring G and MG in aqueous environmental samples.
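The calibration and limit-of-detection arithmetic underlying figures like those above follows standard practice: fit a line to spiked standards, then take LOD = k·SD(blank)/slope. The signals and blank readings below are fabricated for illustration; only the 1–60 ng/mL calibration range mirrors the study:

```python
import statistics

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def detection_limit(blank_signals, slope, k=3):
    """LOD = k * SD(blank) / slope; k = 3 is a common convention."""
    return k * statistics.stdev(blank_signals) / slope

def recovery(measured, spiked):
    """Recovery in percent for a spiked sample."""
    return 100.0 * measured / spiked

# Fabricated calibration data spanning the 1-60 ng/mL range used in the study
conc = [1, 5, 10, 20, 40, 60]                # ng/mL
signal = [0.9, 4.8, 10.2, 19.7, 40.5, 59.9]  # arbitrary peak-area units
slope, intercept = linear_fit(conc, signal)
lod = detection_limit([0.05, 0.12, 0.08, 0.10, 0.07], slope)
print(f"slope = {slope:.3f}, LOD = {lod:.3f} ng/mL")
```

Because the calibration standards pass through the full SBSE-LD and derivatization workflow, the fitted slope already absorbs extraction efficiency, which is why the relative detection limits referred to the 15 mL sample volume come out far lower than the instrumental ones.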