Abstract:
A thermodynamic approach to predict bulk glass-forming compositions in binary metallic systems was recently proposed. In this approach, the parameter γ* = ΔH_amor/(ΔH_inter - ΔH_amor) indicates the glass-forming ability (GFA) from the standpoint of the driving force to form different competing phases, where ΔH_amor and ΔH_inter are the enthalpies of glass and intermetallic formation, respectively. Good glass-forming compositions should have a large negative enthalpy for glass formation and a very small difference relative to intermetallic formation, thus making the glassy phase easily reachable even under low cooling rates. The γ* parameter showed a good correlation with GFA experimental data in the Ni-Nb binary system. In this work, a simple extension of the γ* parameter is applied to the ternary Al-Ni-Y system. The calculated γ* isocontours in the ternary diagram are compared with experimental results of glass formation in that system. Despite some misfit, the best glass formers are found quite close to the highest γ* values, leading to the conclusion that this thermodynamic approach can be extended to ternary systems, serving as a useful tool for the development of new glass-forming compositions. Finally, the thermodynamic approach is compared with the topological instability criteria used to predict the thermal behavior of glassy Al alloys. (C) 2007 Elsevier B.V. All rights reserved.
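As a rough illustration of how the criterion is applied, the sketch below computes γ* for a hypothetical composition; the enthalpy values are placeholders, not data from the paper.

```python
# Minimal sketch of the gamma* criterion described above, using hypothetical
# formation enthalpies (kJ/mol) for a candidate composition; these are not
# values from the paper.
def gamma_star(dH_amor, dH_inter):
    """gamma* = dH_amor / (dH_inter - dH_amor); larger values suggest better GFA."""
    return dH_amor / (dH_inter - dH_amor)

# e.g. a composition with dH_amor = -30 kJ/mol and dH_inter = -34 kJ/mol
print(f"gamma* = {gamma_star(-30.0, -34.0):.2f}")
```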
Abstract:
Oxy-coal combustion is a viable technology for new and existing coal-fired power plants, as it facilitates carbon capture and, thereby, can mitigate climate change. Pulverized coals of various ranks, biomass, and their blends were burned to assess the evolution of combustion effluent gases, such as NOx, SO2, and CO, under a variety of background gas compositions. The fuels were burned in an electrically heated laboratory drop-tube furnace in O2/N2 and O2/CO2 environments with oxygen mole fractions of 20%, 40%, 60%, 80%, and 100%, at a furnace temperature of 1400 K. The fuel mass flow rate was kept constant in most cases, and combustion was fuel-lean. Results showed that in the case of the four coals studied, NOx emissions in O2/CO2 environments were lower than those in O2/N2 environments by amounts that ranged from 19 to 43% at the same oxygen concentration. In the case of bagasse and coal/bagasse blends, the corresponding NOx reductions ranged from 22 to 39%. NOx emissions were found to increase with increasing oxygen mole fraction until approximately 50% O2 was reached; thereafter, they monotonically decreased with increasing oxygen concentration. NOx emissions from the various fuels burned did not clearly reflect their nitrogen content (0.2-1.4%), except when large content differences were present. SO2 emissions from all fuels remained largely unaffected by the replacement of the N2 diluent gas with CO2, whereas they typically increased with increasing sulfur content of the fuels (0.07-1.4%) and decreased with increasing calcium content of the fuels (0.28-2.7%). Under the conditions of this work, 20-50% of the fuel-nitrogen was converted to NOx. The amount of fuel-sulfur converted to SO2 varied widely, depending on the fuel and, in the case of the bituminous coal, also depending on the O2 mole fraction. Blending the sub-bituminous coal with bagasse reduced its SO2 yields, whereas blending the bituminous coal with bagasse reduced both its SO2 and NOx yields. CO emissions were generally very low in all cases. The emission trends were interpreted on the basis of separate combustion observations.
Abstract:
In the present work, the squeeze flow technique was used to evaluate the rheological behavior of cement-based mortars containing macroscopic aggregates up to 1.2 mm. Compositions with different water and air contents were tested at three squeezing rates (0.01, 0.1 and 1 mm/s), 15 and 60 min after mixing. The mortars prepared with low (13 wt.%) and usual (15 wt.%) water contents presented opposite behaviors as a function of elapsed time and squeezing speed. The former lost its cohesion with time and required higher loads when squeezed faster, while the latter became stiffer with time and was more difficult to squeeze slowly as a result of phase segregation. As the air content increased, the effects of this compressible phase became more significant and a more complex behavior was observed. Rheological properties such as elongational viscosity and yield stress were also determined. (C) 2009 Elsevier Ltd. All rights reserved.
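For orientation, the snippet below shows how an apparent viscosity can be extracted from squeeze-flow data with the classical Stefan relation for a Newtonian fluid between no-slip plates; this is a generic textbook model, not the analysis used in the paper, and all numbers are hypothetical.

```python
# Illustrative only: apparent viscosity from squeeze-flow data using the
# classical Stefan equation for a Newtonian fluid with no-slip boundaries
# (constant-radius plates). Plate radius, gap, force and speed are hypothetical.
import math

def stefan_apparent_viscosity(force_N, plate_radius_m, gap_m, squeeze_rate_m_s):
    """Stefan relation F = 3*pi*eta*R^4*v / (2*h^3), solved for eta (Pa.s)."""
    return 2.0 * force_N * gap_m**3 / (3.0 * math.pi * plate_radius_m**4 * squeeze_rate_m_s)

# Example with made-up numbers: 50 N at a 2 mm gap, 50 mm plate radius, 0.1 mm/s
eta = stefan_apparent_viscosity(50.0, 0.05, 0.002, 1e-4)
print(f"apparent viscosity ~ {eta:.0f} Pa.s")
```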
Abstract:
This paper presents results of research related to multicriteria decision making under information uncertainty. The Bellman-Zadeh approach to decision making in a fuzzy environment is utilized for analyzing multicriteria optimization models (<X, M> models) under deterministic information. Its application conforms to the principle of guaranteed result and provides constructive lines in obtaining harmonious solutions on the basis of analyzing associated maxmin problems. This circumstance permits one to generalize the classic approach to considering the uncertainty of quantitative information (based on constructing and analyzing payoff matrices reflecting effects which can be obtained for different combinations of solution alternatives and the so-called states of nature) from monocriteria decision making to multicriteria problems. Considering that the uncertainty of information can produce considerable decision uncertainty regions, the resolving capacity of this generalization does not always permit one to obtain unique solutions. Taking this into account, the proposed general scheme of multicriteria decision making under information uncertainty also includes the construction and analysis of the so-called <X, R> models (which contain fuzzy preference relations as criteria of optimality) as a means for the subsequent contraction of the decision uncertainty regions. The results are of a universal character and are illustrated by a simple example. (C) 2007 Elsevier Inc. All rights reserved.
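A minimal sketch of the maxmin (guaranteed result) aggregation that the Bellman-Zadeh approach relies on is given below; the alternatives, criteria and membership values are hypothetical.

```python
# Minimal sketch of the Bellman-Zadeh max-min scheme: each criterion is turned
# into a normalized membership (satisfaction) function, and the chosen
# alternative maximizes the minimum satisfaction. Values below are hypothetical.
import numpy as np

# rows: solution alternatives X1..X4, columns: membership of each criterion in [0, 1]
mu = np.array([
    [0.9, 0.4, 0.7],
    [0.6, 0.8, 0.5],
    [0.7, 0.7, 0.6],
    [0.3, 0.9, 0.9],
])

aggregate = mu.min(axis=1)          # fuzzy "decision" D(x) = min_i mu_i(x)
best = int(np.argmax(aggregate))    # guaranteed-result (maxmin) choice
print(f"best alternative: X{best + 1}, min satisfaction = {aggregate[best]:.2f}")
```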
Abstract:
Ecological niche modelling combines species occurrence points with environmental raster layers in order to obtain models describing the probabilistic distribution of species. The process of generating an ecological niche model is complex. It requires dealing with a large amount of data and using different software packages for data conversion, model generation and different types of processing and analyses, among other functionalities. A software platform that integrates all requirements under a single and seamless interface would be very helpful for users. Furthermore, since biodiversity modelling is constantly evolving, new requirements are constantly being added in terms of functions, algorithms and data formats. Any software intended for use in this area must keep pace with this evolution. In this scenario, a Service-Oriented Architecture (SOA) is an appropriate choice for designing such systems. According to SOA best practices and methodologies, the design of a reference business process must be performed prior to the architecture definition. The purpose is to understand the complexities of the process (business process in this context refers to the ecological niche modelling problem) and to design an architecture able to offer a comprehensive solution, called a reference architecture, that can be further detailed when implementing specific systems. This paper presents a reference business process for ecological niche modelling, as part of a larger effort focused on the definition of a reference architecture based on SOA concepts that will be used to evolve the openModeller software package for species modelling. The basic steps performed while developing a model are described, highlighting important aspects based on the knowledge of modelling experts. In order to illustrate the steps defined for the process, an experiment was developed, modelling the distribution of Ouratea spectabilis (Mart.) Engl. (Ochnaceae) using openModeller. As a consequence of the knowledge gained with this work, many desirable improvements to the modelling software packages have been identified and are presented. Also, a discussion on the potential for large-scale experimentation in ecological niche modelling is provided, highlighting opportunities for research. The results obtained are very important for those involved in the development of modelling tools and systems, both for requirements analysis and to provide insight into new features and trends for this category of systems. They can also be very helpful for beginners in modelling research, who can use the process and the experiment example as a guide to this complex activity. (c) 2008 Elsevier B.V. All rights reserved.
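To make the basic modelling steps concrete (occurrence points plus environmental layers, model fitting, projection over the study area), here is a toy, self-contained example using a simple BIOCLIM-style envelope on synthetic data; it does not use the openModeller API, and every name and value in it is a placeholder.

```python
# Toy illustration of the core niche-modelling steps the process describes
# (occurrence points + environmental layers -> model -> projected map).
# Minimal BIOCLIM-style envelope on synthetic data; not the openModeller API.
import numpy as np

rng = np.random.default_rng(0)
temp = rng.uniform(10, 30, size=(50, 50))        # environmental layer 1
rain = rng.uniform(500, 2000, size=(50, 50))     # environmental layer 2

occ = [(5, 7), (12, 30), (40, 8), (22, 22)]      # occurrence cells (row, col)
samples = np.array([[temp[r, c], rain[r, c]] for r, c in occ])

lo, hi = samples.min(axis=0), samples.max(axis=0)  # "train": environmental envelope

stack = np.dstack([temp, rain])                    # "project" over the whole raster
suitable = np.all((stack >= lo) & (stack <= hi), axis=2)
print(f"cells predicted suitable: {suitable.sum()} of {suitable.size}")
```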
Abstract:
The most widely used refrigeration system is the vapor-compression system. In this cycle, the compressor is the most complex and expensive component, especially the reciprocating semihermetic type, which is often used in food product conservation. This component is very sensitive to variations in its operating conditions. If these conditions reach unacceptable levels, failures are practically inevitable. Therefore, maintenance actions should be taken in order to maintain good performance of such compressors and to avoid undesirable stops of the system. To achieve such a goal, one has to evaluate the reliability of the system and/or its components. In this context, reliability means the probability that a piece of equipment can perform its required functions for an established time period under defined operating conditions. One of the tools used to improve component reliability is failure mode and effects analysis (FMEA). This paper proposes that the FMEA methodology be used as a tool to evaluate the main failures found in semihermetic reciprocating compressors used in refrigeration systems. Based on the results, some suggestions for maintenance are presented.
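As a reminder of how an FMEA prioritizes failure modes, the sketch below ranks a few hypothetical compressor failure modes by their Risk Priority Number (RPN = severity x occurrence x detection); the modes and scores are illustrative, not findings from the paper.

```python
# Minimal sketch of how an FMEA ranks failure modes by Risk Priority Number
# (RPN = severity x occurrence x detection). The failure modes and scores
# below are hypothetical, not results from the paper.
failure_modes = [
    # (failure mode, severity 1-10, occurrence 1-10, detection 1-10)
    ("valve plate breakage",      8, 4, 6),
    ("motor winding burnout",     9, 3, 5),
    ("lubrication failure",       7, 5, 7),
    ("refrigerant contamination", 6, 4, 8),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for name, s, o, d in ranked:
    print(f"{name:28s} RPN = {s * o * d}")
```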
Abstract:
This paper presents concentration inequalities and laws of large numbers under weak assumptions of irrelevance that are expressed using lower and upper expectations. The results build upon De Cooman and Miranda's recent inequalities and laws of large numbers. The proofs indicate connections between the theory of martingales and concepts of epistemic and regular irrelevance. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
This work presents a comparison between the laser beam welding (LBW) and electric resistance spot welding (ERSW) processes used to assemble components of a body-in-white (BIW) at a world-class automotive manufacturer. The comparison is carried out by evaluating the mechanical strength by both experimental and numerical methods. An "Arcan" multiaxial test fixture was designed and manufactured in order to enable 0°, 45° and 90° directional loadings. The welded specimens were uncoated low carbon steel sheets (yield strength σ_y = 170 MPa) currently used in the automotive industry, with two different thicknesses: 0.80 and 1.20 mm. A numerical analysis was carried out using the finite element method (FEM) through the LS-DYNA code. (c) 2007 Elsevier B.V. All rights reserved.
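To illustrate why loading directions of 0°, 45° and 90° are of interest in an Arcan-type test, the snippet below decomposes a single applied load into normal and shear components on the weld plane; the load value is hypothetical.

```python
# Small sketch of the role of the 0, 45 and 90 degree loading directions:
# a single applied load P decomposes into normal (tensile) and shear components
# on the weld plane. The load value is hypothetical.
import math

P = 1000.0  # applied load in N (hypothetical)
for angle_deg in (0, 45, 90):
    theta = math.radians(angle_deg)
    normal = P * math.cos(theta)   # component pulling the weld apart
    shear = P * math.sin(theta)    # component sliding the sheets
    print(f"{angle_deg:>2} deg: normal = {normal:7.1f} N, shear = {shear:7.1f} N")
```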
Abstract:
The effect of different precracking methods on the results of linear elastic K_Ic fracture toughness testing of medium-density polyethylene (MDPE) was investigated. Cryogenic conditions were imposed in order to obtain valid K_Ic values from specimens of suitable size. The most conservative K_Ic values were obtained by slowly pressing a fresh razor blade into the notch root of the specimen. Due to the low deformation level imposed on the crack tip region, the slow razor-blade pressing technique also produced less scatter in the fracture toughness results. It has been shown that the slow stable crack growth preceding catastrophic brittle failure during K_Ic tests in MDPE under cryogenic conditions should not be disregarded, as it has relevant physical meaning and may affect the fracture toughness results. (C) 2010 Elsevier Ltd. All rights reserved.
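For context, a K_Ic measurement ultimately reduces to the textbook mode-I stress intensity relation K_I = Y·σ·√(πa); the sketch below evaluates it for hypothetical values and is not the specific ASTM specimen calibration used in the study.

```python
# Generic LEFM relation behind a K_Ic measurement: K_I = Y * sigma * sqrt(pi * a),
# with geometry factor Y, applied stress sigma and crack length a. This is the
# textbook form, not the specimen-specific calibration used in the paper;
# all numbers below are hypothetical.
import math

def stress_intensity(sigma_mpa, crack_length_m, geometry_factor=1.12):
    """Mode-I stress intensity factor in MPa*sqrt(m)."""
    return geometry_factor * sigma_mpa * math.sqrt(math.pi * crack_length_m)

print(f"K_I ~ {stress_intensity(10.0, 0.005):.2f} MPa*sqrt(m)")
```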
Abstract:
In the present work, the sensitivity of NIR spectroscopy toward the evolution of particle size was studied during the emulsion homopolymerization of styrene (Sty) and the emulsion copolymerization of vinyl acetate-butyl acrylate, conducted in a semibatch stirred tank and a tubular pulsed sieve plate reactor, respectively. All NIR spectra were collected online with a transflectance probe immersed in the reaction medium. The spectral range used for the NIR monitoring was from 9500 to 13000 cm⁻¹, where the absorbance of the chemical components present is minimal and the changes in the NIR spectrum can be ascribed to the effects of light scattering by the polymer particles. Off-line measurements of the average diameter of the polymer particles by DLS were used as reference values for the development of the multivariate NIR calibration models based on partial least squares. Results indicated that, in the spectral range studied, it is possible to monitor the evolution of the average size of the polymer particles during emulsion polymerization reactions. The inclusion of an additional spectral range, from 5701 to 6447 cm⁻¹, containing information on absorbances ("chemical information") in the calibration models was also evaluated.
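The sketch below shows the general shape of such a PLS calibration (spectra as predictors, an off-line size measurement as the reference), using synthetic data and scikit-learn rather than the paper's spectra or software.

```python
# Sketch of the kind of multivariate calibration described: a partial least
# squares (PLS) model mapping NIR spectra to an off-line particle-size
# reference. Data here are synthetic placeholders, not the paper's spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 350))          # 120 spectra x 350 wavenumber channels
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=120)  # fake "particle diameter"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
print(f"R^2 on held-out spectra: {pls.score(X_te, y_te):.2f}")
```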
Abstract:
A simple calorimetric method to estimate both kinetic parameters and heat transfer coefficients using temperature-versus-time data under non-adiabatic conditions is described for the hydrolysis of acetic anhydride. The methodology is applied to three laboratory-scale reactors in a simple experimental setup that can be easily implemented. The quality of the experimental results was verified by comparing them with literature values and with predicted values obtained by energy balance. The comparison shows that the experimental kinetic parameters do not agree exactly with those reported in the literature, but they provide good agreement between predicted and experimental temperature and conversion data. The differences observed between the activation energy obtained and the values reported in the literature can be ascribed to differences in anhydride-to-water ratios (anhydride concentrations). (C) 2010 Elsevier Ltd. All rights reserved.
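For readers unfamiliar with the approach, the sketch below integrates the coupled mass and energy balances of a non-adiabatic batch reactor with a pseudo-first-order Arrhenius rate, which is the kind of model such a calorimetric fit relies on; every parameter value shown is an illustrative placeholder, not a value estimated in the paper.

```python
# Sketch of the energy/mass balance used to predict temperature and conversion
# for acetic anhydride hydrolysis in a non-adiabatic batch reactor. Kinetic and
# heat-transfer parameters below are illustrative placeholders, not the values
# estimated in the paper.
import numpy as np
from scipy.integrate import solve_ivp

k0, Ea = 2.0e7, 46_000.0       # pre-exponential (1/s) and activation energy (J/mol), hypothetical
dHr = -58_000.0                # J/mol, exothermic hydrolysis (approximate)
UA, T_amb = 1.5, 298.15        # heat-loss term (W/K) and ambient temperature (K), hypothetical
V, C0 = 0.5e-3, 1000.0         # reactor volume (m^3) and initial anhydride conc. (mol/m^3)
rhoCp = 4.0e6                  # lumped volumetric heat capacity, J/(m^3 K), hypothetical
R = 8.314

def rhs(t, y):
    C, T = y
    r = k0 * np.exp(-Ea / (R * T)) * C              # pseudo-first-order in anhydride (excess water)
    dCdt = -r
    dTdt = ((-dHr) * r * V - UA * (T - T_amb)) / (rhoCp * V)
    return [dCdt, dTdt]

sol = solve_ivp(rhs, (0.0, 1800.0), [C0, T_amb], max_step=1.0)
conversion = 1.0 - sol.y[0, -1] / C0
print(f"final T = {sol.y[1, -1]:.1f} K, conversion = {conversion:.2f}")
```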
Abstract:
S2 cell populations (S2AcRVGP2K and S2MtRVGP-Hy) were selected after transfection of gene expression vectors carrying the cDNA encoding the rabies virus glycoprotein (RVGP) gene under the control of a constitutive (actin) or inducible (metallothionein) promoter. These cell populations were cultivated in a 1 L bioreactor mimicking a large-scale bioprocess. Cell cultures were carried out at 90 rpm and monitored/controlled for temperature (28 °C) and dissolved oxygen (10 or 50% air saturation). Cell growth attained approximately 1.5-3 x 10^7 cells/mL after 3-4 days of cultivation. The constitutive synthesis of RVGP in S2AcRVGP2K cells led to values of 0.76 µg/10^7 cells at day 4 of culture. The RVGP synthesis in the S2MtRVGP-Hy cell fraction increased upon CuSO4 induction, attaining specific productivities of 1.5-2 µg/10^7 cells at days 4-5. RVGP values in the supernatant resulting from cell lysis were always very low (<0.2 µg/mL), indicating good integrity of the cells in culture. Overall, the RVGP productivity was 1.5-3 mg/L. Our data showed an important influence of dissolved oxygen on RVGP synthesis, allowing a higher and sustained productivity by S2MtRVGP-Hy cells when cultivated at a DO of 10% air saturation. The RVGP productivity in bioreactors shown here mirrors that previously observed for T-flasks and shaker bottles and allows the preparation of the large RVGP quantities required for studies of structure and function. (C) 2010 Elsevier B.V. All rights reserved.
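As a quick sanity check on the reported figures, the snippet below converts the specific productivities and a peak cell density quoted in the abstract into volumetric yields, which land in the same range as the stated 1.5-3 mg/L.

```python
# Consistency check: volumetric yield implied by the specific productivities
# (ug per 10^7 cells) and a peak cell density of ~1.5 x 10^7 cells/mL quoted above.
for label, spec_ug_per_1e7_cells in [("constitutive S2AcRVGP2K", 0.76),
                                     ("induced S2MtRVGP-Hy", 2.0)]:
    cells_per_ml = 1.5e7
    yield_ug_per_ml = spec_ug_per_1e7_cells * cells_per_ml / 1e7
    # 1 ug/mL equals 1 mg/L, so the values below read directly in mg/L
    print(f"{label:24s}: ~{yield_ug_per_ml:.1f} mg/L")
# -> roughly 1.1 and 3.0 mg/L, the same order as the reported 1.5-3 mg/L overall
```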
Abstract:
In this paper, we consider a real-life heterogeneous fleet vehicle routing problem with time windows and split deliveries that occurs in a major Brazilian retail group. A single depot serves 519 stores of the group, distributed across 11 Brazilian states. To find good solutions to this problem, we propose heuristics for constructing initial solutions and a scatter search (SS) approach. The solutions produced are then compared with the routes actually covered by the company. Our results show that the total distribution cost can be reduced significantly when such methods are used. Experimental testing with benchmark instances is used to assess the merit of the proposed procedure. (C) 2008 Published by Elsevier B.V.
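For readers unfamiliar with the metaheuristic, the sketch below shows the skeleton of a scatter search (diversified initial pool, reference set, combination and improvement of subsets) on a toy continuous problem; it is a generic illustration, not the authors' routing implementation.

```python
# High-level skeleton of a scatter search, the metaheuristic named in the
# abstract, shown on a toy minimization problem; all parameters are hypothetical.
import random

def cost(x):                       # toy objective standing in for routing cost
    return sum((xi - 3.0) ** 2 for xi in x)

def improve(x):                    # simple local improvement step (keep only if better)
    trial = [xi + random.uniform(-0.1, 0.1) for xi in x]
    return trial if cost(trial) < cost(x) else x

def combine(a, b):                 # solution combination method
    return [(ai + bi) / 2.0 for ai, bi in zip(a, b)]

random.seed(0)
pool = [[random.uniform(-10, 10) for _ in range(5)] for _ in range(30)]  # diversification
refset = sorted(pool, key=cost)[:5]                                      # reference set

for _ in range(50):                # subset generation + combination + update
    a, b = random.sample(refset, 2)
    child = improve(combine(a, b))
    refset = sorted(refset + [child], key=cost)[:5]

print(f"best cost found: {cost(refset[0]):.4f}")
```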
Abstract:
We propose a robust and low-complexity scheme to estimate and track the carrier frequency of signals traveling under low signal-to-noise ratio (SNR) conditions in highly nonstationary channels. These scenarios arise in planetary exploration missions subject to high dynamics, such as the Mars exploration rover missions. The method comprises a bank of adaptive linear predictors (ALP) supervised by a convex combiner that dynamically aggregates the individual predictors. The adaptive combination is able to outperform the best individual estimator in the set, which leads to a universal scheme for frequency estimation and tracking. A simple technique for bias compensation considerably improves the ALP performance. It is also shown that retrieval of the frequency content by a fast Fourier transform (FFT) search method, instead of only inspecting the angle of a particular root of the prediction error filter, enhances performance, particularly at very low SNR levels. Simple techniques that enforce frequency continuity further improve the overall performance. In summary, we illustrate by extensive simulations that adaptive linear prediction methods render a robust and competitive frequency tracking technique.
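The sketch below illustrates the central mechanism described here, a convex combination of two LMS linear predictors (a fast and a slow one) whose mixing weight is adapted from the combined prediction error; the signal, filter orders and step sizes are illustrative choices, not the paper's setup.

```python
# Sketch of a convex combination of two adaptive linear predictors (LMS with
# fast and slow step sizes): the mixing weight adapts so the combination tracks
# the better of the two. Signal, orders and step sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
N, order = 4000, 4
n = np.arange(N)
freq = 0.05 + 0.02 * np.sin(2 * np.pi * n / N)          # slowly drifting frequency
x = np.cos(2 * np.pi * np.cumsum(freq)) + 0.3 * rng.normal(size=N)

w_fast, w_slow = np.zeros(order), np.zeros(order)
mu_fast, mu_slow, mu_a = 0.05, 0.005, 10.0
a = 0.0
err = {"fast": [], "slow": [], "comb": []}

for k in range(order, N):
    u = x[k - order:k][::-1]                  # most recent samples first
    y1, y2 = w_fast @ u, w_slow @ u           # individual one-step predictions
    lam = 1.0 / (1.0 + np.exp(-a))            # convex mixing weight in (0, 1)
    y = lam * y1 + (1.0 - lam) * y2
    d = x[k]
    e1, e2, e = d - y1, d - y2, d - y
    w_fast += mu_fast * e1 * u                # independent LMS updates
    w_slow += mu_slow * e2 * u
    a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)   # adapt the combiner
    a = float(np.clip(a, -4.0, 4.0))          # keep the combiner adaptation alive
    err["fast"].append(e1 ** 2); err["slow"].append(e2 ** 2); err["comb"].append(e ** 2)

for name, v in err.items():
    print(f"{name:5s} prediction MSE = {np.mean(v):.4f}")
```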
Abstract:
The classical approach to acoustic imaging consists of beamforming, and produces the source distribution of interest convolved with the array point spread function. This convolution smears the image of interest, significantly reducing its effective resolution. Deconvolution methods have been proposed to enhance acoustic images and have produced significant improvements. Other proposals involve covariance fitting techniques, which avoid deconvolution altogether. However, in their traditional presentation, these enhanced reconstruction methods have very high computational costs, mostly because they have no means of efficiently transforming back and forth between a hypothetical image and the measured data. In this paper, we propose the Kronecker Array Transform (KAT), a fast separable transform for array imaging applications. Under the assumption of a separable array, it enables the acceleration of imaging techniques by several orders of magnitude with respect to the fastest previously available methods, and enables the use of state-of-the-art regularized least-squares solvers. Using the KAT, one can reconstruct images with higher resolutions than were previously possible and use more accurate reconstruction techniques, opening new and exciting possibilities for acoustic imaging.
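The computational saving alluded to comes from the standard Kronecker-product identity for separable operators; the sketch below verifies it numerically on placeholder matrices (it is not the KAT itself, whose specific structure is defined in the paper).

```python
# Sketch of the separability idea behind a Kronecker-structured transform:
# for a separable operator A = kron(Ax, Ay), the product A @ vec(X) equals
# vec(Ay @ X @ Ax.T), so the large Kronecker matrix never has to be formed.
# Matrix sizes are arbitrary placeholders, not the KAT itself.
import numpy as np

rng = np.random.default_rng(0)
Ax, Ay = rng.normal(size=(60, 40)), rng.normal(size=(50, 30))
X = rng.normal(size=(30, 40))                     # hypothetical image, Ny x Nx

direct = np.kron(Ax, Ay) @ X.flatten(order="F")   # naive: build the 3000 x 1200 matrix
fast = (Ay @ X @ Ax.T).flatten(order="F")         # separable evaluation, no Kronecker matrix

print("max abs difference:", np.max(np.abs(direct - fast)))
```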