976 results for Macro scale distributions
Abstract:
Graduate Program in Geography - IGCE
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
INVESTIGATION INTO CURRENT EFFICIENCY FOR PULSE ELECTROCHEMICAL MACHINING OF NICKEL ALLOY Yu Zhang, M.S. University of Nebraska, 2010 Adviser: Kamlakar P. Rajurkar Electrochemical machining (ECM) is a nontraditional manufacturing process that can machine difficult-to-cut materials. In ECM, material is removed by controlled electrochemical dissolution of an anodic workpiece in an electrochemical cell. ECM has extensive applications in the automotive, petroleum, aerospace, textile, medical, and electronics industries. Improving current efficiency is a challenging task for any electro-physical or electrochemical machining process. Current efficiency is defined as the ratio of the observed amount of metal dissolved to the theoretical amount predicted from Faraday's law for the same specified conditions of electrochemical equivalent, current, etc. [1]. In macro ECM, electrolyte conductivity greatly influences the current efficiency of the process. Since there is a limit to how far the conductivity of the electrolyte can be enhanced, a process innovation is needed for further improvement in the current efficiency of ECM. Pulse electrochemical machining (PECM) is one such approach, in which the electrolyte conductivity is improved by electrolyte flushing during the pulse off-time. The aim of this research is to study the influence of major factors on current efficiency in a macro-scale pulse electrochemical machining process and to develop a linear regression model for predicting the current efficiency of the process. An in-house designed electrochemical cell was used for machining nickel alloy (ASTM B435) by PECM. The effects of current density, type of electrolyte, and electrolyte flow rate on current efficiency under different experimental conditions were studied. Results indicated that current efficiency depends on the electrolyte, the electrolyte flow rate, and the current density.
Linear regression models of current efficiency were compared with twenty new data points graphically and quantitatively. The models developed agreed closely enough with the actual results to be considered reliable. In addition, an attempt was made in this work to consider factors in PECM that had not been investigated in earlier works, by simulating the process with the COMSOL software. However, the results from this attempt were not substantially different from the earlier reported studies.
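The current-efficiency definition above amounts to a simple ratio against Faraday's law. A minimal sketch of that calculation (function names and the sample numbers are illustrative, not values from the thesis; nickel is assumed to dissolve with valence z = 2):

```python
FARADAY = 96485.33  # Faraday constant, C/mol

def theoretical_mass_g(current_a, time_s, molar_mass_g_mol, valence):
    """Mass predicted by Faraday's law: m = I * t * M / (z * F)."""
    return current_a * time_s * molar_mass_g_mol / (valence * FARADAY)

def current_efficiency(observed_mass_g, current_a, time_s, molar_mass_g_mol, valence):
    """Ratio of the observed dissolved mass to the Faradaic prediction."""
    return observed_mass_g / theoretical_mass_g(
        current_a, time_s, molar_mass_g_mol, valence)

# Illustrative numbers: nickel (M ~ 58.69 g/mol, z = 2 assumed),
# 10 A applied for 600 s, 1.6 g of metal actually dissolved.
eff = current_efficiency(1.6, 10.0, 600.0, 58.69, 2)  # ~0.88
```

With these made-up inputs the Faradaic prediction is about 1.82 g, so the efficiency comes out near 0.88; the thesis fits a linear regression of this quantity against current density, electrolyte type, and flow rate.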
Abstract:
The research work aimed at studying, with a deterministic approach, the relationships between a rock's texture and its mechanical properties determined at the laboratory scale. The experimentation was performed on a monomineralic crystalline rock varying in texture, i.e. grain shape. A multi-scale analysis was adopted to determine the elasto-mechanical properties of the crystals composing the rock and its strength and deformability at the macro-scale. This allowed us to understand how the structural variability of the investigated rock affects its macromechanical behaviour. Investigations were performed at three different scales: nano-scale (order of nm), micro-scale (tens of µm) and macro-scale (cm). Techniques innovative for rock mechanics, i.e. Depth Sensing Indentation (DSI), were applied in order to determine the elasto-mechanical properties of the calcite grains. These techniques also made it possible to study the influence of grain boundaries on the mechanical response of calcite grains by varying the indent sizes, and to quantify the effect of the applied load on the hardness and elastic modulus of the grain (indentation size effect, ISE). The secondary effects of static Berkovich, Vickers and Knoop indentation were analyzed by SEM, allowing some considerations on the rock's brittle behaviour and the effect of microcracks.
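For reference, DSI hardness is the peak indentation load over the projected contact area; for an ideal Berkovich tip the area is commonly taken as A_c ≈ 24.5·h_c². A minimal sketch (unit choices and the sample numbers are illustrative; a real analysis would use a calibrated area function, and the ISE mentioned above means H varies with load):

```python
def berkovich_hardness_gpa(p_max_mn, h_c_nm):
    """Indentation hardness H = P_max / A_c for an ideal Berkovich tip.

    Uses the ideal-geometry area function A_c = 24.5 * h_c**2.
    """
    a_c_nm2 = 24.5 * h_c_nm ** 2   # projected contact area, nm^2
    p_n = p_max_mn * 1e-3          # mN -> N
    a_m2 = a_c_nm2 * 1e-18         # nm^2 -> m^2
    return p_n / a_m2 * 1e-9       # Pa -> GPa

# Illustrative: 10 mN peak load at 500 nm contact depth -> ~1.6 GPa
h = berkovich_hardness_gpa(10.0, 500.0)
```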
Abstract:
The growing traffic volumes on road pavements cause stresses of considerable magnitude that produce permanent damage to the superstructure. Such damage reduces the pavement's service life and entails high maintenance costs. Asphalt concrete is a multiphase material composed of aggregates, bitumen, and air voids. The physical properties and performance of the mixture depend on the characteristics of the aggregate and the binder and on their interaction. The approach traditionally used for the numerical modelling of asphalt concrete is based on a macroscopic study of its mechanical response through continuum constitutive models which, by their nature, do not consider the mutual interaction between the heterogeneous phases that compose it and rely on equivalent homogeneous schematizations. To advance these methodologies it is necessary to move beyond this simplification, considering the discrete character of the system and adopting a microscopic approach capable of representing the real physical-mechanical processes on which the overall macroscopic response depends. In this work, after a general review of the main numerical methods traditionally employed for the study of asphalt concrete, the Particle Discrete Element Method (DEM-P) is examined in depth; it schematizes the granular material as a set of independent particles that interact with one another at their mutual contact points according to appropriate constitutive laws. The influence of aggregate shape and size on the macroscopic characteristics (maximum deviatoric stress) and microscopic characteristics (normal and tangential contact forces, number of contacts, void ratio, porosity, packing density, internal friction angle) of the mixture is evaluated.
This is made possible by comparing numerical and experimental results of triaxial tests conducted on specimens made of three different mixtures composed of spheres and elements of generic shape.
Abstract:
The last decade has witnessed very fast development in microfabrication technologies. The increasing industrial applications of microfluidic systems call for more intensive and systematic knowledge of this newly emerging field. Especially for gaseous flow and heat transfer at the microscale, the applicability of conventional theories developed at the macro scale is not yet completely validated, mainly because of the scarce experimental data available in the literature for gas flows. The objective of this thesis is to investigate these unclear elements by analyzing forced convection for gaseous flows through microtubes and micro heat exchangers. Experimental tests have been performed with microtubes having various inner diameters, namely 750 µm, 510 µm and 170 µm, over a wide range of Reynolds numbers covering the laminar region, the transitional zone and the onset of the turbulent regime. The results show that conventional theory is able to predict the flow friction factor when flow compressibility does not appear and the effect of fluid temperature-dependent properties is insignificant. A double-layered microchannel heat exchanger has been designed in order to study experimentally the efficiency of a gas-to-gas micro heat exchanger. This microdevice contains 133 parallel microchannels machined into polished PEEK plates for both the hot side and the cold side. The microchannels are 200 µm high, 200 µm wide and 39.8 mm long. The micro device was designed so that different materials, with flexible thickness, could be tested as the partition foil. Experimental tests have been carried out for five different partition foils, with various mass flow rates and flow configurations. The experimental results indicate that the thermal performance of the countercurrent and cross flow micro heat exchanger can be strongly influenced by axial conduction in the partition foil separating the hot gas flow and cold gas flow.
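The "conventional theory" that the friction-factor comparison refers to reduces, for fully developed incompressible flow in a smooth circular tube, to familiar textbook correlations. A minimal sketch (the Re = 2300 threshold and the Blasius form are standard macro-scale choices, not specific results of this thesis):

```python
def darcy_friction_factor(reynolds):
    """Conventional (macro-scale) Darcy friction factor, smooth circular tube."""
    if reynolds <= 0:
        raise ValueError("Reynolds number must be positive")
    if reynolds < 2300:                     # laminar regime: f = 64/Re
        return 64.0 / reynolds
    return 0.3164 * reynolds ** -0.25       # Blasius correlation (turbulent, smooth)
```

The thesis's point is that measured microtube data follow these predictions only while compressibility and temperature-dependent property effects remain negligible.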
Abstract:
Ocean acidification might reduce the ability of calcifying plankton to produce and maintain their shells of calcite, or of aragonite, the more soluble form of CaCO3. In addition to possibly large biological impacts, reduced CaCO3 production corresponds to a negative feedback on atmospheric CO2. In order to explore the sensitivity of the ocean carbon cycle to increasing concentrations of atmospheric CO2, we use the new biogeochemical Bern3D/PISCES model. The model reproduces the large scale distributions of biogeochemical tracers. With a range of sensitivity studies, we explore the effect of (i) using different parameterizations of CaCO3 production fitted to available laboratory and field experiments, of (ii) letting calcite and aragonite be produced by auto- and heterotrophic plankton groups, and of (iii) using carbon emissions from the range of the most recent IPCC Representative Concentration Pathways (RCP). Under a high-emission scenario, the CaCO3 production of all the model versions decreases from ~1 Pg C yr−1 to between 0.36 and 0.82 Pg C yr−1 by the year 2100. The changes in CaCO3 production and dissolution resulting from ocean acidification provide only a small feedback on atmospheric CO2 of −1 to −11 ppm by the year 2100, despite the wide range of parameterizations, model versions and scenarios included in our study. A potential upper limit of the CO2-calcification/dissolution feedback of −30 ppm by the year 2100 is computed by setting calcification to zero after 2000 in a high 21st century emission scenario. The similarity of the feedback estimates yielded by the model version with calcite produced by nanophytoplankton and the one with calcite and aragonite produced by mesozooplankton suggests that extending biogeochemical models to calcifying zooplankton might not be needed to simulate biogeochemical impacts on the marine carbonate cycle.
The changes in saturation state confirm previous studies indicating that future anthropogenic CO2 emissions may lead to irreversible changes in ΩA for several centuries. Furthermore, due to the long-term changes in the deep ocean, the ratio of open water CaCO3 dissolution to production stabilizes by the year 2500 at a value that is 30–50% higher than at pre-industrial times when carbon emissions are set to zero after 2100.
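The saturation state ΩA discussed above is the ion-concentration product over the stoichiometric solubility product, Ω = [Ca²⁺][CO₃²⁻]/K_sp; waters with Ω below 1 favour dissolution of the mineral. A minimal sketch (the sample concentrations and the K_sp value are illustrative orders of magnitude for surface seawater, not output of the Bern3D/PISCES model):

```python
def saturation_state(ca_mol_kg, co3_mol_kg, k_sp):
    """Omega = [Ca2+][CO3^2-] / K_sp; Omega < 1 means the mineral tends to dissolve."""
    return ca_mol_kg * co3_mol_kg / k_sp

# Illustrative, approximate surface-seawater numbers:
# [Ca2+] ~ 1.028e-2 mol/kg, [CO3^2-] ~ 2.0e-4 mol/kg,
# aragonite K_sp ~ 6.65e-7 mol^2/kg^2 -> Omega_A around 3
omega_a = saturation_state(1.028e-2, 2.0e-4, 6.65e-7)
```

Ocean acidification lowers [CO₃²⁻], pushing Ω toward and below 1, which is the mechanism behind the "irreversible changes in ΩA" noted above.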
Abstract:
Materials are inherently multi-scale in nature, consisting of distinct characteristics at various length scales from atoms to the bulk material. There are no widely accepted predictive multi-scale modeling techniques that span from the atomic level to the bulk, relating the effects of structure at the nanometer scale (10⁻⁹ m) to macro-scale properties. Traditional engineering treats matter as continuous with no internal structure. In contrast to engineers, physicists have dealt with matter in its discrete structure at small length scales to understand the fundamental behavior of materials. Multiscale modeling is of great scientific and technical importance, as it can aid in designing novel materials whose properties are tailored to a specific application, such as multi-functional materials. Polymer nanocomposite materials have the potential to provide significant increases in mechanical properties relative to current polymers used for structural applications. Nanoscale reinforcements can increase the effective interface between the reinforcement and the matrix by orders of magnitude for a given reinforcement volume fraction relative to traditional micro- or macro-scale reinforcements. To facilitate the development of polymer nanocomposite materials, constitutive relationships must be established that predict the bulk mechanical properties of the materials as a function of their molecular structure. A computational hierarchical multiscale modeling technique is developed to study the bulk-level constitutive behavior of polymeric materials as a function of their molecular chemistry. Various parameters and modeling techniques, from computational chemistry to continuum mechanics, are utilized in the current modeling method. The cause-and-effect relationships of the parameters are studied to establish an efficient modeling framework.
The proposed methodology is applied to three different polymers and validated using experimental data available in the literature.
Abstract:
Advances in information technology and global data availability have opened the door for assessments of sustainable development at a truly macro scale. It is now fairly easy to conduct a study of sustainability using the entire planet as the unit of analysis; this is precisely what this work set out to accomplish. The study began by examining some of the best known composite indicator frameworks developed to measure sustainability at the country level today. Most of these were found to value human development factors and a clean local environment, but to gravely overlook consumption of (remote) resources in relation to nature’s capacity to renew them, a basic requirement for a sustainable state. Thus, a new measuring standard is proposed, based on the Global Sustainability Quadrant approach. In a two‐dimensional plot of nations’ Human Development Index (HDI) vs. their Ecological Footprint (EF) per capita, the Sustainability Quadrant is defined by the area where both dimensions satisfy the minimum conditions of sustainable development: an HDI score above 0.8 (considered ‘high’ human development), and an EF below the fair Earth‐share of 2.063 global hectares per person. After developing methods to identify those countries that are closest to the Quadrant in the present‐day and, most importantly, those that are moving towards it over time, the study tackled the question: what indicators of performance set these countries apart? To answer this, an analysis of raw data, covering a wide array of environmental, social, economic, and governance performance metrics, was undertaken. The analysis used country rank lists for each individual metric and compared them, using the Pearson Product Moment Correlation function, to the rank lists generated by the proximity/movement relative to the Quadrant measuring methods. 
The analysis yielded a list of metrics which are, with a high degree of statistical significance, associated with proximity to and movement towards the Quadrant. Most notably, favorable for sustainable development: use of contraception, high life expectancy, high literacy rate, and urbanization; unfavorable for sustainable development: high GDP per capita, high language diversity, high energy consumption, and high meat consumption; a momentary gain but a burden in the long run: high carbon footprint and debt. These results could serve as a solid stepping stone for the development of more reliable composite index frameworks for assessing countries' sustainability.
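The two quadrant conditions and the rank-list comparison described above are easy to make concrete. A minimal sketch (function names are hypothetical; the 0.8 HDI and 2.063 gha thresholds are the ones stated in the abstract; applying the Pearson product-moment formula to rank lists, as the study does, is equivalent to Spearman's rank correlation when there are no ties):

```python
def in_sustainability_quadrant(hdi, ef_per_capita, hdi_min=0.8, ef_max=2.063):
    """True if a country meets both minimum conditions of sustainable development."""
    return hdi > hdi_min and ef_per_capita < ef_max

def pearson(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Correlate a metric's country rank list with the quadrant-proximity rank list
# (made-up ranks for four countries):
rho = pearson([1, 2, 3, 4], [2, 1, 4, 3])
```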
Abstract:
Snow in the environment acts as a host to rich chemistry and provides a matrix for physical exchange of contaminants within the ecosystem. The goal of this review is to summarise the current state of knowledge of physical processes and chemical reactivity in surface snow with relevance to polar regions. It focuses on a description of impurities in distinct compartments present in surface snow, such as snow crystals, grain boundaries, crystal surfaces, and liquid parts. It emphasises the microscopic description of the ice surface and its link with the environment. Distinct differences between the disordered air–ice interface, often termed quasi-liquid layer, and a liquid phase are highlighted. The reactivity in these different compartments of surface snow is discussed using many experimental studies, simulations, and selected snow models from the molecular to the macro-scale. Although new experimental techniques have extended our knowledge of the surface properties of ice and their impact on some single reactions and processes, others occurring on, at or within snow grains remain unquantified. The presence of liquid or liquid-like compartments either due to the formation of brine or disorder at surfaces of snow crystals below the freezing point may strongly modify reaction rates. Therefore, future experiments should include a detailed characterisation of the surface properties of the ice matrices. A further point that remains largely unresolved is the distribution of impurities between the different domains of the condensed phase inside the snowpack, i.e. in the bulk solid, in liquid at the surface or trapped in confined pockets within or between grains, or at the surface. 
While surface-sensitive laboratory techniques may in the future help to resolve this point for equilibrium conditions, additional uncertainty for the environmental snowpack may be caused by its highly dynamic nature, owing to the fast metamorphism occurring under certain environmental conditions. Despite these gaps in knowledge, the first snow chemistry models have attempted to reproduce certain processes, such as the long-term incorporation of volatile compounds in snow and firn or the release of reactive species from the snowpack. Although so far none of the models offers a coupled approach of physical and chemical processes or a detailed representation of the different compartments, they have successfully been used to reproduce some field experiments. A fully coupled snow chemistry and physics model remains to be developed.
Abstract:
While ecological monitoring and biodiversity assessment programs are widely implemented and relatively well developed to survey and monitor the structure and dynamics of populations and communities in many ecosystems, quantitative assessment and monitoring of genetic and phenotypic diversity that is important to understand evolutionary dynamics is only rarely integrated. As a consequence, monitoring programs often fail to detect changes in these key components of biodiversity until after major loss of diversity has occurred. The extensive efforts in ecological monitoring have generated large data sets of unique value to macro-scale and long-term ecological research, but the insights gained from such data sets could be multiplied by the inclusion of evolutionary biological approaches. We argue that the lack of process-based evolutionary thinking in ecological monitoring means a significant loss of opportunity for research and conservation. Assessment of genetic and phenotypic variation within and between species needs to be fully integrated to safeguard biodiversity and the ecological and evolutionary dynamics in natural ecosystems. We illustrate our case with examples from fishes and conclude with examples of ongoing monitoring programs and provide suggestions on how to improve future quantitative diversity surveys.
Abstract:
This work addresses the study of ecosystems as complex, hierarchical entities, both from the scalar point of view and in terms of their structural diversity, in the north of the province of Mendoza, down to 34º south latitude. The hierarchical spatial study is approached from different scales of analysis: micro-, meso-, and macro-scale. The macro-scale corresponds to the level of biomes (macro-ecosystem); the meso-scale to ecosystems defined, among other factors, by geomorphological differentiation (meso-ecosystem); and the micro-scale to the sub-ecosystems arising from topographic and edaphic differentiations linked to plant formations and their environment (micro-ecosystem). A controlling factor guides the analysis: climate, at its three hierarchical levels. The hierarchical spatial differentiations of the ecosystems are also considered, based on the zonal climate, on the geomorphological units that modify the zonal climate, and on the topography and soil, whose water availability modifies the local climate. The general objectives are the identification and localization of the hierarchical ecosystems and their integrated multi-scale analysis. The hypotheses put forward state that, across the different ecosystem scales, climate is the common denominator organizing their distribution, and that degrading agents exist at every level of analysis. The geographic method is used. In the analysis, deductive-inductive methods are applied, linking the hierarchical scales of the ecosystems with case studies. This work aims to deepen the knowledge of the complexity of Mendoza's ecosystems with an original approach.