976 results for integrated processes


Relevance: 30.00%

Abstract:

The ongoing innovation in the microwave transistor technologies used to implement microwave circuits must be supported by the study and development of proper design methodologies which, depending on the application, fully exploit the technology's potential. After choosing the technology for a particular application, the circuit designer has few degrees of freedom in carrying out the design; in most cases, owing to technological constraints, foundries develop and provide customized processes optimized for a specific performance such as power, low noise, linearity or broadband operation. For these reasons circuit design is always a "compromise", a search for the best trade-off among the desired performance figures. This approach becomes crucial in the design of microwave systems for satellite applications: tight space constraints require the best performance to be reached under suitably de-rated electrical and thermal conditions, with respect to the maximum ratings of the chosen technology, in order to ensure adequate levels of reliability. In particular, this work concerns one of the most critical components in the front end of a satellite antenna, the High Power Amplifier (HPA). The HPA is the main source of power dissipation and therefore the element that weighs most heavily on the space, weight and cost of telecommunication equipment; it is clear from the above that design strategies addressing the optimization of power density, efficiency and reliability are of major concern. Many journal transactions and publications present different methods for the design of power amplifiers, showing that very good levels of output power, efficiency and gain can be obtained.
Starting from existing knowledge, the goal of the research activities summarized in this dissertation was to develop a design methodology capable of optimizing power amplifier performance while complying with all the constraints imposed by space applications, taking the thermal behaviour into account on the same footing as power and efficiency. After a review of existing theories of power amplifier design, the first section of this work describes the effectiveness of a methodology based on the accurate control of the dynamic Load Line and its shaping, explaining all the steps in the design of two different kinds of high power amplifier. Taking the trade-off between the main performance figures and reliability as the target of the design activity, we demonstrate that the expected results can be obtained by working on the characteristics of the Load Line at the intrinsic terminals of the selected active device. The methodology proposed in this first part assumes that the designer has access to an accurate electrical model of the device; the variety of publications on this topic shows how difficult it is to build a CAD model capable of taking into account all the non-ideal phenomena that occur when the amplifier operates at such high frequency and power levels. For this reason, and especially for the emerging Gallium Nitride (GaN) technology, the second section describes a new approach to power amplifier design based on the experimental characterization of the intrinsic Load Line by means of a low-frequency, high-power measurement bench. Thanks to the opportunity to carry out my Ph.D. in an academic spin-off, MEC – Microwave Electronics for Communications, the results of this activity have been applied to important research programs commissioned by space agencies, with the aim of supporting technology transfer from universities to industry and of promoting science-based entrepreneurship. For these reasons the proposed design methodology is explained on the basis of many experimental results.
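The load-line reasoning above can be illustrated with textbook Class-A relations. This is a minimal sketch under idealized assumptions (ideal device, sinusoidal drive); the function name and the numerical bias point are invented for illustration and are not the thesis' measured GaN data:

```python
# Hypothetical Class-A load-line estimate (textbook relations, not the
# thesis' method): optimum load resistance, output power and drain
# efficiency from supply voltage, knee voltage and maximum drain current.
def class_a_load_line(vdd, vknee, imax):
    """Return (R_opt, P_out in W, drain efficiency) for an ideal Class-A stage."""
    v_swing = vdd - vknee                  # usable drain voltage swing
    r_opt = 2.0 * v_swing / imax           # load line through (vknee, imax), bias at Imax/2
    p_out = 0.5 * v_swing * (imax / 2.0)   # P = Vpk * Ipk / 2 with Ipk = Imax/2
    p_dc = vdd * (imax / 2.0)              # DC power drawn at the Class-A bias point
    return r_opt, p_out, p_out / p_dc

# Illustrative 28 V / 1 A device with a 4 V knee.
r, p, eta = class_a_load_line(vdd=28.0, vknee=4.0, imax=1.0)
```

With a zero knee voltage the same formulas recover the ideal 50% Class-A efficiency limit, which is a quick sanity check on the sketch.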

Relevance: 30.00%

Abstract:

The research activity carried out during the PhD course focused on the development of mathematical models of some cognitive processes and their validation against data in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes by different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two projects: 1) the first concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the second concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to stimuli in the external world. This activity was carried out in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, such as perception and recognition, involves distributed processes in different cortical areas. One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, so as to group together the characteristics of the same object (the binding problem) and to keep segregated the properties belonging to different objects simultaneously present (the segmentation problem).
Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called "assembly coding" theory, postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) recognition of the object is realized by the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and in Chapter 1.2 we present two neural network models for object recognition based on the "assembly coding" hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level "Gestalt rules" (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words. To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (the lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule, during a training period in which individual objects are presented together with the corresponding words.
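The Hebbian training of object-word synapses can be sketched in a generic form. This is an illustrative stand-in, not the thesis' actual rule: the learning rate, saturation bound and binary patterns below are hypothetical:

```python
import numpy as np

# Illustrative Hebbian training of synapses linking an "object" layer to a
# "lexical" layer, as in the semantic-memory model described above.
# Learning rate and weight bound are invented, not the thesis' values.
def hebbian_train(object_patterns, word_patterns, lr=0.1, w_max=1.0):
    n_obj = object_patterns.shape[1]
    n_word = word_patterns.shape[1]
    W = np.zeros((n_word, n_obj))
    for x, y in zip(object_patterns, word_patterns):  # paired presentations
        W += lr * np.outer(y, x)       # strengthen co-active connections
        W = np.clip(W, 0.0, w_max)     # keep weights bounded (saturation)
    return W

# Two toy objects (4 features each) paired with two words.
objs = np.array([[1, 0, 1, 0], [0, 1, 0, 1]], dtype=float)
words = np.array([[1, 0], [0, 1]], dtype=float)
W = hebbian_train(objs, words)
# Presenting the first object's features now drives the first word most.
scores = W @ objs[0]
```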
Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas contain neurons capable of such a task; one of the best known is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater when the paired modality-specific stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from cortex, primarily the anterior ectosylvian sulcus (AES) but also the rostral lateral suprasylvian sulcus (rLS).
If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of its individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models, which can place the mass of data that has accumulated about it and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; the model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model is improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B. Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under cortical activation and deactivation.
The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional and with it deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and it provides a biologically plausible hypothesis about the underlying circuitry.
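A minimal single-neuron sketch can convey why a saturating nonlinearity yields both multisensory enhancement and inverse effectiveness. This is not the SC network described above; the sigmoid parameters and stimulus strengths are invented for illustration:

```python
import math

# Toy neuron with a sigmoid output nonlinearity: the response to a
# combined visual+auditory drive exceeds the best unisensory response
# (enhancement), and proportionately more so for weak stimuli (inverse
# effectiveness). Threshold and slope are illustrative, not the SC model's.
def response(drive, threshold=2.0, slope=2.0):
    return 1.0 / (1.0 + math.exp(-slope * (drive - threshold)))

def enhancement(v, a):
    """Percent enhancement of the multisensory over the best unisensory response."""
    best_uni = max(response(v), response(a))
    multi = response(v + a)
    return 100.0 * (multi - best_uni) / best_uni

weak = enhancement(1.0, 1.0)     # weak paired stimuli -> large enhancement
strong = enhancement(2.0, 2.0)   # strong paired stimuli -> smaller enhancement
```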

Relevance: 30.00%

Abstract:

The research activities described in the present thesis were oriented to the design and development of components and technological processes aimed at optimizing the performance of plasma sources in advanced material treatments. Consumable components for high-definition plasma arc cutting (PAC) torches were studied and developed. Experimental activities focused in particular on modifications of the emissive insert with respect to the standard electrode configuration, which comprises a press-fit hafnium insert in a copper body holder, in order to improve its durability. Based on a thorough analysis of both the scientific and the patent literature, different solutions were proposed and tested. First, the behaviour of Hf cathodes operating at high current levels (250 A) in an oxidizing atmosphere was experimentally investigated, optimizing the initial shape of the electrode emissive surface with respect to expected service life. Moreover, the microstructural modifications of the Hf insert in PAC electrodes were experimentally investigated during the first cycles, in order to understand the phenomena occurring on and under the Hf emissive surface and involved in the electrode erosion process. Thereafter, the research activity focused on producing, characterizing and testing prototype composite inserts, combining powders of materials with high thermal conductivity (Cu, Ag) and high thermionic emissivity (Hf, Zr). The complexity of the thermal plasma torch environment required an integrated approach also involving physical modelling. Accordingly, a detailed line-by-line method was developed to compute the net emission coefficient of Ar plasmas at temperatures ranging from 3000 K to 25000 K and pressures ranging from 50 kPa to 200 kPa, for optically thin and partially self-absorbed plasmas.
Finally, prototype electrodes were studied and realized for a newly developed plasma source, based on the plasma needle concept and devoted to the generation of atmospheric-pressure non-thermal plasmas for biomedical applications.

Relevance: 30.00%

Abstract:

This research deals with the refinement and use of an environmental accounting matrix for Emilia-Romagna, RAMEA air emissions (a regional NAMEA), developed by the Regional Environment Agency (Arpa) within a European project. After depicting the international context and the widespread need to integrate economic indicators with environmental ones, going beyond conventional reporting systems, this study explains the structure, update and development of the tool. The overall aim is to outline a matrix for the environmental assessment of regional plans, for drawing up sustainability reports, and for monitoring the effects of regional policies in a sustainable development perspective. The work focused on an application of a shift-share model; on the integration of eco-taxes, industrial waste production and energy consumption; and on applications of the extended RAMEA as a policy tool, following Eurostat guidelines. The common thread is the eco-efficiency (economic-environmental efficiency) index. The first part, in English, presents the methodology used to build a more complete tool; in the second part, in Italian, RAMEA is applied to two regional case studies to support decision makers in Strategic Environmental Assessment (SEA) processes (Directive 2001/42/EC). The aim is to support evidence-based policy making by integrating sustainable development concerns at all levels. The first case study regards integrated environmental-economic analyses in support of the SEA of the Regional Waste Management Plan: for industrial waste production, an extended and updated RAMEA was developed as a policy tool to help analyse and monitor the state of environmental-economic performance. The second case study deals with the environmental report for the SEA of the Regional Programme concerning productive activities.
There, RAMEA was applied to provide an integrated environmental-economic analysis of the context, to investigate the performance of the regional production chains, and to characterize and monitor, from an integrated environmental-economic perspective, the area where the programme is to be carried out.
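The eco-efficiency index that forms the common thread can be sketched as value added per unit of the matching emission account. The sector names and figures below are invented and only illustrate the computation:

```python
# Illustrative eco-efficiency index from NAMEA-style sector accounts:
# economic output (value added) divided by the matching air-emission
# account. Sector names and figures are hypothetical, not RAMEA data.
sectors = {
    # sector: (value added, M EUR; CO2-eq emissions, kt)
    "agriculture": (1200.0, 800.0),
    "manufacturing": (5400.0, 2100.0),
    "services": (9800.0, 600.0),
}

def eco_efficiency(accounts):
    """Value added per kt of emissions, per sector (higher = more eco-efficient)."""
    return {s: va / em for s, (va, em) in accounts.items()}

ee = eco_efficiency(sectors)
ranking = sorted(ee, key=ee.get, reverse=True)  # most eco-efficient first
```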

Relevance: 30.00%

Abstract:

In recent years, the need to design more sustainable processes and to develop alternative reaction routes that reduce the environmental impact of the chemical industry has gained vital importance. The main objectives concern the use of renewable raw materials, the exploitation of alternative energy sources, the design of inherently safe processes and of integrated reaction/separation technologies (e.g. microreactors and membranes), process intensification, the reduction of waste, and the development of new catalytic pathways. The present PhD thesis reports results obtained during a three-year research period at the School of Chemical Sciences of Alma Mater Studiorum-University of Bologna, Dept. of Industrial Chemistry and Materials (now Dept. of Industrial Chemistry "Toso Montanari"), under the supervision of Prof. Fabrizio Cavani (Catalytic Processes Development Group). Three research projects in the field of heterogeneous acid catalysis, focused on potential industrial applications, were carried out. The main project, regarding the conversion of lignocellulosic materials to produce monosaccharides (important intermediates for the production of biofuels and bio-platform molecules), was financed and carried out in collaboration with the Italian oil company eni S.p.A. (Istituto eni Donegani-Research Center for non-Conventional Energies, Novara, Italy). The second and third, academic, projects dealt with the development of green chemical processes for fine chemicals manufacturing. In particular, (a) the condensation reaction between acetone and ammonia to give triacetoneamine (TAA), and (b) the Friedel-Crafts acylation of phenol with benzoic acid were investigated.

Relevance: 30.00%

Abstract:

Changepoint analysis is a well-established area of statistical research, but in the context of spatio-temporal point processes it is as yet relatively unexplored. Some substantial differences from standard changepoint analysis have to be taken into account: firstly, at every time point the datum is an irregular pattern of points; secondly, in real situations, issues of spatial dependence between points and temporal dependence within time segments arise. Our motivating example consists of data concerning the monitoring and recovery of radioactive particles from Sandside beach, in the north of Scotland; there have been two major changes in the equipment used to detect the particles, representing known potential changepoints in the number of retrieved particles. In addition, offshore particle retrieval campaigns are believed to reduce the particle intensity onshore with an unknown temporal lag; in this latter case, the problem concerns multiple unknown changepoints. We therefore propose a Bayesian approach for detecting multiple changepoints in the intensity function of a spatio-temporal point process, allowing for spatial and temporal dependence within segments. We use log-Gaussian Cox processes, a very flexible class of models suitable for environmental applications, which can be implemented using the integrated nested Laplace approximation (INLA), a computationally efficient alternative to Markov chain Monte Carlo methods for approximating the posterior distribution of the parameters. Once the posterior curve is obtained, we propose a few methods for detecting significant changepoints. We present a simulation study, which consists of generating spatio-temporal point pattern series under several scenarios; the performance of the methods is assessed in terms of type I and II errors, detected changepoint locations, and accuracy of the segment intensity estimates.
We finally apply the above methods to the motivating dataset and obtain sensible results concerning the presence and nature of changes in the process.
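A much-simplified, purely temporal analogue of the changepoint problem can be sketched with conjugate Poisson-Gamma marginal likelihoods. This toy stand-in is not the paper's spatio-temporal LGCP/INLA approach; the counts and prior values are invented:

```python
import math

# Toy single-changepoint detection on Poisson counts: score each candidate
# split by the closed-form marginal likelihood of a Poisson model with a
# Gamma(a, b) prior on the rate in each segment (conjugate pair).
def log_marginal(counts, a=1.0, b=1.0):
    n, s = len(counts), sum(counts)
    return (a * math.log(b) - math.lgamma(a)
            + math.lgamma(a + s) - (a + s) * math.log(b + n)
            - sum(math.lgamma(c + 1) for c in counts))

def best_changepoint(counts):
    # Highest-scoring split of the series into two segments.
    scores = {k: log_marginal(counts[:k]) + log_marginal(counts[k:])
              for k in range(1, len(counts))}
    return max(scores, key=scores.get)

counts = [2, 3, 1, 2, 9, 11, 8, 10]   # invented data: rate jumps after day 4
cp = best_changepoint(counts)
```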

Relevance: 30.00%

Abstract:

Nowadays microalgae are widely studied, and a number of species already mass-cultivated, for their applications in many fields: food and feed, chemicals, pharmaceuticals, phytoremediation and renewable energy. Phytoremediation, in particular, can become a valuable integrated process in many algal biomass production systems. This thesis focuses on the physiological and biochemical effects of different environmental factors, mainly macronutrients, light and temperature, on microalgae. The microalgal species were selected on the basis of their biotechnological potential, and nitrogen recurs throughout all chapters owing to its importance both physiologically and in applications. There are five chapters, submitted or in preparation for submission, each with a specific aim: (i) to measure the kinetic parameters and nutrient removal efficiencies of a selected local strain of microalgae; (ii) to study the biochemical pathways of the microalga D. communis in the presence of nitrate and ammonium; (iii) to improve the growth and removal efficiency of a specific green microalga under mixotrophic conditions; (iv) to optimize the productivity of some slow-growing microalgae through phytohormones and other biostimulants; and (v) to apply the phyto-removal of ammonium to an effluent from anaerobic digestion. The results show how a physiological point of view is necessary to support and optimize existing biotechnologies and applications of microalgae.
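The kinetic parameters mentioned in item (i) are typically of the Monod type; a minimal sketch of Monod growth coupled to nutrient removal is shown below. All parameter values are invented, not the thesis' measurements:

```python
# Illustrative Monod kinetics for microalgal growth on a limiting nutrient
# (e.g. ammonium). mu_max, Ks, the yield Y and the units are hypothetical.
def simulate(mu_max=1.2, Ks=0.5, Y=0.8, X0=0.1, S0=10.0, dt=0.01, days=10):
    X, S = X0, S0                        # biomass and nutrient concentrations
    for _ in range(int(days / dt)):
        mu = mu_max * S / (Ks + S)       # Monod specific growth rate (1/day)
        dX = mu * X * dt                 # biomass produced in this step
        X += dX
        S = max(S - dX / Y, 0.0)         # nutrient consumed per unit biomass
    removal = 100.0 * (S0 - S) / S0      # percent nutrient removal
    return X, S, removal

X, S, removal = simulate()
```

Fitting mu_max and Ks to measured growth curves is the kind of parameter estimation item (i) refers to; here the values are chosen only so that the nutrient is exhausted within the simulated period.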

Relevance: 30.00%

Abstract:

The transformation of the 1990s has had a bearing on the academic and scientific world, as is becoming increasingly obvious from the changing numbers of foreign students wishing to study in the Czech Republic and of Czech students wishing to study abroad, the virtual collapse of doctoral studies, and the rapidly increasing age of Czech academics (placed at 48 by official sources and rather higher by this research). At the same time there is an apparent lack of interest in analysing and understanding these trends, which Mr. Cermak terms an ostrich policy, although his research showed that academics are in fact both aware of and concerned about them. The mid-1990s migration of talent to and from R+D in the Czech Republic is also reflected in the number of talented Czech students studying abroad, who represent the largest and most interesting group of actual and potential migrants. Mr. Cermak's study took the form of a Delphi enquiry with 44 participating specialists, including experts on the problems of higher education and science policy from the Presidium of the Higher Education Council (n = 23), members of the Council's Science and Research Commission (n = 14), former and current managers of higher education authorities (n = 4) and selected participants of the longitudinal talent research (n = 3). Questions considered included the influence of continuing talent migration from domestic R+D on the efficiency of domestic higher education, the diversification of forms of the brain drain and their impact on other processes in society, the possibility of positively influencing brain drain processes so as to minimise the risks they present, and the use of the knowledge obtained about the brain drain. The study revealed a clear drop of interest in brain drain problems in higher education in the mid-1990s, which is probably related to the collapse of Czech R+D in the field of talent education.
The effects on this segment of the labour market appeared earlier, with a major migration wave in 1991-1993 which significantly "cleared" the area of scientific talent. In addition, prospective talents from the ranks of younger students have not been integrated into domestic R+D, leading to the increasing average age of those working in this field. "Talent scouting" tended to be oriented towards much younger individuals, in some cases even towards undergraduate students. The R+D institutions, deprived of the human resources considered basic in a functional R+D system, have lost much of their dynamism and so attract neither domestic talent nor talent from other regions. As a result the public, including the mass media and political structures, have stopped regarding the support of domestic science as a priority. This is clear among the young people who are important for the future development of R+D (support for the education of talented children has dropped), from the drop in the prestige of this area as a profession among university students, and from the lack of explicit support for R+D by any of the political parties. On the basis of his findings Mr. Cermak concludes that there is no basis for the belief that the brain drain will represent a positive force in stimulating the development of the open society. Migration data show that the outflow of talent from the Czech Republic far exceeds the inflow, and that the latter is largely short-term. Not only has the number of returning Czech professors dropped to half its level at the beginning of the 1990s, but they also tend to take up only short-term contracts and retain their foreign positions. Recruitment of scientific talent from other countries, including the Slovak Republic, is limited. Furthermore, internal contacts between those already involved in R+D have been badly hit by economic pressures, and institutional co-operation has dropped to a minimum.
There have been few moves to counteract this situation, the only notable one being Program 250, launched in 1996 with government support to try to attract younger (i.e. under 40) talent into R+D. Its resources are, however, limited, and its effects have not so far been evaluated. The deficit of academic and scientific talent in the Czech Republic is increasing, and two major directions of academic work are emerging. Classic higher education science based on the teaching process is declining, largely due to economic factors, while there is an increasing emphasis on special, ad hoc projects which cannot be related directly to teaching but are often interesting to specialists outside the Czech Republic. This is shown clearly by the increase in publishing and in participation in domestic and foreign grant projects, which often serve to supplement the otherwise low salaries in the higher education sector. This trend was also accelerated by the collapse of applied R+D in individual sectors of the national economy and by substantial cutbacks in the Czech Academy of Sciences, which formerly fostered such research. Some part of the output of this research can be used in the education system, and its financial contribution does significantly affect the stability of the present staff, but Mr. Cermak sees it as generally unfavourable for the development of talent education. In addition, it has led to a certain resignation on the question of integration into international structures, due to the emphasis on short-term targets, commercial advantages and individualism rather than teamwork. At the same time, he admits that these developments reflect those in other areas of the transformation in the Czech Republic.

Relevance: 30.00%

Abstract:

One of the major challenges for a mission to the Jovian system is the radiation tolerance of the spacecraft (S/C) and the payload. Moreover, achieving science observations with high signal-to-noise ratios (SNR) while passing through the high-flux radiation zones requires additional ingenuity on the part of the instrument provider. Consequently, radiation mitigation is closely intertwined with the payload, spacecraft and trajectory design, and requires a systems-level approach. This paper presents a design for the Io Volcano Observer (IVO), a Discovery mission concept that makes multiple close encounters with Io while orbiting Jupiter. The mission aims to answer key outstanding questions about Io, especially the nature of its intense active volcanism and the internal processes that drive it. The payload includes narrow-angle and wide-angle cameras (NAC and WAC), dual fluxgate magnetometers (FGM), a thermal mapper (ThM), dual ion and neutral mass spectrometers (INMS), and dual plasma ion analyzers (PIA). Radiation mitigation is implemented by drawing upon experience from designs and studies for missions such as the Radiation Belt Storm Probes (RBSP) and the Jupiter Europa Orbiter (JEO). At the core of the radiation mitigation is IVO's inclined and highly elliptical orbit, which leads to rapid passes through the most intense radiation near Io, minimizing the total ionizing dose (177 krad behind 100 mils of aluminum with a radiation design margin (RDM) of 2 after 7 encounters). The payload and the spacecraft are designed specifically to accommodate the fast flyby velocities (e.g. the spacecraft is radioisotope powered, remaining small and agile without any flexible appendages). The science instruments, which collect the majority of the high-priority data when close to Io and thus near the peak flux, also have to mitigate transient noise in their detectors.
The cameras use a combination of shielding and CMOS detectors with extremely fast readout to minimize noise. The INMS microchannel plate detectors and PIA channel electron multipliers require additional shielding. The FGM is not sensitive to noise induced by energetic particles, and the ThM microbolometer detector is nearly insensitive. Detailed SNR calculations are presented. To facilitate targeting agility, all of the spacecraft components are shielded separately, since this approach is more mass-efficient than using a radiation vault. IVO uses proven radiation-hardened parts (rated at 100 krad behind equivalent shielding of 280 mils of aluminum with an RDM of 2) and is expected to have ample mass margin to increase shielding if needed.
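A radiation design margin (RDM) of the kind quoted above is conventionally applied by multiplying the predicted dose at a component's location by the RDM and requiring the part rating (behind equivalent shielding) to meet that margined dose. A minimal sketch with invented dose numbers, not the mission's actual dose map:

```python
# Conventional RDM bookkeeping sketch. The 40/60 krad predictions below
# are hypothetical examples, not IVO dose predictions.
def margined_dose(predicted_krad, rdm=2.0):
    """Design dose: predicted environment dose times the radiation design margin."""
    return predicted_krad * rdm

def part_ok(part_rating_krad, predicted_krad, rdm=2.0):
    """A part is acceptable if its rating meets the margined dose at its location."""
    return part_rating_krad >= margined_dose(predicted_krad, rdm)

ok = part_ok(100.0, 40.0)      # 40 * 2 = 80 krad design dose, within a 100 krad rating
not_ok = part_ok(100.0, 60.0)  # 60 * 2 = 120 krad exceeds the rating: add shielding
```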

Relevance: 30.00%

Abstract:

Comments on an article by Kashima et al. (see record 2007-10111-001). In their target article, Kashima and colleagues try to show how a connectionist-model conceptualization of the self is best suited to capture the self's temporal and socio-culturally contextualized nature. They propose a new model and, to support it, conduct computer simulations of psychological phenomena whose importance for the self has long been clear, even if not formally modeled, such as imitation and the learning of sequence and narrative. As explicated when we advocated connectionist models as a metaphor for the self in Mischel and Morf (2003), we fully endorse the utility of such a metaphor, as these models have some of the processing characteristics necessary for capturing key aspects and functions of a dynamic cognitive-affective self-system. As elaborated in that chapter, we see as their principal strength that connectionist models can take account of multiple simultaneous processes without invoking a single central control. All outputs reflect a distributed pattern of activation across a large number of simple processing units, the nature of which depends on (and changes with) the connection weights between the links and the satisfaction of mutual constraints across these links (Rumelhart & McClelland, 1986). This allows a simple account of why certain input features will at times predominate, while others take over on other occasions. (PsycINFO Database Record (c) 2008 APA, all rights reserved)
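The constraint-satisfaction behaviour the commentary attributes to connectionist models can be sketched with a toy Hopfield-style network in the spirit of Rumelhart and McClelland (1986): no central controller, only units whose mutual weights encode constraints, with the output emerging as the distributed activation pattern that best satisfies them. The stored pattern is invented:

```python
import numpy as np

# Toy constraint-satisfaction network: units update asynchronously from
# their weighted inputs alone; the network settles into the stored
# distributed pattern even from a corrupted start.
def settle(W, state, steps=20):
    s = state.copy()
    for _ in range(steps):
        for i in range(len(s)):                  # asynchronous unit updates
            s[i] = 1.0 if W[i] @ s > 0 else -1.0
    return s

pattern = np.array([1.0, -1.0, 1.0, -1.0])       # invented stored pattern
W = np.outer(pattern, pattern)                   # Hebbian storage of the pattern
np.fill_diagonal(W, 0.0)                         # no self-connections
noisy = np.array([1.0, 1.0, 1.0, -1.0])          # one unit flipped
recovered = settle(W, noisy)
```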

Relevância:

30.00% 30.00%

Publicador:

Resumo:

With the development of micro systems, there is an increasing demand for integrable porous materials. In addition to conventional applications such as filtration, wicking, and insulating, many new micro devices, including micro reactors, sensors, actuators, and optical components, can benefit from porous materials. Conventional porous materials such as ceramics and polymers, however, cannot meet the challenges posed by micro systems, due to their incompatibility with standard micro-fabrication processes. In an effort to produce porous materials that can be used in micro systems, porous silicon (PS) generated by anodization of single-crystalline silicon has been investigated. In this work, the PS formation process has been extensively studied and characterized as a function of substrate type, crystal orientation, doping concentration, current density, and surfactant concentration and type. Anodization conditions have been optimized for producing very thick porous silicon layers with uniform pore size and for obtaining ideal pore morphologies. Three different types of porous silicon material have been successfully produced: mesoporous silicon, macroporous silicon with straight pores, and macroporous silicon with tortuous pores. Regular pore arrays with controllable pore size in the range of 2µm to 6µm have been demonstrated as well. Localized PS formation has been achieved by using an oxide/nitride/polysilicon stack as the masking material, which can withstand anodization in hydrofluoric acid for up to twenty hours. A special etching cell with an electrolytic liquid backside contact, along with two process flows, has been developed to enable the fabrication of thick macroporous silicon membranes with through-wafer pores. For device assembly, Si-Au and In-Au bonding technologies have been developed.
A very low bonding temperature (~200 degrees C) and thick/soft bonding layers (~6µm) have been achieved with the In-Au bonding technology, which is able to compensate for the potentially rough surface of the porous silicon sample without introducing significant thermal stress. The application of porous silicon in micro systems has been demonstrated in a micro gas chromatograph system by two indispensable components, an integrated vapor source and an inlet filter, wherein porous silicon performs the basic functions of porous media: wicking and filtration. By utilizing a macroporous silicon wick, the calibration vapor source was able to produce uniform and repeatable vapor generation for n-decane, with less than 0.1% variation over 9 hours and less than 0.5% variation in rate over 7 days. With engineered porous silicon membranes, the inlet filter exhibited depth filtration with nearly 100% collection efficiency for particles larger than 0.3µm in diameter, a low pressure drop of 523Pa at a 20sccm flow rate, and a filter capacity of 500µg/cm2.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Magmatic volatiles play a crucial role in volcanism, from magma production at depth to generation of seismic phenomena to control of eruption style. Accordingly, many models of volcano dynamics rely heavily on the behavior of such volatiles. Yet measurements of emission rates of volcanic gases have historically been limited, which has restricted model verification to processes on the order of days or longer. UV cameras are a recent advancement in the field of remote sensing of volcanic SO2 emissions. They offer enhanced temporal and spatial resolution over previous measurement techniques, but need development before they can be widely adopted and achieve the promise of integration with other geophysical datasets. Large datasets require a means by which to quickly and efficiently use imagery to calculate emission rates. We present a suite of programs designed to semi-automatically determine emission rates of SO2 from a series of UV images. Extraction of high temporal resolution SO2 emission rates via this software facilitates comparison of gas data to geophysical data for the purposes of evaluating models of volcanic activity and has already proven useful at several volcanoes. Integrated UV camera and seismic measurements recorded in January 2009 at Fuego volcano, Guatemala, provide new insight into the system's shallow conduit processes. High temporal resolution SO2 data reveal patterns of SO2 emission rate relative to explosions and seismic tremor that indicate tremor and degassing share a common source process. Progressive decreases in emission rate appear to represent inhibition of gas loss from magma as a result of rheological stiffening in the upper conduit. Measurements of emission rate from two closely spaced vents, made possible by the high spatial resolution of the camera, help constrain this model. UV camera measurements at Kilauea volcano, Hawaii, in May 2010 captured two occurrences of lava filling and draining within the summit vent.
Accompanying high lava stands were diminished SO2 emission rates, decreased seismic and infrasonic tremor, minor deflation, and slowed lava lake surface velocity. Incorporation of UV camera data into the multi-parameter dataset gives credence to the likelihood of shallow gas accumulation as the cause of such events.
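The abstract does not spell out how the software derives emission rates from imagery. As a rough illustration only (the function name, units, and numbers below are my own assumptions, not taken from the described programs), an SO2 emission rate is commonly obtained by integrating column densities along a cross-plume transect of one image and multiplying by the plume transport speed:

```python
def so2_emission_rate(column_densities, pixel_width_m, plume_speed_m_s):
    """Estimate an SO2 emission rate (kg/s) from one image transect.

    column_densities : SO2 column densities (kg/m^2) sampled along a
        line of pixels drawn across the plume.
    pixel_width_m    : ground-projected width of one pixel (m).
    plume_speed_m_s  : plume speed perpendicular to the transect (m/s),
        e.g. estimated by cross-correlating successive images.
    """
    # Integrate column density across the plume -> kg per metre of plume length
    integrated_kg_per_m = sum(cd * pixel_width_m for cd in column_densities)
    # Multiply by transport speed -> kg/s
    return integrated_kg_per_m * plume_speed_m_s

# Hypothetical transect: 10 pixels of 0.001 kg/m^2, 1.5 m pixels, 4 m/s plume
rate = so2_emission_rate([0.001] * 10, 1.5, 4.0)  # 0.06 kg/s
```

Repeating this per frame is what yields the high temporal resolution emission-rate series compared against seismic data above.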

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Environmental policy and decision-making are characterized by complex interactions between different actors and sectors. As a rule, a stakeholder analysis is performed to understand those involved, but it has been criticized for lacking quality and consistency. This shortcoming is remedied here by a formal social network analysis that investigates collaborative and multi-level governance settings in a rigorous way. We examine the added value of combining both elements. Our case study examines infrastructure planning in the Swiss water sector. Water supply and wastewater infrastructures are planned far into the future, usually on the basis of projections of past boundary conditions. They affect many actors, including the population, and are expensive. In view of increasing future dynamics and climate change, a more participatory and long-term planning approach is required. Our specific aims are to investigate fragmentation in water infrastructure planning, to understand how actors from different decision levels and sectors are represented, and to identify which interests they follow. We conducted 27 semi-structured interviews with local stakeholders as well as with cantonal and national actors. The network analysis confirmed our hypothesis of strong fragmentation: we found little collaboration between the water supply and wastewater sectors (confirming horizontal fragmentation), and few ties between local, cantonal, and national actors (confirming vertical fragmentation). Infrastructure planning is clearly dominated by engineers and local authorities. Little importance is placed on longer-term strategic objectives and integrated catchment planning, although these were perceived as more important in a second analysis that went beyond the typical questions of a stakeholder analysis. We conclude that linking a stakeholder analysis, comprising rarely asked questions, with a rigorous social network analysis is very fruitful and generates complementary results.
This combination gave us deeper insight into the socio-political-engineering world of water infrastructure planning that is of vital importance to our well-being.
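The fragmentation finding above can be quantified from network data with a very simple indicator: the fraction of collaboration ties that cross sector (or decision-level) boundaries. A minimal sketch of that idea, with invented actor names and sector labels rather than data from the study:

```python
def cross_boundary_fraction(edges, group):
    """Fraction of ties connecting actors from different groups.

    edges : list of (actor, actor) collaboration ties
    group : dict mapping each actor to its sector or decision level

    A value near 0 means almost all collaboration stays inside one
    group, i.e. strong horizontal or vertical fragmentation.
    """
    if not edges:
        return 0.0
    crossing = sum(1 for u, v in edges if group[u] != group[v])
    return crossing / len(edges)

# Invented example: two water-supply actors and one wastewater actor
sector = {"ws_utility": "water_supply",
          "ws_engineer": "water_supply",
          "ww_plant": "wastewater"}
ties = [("ws_utility", "ws_engineer"), ("ws_utility", "ww_plant")]
fraction = cross_boundary_fraction(ties, sector)  # 0.5
```

The same function applied with a level attribute (local, cantonal, national) would measure vertical fragmentation.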

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Global environmental change includes changes in a wide range of global-scale phenomena, which are expected to affect a number of physical processes, as well as the vulnerability of the communities that will experience their impact. Decision-makers are in need of tools that will enable them to assess the losses caused by such processes under different future scenarios and to design risk reduction strategies. In this paper, a tool is presented that can be used by a range of end-users (e.g. local authorities and decision-makers) for the assessment of the monetary loss from future landslide events, with a particular focus on torrential processes. The toolbox includes three functions: a) enhancement of the post-event damage data collection process, b) assessment of monetary loss of future events and c) continuous updating and improvement of an existing vulnerability curve by adding data of recent events. All functions of the tool are demonstrated through examples of its application.
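The abstract does not give the tool's loss formula. A common way to realize function b), however, is to interpolate a damage ratio from a vulnerability curve and multiply it by the exposed value, and function c) then amounts to adding new (intensity, damage) observations to that curve. A minimal sketch under those assumptions (the curve points and values below are invented):

```python
def damage_ratio(intensity, curve):
    """Linearly interpolate the damage ratio (0..1) for a process
    intensity from a vulnerability curve, given as a sorted list of
    (intensity, damage_ratio) points; clamp outside the curve range."""
    if intensity <= curve[0][0]:
        return curve[0][1]
    if intensity >= curve[-1][0]:
        return curve[-1][1]
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        if x0 <= intensity <= x1:
            return y0 + (y1 - y0) * (intensity - x0) / (x1 - x0)

def expected_loss(exposed_value, intensity, curve):
    """Function b): monetary loss = exposed value * damage ratio."""
    return exposed_value * damage_ratio(intensity, curve)

def update_curve(curve, new_point):
    """Function c): refine the curve with an (intensity, damage_ratio)
    pair collected after a recent event."""
    return sorted(curve + [new_point])

# Invented vulnerability curve for a torrential process
curve = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]
loss = expected_loss(200_000, 0.5, curve)  # 50000.0
```

Each post-event damage survey (function a) would thus feed `update_curve`, gradually sharpening future loss estimates.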

Relevância:

30.00% 30.00%

Publicador:

Resumo:

For a reliable simulation of the time- and space-dependent CO2 redistribution between ocean and atmosphere, an appropriate time-dependent simulation of particle dynamics processes is essential but had not been carried out so far. The major difficulties were the lack of suitable modules for particle dynamics and early diagenesis (needed to close the carbon and nutrient budgets) in ocean general circulation models, and the lack of understanding of biogeochemical processes, such as the partial dissolution of calcareous particles in oversaturated water. The main target of ORFOIS was to fill this gap in our knowledge and prediction capability. This goal has been achieved step by step. First, comprehensive databases of existing observations relevant to the three major types of biogenic particles, organic carbon (POC), calcium carbonate (CaCO3), and biogenic silica (BSi, or opal), as well as to refractory particles of terrestrial origin, were collated and made publicly available.
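As one concrete example of the particle-dynamics parameterizations that such observation databases are used to constrain (the formula is a standard one in the field, not something stated in this abstract), the attenuation of sinking POC flux with depth is often described by the Martin power law:

```python
def poc_flux(depth_m, flux_100m, b=0.858):
    """Martin-curve parameterization of sinking organic-carbon flux:
    F(z) = F_100 * (z / 100)^(-b), applied below the ~100 m export depth.

    flux_100m : POC flux at the 100 m reference depth
    b         : attenuation exponent (0.858 in Martin et al., 1987)
    """
    return flux_100m * (depth_m / 100.0) ** (-b)

# Flux decays steeply with depth: most exported carbon is remineralized
shallow = poc_flux(100.0, 10.0)   # 10.0 at the reference depth
deep = poc_flux(1000.0, 10.0)     # ~1.4, i.e. ~86% lost in transit
```

Fitting exponents like `b` against collated POC flux observations is exactly the kind of model constraint the databases described above enable.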