940 results for Temporal constraints analysis
Abstract:
AIM: The main goal of this research was to investigate the influence of hydrological pulses on the spatio-temporal dynamics of physical and chemical variables in a wetland adjacent to the Jacupiranguinha River (São Paulo, Brazil); METHODS: Eleven sampling points were distributed among the wetland, a tributary on its left side and the adjacent river. Four samplings were carried out, covering the rainy and dry periods. Measurements of pH, dissolved oxygen, electrical conductivity and redox potential were taken at regular intervals along the water column using a multiparametric probe. Water samples were collected for nitrogen and total phosphorus analysis, as well as for their dissolved fractions (dissolved inorganic phosphorus, total dissolved phosphorus, ammoniacal nitrogen and nitrate). Total alkalinity and suspended solids were also quantified; RESULTS: Multivariate Analysis of Variance showed the influence of seasonality on the variability of the investigated variables, while Principal Component Analysis yielded two statistically significant axes, which delimited two groups representative of the rainy and dry periods. Hydrological pulses from the Jacupiranguinha River, besides contributing to the input of nutrients and sediments during the period of connectivity, accounted for the decrease in spatial gradients in the wetland. This "homogenization effect" was evidenced by Cluster Analysis.
The research also identified an industrial raw effluent as the main point source of phosphorus to the Jacupiranguinha River and, indirectly, to the wetland; CONCLUSIONS: Considering the scarcity of information about the wetlands in the study area, this research, besides contributing to the understanding of the influence of hydrological pulses on the investigated environmental variables, showed the need to adopt conservation policies for these ecosystems in the face of the increasing anthropic pressures to which they have been subjected, which may result in the loss of their ecological, social and economic functions.
Abstract:
Sinking particles through the pelagic ocean have been traditionally considered the most important vehicle by which the biological pump sequesters carbon in the ocean interior. Nevertheless, regional scale variability in particle flux is a major outstanding issue in oceanography. Here, we have studied the regional and temporal variability of total particulate organic matter fluxes, as well as chloropigment and total hydrolyzed amino acid (THAA) compositions and fluxes in the Canary Current region, between 20–30° N, during two contrasting periods: August 2006, characterized by warm and stratified waters, but also intense winds which enhanced eddy development south of the Canary Islands, and February 2007, characterized by colder waters, less stratification and higher productivity. We found that the eddy field generated south of the Canary Islands enhanced particulate organic carbon (POC) export by more than 2 times with respect to stations (FF; far-field) outside the eddy-field influence. We also observed flux increases of one order of magnitude in chloropigment and 70% in THAA in the eddy field relative to FF stations. Principal Components Analysis (PCA) was performed to assess changes in particulate organic matter composition between stations. At eddy-field stations, higher chlorophyll enrichment reflected "fresher" material, while at FF stations a higher proportion of pheophytin indicated greater degradation due to microbes and microzooplankton. PCA also suggests that phytoplankton community structure, particularly the dominance of diatoms versus carbonate-rich plankton, is the major factor influencing POC export within the eddy field. In February, POC export fluxes were the highest ever reported for this area, reaching values of 15 mmol C m−2 d−1 at 200 m depth.
Compositional changes in pigments and THAA indicate that the source of sinking particles varies zonally and meridionally and suggest that sinking particles were more degraded at near-coastal stations relative to open ocean stations.
Abstract:
Work carried out by: Garijo, J. C., Hernández León, S.
Abstract:
Many combinatorial problems coming from the real world may not have a clear and well defined structure, typically being dirtied by side constraints or being composed of two or more sub-problems, usually not disjoint. Such problems are not suitable to be solved with pure approaches based on a single programming paradigm, because a paradigm that can effectively handle one problem characteristic may behave inefficiently when facing others. In these cases, modelling the problem using different programming techniques, trying to "take the best" from each technique, can produce solvers that largely dominate pure approaches. We demonstrate the effectiveness of hybridization and discuss different hybridization techniques by analyzing two classes of problems with particular structures, exploiting Constraint Programming and Integer Linear Programming solving tools, and Algorithm Portfolios and Logic-Based Benders Decomposition as integration and hybridization frameworks.
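The algorithm-portfolio idea mentioned above can be illustrated with a minimal sketch: run several solvers concurrently on the same instance and keep whichever finishes first. The solver names and the toy "problem" here are illustrative assumptions, not the thesis's actual solvers.

```python
import concurrent.futures

def cp_solver(instance):
    # Stand-in for a Constraint Programming solver (hypothetical).
    return ("CP", sum(instance))

def ilp_solver(instance):
    # Stand-in for an Integer Linear Programming solver (hypothetical).
    return ("ILP", sum(instance))

def portfolio(instance, solvers):
    # Run all solvers in parallel and return the first result produced:
    # on each instance the portfolio is as fast as its fastest member,
    # which is how portfolios can dominate any single pure approach.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(s, instance) for s in solvers]
        done, pending = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        for f in pending:
            f.cancel()
        return next(iter(done)).result()

solver_name, value = portfolio([1, 2, 3], [cp_solver, ilp_solver])
```

Real portfolios would also cancel or interrupt the slower solvers cooperatively; thread cancellation here only affects not-yet-started tasks.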
Abstract:
It is well known that firm theories have evolved along a path paved by an increasing awareness of the importance of organizational structure: from the early "neoclassical" conceptualizations, which intended the firm as a rational actor whose aim is to produce, given the inputs at its disposal and in accordance with technological or environmental constraints, the amount of output that maximizes revenue (see Boulding, 1942 for a past mid-century state-of-the-art discussion), to the knowledge-based theory of the firm (Nonaka & Takeuchi, 1995; Nonaka & Toyama, 2005), which recognizes in the firm a knowledge-creating entity with specific organizational capabilities (Teece, 1996; Teece & Pisano, 1998) that allow it to sustain competitive advantages. Tracing a map of the evolution of the theory of the firm, taking into account the several perspectives adopted in the history of thought, would take the length of many books. A more fruitful strategy is therefore to circumscribe the description of the literature to one stream connected to a crucial question about the nature of the firm's behaviour and the determinants of competitive advantages. In so doing I adopt a perspective that allows me to consider the organizational structure of the firm as an element according to which the different theories can be discriminated. The approach adopted starts by considering the drawbacks of the standard neoclassical theory of the firm. Discussing the most influential theoretical approaches, I end up with a close examination of the knowledge-based perspective of the firm. Within this perspective the firm is considered a knowledge-creating entity that produces and manages knowledge (Nonaka, Toyama, & Nagata, 2000; Nonaka & Toyama, 2005). In a knowledge-intensive organization, knowledge is for the most part embedded in the human capital of the individuals that compose the organization.
In a knowledge-based organization, the management, in order to cope with knowledge-intensive production, ought to develop and accumulate capabilities that shape organizational forms in a way that relies on "cross-functional processes, extensive delayering and empowerment" (Foss 2005, p.12). This mechanism contributes to determining the absorptive capacity of the firm towards specific technologies and, in so doing, it also shapes the technological trajectories along which the firm moves. Having recognized the growing importance of the firm's organizational structure in the theoretical literature on firm theory, the subsequent point of the analysis is to provide an overview of the changes that have occurred at the micro level in the firm's organization of production. Economic actors have to deal with challenges posed by processes of internationalisation and globalization, the increased and increasing competitive pressure of less developed countries on low value added production activities, changes in technologies, and increased environmental turbulence and volatility. As a consequence, it has been widely recognized that the main organizational models of production that fitted well in the 20th century are now partially inadequate, and processes aiming to reorganize production activities have spread across several economies in recent years. Recently, the emergence of a "new" form of production organization has been proposed by scholars, practitioners and institutions alike: the most prominent characteristic of this model is its recognition of the importance of employees' commitment and involvement. As a consequence it is characterized by a strong accent on human resource management and on those practices that aim to widen the autonomy and responsibility of workers as well as to increase their commitment to the organization (Osterman, 1994; 2000; Lynch, 2007).
This "model" of production organization is defined by many as the High Performance Work System (HPWS). Despite the increasing diffusion of workplace practices that may be inscribed within the concept of HPWS in western companies, it is to some extent a hazard to speak about the emergence of a "new organizational paradigm". A discussion of organizational changes and the diffusion of HPWP cannot abstract from a discussion of industrial relations systems, with a particular accent on employment relationships, because of their relevance, in the same way as production organization, in determining two major outcomes of the firm: innovation and economic performance. The argument is treated starting from the issue of Social Dialogue at the macro level, from both a European and an Italian perspective. The model of interaction between the social parties has repercussions, at the micro level, on employment relationships, that is to say on the relations between union delegates and management or workers and management. Finding economic and social policies capable of sustaining growth and employment within a knowledge-based scenario is likely to constitute the major challenge for the next generation of social pacts, which are the main outcomes of social dialogue. As Acocella and Leoni (2007) put forward, social pacts may constitute an instrument to trade wage moderation for high intensity in ICT, organizational and human capital investments. Empirical evidence, especially focused on the micro level, about the positive relation between economic growth and new organizational designs coupled with ICT adoption and non-adversarial industrial relations is growing. Partnership among social parties may become an instrument to enhance firm competitiveness. The outcome of the discussion is the integration of organizational change and industrial relations elements within a unified framework: the HPWS.
Such a choice may help in disentangling the potential complementarities between these two aspects of the firm's internal structure in their effect on economic and innovative performance. The third chapter begins the more original part of the thesis. The data used to disentangle the relations between HPWS practices, innovation and economic performance refer to the manufacturing firms of the Reggio Emilia province with more than 50 employees. The data were collected through face-to-face interviews with both management (199 respondents) and union representatives (181 respondents). Coupled with the cross-section datasets, a further data source is constituted by longitudinal balance sheets (1994-2004). Collecting reliable data that in turn provide reliable results always requires a great effort with uncertain outcomes. Data at the micro level are often subject to a trade-off: the wider the geographical context to which the surveyed population belongs, the less information is usually collected (low level of resolution); the narrower the focus on a specific geographical context, the more information is usually collected (high level of resolution). For the Italian case the evidence about the diffusion of HPWP and their effects on firm performance is still scanty and usually limited to local-level studies (Cristini et al., 2003). The thesis is also devoted to deepening an argument of particular interest: the existence of complementarities between HPWS practices. Empirical evidence has widely shown that when HPWP are adopted in bundles they are more likely to impact firm performance than when adopted in isolation (Ichniowski, Prennushi, Shaw, 1997). Is this true also for the local production system of Reggio Emilia? The empirical analysis has the precise aim of providing evidence on the relations between the HPWS dimensions and the innovative and economic performance of the firm.
As far as the first line of analysis is concerned, the fundamental role that innovation plays in the economy must be stressed (Geroski & Machin, 1993; Stoneman & Kwoon 1994, 1996; OECD, 2005; EC, 2002). On this point the evidence ranges from traditional innovations, usually approximated by R&D investment expenditure or number of patents, to the introduction and adoption of ICT in recent years (Brynjolfsson & Hitt, 2000). If innovation is important, then it is critical to analyse its determinants. In this work it is hypothesised that organizational changes and firm-level industrial relations/employment relations aspects that can be put under the heading of HPWS influence the firm's propensity to innovate in product, process and quality. The general argument goes as follows: changes in production management and work organization reconfigure the absorptive capacity of the firm towards specific technologies and, in so doing, they shape the technological trajectories along which the firm moves; cooperative industrial relations may lead to smoother adoption of innovations, because they are not opposed by unions. The first empirical chapter shows that the different types of innovation seem to respond in different ways to the HPWS variables. The underlying processes of product, process and quality innovation are likely to answer to different firm strategies and needs. Nevertheless, it is possible to extract some general results in terms of the HPWS factors that most influence innovative performance. The three main aspects are training coverage, employee involvement and the diffusion of bonuses. These variables show persistent and significant relations with all three innovation types, as do the components containing these variables. In sum, the aspects of the HPWS influence the firm's propensity to innovate.
At the same time, quite neat (although not always strong) evidence emerges of the presence of complementarities between HPWS practices. In terms of the complementarity issue, it can be said that some specific complementarities exist. Training activities, when adopted and managed in bundles, are related to the propensity to innovate. Having a sound skill base may be an element that enhances the firm's capacity to innovate: it may enhance both the capacity to absorb exogenous innovation and the capacity to develop innovations endogenously. The presence and diffusion of bonuses and employee involvement also spur innovative propensity, the former because of their incentive nature and the latter because direct worker participation may increase workers' commitment to the organization and thus their willingness to support and suggest innovations. The other line of analysis provides results on the relation between HPWS and the economic performance of the firm. There is a bulk of international empirical studies on the relation between organizational changes and economic performance (Black & Lynch 2001; Zwick 2004; Janod & Saint-Martin 2004; Huselid 1995; Huselid & Becker 1996; Cappelli & Neumark 2001), while works aiming to capture the relations between economic performance and unions or industrial relations aspects are quite scant (Addison & Belfield, 2001; Pencavel, 2003; Machin & Stewart, 1990; Addison, 2005). In the empirical analysis the integration of the two main areas of the HPWS represents a scarcely exploited approach in the panorama of both national and international empirical studies. As remarked by Addison, "although most analysis of workers representation and employee involvement/high performance work practices have been conducted in isolation – while sometimes including the other as controls – research is beginning to consider their interactions" (Addison, 2005, p.407).
The analysis, conducted by exploiting temporal lags between the dependent variables and the covariates, a possibility afforded by the merger of cross-section and panel data, provides evidence in favour of an impact of HPWS practices on the firm's economic performance, differently measured. Although no robust evidence seems to emerge of complementarities among HPWS aspects in their effect on performance, there is evidence of a general positive influence of the single practices. The results are quite sensitive to the time lags, suggesting that time-varying heterogeneity is an important factor in determining the impact of organizational changes on economic performance. The implications of the analysis can be of help both to management and to local-level policy makers. Although the results cannot simply be extended to other local production systems, it may be argued that the results and implications obtained here can also fit contexts similar to the Reggio Emilia province, characterized by the presence of small and medium enterprises organized in districts and by a deep-rooted unionism with strong supporting institutions. However, a hope for future research on the subject treated in the present work is that of collecting good-quality information over wider geographical areas, possibly at the national level, and repeated in time. Only in this way is it possible to cut the Gordian knot of the linkages between innovation, performance, high performance work practices and industrial relations.
Abstract:
In this thesis I carry out an empirical analysis of firms' productivity. I relate efficiency at the plant level to input market features, and I suggest an estimation technique for the production function that takes into account firms' liquidity constraints. The main results are three. First, when I consider services as inputs for manufacturing firms' production processes, I find that more competition in the service sector positively affects plant productivity and export decisions. Second, liquidity constraints matter for the calculation of firms' productivity because they are a second source of firm heterogeneity. Third, liquidity constraints are important for firms' internationalization.
Abstract:
In this work a multidisciplinary study of the December 26th, 2004 Sumatra earthquake has been carried out. We have investigated both the effect of the earthquake on the Earth's rotation and the stress field variations associated with the seismic event. In the first part of the work we quantified the effects of the water mass redistribution associated with the propagation of a tsunami wave on the Earth's pole path and on the length-of-day (LOD), and applied our modeling results to the tsunami following the 2004 giant Sumatra earthquake. We compared the results of our simulations of the variations of the instantaneous rotation axis with preliminary instrumental evidence of a pole path perturbation (not yet confirmed) registered just after the occurrence of the earthquake, which showed a step-like discontinuity that cannot be attributed to the effect of a seismic dislocation. Our results show that the perturbation induced by the tsunami on the instantaneous rotation pole is characterized by a step-like discontinuity, which is compatible with the observations, but its magnitude turns out to be almost one hundred times smaller than the detected one. The LOD variation induced by the water mass redistribution turns out not to be significant, because the total effect is smaller than current measurement uncertainties. In the second part of this thesis work we modeled the coseismic and postseismic stress evolution following the Sumatra earthquake. By means of a semi-analytical, viscoelastic, spherical model of global postseismic deformation and a numerical finite-element approach, we performed an analysis of the stress diffusion following the earthquake in the near and far field of the mainshock source. We evaluated the stress changes due to the Sumatra earthquake by projecting the Coulomb stress onto the sequence of aftershocks taken from various catalogues in a time window spanning about two years, and finally analyzed the spatio-temporal pattern.
The analysis performed with the semi-analytical and finite-element modeling gives a complex picture of the stress diffusion in the area under study after the Sumatra earthquake. We believe that the results obtained with the analytical method suffer heavily from the restrictions imposed on the hypocentral depths of the aftershocks in order to obtain the convergence of the harmonic series of the stress components. By contrast, we imposed no such constraints in the numerical method, so we expect its results to give a more realistic description of the stress variation pattern.
Abstract:
Singularities of robot manipulators have been intensely studied in the last decades by researchers from many fields. Serial singularities produce some local loss of dexterity of the manipulator, therefore it might be desirable to search for singularity-free trajectories in the joint space. On the other hand, parallel singularities are very dangerous for parallel manipulators, for they may provoke a local loss of platform control and jeopardize the structural integrity of links or actuators. It is therefore of the utmost importance to avoid parallel singularities while operating a parallel machine. Furthermore, there might be some configurations of a parallel manipulator that are allowed by the constraints but are nevertheless unreachable by any feasible path. The present work proposes a numerical procedure based upon Morse theory, an important branch of differential topology. This procedure counts and identifies the singularity-free regions that the singularity locus cuts out of the configuration space, as well as the disjoint regions composing the configuration space of a parallel manipulator. Moreover, given any two configurations of a manipulator, a feasible or a singularity-free path connecting them can always be found, or it can be proved that none exists. Examples of applications to 3R and 6R serial manipulators, to 3UPS and 3UPU parallel wrists, to 3UPU parallel translational manipulators, and to 3RRR planar manipulators are reported in the work.
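As a minimal numerical illustration of the singularity notion used above (not the Morse-theoretic procedure itself): for a planar 2R serial arm with link lengths l1 and l2, serial singularities occur where the determinant of the Jacobian vanishes, i.e. where the elbow is fully stretched or folded. The link lengths and tolerance below are illustrative assumptions.

```python
import math

def jacobian_det(q2, l1=1.0, l2=1.0):
    # Analytic determinant of the 2x2 Jacobian of a planar 2R arm:
    # det J = l1 * l2 * sin(q2).  It vanishes when the arm is fully
    # stretched (q2 = 0) or folded back (q2 = pi) - the two serial
    # singularities where local dexterity is lost.
    return l1 * l2 * math.sin(q2)

def is_singular(q2, tol=1e-9):
    # A configuration is flagged singular when |det J| falls below
    # a numerical tolerance.
    return abs(jacobian_det(q2)) < tol

# The stretched elbow is singular; a bent elbow is not.
```

A path planner could sample `jacobian_det` along a candidate joint-space trajectory and reject paths on which the determinant changes sign, which is a crude version of staying inside one singularity-free region.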
Abstract:
The vast majority of known proteins have not yet been experimentally characterized and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful for protein fold recognition and de novo design. The prediction of these contacts requires the study of the protein inter-residue distances, related to the specific type of amino acid pair, that are encoded in the so-called contact map. An interesting new way of analyzing these structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps.
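A contact map of the kind discussed above can be sketched directly from residue coordinates with a distance threshold; the 8 Å cutoff, the minimum sequence separation and the toy coordinates below are common illustrative choices, not the thesis's exact protocol.

```python
import math

def contact_map(coords, cutoff=8.0, min_separation=2):
    # coords: list of (x, y, z) C-alpha positions, one per residue.
    # Two residues are in contact when their distance is below `cutoff`
    # and they are at least `min_separation` apart in the sequence,
    # so that trivial chain-neighbour contacts are excluded.
    n = len(coords)
    cmap = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + min_separation, n):
            if math.dist(coords[i], coords[j]) < cutoff:
                cmap[i][j] = cmap[j][i] = 1
    return cmap

# Toy chain folded back on itself: residues 0 and 3 end up close.
toy = [(0, 0, 0), (4, 0, 0), (8, 0, 0), (4, 3, 0)]
cm = contact_map(toy, cutoff=8.0, min_separation=2)
```

The symmetric 0/1 matrix produced here is exactly the object on which network measures such as characteristic path length and clustering coefficient are then computed, treating residues as nodes and contacts as edges.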
Provided that residue contacts are known for a protein sequence, the major features of its 3D structure could be deduced by combining this knowledge with correctly predicted secondary structure motifs. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers that drive the dimerization of many transcription factors, or more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards understanding the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still await interpretation. Nevertheless, these data are at the basis of the design of new strategies for tackling problems such as the prediction of protein structure and function. Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of annotated proteins in the Homo sapiens genome have been experimentally characterized.
A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists in the assignment of sequences to a specific group of functionally related sequences which have been grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of annotating multi-domain proteins and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of structure templates over the length of the protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in present databases of molecular functions and structures.
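The two stringent constraints mentioned above, sequence identity and alignment coverage, can be sketched as a filter over pairwise hits. The thresholds, field names and parsed-hit layout below are illustrative assumptions (e.g. fields as one might parse from BLAST tabular output), not the thesis's exact parameters.

```python
def passes_constraints(hit, min_identity=40.0, min_coverage=90.0):
    # hit: dict with percent identity, alignment length, and the
    # lengths of the query and subject sequences (hypothetical field
    # names for parsed pairwise-comparison output).
    cov_query = 100.0 * hit["aln_len"] / hit["query_len"]
    cov_subject = 100.0 * hit["aln_len"] / hit["subject_len"]
    # Requiring coverage on *both* sequences keeps clusters homogeneous
    # in length, which guards against transferring annotation from a
    # single shared domain of a much longer multi-domain protein.
    return (hit["identity"] >= min_identity
            and cov_query >= min_coverage
            and cov_subject >= min_coverage)

good = passes_constraints(
    {"identity": 62.0, "aln_len": 95, "query_len": 100, "subject_len": 100})
bad = passes_constraints(
    {"identity": 62.0, "aln_len": 95, "query_len": 100, "subject_len": 300})
```

The second hit is rejected even though identity and query coverage are high, because the subject is three times longer: exactly the multi-domain case the coverage constraint is meant to exclude.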
Abstract:
In territories where food production is mostly scattered across several small/medium-size or even domestic farms, many heterogeneous residues are produced yearly, since farmers usually carry out different activities on their properties. The amount and composition of farm residues therefore change widely during the year, according to the particular production processes periodically carried out. Coupling high-efficiency micro-cogeneration energy units with easy-to-handle biomass conversion equipment, suitable for treating different materials, would provide many important advantages to farmers and to the community as well, so that increasing the feedstock flexibility of gasification units is nowadays seen as a further paramount step towards their widespread adoption in rural areas and as a real necessity for their utilization at small scale. Two main research topics were thought to be of main concern for this purpose, and they are therefore discussed in this work: the investigation of the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. According to these two main aspects, the present work is thus divided into two main parts. The first is focused on the biomass gasification process, which was investigated in its theoretical aspects and then analytically modelled in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile, above all), in order to point out the main differences which prevent the use of the same conversion unit for different materials.
To this end, a kinetic-free gasification model was initially developed in Excel sheets, considering different values of the air-to-biomass ratio and taking downdraft gasification technology as the particular application examined. The differences in syngas production and working conditions (process temperatures, above all) among the considered fuels were related to some biomass properties, such as elemental composition and ash and water contents. The novelty of this analytical approach lies in the use of ratios of kinetic constants to determine the oxygen distribution among the different oxidation reactions (regarding volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone, through which the energy and mass balances involved in the process algorithm were also linked together. Moreover, the main advantage of this analytical tool is the ease with which the input data corresponding to particular biomass materials can be inserted into the model, so that a rapid evaluation of their thermo-chemical conversion properties can be obtained, mainly based on their chemical composition. A good agreement of the model results with other literature and experimental data was found for almost all the considered materials (except for refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, through the analysis of the fundamental thermo-physical and thermo-chemical mechanisms which are supposed to regulate the main solid conversion steps involved in the gasification process.
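A minimal sketch of the water-gas shift equilibrium closure used in such kinetic-free models. The equilibrium-constant correlation Kp = exp(4276/T − 3.961) is a common literature fit, assumed here for illustration and not necessarily the one adopted in the thesis:

```python
import math

def k_wgs(T):
    """Water-gas shift equilibrium constant Kp(T) from a common
    literature fit (assumed): Kp = exp(4276/T - 3.961), T in K."""
    return math.exp(4276.0 / T - 3.961)

def wgs_equilibrium(n_co, n_h2o, n_co2, n_h2, T):
    """Solve CO + H2O <-> CO2 + H2 for the equilibrium extent x by
    bisection. The reaction is equimolar, so total moles cancel out
    of the equilibrium expression."""
    K = k_wgs(T)
    lo = -min(n_co2, n_h2) + 1e-12   # most reverse extent allowed
    hi = min(n_co, n_h2o) - 1e-12    # most forward extent allowed

    def f(x):
        return (n_co2 + x) * (n_h2 + x) - K * (n_co - x) * (n_h2o - x)

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    x = 0.5 * (lo + hi)
    return {"CO": n_co - x, "H2O": n_h2o - x, "CO2": n_co2 + x, "H2": n_h2 + x}
```

For a stoichiometric CO/H2O feed at around 800 °C the fit gives Kp close to 1, so roughly half of the CO is shifted, which shows how the gasification-zone temperature feeds back into the syngas composition in such a model.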
Gasification units were schematically subdivided into four reaction zones, respectively corresponding to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated to the kinetic rates (for pyrolysis and char gasification only) and to the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, and temperature is therefore the main working parameter controlling this step. Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is almost totally achieved by radiative heat transfer from the hot walls of the reactor to the bed of material. For pyrolysis, instead, working temperature, particle size and the very nature of the biomass (through its own pyrolysis heat) all have comparable weights on the process development, so that the corresponding time can depend on any of these factors according to the particular fuel being gasified and the particular conditions established inside the gasifier. The same analysis also led to the estimation of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units was finally accomplished. Each biomass material showed a different distribution of volumes, so that no single dimensioned gasification unit seems to be suitable for more than one biomass species.
Nevertheless, since the reactor diameters were found to be quite similar for all the examined materials, it could be envisaged to design a single unit for all of them by adopting the largest diameter and by combining the maximum heights of each reaction zone, as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air-injection nozzles at different levels along the reactor, the gasification zone could be properly set up according to the particular material being gasified at the time. Finally, since gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate (on which the process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would be suitable for the complete development of solid conversion in each case, without appreciably changing the fluid-dynamic behaviour of the unit or the air/biomass ratio. The second part of this work deals with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially when multi-fuel gasifiers are assumed to be used, heavier-duty gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can be simultaneously present in the exit gas stream and, as a consequence, suitable gas cleaning systems have to be designed. In this work, an overall study on the assessment of gas cleaning lines is carried out.
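The multi-fuel sizing rule described above (largest diameter, plus the per-zone maximum heights over all fuels) can be expressed compactly. All numbers and field names below are illustrative assumptions, not the thesis's design data:

```python
import math

def zone_height(residence_time_s, solids_flow_m3_s, diameter_m):
    """Zone volume = residence time x volumetric solids flow;
    height follows for a given reactor diameter."""
    volume = residence_time_s * solids_flow_m3_s
    area = math.pi * diameter_m ** 2 / 4.0
    return volume / area

def multi_fuel_design(designs):
    """Combine per-fuel designs into one unit: adopt the largest
    diameter and, zone by zone, the maximum required height."""
    diameter = max(d["diameter_m"] for d in designs)
    zones = designs[0]["zone_heights_m"].keys()
    heights = {z: max(d["zone_heights_m"][z] for d in designs) for z in zones}
    return {"diameter_m": diameter,
            "zone_heights_m": heights,
            "total_height_m": sum(heights.values())}
```

The combined unit is conservative by construction: every fuel's conversion steps fit inside it, at the cost of over-sized zones for the less demanding fuels.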
Differently from other research efforts carried out in the same field, the main scope is to define general arrangements for gas cleaning lines suitable for removing several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed according to three different plant sizes, respectively corresponding to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h. Their performances were examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and their own consumption of energy and materials. Subsequently, the designed units were combined into different overall gas cleaning line arrangements, or paths, following some technical constraints which were mainly determined from the same performance analysis of the cleaning units and from the presumable synergic effects of contaminants on the correct working of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues to be settled in the design of the paths was the removal of tars from the gas stream, to prevent filter plugging and/or the clogging of the line pipes. To this end, a catalytic tar cracking unit was envisaged as the only solution to be adopted, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, and a consequently relevant air consumption for this operation, were calculated in all of the cases.
Other difficulties had to be overcome in the abatement of alkali metals, which condense at temperatures lower than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Apart from these two solutions, which seem unavoidable in gas cleaning line design, high-temperature gas cleaning lines could not be achieved for the two larger plant sizes. Indeed, since the use of temperature control devices was precluded in the adopted design procedure, ammonia partial oxidation units (the only methods considered for the abatement of ammonia at high temperature) were not suitable for the large-scale units, because of the high increase in reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements for each considered plant size were finally designed, so that the possibility of cleaning the gas up to the required standard degree was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of some defined operational parameters, among which total pressure drops, total energy losses, number of units and secondary materials consumption. On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all of the cases, especially because of the high water consumption required by water scrubber units in the ammonia absorption process. This result is, however, connected to the possibility of using activated carbon units for ammonia removal and a Nahcolite adsorber for hydrochloric acid. The very high efficiency of this latter material is also remarkable.
Finally, as an estimation of the overall energy loss pertaining to the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter obtained on the basis of the lower heating value of the gas only. This overall study on gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
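The kind of path comparison described above (pressure drops, energy losses, number of units, secondary materials consumption) can be sketched as a weighted aggregation. The unit data, field names and weights below are illustrative assumptions, not values from this work:

```python
def path_metrics(units):
    """Aggregate line-level operational parameters from the
    cleaning units composing one path."""
    return {
        "pressure_drop_Pa": sum(u["dp_Pa"] for u in units),
        "energy_loss_kW": sum(u["e_loss_kW"] for u in units),
        "n_units": len(units),
        "materials_kg_h": sum(u.get("materials_kg_h", 0.0) for u in units),
    }

def rank_paths(paths, weights):
    """Score each path as a weighted sum of its metrics
    (lower = preferable); the weights are a design choice."""
    scored = []
    for name, units in paths.items():
        m = path_metrics(units)
        score = sum(weights[k] * m[k] for k in weights)
        scored.append((score, name, m))
    return sorted(scored)
```

With any weighting that penalizes secondary materials consumption, a water-scrubber path's high water usage pushes it below a dry path, mirroring the conclusion reported above.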
Resumo:
The formation of mid-ocean ridge basalts (MORB) is one of the most important mass fluxes on Earth. Each year, more than 20 km3 of new magmatic crust is formed along the 75,000 km of mid-ocean ridges, about 90 percent of global magma production. Although ocean ridges and MORB are among the most intensively studied geological topics, several controversies remain. Among the most important are the role of geodynamic boundary conditions, such as spreading rate or proximity to hotspots or transform faults, as well as the absolute degree of melting and the depth at which melting begins beneath the ridges. This dissertation addresses these topics on the basis of major- and trace-element compositions of minerals in oceanic mantle rocks. Geochemical characteristics of MORB suggest that the oceanic mantle begins to melt in the stability field of garnet peridotite. Recent experiments, however, show that the heavy rare earth elements (REE) are compatible in clinopyroxene (Cpx). Owing to this garnet-like behaviour of Cpx, garnet is no longer required to explain the MORB data, shifting the onset of melting to lower pressures. It is therefore important to test whether this hypothesis is consistent with data from abyssal peridotites. These mantle fragments exposed on the ocean floor represent the residues of the melting process, and their mineral chemistry carries information about the conditions under which the magmas formed. Major- and trace-element compositions of peridotite samples from the Central Indian Ridge (CIR) were determined by electron microprobe and ion probe and compared with published data. Cpx in the CIR peridotites show low ratios of middle to heavy REE and high absolute concentrations of the heavy REE.
Melting models of a spinel peridotite using conventional incompatible partition coefficients (Kd's) cannot reproduce the measured fractionations of middle to heavy REE. Applying the new Kd's, which predict compatible behaviour of the heavy REE in Cpx, yields better results but still cannot explain the most strongly fractionated samples. Moreover, very high degrees of melting would be required, which is inconsistent with the major-element data. Low (~3-5%) degrees of melting in the garnet peridotite stability field, followed by further melting of spinel peridotite, can however largely explain the observations. Garnet must therefore still be regarded as an important phase in the genesis of MORB (Chapter 1). A further obstacle to a quantitative understanding of melting processes beneath mid-ocean ridges is the lack of correlation between major and trace elements in residual abyssal peridotites. The Cr/(Cr+Al) ratio (Cr#) in spinel is generally regarded as a good qualitative indicator of the degree of melting. The mineral chemistry of the CIR peridotites and published data from other abyssal peridotites show that the heavy REE correlate very well (r2 ~ 0.9) with the Cr# of the coexisting spinels. Evaluating this correlation yields a quantitative melting indicator for residues based on spinel chemistry, so that the degree of melting can be expressed as a function of Cr# in spinel: F = 0.10×ln(Cr#) + 0.24 (Hellebrand et al., Nature, in review; Chapter 2). Applying this indicator to mantle samples for which no ion probe data are available makes it possible to link geochemical and geophysical data.
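The spinel-based melting indicator F = 0.10×ln(Cr#) + 0.24 can be evaluated directly; the Cr# value in the sketch below is illustrative, not a measured sample:

```python
import math

def melt_fraction(cr_number):
    """Degree of melting F from the spinel Cr/(Cr+Al) ratio (Cr#),
    using the quoted correlation F = 0.10 * ln(Cr#) + 0.24
    (Hellebrand et al.). Cr# is a fraction between 0 and 1."""
    return 0.10 * math.log(cr_number) + 0.24

# Illustrative example: a spinel with Cr# = 0.3 corresponds to
# roughly 12% melting under this correlation.
F = melt_fraction(0.3)
```

Because the correlation is logarithmic in Cr#, it is most sensitive at low Cr# and saturates toward high degrees of depletion, and it is defined only over the Cr# range for which the underlying REE correlation was calibrated.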
From a geodynamic perspective, the Gakkel Ridge in the Arctic Ocean is of great importance for understanding melting processes, since it has the lowest spreading rate worldwide and lacks large transform faults. Published basalt data point to an extremely low degree of melting, consistent with global correlations. Strongly altered mantle peridotites from one locality along the sparsely sampled Gakkel Ridge were therefore examined for primary minerals. Only in one sample are oxidized spinel pseudomorphs with traces of primary spinel preserved. Their Cr# is significantly higher than that of some peridotites from faster-spreading ridges, and their degree of melting is thus higher than inferred from the basalt compositions. The degree of melting obtained using the indicator mentioned above permits calculation of the crustal thickness at the Gakkel Ridge. This is considerably greater than the thickness derived from gravity data, or the crustal thickness obtained from the global correlation between spreading rate and seismic data. This unexpected result may be attributable to compositional heterogeneities at low degrees of melting, or to an overall greater depletion of the mantle beneath the Gakkel Ridge (Hellebrand et al., Chem. Geol., in review; Chapter 3). Additional information on the modelling and analytical methods is given in Appendices A-C.
Resumo:
The spatio-temporal variations in diversity and abundance of deep-sea macrofaunal assemblages (excluding meiofaunal taxa such as Nematoda, Copepoda and Ostracoda) from the Blanes Canyon (BC) and the adjacent open slope are described. The Catalan Sea basin is characterized by the presence of numerous submarine canyons, which are globally acknowledged as biodiversity hot-spots due to their disturbance regime and enhanced conveyance of organic matter. The area is subjected to local deep-sea fisheries activities and to recurrent cold-water cascading events from the shelf. The upper canyon (~900 m), middle slope (~1200 m) and lower slope (~1500 m) habitats were investigated during three different months (October 2008, May 2009 and September 2009). A total of 624 specimens belonging to 16 different taxa were found in the 67 analyzed samples collected from the two study areas. Of these, Polychaeta, Mollusca and Crustacea were always the most abundant groups. As expected, the patterns of species diversity and evenness differed in time and space. Both in BC and on the open slope, taxa diversity and abundance were highest at the shallowest depth and lowest at 1500 m depth, probably owing to the different trophic regimes at these depths. The abundance of filter-feeders is higher inside BC than on the adjacent open slope, which is also related to an increment of predator polychaetes. Surface deposit-feeders are more abundant on the open slope than in BC, along with a decrease of filter-feeders and their predators. These differences are probably due to the higher quantities of suspended organic matter reaching the canyon. The multivariate analyses conducted on major taxa point out major differences in effective taxa richness between depths and stations. In September 2009 the analyzed communities doubled their abundances, with a corresponding increase in taxa richness.
This could be related to a mobilizing event, such as the release of food supply accumulated in a nepheloid layer, associated with the arrival of autumn. The highest abundance in BC was detected at the shallowest depth and in late summer (September), probably due to higher food availability caused by stronger flood events coming from the Tordera River; the effects of such events seemed to involve the adjacent open slope too. The nMDS conducted on major taxa abundance shows a slight temporal difference between the samples of the three campaigns, with a clear clustering of the September 2009 samples. All depths and all months were dominated by Polychaeta, which were identified to family level and submitted to further analysis. Family richness shows a clear minimum at the 1200 m depth of BC, highlighting the presence of a general impact affecting the populations of the middle slope. Three different matrices were created, each at a different taxonomic level (All Taxa "AT", Phylum Level "PL" and Polychaeta Families "PF"). Multivariate analyses (MDS, SIMPER) conducted on the PL matrix showed clear spatial differences between stations (BC and open slope) and depths. The MDSs conducted on the other two matrices (AT and PF) showed similar patterns, but different from the PL analysis. A second-stage analysis was conducted to understand the differences between taxonomic levels, and the PL level was chosen as the most representative of the variation. The faunal differences observed were explained by depth, station and season. All work was accomplished at the Centre d'estudis avançats de Blanes (CEAB-CSIC), within the framework of the Spanish PROMETEO project "Estudio Integrado de Cañones y Taludes PROfundos del MEdiTErráneo Occidental: un hábitat esencial", Ref. CTM2007-66316-C02-01/MAR.
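A SIMPER-style decomposition of between-group Bray-Curtis dissimilarity, the kind of routine behind the multivariate comparisons above, can be sketched in a few lines. The abundance vectors below are toy data; the thesis's analyses were run with standard statistical packages:

```python
def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(x + y for x, y in zip(a, b))
    return num / den if den else 0.0

def simper(group1, group2, taxa):
    """Average per-taxon contribution to the between-group
    Bray-Curtis dissimilarity (SIMPER-style): the contributions
    sum to the mean pairwise dissimilarity."""
    contrib = {t: 0.0 for t in taxa}
    n_pairs = 0
    for a in group1:
        for b in group2:
            den = sum(x + y for x, y in zip(a, b))
            for t, x, y in zip(taxa, a, b):
                contrib[t] += abs(x - y) / den
            n_pairs += 1
    return {t: c / n_pairs for t, c in contrib.items()}
```

Running this on the PL-level matrix would identify which phyla drive the separation between canyon and open-slope stations, exactly the question SIMPER is used for above.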
Resumo:
Sigma (σ) receptors are well established as a non-opioid, non-phencyclidine, haloperidol-sensitive receptor family with its own binding profile and a characteristic distribution in the central nervous system (CNS) as well as in endocrine, immune and some peripheral tissues. Two σ receptor subtypes, termed σ1 and σ2, have been pharmacologically characterized but, to date, only σ1 has also been cloned. Activation of σ1 receptors alters several neurotransmitter systems, and dopamine (DA) neurotransmission has often been shown to constitute an important target of σ receptors in different experimental models; however, the exact role of the σ1 receptor in dopaminergic neurotransmission remains unclear. The DA transporter (DAT) modulates the spatial and temporal aspects of dopaminergic synaptic transmission and represents the primary mechanism by which dopaminergic neurons terminate signal transmission. For this reason, the present studies focused on understanding whether, in cell models, the human subtype of the σ1 (hσ1) receptor is able to directly modulate the human DA transporter (hDAT). In the first part of this thesis, HEK-293 and SH-SY5Y cells were permanently transfected with the hσ1 receptor. Subsequently, they were transfected with another plasmid transiently expressing the hDAT. The hDAT activity was estimated using the described [3H]DA uptake assay, and the effects of σ ligands were evaluated by measuring the [3H]DA taken up after treating the cells with known σ agonists and antagonists. The results illustrated in this thesis demonstrate that activation of the overexpressed hσ1 receptors by (+)-pentazocine, the prototypical σ1 agonist, determines a 40% increase in extracellular [3H]DA uptake compared with untreated controls, and that the σ1 antagonists BD-1047 and NE-100 prevent the positive effect of (+)-pentazocine on DA reuptake. DA is likely to be considered a neurotoxic molecule.
In fact, when intracellular DA levels rise abnormally, vesicles cannot sequester the DA, which is then metabolized by MAO (A and B) and COMT, with consequent overproduction of reactive oxygen species and toxic catabolites. The stress induced by these molecules leads to cell death. Thus, for the second part of this thesis, experiments were performed to investigate the functional alterations caused by the (+)-pentazocine-mediated increase of DA uptake; in particular, whether the increase of intracellular [DA] could affect cell viability. The results obtained from this study demonstrate that (+)-pentazocine alone increases DA cell toxicity in a concentration-dependent manner only in cells co-expressing hσ1 and hDAT, and that σ1 antagonists are able to reverse the (+)-pentazocine-induced increase of cell susceptibility to DA toxicity. In the last part of this thesis, the functional cross-talk between the hσ1 receptor and the hDAT was further investigated using confocal microscopy. From the acquired data it can be suggested that, following exposure to (+)-pentazocine, the hσ1 receptors massively translocate towards the plasma membrane and colocalize with the hDATs; however, any physical interaction between the two proteins remains to be proved. In conclusion, the present study shows for the first time that, in cell models, hσ1 receptors directly modulate hDAT activity. The facilitation of DA uptake induced by (+)-pentazocine is reflected in the increased cell susceptibility to DA toxicity, and these effects are prevented by selective σ1 antagonists. Since numerous compounds, including several drugs of abuse, bind to σ1 receptors, and activating them could facilitate the damage of dopaminergic neurons, the protective effect shown by σ1 antagonists would represent the pharmacological basis for testing these compounds in experimental models of dopaminergic neurodegenerative diseases (e.g. Parkinson's Disease).
Resumo:
A main objective of human movement analysis is the quantitative description of joint kinematics and kinetics. This information has great potential to address clinical problems both in orthopaedics and in motor rehabilitation. Previous studies have shown that the assessment of kinematics and kinetics from stereophotogrammetric data requires a setup phase, special equipment and expertise to operate. Besides, this procedure may cause a feeling of uneasiness in the subjects and may hinder their walking. The general aim of this thesis is the implementation and evaluation of new 2D markerless techniques, in order to contribute to the development of an alternative to traditional stereophotogrammetric techniques. At first, the focus of the study was the estimation of the ankle-foot complex kinematics during the stance phase of gait. Two particular cases were considered: subjects barefoot and subjects wearing ankle socks. The use of socks was investigated in view of the development of the hybrid method proposed in this work. Different algorithms were analyzed, evaluated and implemented in order to have a 2D markerless solution to estimate the kinematics in both cases. The validation of the proposed technique was done against a traditional stereophotogrammetric system. The implementation of the technique leads towards an easy-to-configure (and more comfortable for the subject) alternative to the traditional stereophotogrammetric system. Then, the abovementioned technique was improved so that the measurement of knee flexion/extension could also be done with a 2D markerless technique. The main changes in the implementation concerned occlusion handling and background segmentation. With the additional constraints, the proposed technique was applied to the estimation of knee flexion/extension and compared with a traditional stereophotogrammetric system.
Results showed that the knee flexion/extension estimates from the traditional stereophotogrammetric system and from the proposed markerless system were highly comparable, making the latter a potential alternative for clinical use. A contribution was also given to the estimation of the lower limb kinematics of children with cerebral palsy (CP). For this purpose, a hybrid technique was proposed, which uses high-cut underwear and ankle socks as "segmental markers" in combination with a markerless methodology. The proposed hybrid technique differs from the abovementioned markerless technique in the choice of algorithm. Results showed that the proposed hybrid technique can become a simple and low-cost alternative to traditional stereophotogrammetric systems.
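A minimal sketch of the background segmentation step underlying such markerless pipelines: a per-pixel median over a stack of frames approximates the static background, and thresholding the difference yields a foreground (subject) mask. The thesis's actual algorithms, occlusion handling included, are more elaborate than this; the grid sizes and threshold below are illustrative.

```python
def median_background(frames):
    """Per-pixel median over a stack of grayscale frames
    (lists of lists of intensities) approximates the static
    background, since the moving subject occupies any given
    pixel in only a minority of frames."""
    h, w = len(frames[0]), len(frames[0][0])
    bg = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = sorted(f[i][j] for f in frames)
            bg[i][j] = vals[len(vals) // 2]
    return bg

def segment(frame, background, threshold=25):
    """Binary foreground mask: 1 where the frame differs from
    the background model by more than the threshold."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(row, brow)]
            for row, brow in zip(frame, background)]
```

In practice the binary mask would then be cleaned up (morphological filtering) before fitting segment models to the silhouette for joint-angle estimation.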