881 results for Large-scale Analysis
Abstract:
In west-central Texas, USA, abatement efforts for the gray fox (Urocyon cinereoargenteus) rabies epizootic illustrate the difficulties inherent in large-scale management of wildlife disease. The rabies epizootic has been managed through a cooperative oral rabies vaccination (ORV) program since 1996. Millions of edible baits containing a rabies vaccine have been distributed annually in a 16-km to 24-km zone around the perimeter of the epizootic, which encompasses a geographic area >4 × 10^5 km^2. The ORV program successfully halted expansion of the epizootic into metropolitan areas but has not achieved the ultimate goal of eradication. Rabies activity in gray foxes continues to occur periodically outside the ORV zone, preventing ORV zone contraction and dissipation of the epizootic. We employed a landscape-genetic approach to assess gray fox population structure and dispersal in the affected area, with the aim of assisting rabies management efforts. No unique genetic clusters or population boundaries were detected. Instead, foxes were weakly structured over the entire region in an isolation-by-distance pattern. Local subpopulations appeared to be genetically non-independent over distances >30 km, implying that long-distance movements or dispersal may have been common in the region. We concluded that gray foxes in west-central Texas have a high potential for long-distance rabies virus trafficking. Thus, a 16-km to 24-km ORV zone may be too narrow to contain the fox rabies epizootic. Continued expansion of the ORV zone, although costly, may be critical to the long-term goal of eliminating the Texas fox rabies virus variant from the United States.
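The isolation-by-distance pattern reported above is, at its core, a correlation between pairwise geographic and genetic distances. A minimal sketch of that statistic (the basis of a Mantel test, shown here without the permutation step used to assess significance; all site pairs and values are invented for illustration):

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length lists of pairwise distances."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Pairwise distances between hypothetical fox sampling sites (same pair order
# in both lists). Under isolation by distance the correlation is strongly positive.
geo_km = [12.0, 35.0, 48.0, 22.0, 60.0, 41.0]
genetic = [0.02, 0.06, 0.09, 0.04, 0.11, 0.08]
r = pearson(geo_km, genetic)
```

A full Mantel test would additionally permute one distance matrix many times to build a null distribution for r.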
Abstract:
We present an analysis of observations made with the Arcminute Microkelvin Imager (AMI) and the Canada-France-Hawaii Telescope (CFHT) of six galaxy clusters in a redshift range of 0.16-0.41. The cluster gas is modelled using the Sunyaev-Zel'dovich (SZ) data provided by AMI, while the total mass is modelled using the lensing data from the CFHT. In this paper, we (i) find very good agreement between SZ measurements (assuming large-scale virialization and a gas-fraction prior) and lensing measurements of the total cluster masses out to r200; (ii) perform the first multiple-component weak-lensing analysis of A115; (iii) confirm the unusual separation between the gas and mass components in A1914; and (iv) jointly analyse the SZ and lensing data for the relaxed cluster A611, confirming our use of a simulation-derived mass-temperature relation for parametrizing measurements of the SZ effect.
Abstract:
The purpose of this study is to present a position-based tetrahedral finite element method of any order to accurately predict the mechanical behavior of solids composed of functionally graded elastic materials and subjected to large displacements. The application of high-order elements makes it possible to overcome the volumetric and shear locking that appears in usual homogeneous isotropic situations, or even in non-homogeneous cases developing small or large displacements. The use of parallel processing to improve computational efficiency allows employing high-order elements instead of low-order ones with reduced integration techniques or strain enhancements. The Green-Lagrange strain is adopted and the constitutive relation is the functionally graded Saint Venant-Kirchhoff law. Equilibrium is achieved through the principle of minimum total potential energy. Examples of large displacement problems are presented, and the results confirm the locking-free behavior of high-order elements for non-homogeneous materials.
Abstract:
The use of microalgae and cyanobacteria for the production of biofuels and other raw materials is considered a very promising sustainable technology due to high areal productivity, potential for CO2 fixation and use of non-arable land. The production of oil by microalgae in a large-scale plant was studied using emergy analysis. For the base scenario, the joint transformity was 1.32E+5 sej/J, the oil transformity was 3.51E+5 sej/J, the emergy yield ratio (EYR) was 1.09, the environmental loading ratio (ELR) was 11.10, and the emergy sustainability index (ESI) was 0.10, highlighting some of the key challenges for the technology, such as high energy consumption during harvesting, raw material consumption, and high capital and operating costs. Alternative scenarios and the sensitivity to process improvements were also assessed, helping prioritize further research based on sustainability impact.
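The reported indices follow the standard emergy accounting definitions (EYR, ELR, and ESI = EYR/ELR). A minimal sketch assuming those standard definitions; only the EYR and ELR values are taken from the abstract:

```python
def emergy_yield_ratio(total_emergy, purchased_emergy):
    """EYR = total emergy driving the process / emergy purchased from the economy."""
    return total_emergy / purchased_emergy

def environmental_loading_ratio(nonrenewable, purchased, renewable):
    """ELR = (non-renewable + purchased emergy) / renewable emergy."""
    return (nonrenewable + purchased) / renewable

def emergy_sustainability_index(eyr, elr):
    """ESI = EYR / ELR: emergy yield per unit of environmental loading."""
    return eyr / elr

# Base-scenario values reported in the abstract:
eyr, elr = 1.09, 11.10
esi = emergy_sustainability_index(eyr, elr)  # ~0.098, i.e. the reported 0.10
```

A low ESI like this one is what flags the process as resource-intensive relative to its yield.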
Abstract:
Background: Blastocladiella emersonii is an aquatic fungus of the class Chytridiomycetes, which lies at the base of the fungal phylogenetic tree. Some ancestral characteristics of fungi and animals, or of fungi and plants, could therefore have been retained in this aquatic fungus and lost in members of late-diverging fungal species. To identify B. emersonii sequences associated with these ancestral characteristics, two approaches were followed: (1) a large-scale comparative analysis between putative unigene sequences (uniseqs) from B. emersonii and three databases constructed ad hoc with fungal proteins, animal proteins and plant unigenes deposited in GenBank, and (2) a pairwise comparison between B. emersonii full-length cDNA sequences and their putative orthologues in the ascomycete Neurospora crassa and the basidiomycete Ustilago maydis. Results: Comparative analyses of B. emersonii uniseqs with the fungal, animal and plant databases through the two approaches mentioned above produced 166 B. emersonii sequences, which were identified as putatively absent from other fungi or not previously described. Through these approaches we found: (1) possible orthologues of genes previously identified as specific to animals and/or plants, and (2) genes conserved in fungi but with a large difference in divergence rate in B. emersonii. Among these sequences, we observed cDNAs encoding enzymes of the coenzyme B12-dependent propionyl-CoA pathway, a metabolic route not previously described in fungi, and validated their expression in Northern blots. Conclusion: Using two different approaches involving comparative sequence analyses, we identified sequences from the early-diverging fungus B. emersonii previously considered specific to animals or plants, as well as sequences from this fungus that are highly divergent relative to other fungi.
Abstract:
Background: Intronic and intergenic long noncoding RNAs (lncRNAs) are emerging gene expression regulators. The molecular pathogenesis of renal cell carcinoma (RCC) is still poorly understood, and in particular, limited studies are available for intronic lncRNAs expressed in RCC. Methods: Microarray experiments were performed with custom-designed arrays enriched with probes for lncRNAs mapping to intronic genomic regions. Samples from 18 primary RCC tumors and 11 nontumor adjacent matched tissues were analyzed. Meta-analyses were performed with microarray expression data from three additional human tissues (normal liver, prostate tumor and kidney nontumor samples), and with large-scale public data for epigenetic regulatory marks and for evolutionarily conserved sequences. Results: A signature of 29 intronic lncRNAs differentially expressed between RCC and nontumor samples was obtained (false discovery rate (FDR) <5%). A signature of 26 intronic lncRNAs significantly correlated with five-year patient survival outcome in RCC was identified (FDR <5%, p-value ≤0.01). We identified 4303 intronic antisense lncRNAs expressed in RCC, of which 22% were significantly (p <0.05) cis correlated with the expression of the mRNA in the same locus across RCC and three other human tissues. Gene Ontology (GO) analysis of those loci pointed to 'regulation of biological processes' as the main enriched category. A module map analysis of the protein-coding genes significantly (p <0.05) trans correlated with the 20% most abundant lncRNAs identified 51 enriched GO terms (p <0.05). We determined that 60% of the expressed lncRNAs are evolutionarily conserved. At the genomic loci containing the intronic RCC-expressed lncRNAs, a strong association (p <0.001) was found between their transcription start sites and genomic marks such as CpG islands, RNA Pol II binding, and histone methylation and acetylation. Conclusion: Intronic antisense lncRNAs are widely expressed in RCC tumors.
Some of them are significantly altered in RCC in comparison with nontumor samples. The majority of these lncRNAs are evolutionarily conserved and possibly modulated by epigenetic modifications. Our data suggest that these RCC lncRNAs may contribute to the complex network of regulatory RNAs playing a role in renal cell malignant transformation.
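Both signatures above are thresholded at FDR <5%. As an illustration of what such a cutoff typically involves, here is a minimal sketch of the Benjamini-Hochberg procedure (the abstract does not state which FDR-controlling method was used, so this is an assumption):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean list: True where the null hypothesis is rejected
    while controlling the false discovery rate at level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha.
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    # Reject the hypotheses with the k smallest p-values.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            reject[i] = True
    return reject
```

For example, with p-values [0.01, 0.02, 0.03, 0.5] at alpha = 0.05, the first three pass their step-up thresholds (0.0125, 0.025, 0.0375) and are rejected.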
Abstract:
We investigated the seasonal patterns of Amazonian forest photosynthetic activity, and the effects thereon of variations in climate and land use, by integrating data from a network of ground-based eddy flux towers in Brazil established as part of the 'Large-Scale Biosphere-Atmosphere Experiment in Amazonia' project. We found that the degree of water limitation, as indicated by the seasonality of the ratio of sensible to latent heat flux (Bowen ratio), predicts seasonal patterns of photosynthesis. In equatorial Amazonian forests (5° N–5° S), water limitation is absent, and photosynthetic fluxes (or gross ecosystem productivity, GEP) exhibit high or increasing levels of photosynthetic activity as the dry season progresses, likely a consequence of allocation to growth of new leaves. In contrast, forests along the southern flank of the Amazon, pastures converted from forest, and mixed forest-grass savanna exhibit dry-season declines in GEP, consistent with increasing degrees of water limitation. Although previous work showed that tropical ecosystem evapotranspiration (ET) is driven by incoming radiation, the GEP observations reported here surprisingly show no or negative relationships with photosynthetically active radiation (PAR). Instead, GEP fluxes largely followed the phenology of canopy photosynthetic capacity (Pc), with deviations from this primary pattern driven by variations in PAR. Estimates of leaf flush at three
Abstract:
Type Ia supernovae have been successfully used as standardized candles to study the expansion history of the Universe. In the past few years, these studies led to the exciting result of an accelerated expansion caused by the repelling action of some sort of dark energy. This result has been confirmed by measurements of the cosmic microwave background radiation, the large-scale structure, and the dynamics of galaxy clusters. The combination of all these experiments points to a “concordance model” of the Universe with flat large-scale geometry and a dominant component of dark energy. However, there are several points related to supernova measurements which need careful analysis in order to establish the validity of the concordance model beyond doubt. As the amount and quality of data increase, the need to control possible systematic effects which may bias the results becomes crucial. Also important is improving our knowledge of the physics of supernova events to assure, and possibly refine, their calibration as standardized candles. This thesis addresses some of those issues through the quantitative analysis of supernova spectra. Emphasis is placed on a careful treatment of the data and on the definition of spectral measurement methods. The comparison of measurements for a large set of spectra from nearby supernovae is used to study their homogeneity and to search for spectral parameters which may further refine the calibration of the standardized candle. One such parameter is found to reduce the dispersion in the distance estimation for a sample of supernovae to below 6%, a precision comparable with the current lightcurve-based calibration, obtained in an independent manner. Finally, the comparison of spectral measurements from nearby and distant objects is used to test for possible evolution with cosmic time of the intrinsic brightness of type Ia supernovae.
Abstract:
The vast majority of known proteins have not yet been experimentally characterized, and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful for protein fold recognition and de novo design. The prediction of these contacts requires the study of the protein inter-residue distances, related to the specific type of amino acid pair, that are encoded in the so-called contact map. An interesting new way of analyzing these structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps.
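The two network statistics named above can be computed directly from a contact map. A self-contained sketch, assuming an 8 Å contact cutoff (a common convention, not stated in the text) and toy C-alpha coordinates:

```python
from collections import deque
import math

def contact_map(coords, cutoff=8.0):
    """Adjacency sets for residues whose C-alpha atoms lie within `cutoff` Angstroms."""
    n = len(coords)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(coords[i], coords[j]) <= cutoff:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def characteristic_path_length(adj):
    """Mean shortest-path length over all connected pairs (BFS from each node)."""
    total = pairs = 0
    for s in range(len(adj)):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs if pairs else 0.0

def clustering_coefficient(adj):
    """Average local clustering: fraction of each node's neighbour pairs that are linked."""
    vals = []
    for u, nbrs in enumerate(adj):
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for v in nbrs for w in nbrs if v < w and w in adj[v])
        vals.append(2 * links / (k * (k - 1)))
    return sum(vals) / len(vals) if vals else 0.0

# Toy example: three residues at typical C-alpha spacing, all mutually in contact.
adj = contact_map([(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (0.0, 3.8, 0.0)])
```

A small-world contact network shows a short characteristic path length combined with a high clustering coefficient relative to a random graph of the same size and density.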
Provided that residue contacts are known for a protein sequence, the major features of its 3D structure can be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as the leucine zippers that drive the dimerization of many transcription factors, and more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still await interpretation. Nevertheless, these data are the basis for the design of new strategies for tackling problems such as the prediction of protein structure and function. Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of annotated proteins in the Homo sapiens genome have been experimentally characterized.
A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists of assigning sequences to a specific group of functionally related sequences which have been grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of multi-domain protein annotation and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of structure templates over the length of protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in the present databases of molecular functions and structures.
Abstract:
As distributed collaborative applications and architectures adopt policy-based management for tasks such as access control, network security and data privacy, the management and consolidation of a large number of policies is becoming a crucial component of such policy-based systems. In large-scale distributed collaborative applications like web services, there is a need to analyze policy interactions and to integrate policies. In this thesis, we propose and implement EXAM-S, a comprehensive environment for policy analysis and management, which can be used to perform a variety of functions such as policy property analysis, policy similarity analysis and policy integration. As part of this environment, we have proposed and implemented new techniques for the analysis of policies that build on a deep study of state-of-the-art techniques. Moreover, we propose an approach for solving the heterogeneity problems that usually arise when analyzing policies belonging to different domains. Our work focuses on the analysis of access control policies written in the dialect of XACML (Extensible Access Control Markup Language). We consider XACML policies because XACML is a rich language which can represent many policies of interest to real-world applications and is gaining widespread adoption in industry.
Abstract:
In territories where food production is scattered across many small- or medium-sized, or even domestic, farms, large amounts of heterogeneous residues are produced every year, since farmers usually carry out several different activities on their properties. The amount and composition of farm residues therefore vary widely during the year, according to the particular production process under way at any given time. Coupling high-efficiency micro-cogeneration energy units with easily handled biomass conversion equipment, suitable for treating different materials, would provide many important advantages to farmers and to the community as well, so that increasing the feedstock flexibility of gasification units is nowadays seen as a further paramount step towards their wide diffusion in rural areas and as a real necessity for their utilization at small scale. Two main research topics were considered of primary concern for this purpose, and they are therefore discussed in this work: the investigation of the impact of fuel properties on gasification process development, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. According to these two main aspects, the present work was divided in two main parts. The first is focused on the biomass gasification process, which was investigated in its theoretical aspects and then analytically modelled in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile, above all), in order to point out the main differences which prevent the use of the same conversion unit for different materials.
To this end, a kinetic-free gasification model was initially developed in Excel sheets, considering different values of the air-to-biomass ratio and taking downdraft gasification technology as the particular application examined. An attempt was made to connect the differences in syngas production and working conditions (process temperatures, above all) among the considered fuels to some biomass properties, such as elemental composition and ash and water contents. The novelty of this analytical approach was the use of kinetic constant ratios to determine the oxygen distribution among the different oxidation reactions (regarding volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone, through which the energy and mass balances involved in the process algorithm were also linked together. Moreover, the main advantage of this analytical tool is the ease with which the input data corresponding to particular biomass materials can be inserted into the model, so that a rapid evaluation of their thermo-chemical conversion properties can be obtained, based mainly on their chemical composition. Good conformity of the model results with literature and experimental data was found for almost all the considered materials (except for refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on an analysis of the fundamental thermo-physical and thermo-chemical mechanisms which are supposed to regulate the main solid conversion steps involved in the gasification process.
Gasification units were schematically subdivided into four reaction zones, corresponding respectively to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated to the kinetic rates (for the pyrolysis and char gasification processes only) and to the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, and temperature is therefore the main working parameter controlling this step. Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is almost entirely achieved by radiative heat transfer from the hot walls of the reactor to the bed of material. For pyrolysis, instead, working temperature, particle size and the very nature of the biomass (through its own pyrolysis heat) all have comparable weights on the process development, so that the corresponding time may depend on any one of these factors, according to the particular fuel being gasified and the particular conditions established inside the gasifier. The same analysis also led to the estimation of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units could finally be made. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit seems suitable for more than one biomass species.
Nevertheless, since the reactor diameters were found to be quite similar for all the examined materials, a single unit could be envisaged for all of them by adopting the largest diameter and by combining the maximum heights of each reaction zone, as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air-injection nozzles at different levels along the reactor, the gasification zone could be properly set up according to the particular material being gasified at any given time. Finally, since gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate (on which process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would be suitable for the complete development of solid conversion in each case, without noticeably changing the fluid-dynamic behaviour of the unit or the air/biomass ratio. The second part of this work dealt with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially where multi-fuel gasifiers are assumed to be used, more substantial gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can simultaneously be present in the exit gas stream and, as a consequence, suitable gas cleaning systems have to be designed. In this work, an overall study on the assessment of gas cleaning lines is carried out.
Unlike other research efforts carried out in the same field, the main scope here is to define general arrangements for gas cleaning lines suitable for removing several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three different plant sizes, corresponding respectively to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h. Their performances were examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and their consumption of energy and materials. Subsequently, the designed units were combined into different overall gas-cleaning-line arrangements (paths), following technical constraints determined mainly from the same performance analysis of the cleaning units and from the likely synergistic effects of contaminants on the proper working of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues to be addressed in the design of the paths was the removal of tars from the gas stream, to prevent filter plugging and/or the clogging of line pipes. To this end, a catalytic tar-cracking unit was envisaged as the only solution to be adopted, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar-cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, with a consequent relevant air consumption for this operation, was calculated in all cases.
Other difficulties had to be overcome in the abatement of alkali metals, which condense at lower temperatures than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Beyond these two solutions, which seem unavoidable in gas cleaning line design, high-temperature gas cleaning lines also proved not to be feasible for the two larger plant sizes. Indeed, since the use of temperature control devices was precluded in the adopted design procedure, ammonia partial oxidation units (the only methods considered for the abatement of ammonia at high temperature) were not suitable for the large-scale units, because of the large increase in reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements for each considered plant size were finally designed, so that the possibility of cleaning the gas up to the required standard was technically demonstrated, even where several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of some defined operational parameters, among which total pressure drop, total energy losses, number of units and secondary material consumption. On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, connected to the possibility of using activated carbon units for ammonia removal and a nahcolite adsorber for hydrochloric acid. The very high efficiency of this latter material is also remarkable.
Finally, as an estimate of the overall energy loss pertaining to the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter obtained on the basis of the lower heating value of the gas only. This overall study of gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
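The path comparison described above can be sketched as a simple aggregation of per-unit operating figures. All unit names and numbers below are invented for illustration; only the idea of ranking candidate cleaning lines by total pressure drop and energy demand comes from the text:

```python
# Hypothetical per-unit figures: pressure drop (mbar) and energy demand (kW).
units = {
    "ceramic_filter":   {"dp": 25.0, "kw": 0.4},
    "tar_cracker":      {"dp": 10.0, "kw": 2.5},
    "activated_carbon": {"dp": 15.0, "kw": 0.2},
    "water_scrubber":   {"dp": 30.0, "kw": 1.8},
}

def path_cost(path):
    """Total pressure drop and energy demand of a cleaning-line path."""
    dp = sum(units[u]["dp"] for u in path)
    kw = sum(units[u]["kw"] for u in path)
    return dp, kw

# Two candidate arrangements: a dry line vs. one ending in a water scrubber.
dry = ["ceramic_filter", "tar_cracker", "activated_carbon"]
wet = ["ceramic_filter", "tar_cracker", "water_scrubber"]
```

In the thesis the comparison also accounts for water and secondary-material consumption and the number of units; those columns would be added to the same table.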
Abstract:
Wireless sensor networks can transform our buildings into smart environments, improving comfort, energy efficiency and safety. Today, however, wireless sensor networks are not considered reliable enough to be deployed on a large scale. In this thesis, we study the main failure causes of wireless sensor networks and the existing solutions to improve reliability, and we investigate the possibility of implementing self-diagnosis through power consumption measurements on the sensor nodes. In particular, we focus on faults that generate in-range errors: these are wrong readings that nevertheless fall within the valid range of the sensor and can therefore be missed by external observers. Using a wireless sensor network deployed in the R&D building of NXP at the High Tech Campus in Eindhoven, we performed a power consumption characterization of the Wireless Autonomous Sensor (WAS) and studied through experiments the effect that faults have on the power consumption of the sensor.
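The self-diagnosis idea above, catching in-range errors that the reading alone cannot reveal, can be sketched as a comparison of measured supply current against the node's characterized power profile. The ranges and thresholds here are invented for illustration, not taken from the WAS characterization:

```python
def in_range(reading, lo=-20.0, hi=60.0):
    """A plausible sensor reading: indistinguishable from a correct one by value alone."""
    return lo <= reading <= hi

def power_anomalous(measured_ma, expected_ma, tolerance=0.2):
    """Flag a node whose current draw deviates more than 20% from its baseline."""
    return abs(measured_ma - expected_ma) > tolerance * expected_ma

def suspect_fault(reading, measured_ma, expected_ma):
    # An in-range reading combined with anomalous power draw is the case an
    # external observer would otherwise miss; flag it for diagnosis.
    return in_range(reading) and power_anomalous(measured_ma, expected_ma)
```

An out-of-range reading is already detectable by conventional means; the power-consumption channel matters precisely for readings that pass the range check.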
Abstract:
The purpose of this thesis is to investigate the strength and structure of the magnetized medium surrounding radio galaxies via observations of the Faraday effect. This study is based on an analysis of the polarization properties of radio galaxies selected to have a range of morphologies (elongated tails, or lobes with small axial ratios) and to be located in a variety of environments (from rich cluster cores to small groups). The targets include famous objects like M84 and M87. A key aspect of this work is the combination of accurate radio imaging with high-quality X-ray data for the gas surrounding the sources. Although the focus of this thesis is primarily observational, I developed analytical models and performed two- and three-dimensional numerical simulations of magnetic fields. The steps of the thesis are: (a) to analyze new and archival observations of Faraday rotation measure (RM) across radio galaxies, and (b) to interpret these and existing RM images using sophisticated two- and three-dimensional Monte Carlo simulations. The approach has been to select a few bright, very extended and highly polarized radio galaxies. This is essential to obtain high signal-to-noise in polarization over large enough areas to allow the computation of spatial statistics such as the structure function (and hence the power spectrum) of the rotation measure, which requires a large number of independent measurements. New and archival Very Large Array observations of the target sources have been analyzed in combination with high-quality X-ray data from the Chandra, XMM-Newton and ROSAT satellites. The work has been carried out by making use of: 1) Analytical predictions of the RM structure functions, to quantify the RM statistics and to constrain the power spectra of the RM and the magnetic field. 2) Two-dimensional Monte Carlo simulations, to address the effect of incomplete sampling of the RM distribution and thereby determine errors for the power spectra.
3) Methods that combine measurements of RM and depolarization in order to constrain the magnetic-field power spectrum on small scales. 4) Three-dimensional models of the group/cluster environments, including different magnetic-field power spectra and gas density distributions. This thesis has shown that the magnetized medium surrounding radio galaxies is more complicated than earlier work suggested. Three distinct types of magnetic-field structure are identified: an isotropic component with large-scale fluctuations, plausibly associated with intergalactic medium unaffected by the presence of a radio source; a well-ordered field draped around the front ends of the radio lobes; and a field with small-scale fluctuations in rims of compressed gas surrounding the inner lobes, perhaps associated with a mixing layer.
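The second-order structure function used above averages squared RM differences over all pairs of measurements at a given separation. A minimal sketch of such an estimator for scattered RM measurements (the function name, inputs, and binning scheme are illustrative, not taken from the thesis):

```python
import numpy as np

def rm_structure_function(x, y, rm, bins):
    """Second-order structure function D(r) = <[RM(x1) - RM(x2)]^2>,
    estimated by bin-averaging squared RM differences over pairs of
    measurement positions with separation r falling in each bin."""
    # pairwise separations and squared RM differences
    dx = x[:, None] - x[None, :]
    dy = y[:, None] - y[None, :]
    r = np.hypot(dx, dy)
    d2 = (rm[:, None] - rm[None, :]) ** 2
    # keep each pair once (upper triangle, excluding the diagonal)
    iu = np.triu_indices(len(rm), k=1)
    r, d2 = r[iu], d2[iu]
    # average the squared differences within each separation bin
    idx = np.digitize(r, bins)
    centers = 0.5 * (bins[:-1] + bins[1:])
    sf = np.array([d2[idx == i].mean() if np.any(idx == i) else np.nan
                   for i in range(1, len(bins))])
    return centers, sf
```

For a smooth RM screen the structure function grows with separation up to the outer scale of the fluctuations; its slope constrains the power spectrum of the magnetic field.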
Abstract:
In this study, structural and finite-strain data are used to explore the tectonic evolution and exhumation history of the Chilean accretionary wedge. The Chilean accretionary wedge is part of a Late Paleozoic subduction complex that developed during subduction of the Pacific plate beneath South America. The wedge is commonly subdivided into a structurally lower Western Series and an upper Eastern Series. This study traces the progressive development of structures and finite strain from the least deformed rocks in the eastern part of the Eastern Series to the higher-grade schists of the Western Series at the Pacific coast. Furthermore, it reports finite-strain data to quantify the contribution of vertical ductile shortening to exhumation; vertical ductile shortening is, together with erosion and normal faulting, a process that can aid the exhumation of high-pressure rocks. In the east, structures are characterized by upright chevron folds of the sedimentary layering, associated with a penetrative axial-plane foliation, S1. As the F1 folds became slightly overturned to the west, S1 was folded about recumbent open F2 folds and an S2 axial-plane foliation developed. Near the contact between the Western and Eastern Series, S2 forms a prominent subhorizontal transposition foliation. Towards the structurally deepest units in the west, the transposition foliation becomes progressively flat-lying. Finite-strain data obtained by Rf/Phi and PDS analysis in metagreywacke and by X-ray texture goniometry in phyllosilicate-rich rocks show a smooth, gradual increase in strain magnitude from east to west. There is no evidence for normal faulting or significant structural breaks across the contact between the Eastern and Western Series. The progressive structural and strain evolution between the two series can be interpreted to reflect a continuous change in the mode of accretion within the subduction wedge.
Before ~320-290 Ma, the rocks of the Eastern Series were frontally accreted to the Andean margin. Frontal accretion caused horizontal shortening, producing upright folds and axial-plane foliations. At ~320-290 Ma the mode of accretion changed and the rocks of the Western Series were underplated below the Andean margin. This basal accretion caused a major change in the flow field within the wedge and gave rise to vertical shortening and the development of the penetrative subhorizontal transposition foliation. To estimate how much vertical ductile shortening contributed to the exhumation of both units, finite strain was measured. The tensor average of absolute finite strain yields Sx=1.24, Sy=0.82, and Sz=0.57, implying an average vertical shortening of ca. 43%, which was compensated by volume loss. The finite-strain data from the PDS measurements allow an average volume loss of 41% to be calculated. A mass balance suggests that most of the dissolved material remained within the wedge and was precipitated in quartz veins. The average of relative finite strain is Sx=1.65, Sy=0.89, and Sz=0.59, indicating greater vertical shortening in the structurally deeper units. A simple model that integrates velocity gradients along a vertical flow path through a steady-state wedge is used to estimate the contribution of deformation to ductile thinning of the overburden during exhumation. The results show that vertical ductile shortening contributed 15-20% to exhumation; as no large-scale normal faults have been mapped, the remaining 80-85% of exhumation must be due to erosion.
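The strain figures above follow from the principal stretches directly: vertical shortening is 1 - Sz, and the volume factor is the product Sx·Sy·Sz (1 for constant volume, < 1 for volume loss). A quick numerical check, assuming the tensor-averaged stretches as quoted (the product gives ~42%, consistent with the 41% volume loss derived independently from the PDS data):

```python
# Tensor average of absolute finite strain, as quoted in the text
sx, sy, sz = 1.24, 0.82, 0.57

vertical_shortening = 1.0 - sz    # fraction of vertical thinning
volume_factor = sx * sy * sz      # < 1 indicates volume loss

print(f"vertical shortening: {vertical_shortening:.0%}")  # ca. 43%
print(f"volume loss:         {1.0 - volume_factor:.0%}")  # ca. 42%
```

If deformation were isochoric (volume factor 1), the horizontal stretches would have to compensate the vertical shortening; here they do not, which is why the shortening must be balanced by dissolution and volume loss.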
Abstract:
The purpose of this research is to provide empirical evidence on the determinants of the economic use of patented inventions, thereby contributing to the literature on technology and innovation management. The work consists of three main parts, each of which constitutes a self-contained research paper. The first paper uses a meta-analytic approach to review and synthesize the existing body of empirical research on the determinants of technology licensing. The second paper investigates the factors affecting the choice among three alternative economic uses of patented inventions: pure internal use, pure licensing, and mixed use. Finally, the third paper explores the least studied option, namely the sale of patent rights. The data used to test the hypotheses empirically come from a large-scale survey of European Patent inventors resident in 21 European countries, Japan, and the US. The findings contribute to a better understanding of the economic use of patented inventions by extending previous research along several dimensions.