918 results for Optimal Sampling Time
Abstract:
This collection contains measurements of physical soil properties on the plots of the different sub-experiments at the field site of a large grassland biodiversity experiment (the Jena Experiment; see further details below). In the main experiment, 82 grassland plots of 20 x 20 m were established from a pool of 60 species belonging to four functional groups (grasses, legumes, tall herbs and small herbs). In May 2002, varying numbers of plant species from this species pool were sown into the plots to create a gradient of plant species richness (1, 2, 4, 8, 16 and 60 species) and functional richness (1, 2, 3 and 4 functional groups). Plots were maintained by bi-annual weeding and mowing.
Abstract:
Traditionally, many small-sized copepod species are considered to be widespread, bipolar or cosmopolitan. However, these large-scale distribution patterns need to be re-examined in view of increasing evidence of cryptic and pseudo-cryptic speciation in pelagic copepods. Here, we present a phylogeographic study of Oithona similis s.l. populations from the Arctic Ocean, the Southern Ocean and its northern boundaries, the North Atlantic and the Mediterranean Sea. O. similis s.l. is considered one of the most abundant species in temperate to polar oceans and acts as an important link in the trophic network between the microbial loop and higher trophic levels such as fish larvae. Two gene fragments were analysed: the mitochondrial cytochrome c oxidase subunit I (COI) and the nuclear ribosomal 28S genetic marker. Seven distinct, geographically delimited mitochondrial lineages could be identified, with divergences among the lineages ranging from 8 to 24%, most likely representing cryptic or pseudo-cryptic species within O. similis s.l. Four lineages were identified within or close to the borders of the Southern Ocean, one lineage in the Arctic Ocean, and two lineages in the temperate Northern hemisphere. Surprisingly, the Arctic lineage was more closely related to lineages from the Southern hemisphere than to the other lineages from the Northern hemisphere, suggesting that geographic proximity is a rather poor predictor of how closely related the clades are at the genetic level. Application of a molecular clock revealed that the evolutionary history of O. similis s.l. is possibly closely associated with the reorganization of ocean circulation in the mid-Miocene and may be an example of allopatric speciation in the pelagic zone.
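The 8 to 24% divergences reported above come from pairwise comparisons of aligned COI sequences. As a minimal sketch of that single step, the following computes uncorrected p-distances between hypothetical aligned haplotype fragments (placeholder sequences, not data from this study; the actual analysis would also involve model-corrected distances and phylogenetic reconstruction):

```python
# Minimal sketch: uncorrected pairwise p-distance between aligned COI haplotypes.
# Sequences are hypothetical placeholders, not data from the study.

def p_distance(seq_a: str, seq_b: str) -> float:
    """Fraction of differing sites between two equal-length aligned sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    diffs = sum(1 for a, b in pairs if a != b)
    return diffs / len(pairs)

haplotypes = {
    "arctic":    "ATGGCACTTTAGTTGGAGAT",
    "southern1": "ATGGCTCTACAGTTGGTGAT",
    "natlantic": "ATAGCACTTAAGCTGGAGAT",
}

names = list(haplotypes)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        d = p_distance(haplotypes[x], haplotypes[y])
        print(f"{x} vs {y}: {d:.1%} divergence")
```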
Abstract:
This collection contains measurements of environmental conditions on the plots of the different sub-experiments at the field site of a large grassland biodiversity experiment (the Jena Experiment; see further details below). In the main experiment, 82 grassland plots of 20 x 20 m were established from a pool of 60 species belonging to four functional groups (grasses, legumes, tall herbs and small herbs). In May 2002, varying numbers of plant species from this species pool were sown into the plots to create a gradient of plant species richness (1, 2, 4, 8, 16 and 60 species) and functional richness (1, 2, 3 and 4 functional groups). Plots were maintained by bi-annual weeding and mowing. The following series of datasets are contained in this collection: 1. Soil temperature measurements on plots of the Main Experiment; 2. Quantification of the duration that individual plots of the Main Experiment were submerged during a flooding event in June 2013.
Abstract:
This thesis investigates the design of optimal tax systems in dynamic environments. The first essay characterizes the optimal tax system when wages depend on stochastic shocks and work experience. In addition to redistributive and efficiency motives, the taxation of inexperienced workers depends on a second-best requirement that encourages work experience, a social insurance motive, and incentive effects. Calibrations using U.S. data yield expected optimal marginal income tax rates that are higher for experienced workers than for most inexperienced workers. They confirm that the average marginal income tax rate increases (decreases) with age when shocks and work experience are substitutes (complements). Finally, more variability in experienced workers' earnings prospects leads to increasing tax rates, since income taxation acts as a social insurance mechanism. In the second essay, the properties of an optimal tax system are investigated in a dynamic private information economy where labor market frictions create unemployment that destroys workers' human capital. A two-skill-type model is considered in which wages and employment are endogenous. I find that the optimal tax system distorts the first-period wages of all workers below their efficient levels, which leads to more employment. The standard no-distortion-at-the-top result no longer holds, due to the combination of private information and the destruction of human capital. I show this result analytically under the Maximin social welfare function and confirm it numerically for a general social welfare function. I also investigate the use of a training program and job creation subsidies. The final essay analyzes the optimal linear tax system when there is a population of individuals whose perceptions of savings are linked to their disposable income and to their family background through family cultural transmission. Aside from the standard equity/efficiency trade-off, taxes account for the endogeneity of perceptions through two channels. First, taxing labor decreases income, which decreases the perception of savings over time. Second, taxing savings corrects the misperceptions of workers and thus their savings and labor decisions. Numerical simulations confirm that these behavioral issues push labor income taxes upward to finance saving subsidies. Government transfers to individuals are also decreased to finance those same subsidies.
Abstract:
The feasibility of using the iGrav superconducting gravimeter to monitor subsurface fluid-flow processes that result in density changes is investigated. Practical targets include steam-assisted gravity drainage (SAGD) bitumen depletion and water pumping from aquifers, for which low-impact, inexpensive monitoring techniques are currently lacking. This study demonstrates that the iGrav has the potential to be applied to multi-scale and diverse reservoirs. Gravity and gravity gradient signals are forward modeled for a real SAGD reservoir at two time steps, and for surface-fed and groundwater-fed aquifer pumping models, to estimate signal strength and the directional dependency of water flow. Time-lapse gravimetry on small-scale reservoirs faces two obstacles, namely a µGal sensitivity requirement and high noise levels in the vicinity of the reservoir. In this study, both limitations are overcome by proposing (i) a portable superconducting gravimeter and (ii) a pair of instruments under various baseline geometries. This results in improved spatial resolution for locating depletion zones, as well as cancellation of the noise common to both instruments. Results indicate that a pair of iGrav superconducting gravimeters meets the sensitivity requirements and provides the spatial focusing desired to monitor SAGD bitumen migration at reservoir scales. For SAGD reservoirs, the well pair separation, reservoir depth, and survey sampling determine the resolvability of individual well pair depletion patterns during the steam chamber rising phase, and of general reservoir depletion patterns during the steam chamber spreading phase. Results show that monitoring water table elevation changes due to pumping, and tracking whether groundwater or surface water is being extracted, are both feasible.
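For a sense of scale, the vertical gravity effect of a compact mass change can be forward modeled with a point-mass approximation, gz = G * dm * z / r^3, and a pair of instruments simply differences the two station signals. The sketch below is an illustrative toy model with assumed geometry and mass change, not the reservoir model used in the study:

```python
# Hedged sketch: forward-model the vertical gravity signal of a localized
# density change (point-mass approximation) and the differential signal seen
# by a pair of gravimeters. Geometry and mass values are illustrative.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_ugal(dx: float, dy: float, depth: float, dmass: float) -> float:
    """Vertical gravity effect (microGal) of a buried point-mass change.

    dx, dy: horizontal offsets of the station from the anomaly (m)
    depth:  depth of the anomaly below the station (m)
    dmass:  mass change in kg (negative for depletion)
    """
    r = (dx**2 + dy**2 + depth**2) ** 0.5
    gz = G * dmass * depth / r**3    # m/s^2, positive downward
    return gz * 1e8                  # 1 microGal = 1e-8 m/s^2

# Example: ~5e8 kg mass deficit (e.g. bitumen replaced by steam) at 200 m depth
dm = -5e8
station_a = gz_ugal(dx=0.0,   dy=0.0, depth=200.0, dmass=dm)
station_b = gz_ugal(dx=100.0, dy=0.0, depth=200.0, dmass=dm)
print(f"station A: {station_a:.1f} uGal, station B: {station_b:.1f} uGal")
print(f"pair (differential) signal: {station_a - station_b:.1f} uGal")
```

With these assumed numbers the single-station signal is a few tens of µGal, which is why µGal-level sensitivity is the stated requirement, and the differenced pair cancels any noise common to both stations.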
Abstract:
Complex network theory is a framework increasingly used in the study of air transport networks, thanks to its ability to describe the structures created by networks of flights and their influence on dynamical processes such as delay propagation. While many works consider only a fraction of the network, created for example by major airports or airlines, it is not clear if and how such a sampling process biases the observed structures and processes. In this contribution, we tackle this problem by studying how some observed topological metrics depend on the way the network is reconstructed, i.e. on the rules used to sample nodes and connections. Both structural and simple dynamical properties are considered, for eight major air networks and different source datasets. Results indicate that using a subset of airports strongly distorts our perception of the network, even when only small ones are discarded; at the same time, considering a subset of airlines yields a better and more stable representation. This allows us to provide some general guidelines on the way airports and connections should be sampled.
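The sampling effect described above is easy to reproduce on a toy network. The following sketch (an illustration, not the paper's datasets or metric set) compares a few topological metrics of a hub-dominated synthetic network against the subgraph induced by its highest-degree "airports":

```python
# Illustrative sketch: compare topological metrics of a full air-transport-like
# network against a subset keeping only the highest-degree "major" airports.
import networkx as nx

full = nx.barabasi_albert_graph(n=500, m=3, seed=42)  # hub-dominated toy network

def metrics(g: nx.Graph) -> dict:
    return {
        "nodes": g.number_of_nodes(),
        "mean_degree": 2 * g.number_of_edges() / g.number_of_nodes(),
        "clustering": nx.average_clustering(g),
        "assortativity": nx.degree_assortativity_coefficient(g),
    }

# Keep only the top 20% of airports by degree and the links among them.
by_degree = sorted(full.degree, key=lambda kv: kv[1], reverse=True)
major = [node for node, _ in by_degree[: len(by_degree) // 5]]
sampled = full.subgraph(major).copy()

print("full:   ", metrics(full))
print("sampled:", metrics(sampled))
```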
Abstract:
In this letter, we consider wireless powered communication networks that can operate perpetually, as the base station (BS) broadcasts energy to multiple energy harvesting (EH) information transmitters. These employ a “harvest then transmit” mechanism: they spend all of the energy harvested during the previous BS energy broadcast to transmit their information towards the BS. Assuming time division multiple access (TDMA), we propose a novel transmission scheme for the jointly optimal allocation of the BS broadcasting power and of the time sharing among the wireless nodes, which maximizes the overall network throughput under constraints on the average and maximum transmit power at the BS. The proposed scheme significantly outperforms state-of-the-art schemes that employ only optimal time allocation. For a single EH transmitter, we generalize the optimal solution to the case of fixed circuit power consumption, a much more practical scenario.
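The structure of such an optimization can be sketched numerically. Under the standard wireless-powered-network rate model, a block of unit length is split into a BS broadcast phase tau0 and per-node TDMA slots tau_k, and node k spends its harvested energy eta*P0*h_k*tau0 in its own slot. The following is a hedged sketch with illustrative parameters; it omits the letter's average/maximum BS power constraints and joint power allocation:

```python
# Hedged numerical sketch of the "harvest then transmit" throughput problem.
# Rate model and parameters are illustrative, not the letter's exact setup.
import numpy as np
from scipy.optimize import minimize

K = 3
eta = 0.6                      # energy-harvesting efficiency (assumed)
h = np.array([1.0, 0.5, 0.2])  # combined downlink/uplink channel gains (assumed)
P0, noise = 1.0, 1e-3          # BS broadcast power and noise power (assumed)

def neg_throughput(tau):
    tau0, tk = tau[0], tau[1:]
    # Node k spends all harvested energy eta*P0*h_k*tau0 in its slot tk.
    rates = tk * np.log2(1.0 + eta * P0 * h * tau0 / (np.maximum(tk, 1e-12) * noise))
    return -rates.sum()

cons = ({"type": "eq", "fun": lambda tau: tau.sum() - 1.0},)
bounds = [(1e-6, 1.0)] * (K + 1)
res = minimize(neg_throughput, x0=np.full(K + 1, 1.0 / (K + 1)),
               bounds=bounds, constraints=cons, method="SLSQP")
print("time shares:", np.round(res.x, 3), "sum throughput:", round(-res.fun, 3))
```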
Abstract:
Two direct sampling correlator-type receivers for differential chaos shift keying (DCSK) communication systems under frequency non-selective fading channels are proposed. These receivers operate on the same hardware platform with different architectures. In the first scheme, namely the sum-delay-sum (SDS) receiver, the sum of all samples in a chip period is correlated with its delayed version. The correlation value obtained in each bit period is then compared with a fixed threshold to decide the binary value of the recovered bit at the output. In the second scheme, namely the delay-sum-sum (DSS) receiver, the correlation values of all samples with their delayed versions are calculated in each chip period. The sum of the correlation values in each bit period is then compared with the threshold to recover the data. The conventional DCSK transmitter, the frequency non-selective Rayleigh fading channel, and the two proposed receivers are mathematically modelled in the discrete-time domain. The authors evaluate the bit error rate performance of the receivers by means of both theoretical analysis and numerical simulation. The performance comparison shows that the two proposed receivers perform well under the studied channel; performance improves as the number of paths increases, and the DSS receiver outperforms the SDS one.
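To make the correlator principle concrete, here is a minimal baseline DCSK simulation over an AWGN channel: each bit period carries a chaotic reference followed by its (possibly sign-flipped) repeat, and the receiver correlates the delayed halves. This is a generic DCSK sketch, not the SDS/DSS architectures or the Rayleigh fading model analysed by the authors:

```python
# Minimal DCSK baseline sketch (AWGN only, illustrative SNR definition).
import numpy as np

rng = np.random.default_rng(1)
beta = 64        # spreading factor: samples per half bit period
n_bits = 2000
snr_db = 10.0

def chaotic_sequence(n, x0):
    """Chebyshev map x -> 1 - 2x^2, a common chaos generator on [-1, 1]."""
    out = np.empty(n)
    for i in range(n):
        x0 = 1.0 - 2.0 * x0 * x0
        out[i] = x0
    return out

errors = 0
for b in rng.integers(0, 2, size=n_bits):
    ref = chaotic_sequence(beta, x0=rng.uniform(0.1, 0.9))
    tx = np.concatenate([ref, ref if b == 0 else -ref])
    sigma = np.sqrt(np.mean(tx**2) / (2.0 * 10.0 ** (snr_db / 10.0)))
    rx = tx + rng.normal(0.0, sigma, size=tx.size)
    corr = np.dot(rx[:beta], rx[beta:])   # correlate the two delayed halves
    decided = 0 if corr > 0 else 1
    errors += int(decided != b)

print(f"BER at {snr_db} dB: {errors / n_bits:.4f}")
```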
Abstract:
This thesis consists of three articles on optimal fiscal and monetary policies. In the first article, I study the joint determination of optimal fiscal and monetary policy in a New Keynesian framework with frictional labor markets, money, and distortionary labor income tax rates. I find that when workers' bargaining power is low, the Ramsey-optimal policy calls for a significantly higher optimal annual inflation rate, above 9.5%, which is also highly volatile, with a volatility above 7.4%. The Ramsey government uses inflation to induce efficient fluctuations in labor markets, even though price changes are costly and labor taxation varies over time. The quantitative results clearly show that the planner relies more heavily on inflation, not taxes, to smooth distortions in the economy over the business cycle. Indeed, there is a quite clear trade-off between the optimal inflation rate and its volatility on the one hand and the optimal income tax rate and its variability on the other. The lower the degree of price rigidity, the higher the optimal inflation rate and its volatility, and the lower the optimal income tax rate and its volatility. For a degree of price rigidity ten times smaller, the optimal inflation rate and its volatility increase remarkably, by more than 58% and 10% respectively, and the optimal income tax rate and its volatility decline dramatically. These results are of great importance given that in frictional labor market models without fiscal policy and money, or in New Keynesian frameworks even with a rich array of real and nominal rigidities and a tiny degree of price rigidity, price stability appears to be the central goal of optimal monetary policy. In the absence of fiscal policy and money demand, the optimal inflation rate falls very close to zero, with a volatility about 97 percent lower, consistent with the literature. In the second article, I show that the quantitative results imply a negative relationship between workers' bargaining power and the welfare costs of monetary rules: the lower the workers' bargaining power, the larger the welfare costs of monetary policy rules. However, in striking contrast to the literature, rules that respond to output and to labor market tightness entail considerably lower welfare costs than the inflation-targeting rule. This is particularly the case for the rule that responds to labor market tightness. Welfare costs also fall remarkably as the output coefficient in the monetary rules increases. My results indicate that raising workers' bargaining power to the Hosios level or above significantly reduces the welfare costs of all three monetary rules, and responding to output or to labor market tightness no longer yields lower welfare costs than the inflation-targeting rule, in line with the existing literature.
In the third article, I first show that the Friedman rule is not optimal in a monetary model with a cash-in-advance constraint on firms when the government has access to distortionary consumption taxes to finance its expenditures. I then argue that, in the presence of these distortionary taxes, the Friedman rule is optimal if we assume a model with raw and effective labor in which only raw labor is subject to the cash-in-advance constraint and the utility function is homothetic in the two types of labor and separable in consumption. When the production function exhibits constant returns to scale, the Friedman rule is optimal even when the wage rates differ, in contrast to the cash-credit goods model, in which the prices of the two goods are the same. If the production function exhibits increasing or decreasing returns to scale, the wage rates must be equal for the Friedman rule to be optimal.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Creative ways of utilising renewable energy sources for electricity generation, especially in remote areas and particularly in countries that depend on imported energy, while increasing energy security and reducing the cost of such isolated off-grid systems, are becoming urgently needed for the effective strategic planning of energy systems. The aim of this research project was to design and implement a new decision support framework for the optimal design of hybrid micro grids considering different types of technologies, where the design objective is to minimize the total cost of the hybrid micro grid while satisfying the required electric demand. A comprehensive review of existing analytical decision support tools and of the literature on HPS identified the gaps and the necessary conceptual parts of an analytical decision support framework. As a result, this research proposes and reports an Iterative Analytical Design Framework (IADF) and its implementation for the optimal design of an off-grid renewable energy based hybrid smart micro-grid (OGREH-SμG) with intra- and inter-grid (μG2μG & μG2G) synchronization capabilities and a novel storage technique. The modelling, design and simulations were conducted using the HOMER Energy and MatLab/SIMULINK energy planning and design software platforms. The design, experimental proof of concept, verification and simulation of a new storage concept incorporating a Hydrogen Peroxide (H2O2) fuel cell are also reported. The implementation of the smart components, consisting of a Raspberry Pi devised and programmed for the semi-smart energy management framework (a novel control strategy with synchronization capabilities) of the OGREH-SμG, is also detailed and reported. The hybrid μG was designed and implemented as a case study for the Bayir/Jordan area. This research provides an alternative decision support tool for renewable energy integration that determines the optimal number, type and size of components to configure the hybrid μG. In addition, this research formulates and reports a linear cost function used to mathematically verify the computer-based simulations and fine-tune the solutions in the iterative framework, and it concludes that such solutions converge to a correct optimal approximation when the properties of the problem are taken into account. This investigation demonstrates that the implemented and reported OGREH-SμG design, which incorporates wind and solar generation complemented with batteries, two fuel cell units and a diesel generator, is a unique approach: utilizing indigenous renewable energy with the capability to synchronize with other μ-grids is an effective and optimal way of electrifying developing countries with fewer resources in a sustainable way, with minimum impact on the environment, while also achieving reductions in GHG emissions. The dissertation concludes with suggested extensions to this work.
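A linear cost function of the kind mentioned above can be illustrated with a toy sizing problem: choose component capacities to minimize total cost subject to meeting demand. The sketch below uses assumed costs, energy yields, and an assumed diesel cap; it is far simpler than the IADF/HOMER formulation and is meant only to show the shape of such a verification step:

```python
# Hedged sketch of a linear sizing/cost check. All numbers are illustrative
# placeholders, not the Bayir case-study values.
from scipy.optimize import linprog

# Decision variables: installed capacity of [pv, wind, diesel]
cost = [800.0, 1100.0, 300.0]   # $ per unit of capacity (assumed)
yield_kwh = [4.5, 6.0, 20.0]    # kWh/day delivered per unit of capacity (assumed)
demand = 500.0                  # kWh/day to be met
diesel_cap = 10.0               # e.g. an emissions-motivated limit (assumed)

res = linprog(
    c=cost,
    A_ub=[[-y for y in yield_kwh],  # -energy <= -demand, i.e. energy >= demand
          [0.0, 0.0, 1.0]],         # diesel units <= diesel_cap
    b_ub=[-demand, diesel_cap],
    bounds=[(0, None)] * 3,
    method="highs",
)
print("capacities [pv, wind, diesel]:", [round(v, 1) for v in res.x])
print("minimum total cost: $", round(res.fun, 2))
```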
Abstract:
In this dissertation, different analytical strategies are developed to discover and characterize mammalian brain peptides using small amounts of tissue. The magnocellular neurons of the rat supraoptic nucleus (SON), in tissue and in cell culture, served as the main model to study neuropeptides, in addition to hippocampal neurons and mouse embryonic pituitaries. The neuropeptidomics studies described here use different extraction methods on tissue or cell culture combined with mass spectrometry (MS) techniques, matrix-assisted laser desorption/ionization (MALDI) and electrospray ionization (ESI). These strategies led to the identification of multiple peptides from the rat/mouse brain in tissue and cell cultures, including novel compounds. One of the goals of this dissertation was to optimize sample preparation for mass spectrometric analysis of samples isolated from well-defined brain regions. Here, the neuropeptidomics study of the SON resulted in the identification of 85 peptides, including 20 unique peptides from known prohormones. This study includes mass spectrometric analysis even of individually isolated magnocellular neuroendocrine cells, in which vasopressin and several other peptides were detected. At the same time, it was shown that the same approach could be applied to analyze peptides isolated from a similar hypothalamic region, the suprachiasmatic nucleus (SCN). Although there was some overlap in the peptides detected in the two brain nuclei, peptides specific to each nucleus were also detected. Among others, provasopressin fragments were detected specifically in the SON, while angiotensin I, somatostatin-14, neurokinin B, galanin, and vasoactive intestinal peptide (VIP) were detected only in the SCN. Lists of peptides were generated from both brain regions to compare the peptidomes of the SON and SCN nuclei. Moving from the analysis of magnocellular neurons in tissue to cell culture, direct peptidomics of the magnocellular and hippocampal neurons led to the detection of 10 peaks that were assigned to previously characterized peptides and 17 peaks that remain unassigned. Peptides from the vasopressin prohormone and secretogranin-2 are attributed to magnocellular neurons, whereas neurokinin A, peptide J, and neurokinin B are attributed to cultured hippocampal neurons. This approach enabled the elucidation of cell-specific prohormone processing and the discovery of cell-cell signaling peptides. Peptides with roles in the development of the pituitary were analyzed using transgenic mice. The Hes1 knockout (KO) is a genetically modified mouse that survives only to embryonic day 18.5 (e18.5). Anterior pituitaries of Hes1 null mice exhibit hypoplasia due to increased cell death and reduced proliferation, and in the intermediate lobe the cells differentiate abnormally into somatotropes instead of melanotropes. These previous findings demonstrate that Hes1 has multiple roles in pituitary development, cell differentiation, and cell fate. AVP was detected in all samples. Interestingly, somatostatin [92-100] and provasopressin [151-168] were detected in the mutant but not in the wild-type or heterozygous pituitaries, while somatostatin-14 was detected only in the heterozygous pituitary. In addition, a putative peptide corresponding to m/z 1330.2 and POMC [205-222] were detected in the mutant and heterozygous pituitaries, but not in the wild type.
These results indicate that Hes1 influences the processing of different prohormones with possible roles during development, and they open new directions for further developmental studies. This research demonstrates the robust capabilities of MS, which enable unbiased direct analysis of peptides extracted from complex biological systems and allow important questions about cell-cell signaling in the brain to be addressed.
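Peak assignment in such MALDI experiments amounts to matching observed m/z values against candidate [M+H]+ masses within a tolerance. The sketch below illustrates that step only; the "unknown-A" mass is a hypothetical placeholder chosen to match the putative m/z 1330.2 peak mentioned above, and the dissertation's identifications relied on full MS analysis rather than a lookup table:

```python
# Illustrative sketch: assign observed MALDI peaks to candidate peptides by
# matching [M+H]+ m/z values within a ppm tolerance.
PROTON = 1.007276  # proton mass, Da

candidates = {
    "vasopressin":     1083.44,  # monoisotopic mass, Da (widely tabulated)
    "somatostatin-14": 1636.72,  # monoisotopic mass, Da (widely tabulated)
    "unknown-A":       1329.19,  # hypothetical placeholder
}

observed_mz = [1084.45, 1330.20, 1637.73, 1500.00]
tol_ppm = 50.0

for mz in observed_mz:
    hits = [
        name for name, mass in candidates.items()
        if abs(mz - (mass + PROTON)) / (mass + PROTON) * 1e6 <= tol_ppm
    ]
    print(f"m/z {mz:9.2f}: {hits or 'no match'}")
```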
Abstract:
The occurrence frequency of failure events serves as a critical index representing the safety status of dam-reservoir systems. Although overtopping is the most common failure mode with significant consequences, this type of event, in most cases, has a small probability. Estimating such rare-event risks for dam-reservoir systems with crude Monte Carlo (CMC) simulation requires a prohibitively large number of trials, and significant computational resources are needed to reach satisfactory estimates; otherwise, the estimates are not accurate enough. In order to reduce the computational expense and improve risk estimation efficiency, an importance sampling (IS) based simulation approach is proposed in this dissertation to address the overtopping risks of dam-reservoir systems. Deliverables of this study mainly include the following five aspects: 1) the reservoir inflow hydrograph model; 2) the dam-reservoir system operation model; 3) the CMC simulation framework; 4) the IS-based Monte Carlo (ISMC) simulation framework; and 5) a comparison of the overtopping risk estimates from the CMC and ISMC simulations. In a broader sense, this study meets the following three expectations: 1) to address the natural stochastic characteristics of the dam-reservoir system, such as the reservoir inflow rate; 2) to build the fundamental CMC and ISMC simulation frameworks of the dam-reservoir system in order to estimate the overtopping risks; and 3) to compare the simulation results and the computational performance in order to demonstrate the advantages of ISMC simulation. The resulting estimates of overtopping probability can be used to guide future dam safety investigations and studies, and to supplement conventional analyses in decision making on dam-reservoir system improvements. The proposed ISMC simulation methodology is reasonably robust and is shown to improve overtopping risk estimation. More accurate estimates, smaller variance, and reduced CPU time expand the applicability of the Monte Carlo (MC) technique to evaluating rare-event risks for infrastructure.
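The variance-reduction idea behind ISMC can be shown on a one-dimensional toy problem: estimating a small exceedance probability with and without a shifted sampling density. The sketch below is purely illustrative; the dissertation's inflow hydrograph and reservoir operation models are far richer:

```python
# Hedged sketch: estimate p = P(X > q) for a toy "peak level" X ~ Normal(0, 1),
# comparing crude Monte Carlo against importance sampling with a mean shift.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
q, n = 4.5, 100_000   # threshold and sample size; true p ~ 3.4e-6

# Crude Monte Carlo: at this sample size the rare region is usually never hit.
x = rng.standard_normal(n)
p_cmc = np.mean(x > q)

# Importance sampling: draw from Normal(q, 1) so exceedances are common, then
# reweight each sample by the likelihood ratio f(y)/g(y) to stay unbiased.
y = rng.normal(loc=q, scale=1.0, size=n)
w = stats.norm.pdf(y) / stats.norm.pdf(y, loc=q)
est = (y > q) * w
p_is, se_is = est.mean(), est.std() / np.sqrt(n)

print(f"true p = {stats.norm.sf(q):.3e}")
print(f"CMC    = {p_cmc:.3e}")
print(f"IS     = {p_is:.3e} +/- {se_is:.1e}")
```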
Abstract:
Passive sampling devices (PS) are widely used for pollutant monitoring in water, but estimation of the measurement uncertainties of PS has seldom been undertaken. The aim of this work was to identify the key parameters governing PS measurements of metals and their dispersion. We report the results of an in situ intercomparison exercise on diffusive gradients in thin films (DGT) in surface waters. Interlaboratory uncertainties of time-weighted average (TWA) concentrations were satisfactory (from 28% to 112%) given the number of participating laboratories (10) and the ultra-trace metal concentrations involved. The dispersion of TWA concentrations was mainly explained by uncertainties generated during the DGT handling and analytical procedure steps. We highlight that DGT handling is critical for metals such as Cd, Cr and Zn, implying that DGT assembly/dismantling should be performed under very clean conditions. Using a unique dataset, we demonstrated that DGT markedly lowers the LOQ compared to spot sampling, and we stress the need for accurate data calculation.
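For reference, TWA concentrations in DGT are computed from the mass accumulated on the binding resin via the standard DGT equation, C = M * dg / (D * A * t). The sketch below applies it with illustrative values, not data from the intercomparison:

```python
# Minimal sketch of the standard DGT time-weighted-average equation.
def dgt_twa_concentration(mass_ng: float, dg_cm: float, D_cm2_s: float,
                          area_cm2: float, t_s: float) -> float:
    """Return the TWA concentration in ng/mL (equivalently ug/L).

    mass_ng:  metal mass accumulated on the binding resin (ng)
    dg_cm:    diffusive gel thickness (cm)
    D_cm2_s:  diffusion coefficient of the metal in the gel (cm^2/s)
    area_cm2: exposure window area (cm^2)
    t_s:      deployment time (s)
    """
    return mass_ng * dg_cm / (D_cm2_s * area_cm2 * t_s)

# Illustrative example: 25 ng Cd on the resin, 0.08 cm gel, D ~ 6e-6 cm^2/s,
# 3.14 cm^2 window, 14-day deployment.
c = dgt_twa_concentration(mass_ng=25.0, dg_cm=0.08, D_cm2_s=6.0e-6,
                          area_cm2=3.14, t_s=14 * 24 * 3600)
print(f"TWA concentration: {c * 1000:.1f} ng/L")
```

With these assumed inputs the result is on the order of tens of ng/L, i.e. the ultra-trace range at which the handling and analytical uncertainties discussed above dominate.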