917 results for world model


Relevance: 30.00%

Publisher:

Abstract:

The MAREDAT atlas covers 11 types of plankton, ranging in size from bacteria to jellyfish. Together, these plankton groups determine the health and productivity of the global ocean and play a vital role in the global carbon cycle. Working within a uniform and consistent spatial and depth grid (map) of the global ocean, the researchers compiled thousands to tens of thousands of data points to identify regions of plankton abundance and scarcity as well as areas of data abundance and scarcity. At many of the grid points, the MAREDAT team accomplished the difficult conversion from abundance (numbers of organisms) to biomass (carbon mass of organisms). The MAREDAT atlas provides an unprecedented global data set for ecological and biochemical analysis and modeling as well as a clear mandate for compiling additional existing data and for focusing future data gathering efforts on key groups in key areas of the ocean. The present data set presents depth-integrated values of diazotroph nitrogen fixation rates, computed from a collection of source data sets.

Relevance: 30.00%

Publisher:

Abstract:

The development of a permanent, stable ice sheet in East Antarctica happened during the middle Miocene, about 14 million years (Myr) ago. The middle Miocene therefore represents one of the distinct phases of rapid change in the transition from the "greenhouse" of the early Eocene to the "icehouse" of the present day. Carbonate carbon isotope records of the period immediately following the main stage of ice sheet development reveal a major perturbation in the carbon system, represented by the positive δ13C excursion known as carbon maximum 6 ("CM6"), which has traditionally been interpreted as reflecting increased burial of organic matter and atmospheric pCO2 drawdown. More recently, it has been suggested that the δ13C excursion records a negative feedback resulting from the reduction of silicate weathering and an increase in atmospheric pCO2. Here we present high-resolution multi-proxy (alkenone carbon and foraminiferal boron isotope) records of atmospheric carbon dioxide and sea surface temperature across CM6. Similar to previously published records spanning this interval, our records document a world of generally low (~300 ppm) atmospheric pCO2 at a time generally accepted to be much warmer than today. Crucially, they also reveal a pCO2 decrease with associated cooling, which demonstrates that the carbon burial hypothesis for CM6 is feasible and could have acted as a positive feedback on global cooling.

Relevance: 30.00%

Publisher:

Abstract:

The MAREDAT atlas covers 11 types of plankton, ranging in size from bacteria to jellyfish. Together, these plankton groups determine the health and productivity of the global ocean and play a vital role in the global carbon cycle. Working within a uniform and consistent spatial and depth grid (map) of the global ocean, the researchers compiled thousands to tens of thousands of data points to identify regions of plankton abundance and scarcity as well as areas of data abundance and scarcity. At many of the grid points, the MAREDAT team accomplished the difficult conversion from abundance (numbers of organisms) to biomass (carbon mass of organisms). The MAREDAT atlas provides an unprecedented global data set for ecological and biochemical analysis and modeling as well as a clear mandate for compiling additional existing data and for focusing future data gathering efforts on key groups in key areas of the ocean. This is a gridded data product about diazotrophic organisms. There are 6 variables. Each variable is gridded on a dimension of 360 (longitude) * 180 (latitude) * 33 (depth) * 12 (month). The first group of 3 variables are: (1) number of biomass observations, (2) biomass, and (3) special nifH-gene-based biomass. The second group of 3 variables is the same as the first group except that it grids only non-zero data. We have constructed a database on diazotrophic organisms in the global pelagic upper ocean by compiling more than 11,000 direct field measurements, including 3 sub-databases: (1) nitrogen fixation rates, (2) cyanobacterial diazotroph abundances from cell counts and (3) cyanobacterial diazotroph abundances from qPCR assays targeting nifH genes. Biomass conversion factors are estimated based on cell sizes to convert abundance data to diazotrophic biomass. Data are assigned to 3 groups: Trichodesmium, unicellular diazotrophic cyanobacteria (groups A, B and C when applicable) and heterocystous cyanobacteria (Richelia and Calothrix). Total nitrogen fixation rates and diazotrophic biomass are calculated by summing the values from all the groups. Some of the nitrogen fixation rates are whole-seawater measurements and are used as total nitrogen fixation rates. Both volumetric and depth-integrated values were reported. Depth-integrated values are also calculated for those vertical profiles with values at 3 or more depths.
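The grid layout described above can be sketched as follows. Variable names are mine, and the trapezoidal rule used for the depth integration is an illustrative assumption (the text does not specify the method):

```python
import numpy as np

# Gridded product: 360 (longitude) x 180 (latitude) x 33 (depth) x 12 (month).
LON, LAT, DEPTH, MONTH = 360, 180, 33, 12
shape = (LON, LAT, DEPTH, MONTH)

n_obs        = np.zeros(shape)  # (1) number of biomass observations
biomass      = np.zeros(shape)  # (2) biomass
nifh_biomass = np.zeros(shape)  # (3) nifH-gene-based biomass
# A second group of three variables mirrors these but grids only non-zero data.

# Depth integration for a vertical profile with values at 3 or more depths
# (trapezoidal rule, illustrative values):
depths = np.array([0.0, 10.0, 25.0])   # m
rates  = np.array([2.0, 1.5, 0.5])     # volumetric nitrogen fixation rate
depth_integrated = np.sum((rates[1:] + rates[:-1]) / 2 * np.diff(depths))
```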

Relevance: 30.00%

Publisher:

Abstract:

Wireless sensor networks (WSNs) differ from conventional distributed systems in many aspects. The resource limitations of sensor nodes, the ad-hoc communication and topology of the network, coupled with an unpredictable deployment environment, are difficult non-functional constraints that must be carefully taken into account when developing software systems for a WSN. Thus, more research needs to be done on designing, implementing and maintaining software for WSNs. This thesis aims to contribute to research in this area by presenting an approach to WSN application development that improves the reusability, flexibility, and maintainability of the software. Firstly, we present a programming model and software architecture aimed at describing WSN applications independently of the underlying operating system and hardware. The proposed architecture is described and realized using the Model-Driven Architecture (MDA) standard in order to achieve satisfactory levels of encapsulation and abstraction when programming sensor nodes. In addition, we study different non-functional constraints of WSN applications and propose two approaches to optimize the application to satisfy these constraints. A real prototype framework was built to demonstrate the solutions developed in the thesis. The framework implements the programming model and the multi-layered software architecture as components. A graphical interface, code generation components and supporting tools are also included to help developers design, implement, optimize, and test the WSN software. Finally, we evaluate and critically assess the proposed concepts. Two case studies are provided to support the evaluation.
The first case study, a framework evaluation, is designed to assess the ease with which novice and intermediate users can develop correct and power-efficient WSN applications, the portability level achieved by developing applications at a high level of abstraction, and the estimated overhead due to usage of the framework in terms of the footprint and executable code size of the application. In the second case study, we discuss the design, implementation and optimization of a real-world application named TempSense, where a sensor network is used to monitor the temperature within an area.

Relevance: 30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 30.00%

Publisher:

Abstract:

Recent years have witnessed increased development of small, autonomous fixed-wing Unmanned Aerial Vehicles (UAVs). In order to unlock widespread applicability of these platforms, they need to be capable of operating under a variety of environmental conditions. Due to their small size, low weight, and low speeds, they require the capability of coping with wind speeds that approach or even exceed the nominal airspeed. In this thesis, a nonlinear-geometric guidance strategy is presented to address this problem. More broadly, a methodology is proposed for the high-level control of non-holonomic unicycle-like vehicles in the presence of strong flowfields (e.g. winds, underwater currents) which may exceed the maximum vehicle speed. The proposed strategy guarantees convergence to a safe and stable vehicle configuration with respect to the flowfield, while preserving some tracking performance with respect to the target path. As an alternative approach, an algorithm based on Model Predictive Control (MPC) is developed, and a comparison between the advantages and disadvantages of both approaches is drawn. Evaluations in simulations and a challenging real-world flight experiment in very windy conditions confirm the feasibility of the proposed guidance approach.
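The geometric constraint behind this problem can be illustrated with a small feasibility check (my illustration, not the thesis' guidance law): with airspeed V_a and wind vector w, the achievable ground velocities form a circle of radius V_a centered at w, so a desired course is trackable only if the crosswind component can be cancelled and the along-course ground speed stays positive.

```python
import math

def course_feasible(chi, airspeed, wind):
    """Can a desired ground course chi (rad) be tracked with positive
    ground speed, given the vehicle airspeed and a wind vector (wx, wy)?"""
    wx, wy = wind
    # Wind components along and across the desired course direction.
    along = wx * math.cos(chi) + wy * math.sin(chi)
    across = -wx * math.sin(chi) + wy * math.cos(chi)
    if abs(across) > airspeed:
        return False  # the crosswind cannot be cancelled at all
    # Best achievable ground speed along the course must be positive.
    return along + math.sqrt(airspeed**2 - across**2) > 0

# With a 15 m/s tailwind and 10 m/s airspeed, the downwind course is
# feasible, but the reciprocal (upwind) course is not.
```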

Relevance: 30.00%

Publisher:

Abstract:

Abstract not available

Relevance: 30.00%

Publisher:

Abstract:

In this master’s thesis, I examine the development of writer-characters and metafiction from John Irving’s The World According to Garp to Last Night in Twisted River and how this development relates to the development of late twentieth century postmodern literary theory to twenty-first century post-postmodern literary theory. The purpose of my study is to determine how the prominently postmodern feature metafiction, created through the writer-character’s stories-within-stories, has changed in form and function in the two novels published thirty years apart from one another, and what possible features this indicates for future post-postmodern theory. I establish my theoretical framework on the development of metafiction largely on late twentieth-century models of author and authorship as discussed by Roland Barthes, Wayne Booth and Michel Foucault. I base my close analysis of metafiction mostly on Linda Hutcheon’s model of overt and covert metafiction. At the end of my study, I examine Irving’s later novel through Suzanne Rohr’s models of reality constitution and fictional reality. The analysis of the two novels focuses on excerpts that feature the writer-characters, their stories-within-stories and the novels’ other characters and the narrators’ evaluations of these two. I draw examples from both novels, but I illustrate my choice of focus on the novels at the beginning of each section. Through this, I establish a method of analysis that best illustrates the development as a continuum from pre-existing postmodern models and theories to the formation of new post-postmodern theory. Based on my findings, the thesis argues that twenty-first century literary theory has moved away from postmodern overt deconstruction of the narrative and its meaning. New post-postmodern literary theory reacquires the previously deconstructed boundaries that define reality and truth and re-establishes them as having intrinsic value that cannot be disputed. 
In establishing fictional reality as self-governing and non-intrudable, post-postmodern theory takes a stance against postmodern nihilism, which indicates the re-established, unquestionable value of the text's reality. To continue mapping other possible features of future post-postmodern theory, I recommend further analysis solely of John Irving's novels published in the twenty-first century.

Relevance: 30.00%

Publisher:

Abstract:

The International Conference on Advanced Materials, Structures and Mechanical Engineering 2015 (ICAMSME 2015) was held on May 29-31, 2015, in Incheon, South Korea. The conference was attended by scientists, scholars, engineers and students from universities, research institutes and industries all around the world, who presented ongoing research activities. This proceedings volume assembles papers from various professionals engaged in the fields of materials, structures and mechanical engineering.

Relevance: 30.00%

Publisher:

Abstract:

Graphs are powerful tools to describe social, technological and biological networks, with nodes representing agents (people, websites, genes, etc.) and edges (or links) representing relations (or interactions) between agents. Examples of real-world networks include social networks, the World Wide Web, collaboration networks, protein networks, etc. Researchers often model these networks as random graphs. In this dissertation, we study a recently introduced social network model, named the Multiplicative Attribute Graph (MAG) model, which takes into account the randomness of nodal attributes in the process of link formation (i.e., the probability of a link existing between two nodes depends on their attributes). Kim and Leskovec, who defined the model, have claimed that it exhibits some of the properties a real-world social network is expected to have. Focusing on a homogeneous version of this model, we investigate the existence of zero-one laws for graph properties, e.g., the absence of isolated nodes, graph connectivity and the emergence of triangles. We obtain conditions on the parameters of the model so that these properties occur with high or vanishing probability as the number of nodes becomes unboundedly large. In that regime, we also investigate the property of triadic closure and the nodal degree distribution.
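A minimal sketch of the MAG link-probability rule described above, in the homogeneous binary-attribute case; the affinity values in `theta` are illustrative, not taken from the dissertation:

```python
import numpy as np

# Each node carries L binary attributes; an edge between nodes u and v
# exists independently with probability equal to the product of
# per-attribute affinities theta[a_u, a_v].
theta = np.array([[0.8, 0.4],
                  [0.4, 0.2]])   # illustrative affinity matrix

def link_prob(attrs_u, attrs_v):
    p = 1.0
    for a, b in zip(attrs_u, attrs_v):
        p *= theta[a, b]
    return p

rng = np.random.default_rng(0)
L = 4                            # number of attributes per node
u = rng.integers(0, 2, size=L)
v = rng.integers(0, 2, size=L)
p = link_prob(u, v)              # always in (0, 1]
```

With attributes drawn i.i.d. Bernoulli per node, zero-one laws such as the absence of isolated nodes can then be probed by simulation as the number of nodes grows.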

Relevance: 30.00%

Publisher:

Abstract:

Salinity gradient power (SGP) is the energy that can be obtained from the mixing entropy of two solutions with different salt concentrations. A river estuary, as a place where salt water and fresh water mix, has a huge potential for this renewable energy. In this study, this potential in the estuaries of rivers flowing into the Persian Gulf, and the factors affecting it, are analyzed and assessed. Since most high-discharge rivers are in Asia, this continent, with a potential power of 338 GW, is the second major source of salinity gradient power in the world (Wetsus institute, 2009). The Persian Gulf, with the suitable salinity gradient in its river estuaries, is of particular importance for the extraction of this energy. Considering the total river flow into the Persian Gulf, which is approximately 3486 m3/s, the theoretically extractable power from the salinity gradient in this region is 5.2 GW. Iran, with its numerous rivers along the coast of the Persian Gulf, has a large share of this energy source. For example, among the calculations done on data from three hydrometric stations located on the Arvand River, Khorramshahr Station yields the maximum amount of extractable energy, releasing 1.91 MJ by combining 1.26 m3 of river water with 0.74 m3 of sea water. Considering the average annual discharge of the Arvand River at Khorramshahr hydrometric station, the theoretically extractable power is 955 MW. Other parameters studied in this research are the intrusion length of salt water and its flushing time in the estuary, which have a significant influence on the salinity gradient power. According to calculations done under HWS conditions and the average discharge of the rivers, the maximum salt water intrusion length into an estuary, 41 km, belongs to the Arvand River, and the lowest, 8 km, to the Helle River.
Likewise, the longest salt water flushing time in an estuary, 9.8 days, belongs to the Arvand River, and the shortest, 3.3 days, to the Helle River. The influence of these two parameters in reducing the amount of extractable energy can also be seen in the estuaries of the rivers studied. For example, at the estuary of the Arvand River, over an interval of 8.9 days the salinity gradient power decreases by 9.2%. Another part of this research focuses on the design of a suitable system for extracting electrical energy from the salinity gradient. So far, five methods have been proposed to convert this energy into electricity; among them, the reverse electrodialysis (RED) method and the pressure-retarded osmosis (PRO) method are of special practical importance. In theory, both techniques generate the same amount of energy from given volumes of sea and river water with specified salinity; in practice, the RED technique seems more attractive for power generation using sea water and river water, because it requires a smaller salinity gradient than the PRO method. In addition, the RED method needs no turbine for energy conversion, and electricity generation starts as soon as the two solutions are mixed. In this research, the power density and the efficiency of the generated energy were assessed by designing a physical model: a single-cell reverse electrodialysis battery with a nano-modified heterogeneous membrane measuring 20 cm x 20 cm, which produced a power density of 0.58 W/m2 using river water (1 g NaCl/L) and sea water (30 g NaCl/L) under laboratory conditions. This value was achieved thanks to the nano treatment applied to the membrane and the suitable design of the cell, which increased the system efficiency by 11% compared with non-nano membranes.
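A back-of-the-envelope check of the figures quoted above; variable names are mine, and the garbled unit "1.91M/" in the source is read here as 1.91 MJ:

```python
# Khorramshahr mixing event, as reported in the abstract.
river_vol = 1.26          # m^3 of river water per mixing event
energy_per_mix = 1.91e6   # J released by mixing with 0.74 m^3 of sea water

# Specific energy per m^3 of river water:
energy_per_m3_river = energy_per_mix / river_vol   # ~1.52 MJ/m^3

def sgp_power(discharge_m3_s, specific_energy_j_per_m3):
    """Theoretical salinity-gradient power for a given river discharge."""
    return discharge_m3_s * specific_energy_j_per_m3   # watts

# The Persian Gulf total (3486 m^3/s -> 5.2 GW) implies a comparable
# specific energy, so the two figures in the text are mutually consistent:
implied = 5.2e9 / 3486    # ~1.49 MJ per m^3 of river water
```

The two specific energies agree to within about 2%, which supports the megajoule reading of the garbled unit.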

Relevance: 30.00%

Publisher:

Abstract:

A simple model, based on the maximum energy that an athlete can produce in a small time interval, is used to describe the high jump and the long jump. Conservation of angular momentum is used to explain why an athlete should run horizontally to perform a vertical jump. Our results agree with world records. (c) 2005 American Association of Physics Teachers.
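As a hypothetical illustration of the energy bookkeeping (the numbers and names here are mine, not the paper's): whatever takeoff energy E the athlete produces in the short takeoff interval bounds the rise h of the center of mass through E = m g h.

```python
G = 9.81  # gravitational acceleration, m/s^2

def com_rise(takeoff_energy_j, mass_kg):
    """Center-of-mass rise achievable from a given takeoff energy.

    From E = m * g * h, so h = E / (m * g); drag and technique ignored.
    """
    return takeoff_energy_j / (mass_kg * G)

# e.g. a 70 kg athlete producing ~800 J at takeoff (illustrative values):
h = com_rise(800, 70)   # center-of-mass rise in metres
```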

Relevance: 30.00%

Publisher:

Abstract:

This paper discusses the results and propositions of organizational knowledge management research conducted in the period 2001-2007. This longitudinal study had the unique goal of investigating and analyzing "Knowledge Management" (KM) processes effectively implemented in world-class organizations. The main objective was to investigate and analyze the conceptions, motivations, practices, metrics and results of KM processes implemented in different industries. The first set of studies involved 20 world-class cases reported in the literature and served as a basis for a theoretical framework entitled "KM Integrative Conceptual Mapping Proposition". This theoretical proposal was then tested in a qualitative study in three large organizations in Brazil. The results of the qualitative study validated the mapping proposition and left questions for new research concerning the implementation of a knowledge-based organizational model strategy.

Relevance: 30.00%

Publisher:

Abstract:

Secure computation involves multiple parties computing a common function while keeping their inputs private, and is a growing field of cryptography due to its potential for maintaining privacy guarantees in real-world applications. However, current secure computation protocols are not yet efficient enough to be used in practice. We argue that this is due to much of the research effort being focused on generality rather than specificity. Namely, current research tends to focus on constructing and improving protocols for the strongest notions of security or for an arbitrary number of parties. However, in real-world deployments, these security notions are often too strong, or the number of parties running a protocol would be smaller. In this thesis we make several steps towards bridging the efficiency gap of secure computation by focusing on constructing efficient protocols for specific real-world settings and security models. In particular, we make the following four contributions:

- We show an efficient (when amortized over multiple runs) maliciously secure two-party secure computation (2PC) protocol in the multiple-execution setting, where the same function is computed multiple times by the same pair of parties.
- We improve the efficiency of 2PC protocols in the publicly verifiable covert security model, where a party can cheat with some probability, but if it gets caught then the honest party obtains a certificate proving that the given party cheated.
- We show how to optimize existing 2PC protocols when the function to be computed includes predicate checks on its inputs.
- We demonstrate an efficient maliciously secure protocol in the three-party setting.