Abstract:
An assessment of the status of the Atlantic stock of red drum is conducted using recreational and commercial data from 1986 through 1998. This assessment updates data and analyses from the 1989, 1991, 1992, and 1995 stock assessments of Atlantic coast red drum (Vaughan and Helser, 1990; Vaughan 1992; 1993; 1996). Since 1980, coastwide recreational catches ranged between 762,300 pounds in 1980 and 2,623,900 pounds in 1984, while commercial landings ranged between 60,900 pounds in 1997 and 422,500 pounds in 1984. By weight of fish caught, Atlantic red drum constitute a predominantly recreational fishery (85 to 95% during the 1990s). Commercially, red drum continue to be harvested as part of mixed-species fisheries. Using available length-frequency distributions and age-length keys, recreational and commercial catches are converted to catch in numbers at age. Separable and tuned virtual population analyses are conducted on the catch in numbers at age to obtain estimates of fishing mortality rates and population size (including recruitment to age 1). In turn, these estimates of fishing mortality rates, combined with estimates of growth (length and weight), sex ratios, sexual maturity, and fecundity, are used to estimate yield per recruit, escapement to age 4, and static (or equilibrium) spawning potential ratio (static SPR, based on both female biomass and egg production). Three virtual population analysis (VPA) approaches (separable, spreadsheet, and FADAPT) were applied to catch matrices for two time periods (early: 1986-1991, and late: 1992-1998) and two regions (northern: North Carolina and north; southern: South Carolina through the east coast of Florida). Additional catch matrices were developed based on different treatments of the catch-and-release recreationally caught red drum (B2-type). These approaches included assuming 0% mortality (BASE0) versus 10% mortality for B2 fish.
For the 10% mortality on B2 fish, sizes were assumed to be the same as those of landed fish (BASE1), or to follow the positive difference in size distribution between the early and late periods (DELTA), or to be intermediate between the two (PROP). Hence, a total of 8 catch matrices were developed (2 regions and 4 B2 assumptions for 1986-1998), to which the three VPA approaches were applied. The question of when offshore emigration or reduced availability begins (during or after age 3) continues to be a source of bias that tends to result in overestimates of fishing mortality. Additionally, the continued assumption (Vaughan and Helser, 1990; Vaughan 1992; 1993; 1996) of no fishing mortality on adults (ages 6 and older) causes a bias that results in underestimates of fishing mortality for adult ages (0 versus some positive value). Because of emigration and the effect of the slot limit in the later period, a range of relative exploitations of age-3 to age-2 red drum was considered. Tuning indices for use in the spreadsheet and FADAPT VPAs were developed from the MRFSS and from state indices. The SAFMC Red Drum Assessment Group (Appendix A) favored the FADAPT approach with the catch matrix based on DELTA and a selectivity for age 3 relative to age 2 of 0.70 for the northern region and 0.87 for the southern region. In the northern region, estimates of static SPR increased from about 1.3% for the period 1987-1991 to approximately 18% (15% and 20%) for the period 1992-1998. For the southern region, estimates of static SPR increased from about 0.5% for the period 1988-1991 to approximately 15% for the period 1992-1998. Population models used in this assessment (specifically yield per recruit and static spawning potential ratio) are based on equilibrium assumptions: because no direct estimates are available of the current status of the adult stock, model results imply potential longer-term, equilibrium effects.
Because the current status of the adult stock is unknown, a specific rebuilding schedule cannot be determined. However, the duration of a rebuilding schedule should reflect, in part, a measure of the generation time of the species under consideration. For a long-lived but relatively early-spawning species such as red drum, mean generation time would be on the order of 15 to 20 years based on age-specific egg production. Maximum age is 50 to 60 years for the northern region and about 40 years for the southern region. The ASMFC Red Drum Board's first-phase recovery goal of increasing %SPR to at least 10% appears to have been met. (PDF contains 79 pages)
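The static SPR quantity used above has a simple per-recruit definition: lifetime egg production per recruit under the current fishing mortality schedule, divided by the same quantity with no fishing. The following sketch is purely illustrative — the ages, mortality rates, maturity, and fecundity values are hypothetical and are not taken from the assessment:

```python
# Illustrative sketch of static SPR: egg production per recruit under fishing,
# divided by the unfished value. All numbers below are hypothetical.
import math

def eggs_per_recruit(F_at_age, M=0.2, maturity=None, fecundity=None):
    """Sum survivorship x maturity x fecundity over ages (natural mortality M)."""
    survivors = 1.0
    total = 0.0
    for a in range(len(F_at_age)):
        total += survivors * maturity[a] * fecundity[a]
        survivors *= math.exp(-(M + F_at_age[a]))  # survival to next age
    return total

maturity   = [0.0, 0.1, 0.5, 0.9, 1.0, 1.0]    # fraction mature at age
fecundity  = [0.0, 0.1, 0.4, 0.8, 1.0, 1.2]    # relative egg output at age
F_fished   = [0.0, 0.5, 0.5, 0.35, 0.0, 0.0]   # fishing mortality by age
F_unfished = [0.0] * 6

static_spr = (eggs_per_recruit(F_fished, maturity=maturity, fecundity=fecundity)
              / eggs_per_recruit(F_unfished, maturity=maturity, fecundity=fecundity))
print(f"static SPR = {static_spr:.1%}")
```

With these toy inputs, the ratio falls between 0 and 1 by construction; heavier fishing on the young ages drives it toward 0, no fishing gives exactly 1.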
Abstract:
Life is the result of the execution of molecular programs: like how an embryo is fated to become a human or a whale, or how a person’s appearance is inherited from their parents, many biological phenomena are governed by genetic programs written in DNA molecules. At the core of such programs is the highly reliable base pairing interaction between nucleic acids. DNA nanotechnology exploits the programming power of DNA to build artificial nanostructures, molecular computers, and nanomachines. In particular, DNA origami—which is a simple yet versatile technique that allows one to create various nanoscale shapes and patterns—is at the heart of the technology. In this thesis, I describe the development of programmable self-assembly and reconfiguration of DNA origami nanostructures based on a unique strategy: rather than relying on Watson-Crick base pairing, we developed programmable bonds via the geometric arrangement of stacking interactions, which we termed stacking bonds. We further demonstrated that such bonds can be dynamically reconfigurable.
The first part of this thesis describes the design and implementation of stacking bonds. Our work addresses the fundamental question of whether one can create diverse bond types out of a single kind of attractive interaction—a question first posed implicitly by Francis Crick while seeking a deeper understanding of the origin of life and primitive genetic code. For the creation of multiple specific bonds, we used two different approaches: binary coding and shape coding of geometric arrangement of stacking interaction units, which are called blunt ends. To construct a bond space for each approach, we performed a systematic search using a computer algorithm. We used orthogonal bonds to experimentally implement the connection of five distinct DNA origami nanostructures. We also programmed the bonds to control cis/trans configuration between asymmetric nanostructures.
The second part of this thesis describes the large-scale self-assembly of DNA origami into two-dimensional checkerboard-pattern crystals via surface diffusion. We developed a protocol where the diffusion of DNA origami occurs on a substrate and is dynamically controlled by changing the cationic condition of the system. We used stacking interactions to mediate connections between the origami, because of their potential for reconfiguring during the assembly process. Assembling DNA nanostructures directly on substrate surfaces can benefit nano/microfabrication processes by eliminating a pattern transfer step. At the same time, the use of DNA origami allows high complexity and unique addressability with six-nanometer resolution within each structural unit.
The third part of this thesis describes the use of stacking bonds as dynamically breakable bonds. To break the bonds, we used biological machinery called the ParMRC system, extracted from bacteria. The system ensures that, when a cell divides, each daughter cell gets one copy of the cell's DNA by actively pushing each copy to the opposite poles of the cell. We demonstrate dynamically expandable nanostructures, making stacking bonds a promising candidate for reconfigurable connectors for nanoscale machine parts.
Abstract:
As evolution progresses, developmental changes occur. Genes lose and gain molecular partners, regulatory sequences, and new functions. As a consequence, tissues evolve alternative methods, more or less robust, of developing similar structures. How this occurs is a major question in biology. One way of addressing this question is to examine the developmental and genetic differences between similar species. Several studies of the nematodes Pristionchus pacificus and Oscheius CEW1 have revealed various differences in vulval development from the well-studied C. elegans (e.g., gonad induction, competence-group specification, and gene function).
I approached the question of developmental change in a similar manner by using Caenorhabditis briggsae, a close relative of C. elegans. C. briggsae allows the use of transgenic approaches to determine developmental changes between species. We identified subtle changes in the competence group, in 1° cell specification, and in the vulval lineage.
We also analyzed the let-60 gene in four nematode species. We found conservation of codon identity and exon-intron boundaries, but a lack of an extended 3' untranslated region in Caenorhabditis briggsae.
Abstract:
This thesis describes a series of experimental studies of lead chalcogenide thermoelectric semiconductors, mainly PbSe. Focusing on a well-studied semiconductor and reporting good but not extraordinary zT, this thesis distinguishes itself by answering the following previously unanswered questions: What represents the thermoelectric performance of PbSe? Where does the high zT come from? How (and by how much) can we make it better? For the first question, samples were made with the highest quality. Each transport property was carefully measured, cross-verified, and compared with both historical and contemporary reports to overturn the commonly believed underestimation of zT. For n- and p-type PbSe, zT at 850 K can be 1.1 and 1.0, respectively. For the second question, a systematic approach based on the quality factor B was used. In n-type PbSe, zT benefits from a high-quality conduction band that combines good degeneracy, low band mass, and low deformation potential, whereas zT of p-type PbSe is boosted when two mediocre valence bands converge (in band-edge energy). In both cases the thermal conductivity of the PbSe lattice is inherently low. For the third question, the use of solid-solution lead chalcogenide alloys was first evaluated. Simple criteria were proposed to help quickly evaluate the potential for improving zT by introducing atomic disorder. For both PbTe1-xSex and PbSe1-xSx, the impacts on electron and phonon transport compensate each other; thus, zT in each case was roughly the average of the two binary compounds. In p-type Pb1-xSrxSe alloys an improvement of zT from 1.1 to 1.5 at 900 K was achieved, due to the band-engineering effect that moves the two valence bands closer in energy. To date, making n-type PbSe better has not been accomplished, but a possible strategy is discussed.
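The figure of merit zT discussed above is defined as zT = S²σT/κ, where S is the Seebeck coefficient, σ the electrical conductivity, κ the total thermal conductivity, and T the absolute temperature. A minimal sketch, using round hypothetical numbers merely of the order reported for good PbSe near 850 K (not the thesis's measured values):

```python
# Illustrative: dimensionless thermoelectric figure of merit zT = S^2 * sigma * T / kappa.
# The input values are rough, hypothetical numbers, not measured data.
def figure_of_merit(seebeck_V_per_K, conductivity_S_per_m, kappa_W_per_mK, T_K):
    return seebeck_V_per_K**2 * conductivity_S_per_m * T_K / kappa_W_per_mK

zT = figure_of_merit(seebeck_V_per_K=250e-6,      # 250 uV/K
                     conductivity_S_per_m=2.0e4,  # i.e. 200 S/cm
                     kappa_W_per_mK=1.0,          # low lattice thermal conductivity
                     T_K=850)
print(f"zT = {zT:.2f}")  # prints: zT = 1.06
```

The example makes the levers in the abstract concrete: band engineering raises S²σ (the power factor), while the inherently low lattice thermal conductivity keeps κ small.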
Abstract:
While some of the deepest results in nature are those that give explicit bounds between important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics, there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself, resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally-accepted definition of equilibrium, nor an adequate explanation of how a system with underlying microscopically Hamiltonian dynamics (reversible) settles into a fixed distribution.
Motivated by these physical theories, and perhaps their inconsistencies, in this thesis we use dynamical systems theory to investigate how the very simplest of systems, even with no physical constraints, are characterized by bounds that give limits to the ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device, or of the environment. This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty. Depending on the assumptions we make about active devices, and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems that have non-zero uncertainty lower bounds, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum. We finally also revisit the murky question of how macroscopic dissipation appears from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.
Abstract:
These studies explore how, where, and when variables critical to decision-making are represented in the brain. In order to produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that selects an action from those available. When choosing amongst alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and that these values are then compared in an abstract “value space” in order to produce a decision. Despite much progress, in particular the pinpointing of ventromedial prefrontal cortex (vmPFC) as a region that encodes value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of stimulus type. This confirms that value is represented abstractly, a key tenet of value-based decision-making. However, I also show that stimulus-dependent value representations are present in the brain during decision-making, and I suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.
More broadly speaking, there is both neural and behavioral evidence that two distinct control systems are at work during action selection: the “goal-directed” system, which selects actions based on an internal model of the environment, and the “habitual” system, which generates responses based on antecedent stimuli only. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior requires that outcomes as well as stimuli and actions be processed. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.
The question of whether one or both of these neural systems drives Pavlovian conditioning is less well-studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), it was found that subjects’ reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.
Knowledge of the state of hidden variables in an environment is required for optimal inference regarding the abstract decision structure of a given environment and therefore can be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires the generation and manipulation of an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model fit and comparison process pointed to the use of "belief thresholding". This implies that subjects tended to eliminate low-probability hypotheses regarding the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference consistent with a serial hypothesis testing strategy.
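The “belief thresholding” strategy described above can be sketched as incremental Bayesian updating over a discrete hypothesis space, in which hypotheses whose posterior falls below a cutoff are eliminated and no longer updated. This is a hedged illustration — the hypothesis names, likelihoods, and threshold are hypothetical, not the task or model from Chapter 5:

```python
# Hedged sketch: Bayesian belief update with thresholding. Hypotheses whose
# posterior drops below `threshold` are pruned and the rest renormalised.
def update_beliefs(beliefs, likelihoods, threshold=0.01):
    # beliefs: dict hypothesis -> prior probability
    # likelihoods: dict hypothesis -> P(observation | hypothesis)
    posterior = {h: p * likelihoods[h] for h, p in beliefs.items()}
    z = sum(posterior.values())
    posterior = {h: p / z for h, p in posterior.items()}
    # belief thresholding: eliminate low-probability hypotheses entirely
    kept = {h: p for h, p in posterior.items() if p >= threshold}
    z = sum(kept.values())
    return {h: p / z for h, p in kept.items()}

beliefs = {"A": 1/3, "B": 1/3, "C": 1/3}            # hypothetical hypotheses
beliefs = update_beliefs(beliefs, {"A": 0.8, "B": 0.15, "C": 0.001})
print(beliefs)   # "C" has been pruned; "A" and "B" are renormalised
```

Once pruned, a hypothesis receives no further updates, which is the computational saving the abstract attributes to subjects' serial hypothesis-testing strategy.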
Abstract:
QENS/WINS 2014 - 11th International Conference on Quasielastic Neutron Scattering and 6th International Workshop on Inelastic Neutron Spectrometers / edited by: Frick, B.; Koza, M. M.; Boehm, M.; Mutka, H.
Abstract:
Distribution, movements, and habitat use of small (<46 cm, juveniles and individuals of unknown maturity) striped bass (Morone saxatilis) were investigated with multiple techniques and at multiple spatial scales (surveys and tag-recapture in the estuary and ocean, and telemetry in the estuary) over multiple years to determine the frequency and duration of use of non-natal estuaries. These unique comparisons suggest, at least in New Jersey, that smaller individuals (<20 cm) may disperse from natal estuaries and arrive in non-natal estuaries early in life and take up residence for several years. During this period of estuarine residence, individuals spend all seasons primarily in the low salinity portions of the estuary. At larger sizes, they then leave these non-natal estuaries to begin coastal migrations with those individuals from nurseries in natal estuaries. These composite observations of frequency and duration of habitat use indicate that non-natal estuaries may provide important habitat for a portion of the striped bass population.
Abstract:
Plant growth at extremely high elevations is constrained by high daily thermal amplitude, strong solar radiation and water scarcity. These conditions are particularly harsh in the tropics, where the highest elevation treelines occur. In this environment, the maintenance of a positive carbon balance involves protecting the photosynthetic apparatus and taking advantage of any climatically favourable periods. To characterize photoprotective mechanisms at such high elevations, and particularly to address the question of whether these mechanisms are the same as those previously described in woody plants along extratropical treelines, we have studied photosynthetic responses in Polylepis tarapacana Philippi in the central Andes (18 degrees S) along an elevational gradient from 4300 to 4900 m. For comparative purposes, this gradient has been complemented with a lower elevation site (3700 m) where another Polylepis species (P. rugulosa Bitter) occurs. During the daily cycle, two periods of photosynthetic activity were observed: one during the morning when, despite low temperatures, assimilation was high; and the second starting at noon, when the stomata closed because of a rise in the vapour pressure deficit and thermal dissipation became prevalent over photosynthesis. From dawn to noon there was a decrease in the content of antenna pigments (chlorophyll b and neoxanthin), together with an increase in the content of xanthophyll cycle carotenoids. These results could be caused by a reduction in the antenna size along with an increase in photoprotection. Additionally, photoprotection was enhanced by a partial overnight retention of de-epoxidized xanthophylls. The unique combination of all of these mechanisms made possible the efficient use of the favourable conditions during the morning while still providing enough protection for the rest of the day.
This strategy differs completely from that of extratropical mountain trees, which uncouple light-harvesting and energy-use during long periods of unfavourable, winter conditions.
Abstract:
This paper briefly outlines the implications of making a decision on the most appropriate alternative for carrying out stock assessments and the reasons for previous failures to conserve finfish stocks for sustainable use. The Mathews (1987) approach utilizing Age-Length Catch-Effort Keys (ALCEK) is briefly reviewed, and a suggested overall approach for the assessment of the finfish resources of the Caribbean community is outlined. With recent initiatives towards use of the precautionary approach and reference points, Caribbean community countries are advised to revisit the question of the models to be utilized for the assessment of their fish stocks, paying due attention to the quantity, quality and applicability of data now being collected.
Abstract:
This paper compares a number of different moment-curvature models for cracked concrete sections that contain both steel and external fiber-reinforced polymer (FRP) reinforcement. The question of whether to use a whole-section analysis or one that considers the FRP separately is discussed. Five existing and three new models are compared with test data for moment-curvature or load deflection behavior, and five models are compared with test results for plate-end debonding using a global energy balance approach (GEBA). A proposal is made for the use of one of the simplified models. The availability of a simplified model opens the way to the production of design aids so that the GEBA can be made available to practicing engineers through design guides and parametric studies. Copyright © 2014, American Concrete Institute.
Abstract:
Atomistic pseudopotential quantum mechanical calculations are used to study transport in million-atom nanosized metal-oxide-semiconductor field-effect transistors. In the charge self-consistent calculation, the quantum mechanical eigenstates of closed systems, rather than the scattering states of open systems, are calculated. The question of how to use these eigenstates to simulate a nonequilibrium system, and how to calculate the electric currents, is addressed. Two methods of occupying the electron eigenstates to yield the charge density in a nonequilibrium condition are tested and compared: one is a partition method, the other is a quasi-Fermi-level method. Two methods are also used to evaluate the current: one uses the ballistic and tunneling current approximation, the other uses the drift-diffusion method. (C) 2009 American Institute of Physics. [doi:10.1063/1.3248262]
Abstract:
ROSSI: Emergence of communication in Robots through Sensorimotor and Social Interaction, T. Ziemke, A. Borghi, F. Anelli, C. Gianelli, F. Binkovski, G. Buccino, V. Gallese, M. Huelse, M. Lee, R. Nicoletti, D. Parisi, L. Riggio, A. Tessari, E. Sahin, International Conference on Cognitive Systems (CogSys 2008), University of Karlsruhe, Karlsruhe, Germany, 2008 Sponsorship: EU-FP7
Abstract:
With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and the input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods of optimising either area or timing, while for power consumption optimisation one often employs heuristics that are characteristic of a specific design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: How can we build a design flow which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate into this flow academic tools and methodologies. The proposed design flow is used as a platform for analysing novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: Is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power, and area, and which then allows optimisation algorithms to be applied. In particular, we address the implications of a systematic power optimisation methodology and the potential degradation of other (often conflicting) parameters such as area or the delay of the implementation. Finally, the third question which this thesis attempts to answer is: Is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power and power-driven delay optimisation is proposed in order to obtain balanced delay and power values.
This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is to obtain multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under the zero-delay and a non-zero-delay model. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or the longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for locating a good approximation to the global optimum of a given function in a large search space. We used SA to decide probabilistically between moving from one optimised solution to another, such that the dynamic power is optimised under given delay constraints and the delay is optimised under given power constraints. A good approximation to the globally optimal solution under the energy constraint is obtained. Uniform Cost Search (UCS) is an algorithm for traversing or searching a weighted tree or graph. We used UCS to search within the AIG network for a specific AIG node order in which to apply the reordering rules.
After the reordering rules have been applied, the AIG network is mapped to a netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to a netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. A reduction of 23% in power and 15% in delay with minimal overhead is achieved, compared to the best known ABC results. Our approach has also been applied to a number of processors with combinational and sequential components, and significant savings are achieved.
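The accept/reject mechanics of simulated annealing described above can be sketched generically. This is an illustrative skeleton only — the `cost` and `neighbour` functions below are toy stand-ins, not the thesis's AIG-based power/delay estimation or reordering rules:

```python
# Illustrative simulated-annealing skeleton: accept improving moves always,
# worsening moves with Boltzmann probability exp(-delta/T), cooling T each step.
import math
import random

def anneal(initial, cost, neighbour, T0=1.0, cooling=0.95, steps=500):
    state, best = initial, initial
    T = T0
    for _ in range(steps):
        cand = neighbour(state)
        delta = cost(cand) - cost(state)
        if delta < 0 or random.random() < math.exp(-delta / T):
            state = cand                      # move accepted
        if cost(state) < cost(best):
            best = state                      # track best state seen so far
        T *= cooling                          # geometric cooling schedule
    return best

random.seed(0)

def cost(order):
    # hypothetical surrogate objective over a node ordering; in the thesis this
    # role is played by estimated switching power under a delay constraint
    return sum(abs(v - i) for i, v in enumerate(order))

def neighbour(order):
    # random swap of two positions, standing in for a reordering-rule move
    a, b = random.sample(range(len(order)), 2)
    order = list(order)
    order[a], order[b] = order[b], order[a]
    return order

initial = list(range(8))[::-1]
best = anneal(initial, cost, neighbour)
print(best, cost(best))
```

The constraint handling in the thesis (power optimised under a delay bound, and vice versa) would be folded into `cost`, e.g. by penalising candidates that violate the bound.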
Abstract:
The main research question of this thesis is how grand strategies form. Grand strategy is defined as a state's coherent and consistent pattern of behavior over a long period of time in search of an overarching goal. The political science literature usually explains the formation of grand strategies by using a planning (or design) model. In this dissertation, I use primary sources, interviews with former government officials, and historical scholarship to show that the formation of grand strategy is better understood using a model of emergent learning imported from the business world. My two case studies examine the formation of American grand strategy during the Cold War and the post-Cold War eras. The dissertation concludes that in both these strategic eras the dominant grand strategies were formed primarily by emergent learning rather than flowing from advance designs.