Abstract:
These studies explore how, where, and when variables critical to decision-making are represented in the brain. To produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that selects an action from those available. When choosing among alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and then compared in an abstract “value space” in order to produce a decision. Despite much progress, in particular the identification of the ventromedial prefrontal cortex (vmPFC) as a region that encodes value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of stimulus type. This confirms that value is represented in the abstract, a key tenet of value-based decision-making that had remained an open question. However, I also show that stimulus-dependent value representations are present in the brain during decision-making, and I suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.
More broadly, there is both neural and behavioral evidence that two distinct control systems are at work during action selection: the “goal-directed” system, which selects actions based on an internal model of the environment, and the “habitual” system, which generates responses based on antecedent stimuli only. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior requires that outcomes, as well as stimuli and actions, be processed. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.
The question of whether one or both of these neural systems drives Pavlovian conditioning is less well studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple but non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), I found that subjects’ reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.
Knowledge of the state of hidden variables is required for optimal inference regarding the abstract decision structure of an environment, and can therefore be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires the generation and manipulation of an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model-fitting and comparison process pointed to the use of "belief thresholding": subjects tended to eliminate low-probability hypotheses regarding the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference, consistent with a serial hypothesis-testing strategy.
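The belief-thresholding idea described above can be sketched as a pruning step added to an incremental Bayesian update. This is a minimal illustration under assumed names and values (the discrete hypothesis space, threshold, and function name are invented), not the model actually fitted in Chapter 5:

```python
import numpy as np

def update_beliefs(prior, likelihood, threshold=0.05):
    """One Bayesian update followed by belief thresholding:
    hypotheses whose posterior probability falls below `threshold`
    are eliminated (set to zero) and the surviving beliefs are
    renormalized, so those hypotheses are never updated again."""
    posterior = prior * likelihood
    posterior /= posterior.sum()
    posterior[posterior < threshold] = 0.0  # prune low-probability hypotheses
    return posterior / posterior.sum()

# Three competing hypotheses about a hidden state; the third becomes
# implausible after one observation and is dropped from the model.
beliefs = np.array([0.4, 0.4, 0.2])
likelihood = np.array([0.6, 0.35, 0.05])
beliefs = update_beliefs(beliefs, likelihood)
```

After the update, the third hypothesis carries exactly zero probability and the remaining mass is renormalized over the survivors.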
Abstract:
The Inter-American Tropical Tuna Commission (IATTC) staff has been sampling the size distributions of tunas in the eastern Pacific Ocean (EPO) since 1954, and the species composition of the catches since 2000. The IATTC staff use the data from the species-composition samples, in conjunction with observer and/or logbook data and unloading data from the canneries, to estimate the total annual catches of yellowfin (Thunnus albacares), skipjack (Katsuwonus pelamis), and bigeye (Thunnus obesus) tunas. These sample data are collected under a stratified sampling design. I propose an update of the stratification of the EPO into more homogeneous areas in order to reduce the variance of the estimates of the total annual catches and to incorporate the geographical shifts resulting from the expansion of the floating-object fishery during the 1990s. The sampling model used by the IATTC is a stratified two-stage (cluster) random sampling design with first-stage units of varying (unequal) size. The strata are month, area, and set type. Wells, the first cluster stage, are selected for sampling only if all of the fish they contain were caught in the same month, area, and set type. Fish, the second cluster stage, are sampled for lengths and, independently, for species composition of the catch. The EPO is divided into 13 sampling areas, defined in 1968 on the basis of the catch distributions of yellowfin and skipjack tunas. This area stratification does not reflect the multi-species, multi-set-type fishery of today. In order to define more homogeneous areas, I used agglomerative cluster analysis to look for groupings in the size data and in the catch and effort data for 2000–2006. I plotted the results from both datasets against the IATTC Sampling Areas, and then created new areas. I also used the results of the cluster analysis to update the substitution scheme for strata with catch but no sample.
I then calculated the total annual catch (and its variance) by species, stratifying the data into the new Proposed Sampling Areas, and compared the results to those reported by the IATTC. Re-stratifying the areas produced smaller variances of the catch estimates for some species in some years, but the reductions were not statistically significant.
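The variance comparison above can be illustrated with a simplified stratified estimator. The sketch below assumes equal well sizes and a plain mean-of-proportions expansion per stratum, which omits the unequal first-stage-unit weighting of the actual IATTC estimator; all numbers and names are invented:

```python
import numpy as np

def stratified_total(strata):
    """Estimate one species' total catch and its variance under
    stratified sampling. Each stratum supplies its known total catch
    weight and the sampled per-well species proportions; the species
    total is the stratum catch times the mean sampled proportion,
    and independent strata add their variance contributions."""
    total, variance = 0.0, 0.0
    for catch_weight, proportions in strata:
        p = np.asarray(proportions, dtype=float)
        total += catch_weight * p.mean()
        if len(p) > 1:
            # between-well variance of the mean proportion, scaled
            # to the stratum's catch weight
            variance += (catch_weight ** 2) * p.var(ddof=1) / len(p)
    return total, variance

# Two illustrative strata (month/area/set-type cells): known stratum
# catch in tonnes and sampled per-well proportions of one species.
strata = [(500.0, [0.60, 0.70, 0.65]), (300.0, [0.20, 0.25])]
est, var = stratified_total(strata)
```

Re-stratification into more homogeneous areas lowers the within-stratum spread of the sampled proportions, which shrinks the variance term without changing the expansion itself.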
Abstract:
Background: The highly demanding computational requirements of protein motion simulation make it difficult to obtain information about protein motion. On the one hand, molecular dynamics simulation requires huge computational resources to achieve satisfactory motion simulations. On the other hand, less accurate procedures, such as interpolation methods, do not generate realistic morphs from the kinematic point of view. Analyzing a protein's movement is very similar to analyzing a serial robot; thus, it is possible to treat the protein chain as a serial mechanism composed of rotational degrees of freedom. Recently, based on this hypothesis, new methodologies based on mechanism and robot kinematics have arisen to simulate protein motion. The probabilistic roadmap method, which discretizes the protein configurational space against a scoring function, and the kinetostatic compliance method, which minimizes the torques that appear in bonds, aim to simulate protein motion at a reduced computational cost. Results: In this paper a new viewpoint on protein motion simulation, based on mechanism kinematics, is presented. The paper describes a set of methodologies combining different techniques, such as structure normalization processes, simulation algorithms, and secondary-structure detection procedures. The combination of all these procedures makes it possible to obtain kinematic morphs of proteins with a very good computational cost-to-error ratio, while maintaining the biological meaning of the obtained structures and the kinematic viability of the obtained motion. Conclusions: The procedure presented in this paper implements different modules to simulate the conformational change a protein undergoes when exerting its function. The combination of a main simulation procedure assisted by a secondary-structure process and a side-chain orientation strategy yields fast and reliable simulations of protein motion.
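The serial-mechanism view of the protein chain can be illustrated with a toy forward-kinematics routine. The planar chain, fixed link lengths, and function names below are simplifying assumptions for illustration (real backbones rotate about 3D dihedral axes), not the method of the paper:

```python
import numpy as np

def chain_positions(bond_lengths, joint_angles):
    """Forward kinematics of an idealized planar serial chain: each
    link has a fixed length and is rotated by the cumulative joint
    angle, mimicking a protein backbone treated as a serial mechanism
    with purely rotational degrees of freedom."""
    positions = [np.zeros(2)]
    heading = 0.0
    for length, angle in zip(bond_lengths, joint_angles):
        heading += angle  # each joint adds a relative rotation
        step = length * np.array([np.cos(heading), np.sin(heading)])
        positions.append(positions[-1] + step)
    return np.array(positions)

# A 4-link chain: varying only the joint angles moves the chain while
# every bond length is preserved exactly, i.e. the motion is
# kinematically viable by construction.
conformation = chain_positions([1.5] * 4, [0.0, 0.3, -0.2, 0.4])
```

Interpolating the joint angles (rather than the Cartesian coordinates) between two conformations is what keeps intermediate morphs kinematically consistent, which plain coordinate interpolation cannot guarantee.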
Abstract:
This document describes an update of the implementation of the J48Consolidated class within the WEKA platform. The J48Consolidated class implements the CTC algorithm [2][3], which builds a single decision tree based on a set of samples. The J48Consolidated class extends WEKA's J48 class, which implements the well-known C4.5 algorithm. This implementation was described in the technical report "J48Consolidated: An implementation of CTC algorithm for WEKA". The main, but not the only, change in this update is the integration of the notion of coverage in order to determine the number of samples to be generated to build a consolidated tree. We define coverage as the percentage of examples of the training sample present in, or covered by, the set of generated subsamples. Thus, depending on the type of samples used, more or fewer samples will be needed to achieve a specific coverage value.
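The relationship between the number of generated subsamples and coverage can be sketched as follows. The sampling scheme (bootstrap-style draws with replacement) and all names below are illustrative assumptions, not the actual Java implementation inside WEKA:

```python
import random

def coverage(train_indices, subsamples):
    """Coverage as defined above: the fraction of training examples
    that appear in at least one of the generated subsamples."""
    covered = set().union(*subsamples)
    return len(covered & set(train_indices)) / len(train_indices)

def samples_for_coverage(train_indices, sample_size, target, rng,
                         max_samples=1000):
    """Keep drawing subsamples (with replacement, as a stand-in for
    whatever resampling type is configured) until the target coverage
    is reached; returns the list of generated subsamples."""
    subsamples = []
    while (coverage(train_indices, subsamples) < target
           and len(subsamples) < max_samples):
        subsamples.append(set(rng.choices(train_indices, k=sample_size)))
    return subsamples

rng = random.Random(0)
train = list(range(100))
subs = samples_for_coverage(train, sample_size=50, target=0.99, rng=rng)
```

Smaller subsamples (or more skewed resampling types) cover fewer distinct training examples per draw, so more of them are needed to reach the same coverage, which is exactly why the sample count is now derived from coverage rather than fixed in advance.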
Abstract:
The findings of a survey conducted on Lake Kariba are presented. The survey was intended to update the basic statistics of the inshore fishery from a previous survey conducted in 1993, and also to collect socioeconomic data regarding the artisanal fishery. The 1998 frame survey provided information on fishers, both for the whole fishery and per village, including changes in their numbers, socio-economic characteristics, and the number of fishing units and gear.
Abstract:
The purpose of this article is to update and build on the approximately 10,000-item collection of the Harbor Branch Oceanographic Institute Library. The article presents a history of Harbor Branch and its library, together with a literature review outlining the collection development methods of other marine science libraries and academic libraries. Brief histories of three marine science libraries are related, and a comparative table is constructed to compare the Harbor Branch Library with those three libraries. The methodology used to create the table is explained; the table is then shown and analyzed, and its results are discussed. Finally, some recommendations for improving the Harbor Branch Library are presented.
Abstract:
The activities of the Strategy on International Fisheries Research concerning information and aquaculture in Asia, Sub-Saharan Africa, and Latin and North America are presented.
Abstract:
Some interesting ideas on improving the cost-effectiveness of feeding in semi-intensive finfish aquaculture are presented.
Abstract:
Length-weight relationships are computed for 316 reef and lagoon fish species from New Caledonia (SW Pacific Ocean) belonging to 68 families. A total of 43,750 individuals was used for this purpose. Fish were sampled by various techniques, such as rotenone poisoning, handline and bottom-longline fishing, gill and trammel nets, and trawling, in various biotopes (coral reefs, lagoon bottoms, and mangroves).
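Length-weight relationships of this kind are conventionally expressed as W = aL^b and fitted by linear regression on log-transformed data (log W = log a + b log L). The sketch below uses invented data, not the New Caledonia measurements:

```python
import numpy as np

def fit_length_weight(lengths_cm, weights_g):
    """Fit W = a * L^b by ordinary least squares on the
    log-transformed relationship log W = log a + b * log L."""
    slope, intercept = np.polyfit(np.log(lengths_cm),
                                  np.log(weights_g), 1)
    return np.exp(intercept), slope  # (a, b)

# Synthetic exact data generated with a = 0.02, b = 3.0
# (b = 3 corresponds to isometric growth).
lengths = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
weights = 0.02 * lengths ** 3.0
a, b = fit_length_weight(lengths, weights)
```

With noise-free data the regression recovers the generating parameters exactly; with real measurements, b typically falls near 3, and departures from 3 indicate allometric growth.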
Abstract:
A reassessment of the estimates of growth, mortality, and recruitment patterns of Nile perch, Lates niloticus, was made based on data from commercial landings collected during the Catch Assessment Survey Programme. Two sets of length-frequency data, one each from the beach-seine and hook-and-line fisheries, were analyzed. Values of L∞ = 169 and 230 cm TL and K = 0.18 yr⁻¹ and 0.195 yr⁻¹ were obtained. The total mortality estimates from the catch-curve analysis were Z = 0.72 yr⁻¹ and 0.94 yr⁻¹, respectively, with a natural mortality M of about 0.35 for a mean environmental temperature of 27°C. The highest recruitment peak was in November, December, and January, with a minor one in June, indicating recruitment of two cohorts per year. These results are discussed and compared with previously available information on L. niloticus in Lake Victoria.
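The growth parameters reported above plug into the von Bertalanffy growth function, L(t) = L∞(1 − e^(−K(t − t₀))). The sketch below uses the beach-seine estimates, with t₀ assumed to be zero purely for illustration (it is not reported in the abstract):

```python
import numpy as np

def vbgf_length(age_yr, L_inf, K, t0=0.0):
    """von Bertalanffy growth function: predicted length at age,
    rising asymptotically toward L_inf at rate K (yr^-1)."""
    return L_inf * (1.0 - np.exp(-K * (age_yr - t0)))

# Beach-seine estimates from the abstract: L_inf = 169 cm TL,
# K = 0.18 yr^-1; t0 = 0 is an assumption.
ages = np.array([1.0, 5.0, 10.0, 20.0])
lengths = vbgf_length(ages, L_inf=169.0, K=0.18)
```

Predicted lengths increase monotonically with age but never reach L∞, which is why the two gear-specific parameter sets (169 vs 230 cm, 0.18 vs 0.195 yr⁻¹) imply noticeably different lengths-at-age for older fish.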