954 results for Model information
Abstract:
Microtubules (MTs) are composed of 13 protofilaments, each of which is a series of two-state tubulin dimers. Within the MT wall, these dimers can be pictured as "lattice" sites analogous to those of a crystal lattice. Based on the pseudo-spin model, two distinct location states of the mobile electron in each dimer are proposed. The MT wall is accordingly described as an anisotropic two-dimensional (2D) pseudo-spin system on a periodic triangular "lattice". Because three different "spin-spin" interactions in each cell recur periodically throughout the MT wall, the system can be shown to be an array of three types of two-pseudo-spin-state dimers. Under these conditions, the processing of quantum information is presented using the scheme developed by Lloyd.
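The lattice picture above can be made concrete with a toy energy function: each dimer is a two-state pseudo-spin (+1/-1) on a periodically wrapped triangular lattice, with a distinct coupling for each of the three bond directions in a cell. The coupling values, lattice height, and ferromagnetic signs below are illustrative assumptions, not quantities from the model; only the 13 columns echo the 13 protofilaments.

```python
import itertools, random

def energy(spins, j1, j2, j3):
    """Sum J_k * s_i * s_j over the three bond directions, with wrap-around."""
    rows, cols = len(spins), len(spins[0])
    e = 0.0
    for r, c in itertools.product(range(rows), range(cols)):
        s = spins[r][c]
        e += j1 * s * spins[r][(c + 1) % cols]            # bond direction 1
        e += j2 * s * spins[(r + 1) % rows][c]            # bond direction 2
        e += j3 * s * spins[(r + 1) % rows][(c + 1) % cols]  # bond direction 3
    return e

rng = random.Random(0)
wall = [[rng.choice((-1, 1)) for _ in range(13)] for _ in range(8)]
aligned = [[1] * 13 for _ in range(8)]
# With all couplings negative, the fully aligned wall minimises the energy.
e_random = energy(wall, -1.0, -0.5, -0.2)
e_aligned = energy(aligned, -1.0, -0.5, -0.2)
```

A random configuration sits strictly above the aligned ground state, which is the classical starting point before any quantum dynamics are layered on.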
Abstract:
In the measurement of the Higgs boson decaying into two photons, the parametrization of an appropriate background model is essential for fitting the Higgs signal mass peak over a continuous background. This diphoton background modeling is crucial in the statistical process of calculating exclusion limits and the significance of observations relative to a background-only hypothesis. It is therefore desirable to know the physical shape of the background mass distribution, as the use of an improper function can bias the observed limits. Using an information-theoretic (I-T) approach for valid inference, we apply the Akaike Information Criterion (AIC) as a measure of the separation of a fitting model from the data. We then implement a multi-model inference ranking method to build the fit model that most closely represents the Standard Model background in 2013 diphoton data recorded by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC). Potential applications and extensions of this model-selection technique are discussed with reference to CMS detector performance measurements as well as potential physics analyses at future detectors.
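The AIC ranking step can be sketched in a few lines. The candidate shapes, log-likelihoods, and parameter counts below are invented placeholders, not the analysis's fitted values; the functions themselves are the standard AIC and Akaike-weight formulas.

```python
import math

def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2 ln(L)."""
    return 2 * k - 2 * log_likelihood

def akaike_weights(aics):
    """Relative support for each model given the data (weights sum to 1)."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Illustrative candidates: (maximised log-likelihood, number of parameters).
candidates = {"power_law": (-1052.0, 2),
              "exponential": (-1050.5, 2),
              "bernstein_4": (-1048.0, 5)}
scores = {name: aic(ll, k) for name, (ll, k) in candidates.items()}
weights = dict(zip(scores, akaike_weights(list(scores.values()))))
best_model = min(scores, key=scores.get)
```

Note how the five-parameter polynomial is penalised despite its higher likelihood, which is exactly the bias-variance trade-off the I-T approach exploits.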
Abstract:
Otto Kelland was a truly unique individual in Newfoundland. During his long life he had several careers, from prison superintendent to instructor at the Marine Institute. Over his lifetime Kelland made hundreds of wooden boat models. They are beautifully hand-crafted and represent the types of watercraft used by fishermen in Newfoundland. The collection of boat models made by Otto Kelland and owned by the Marine Institute was an ideal subject for digitization; in particular, the collection of dories was an ideal group to digitize. They were housed in one cabinet and accompanied by hand-written documents describing each model. The Digital Archives Initiative (DAI) is a "gateway to the learning and research-based cultural resources held by Memorial University of Newfoundland and partnering organizations." The DAI hosts a variety of collections which together reinforce the importance, past and present, of Newfoundland and Labrador's history and culture. I will give an oral presentation of the project followed by a demonstration of the Otto Kelland Dories exhibit on the DAI at Memorial University of Newfoundland. I will be happy to answer questions following my presentation.
Abstract:
There is widespread recognition of the need for better information sharing and provision to improve the viability of end-of-life (EOL) product recovery operations. The emergence of automated data-capture and sharing technologies such as RFID, sensors and networked databases has enhanced the ability to make product information available to recoverers, which will help them make better decisions regarding the choice of recovery option for EOL products. However, these technologies come with a cost attached, and hence the question 'what is their value?' is critical. This paper presents a probabilistic approach to model product recovery decisions and extends the concept of the Bayes factor to quantify the impact of product information on the effectiveness of these decisions. Further, we provide a quantitative examination of the factors that influence the value of product information; this value depends on three factors: (i) the penalties for Type I and Type II errors of judgement regarding product quality; (ii) the prevalent uncertainty regarding product quality; and (iii) the strength of the information in supporting or contradicting the belief. Furthermore, we show that information is not valuable under all circumstances and derive conditions for achieving a positive value of information. © 2010 Taylor & Francis.
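The three ingredients listed above can be sketched as follows: a Bayes-factor update of the belief in product quality, and a decision rule that weighs the Type I penalty (recovering a bad product) against the Type II penalty (scrapping a good one). All numbers are illustrative assumptions, and a full value-of-information analysis would additionally take an expectation over the possible signals.

```python
def posterior(prior, bayes_factor):
    """Update P(product is good) with evidence of strength `bayes_factor`."""
    odds = prior / (1.0 - prior) * bayes_factor
    return odds / (1.0 + odds)

def expected_penalty(p_good, recover, c_type1, c_type2):
    """Type I: recovering a bad product; Type II: scrapping a good one."""
    return (1.0 - p_good) * c_type1 if recover else p_good * c_type2

def decide(p_good, c_type1, c_type2):
    """Recover iff that action has the lower expected penalty."""
    return (expected_penalty(p_good, True, c_type1, c_type2)
            <= expected_penalty(p_good, False, c_type1, c_type2))

# Without information, act on the prior alone (penalties are placeholders).
prior = 0.4
cost_prior = min(expected_penalty(prior, a, 100.0, 30.0) for a in (True, False))
# With information supporting good quality (Bayes factor > 1), the belief
# and hence the best action can change.
post = posterior(prior, bayes_factor=10.0)
cost_post = min(expected_penalty(post, a, 100.0, 30.0) for a in (True, False))
```

Varying the Bayes factor and the two penalties in this toy setup reproduces the paper's qualitative point that information is not automatically valuable.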
Abstract:
The eel, Anguilla anguilla (L.), stock of the River Elbe has decreased severely over recent decades. Detailed knowledge of the stock dynamics in fresh water, and especially of the factors affecting them, is necessary to take effective measures for stock conservation and improvement. The dynamics of the eel stock are modelled based on immigration, stocking, natural mortality and the mortalities caused by fishing, angling, cormorants and hydropower plants. The model estimates the number of emigrating eels. Moreover, it enables study of the sensitivity of the estimates to uncertainty in the source data for the different influencing factors. The model may be used to develop management strategies and to assess the efficiency of different management options.
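The yearly bookkeeping the abstract describes can be sketched as a balance of recruitment (immigration plus stocking) against competing instantaneous mortality sources. The rates and abundances below are invented placeholders, not values calibrated to the Elbe data.

```python
import math

def project_year(stock, immigration, stocking, mortality_rates):
    """Apply competing instantaneous mortality rates to one year's stock."""
    recruited = stock + immigration + stocking
    z = sum(mortality_rates.values())        # total instantaneous mortality
    survivors = recruited * math.exp(-z)
    deaths = recruited - survivors
    # Apportion deaths among causes in proportion to their rates,
    # which is where sensitivity to each factor's uncertainty enters.
    by_cause = {k: deaths * v / z for k, v in mortality_rates.items()}
    return survivors, by_cause

rates = {"natural": 0.14, "fishing": 0.10, "angling": 0.05,
         "cormorant": 0.08, "hydropower": 0.03}
survivors, losses = project_year(stock=100_000, immigration=20_000,
                                 stocking=10_000, mortality_rates=rates)
```

Running this projection over successive years, with survivors of the oldest class counted as emigrating silver eels, gives the skeleton of such a stock model.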
Abstract:
A generalized Bayesian population dynamics model was developed for analysis of historical mark-recapture studies. The Bayesian approach builds upon existing maximum likelihood methods and is useful when substantial uncertainties exist in the data or little information is available about auxiliary parameters such as tag loss and reporting rates. Posterior distributions of movement rates are obtained through Markov chain Monte Carlo (MCMC) simulation and are suitable for use as input to subsequent stock assessment analyses. The mark-recapture model was applied to English sole (Parophrys vetulus) off the west coast of the United States and Canada, and migration rates were estimated to be 2% per month to the north and 4% per month to the south. These posterior parameter distributions and the Bayesian framework for comparing hypotheses can guide fishery scientists in structuring the spatial and temporal complexity of future analyses of this kind. This approach could be easily generalized for application to other species and more data-rich fishery analyses.
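A minimal sketch of the MCMC machinery: a random-walk Metropolis sampler for a single monthly movement probability under a binomial recapture likelihood with a flat prior. The tag counts are invented (chosen to echo the ~4% per month southward rate), and the real model also handles tag loss and reporting rates.

```python
import math, random

def log_posterior(p, moved, total):
    """Binomial log-likelihood with a flat prior on the movement rate p."""
    if not 0.0 < p < 1.0:
        return -math.inf
    return moved * math.log(p) + (total - moved) * math.log(1.0 - p)

def metropolis(moved, total, n_samples=5000, step=0.02, seed=1):
    """Random-walk Metropolis; returns draws after discarding burn-in."""
    rng = random.Random(seed)
    p, samples = 0.5, []
    for _ in range(n_samples):
        proposal = p + rng.gauss(0.0, step)
        log_ratio = (log_posterior(proposal, moved, total)
                     - log_posterior(p, moved, total))
        if math.log(rng.random()) < log_ratio:
            p = proposal
        samples.append(p)
    return samples[n_samples // 2:]

# Illustrative counts: 8 of 200 recaptured tags had moved south.
draws = metropolis(moved=8, total=200)
posterior_mean = sum(draws) / len(draws)
```

The retained draws form the posterior distribution that would be passed on to the stock assessment.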
Abstract:
Many modern stock assessment methods provide the machinery for determining the status of a stock in relation to certain reference points and for estimating how quickly a stock can be rebuilt. However, these methods typically require catch data, which are not always available. We introduce a model-based framework for estimating reference points, stock status, and recovery times in situations where catch data and other measures of absolute abundance are unavailable. The specific estimator developed is essentially an age-structured production model recast in terms relative to pre-exploitation levels. A Bayesian estimation scheme is adopted to allow the incorporation of pertinent auxiliary information such as might be obtained from meta-analyses of similar stocks or anecdotal observations. The approach is applied to the population of goliath grouper (Epinephelus itajara) off southern Florida, for which there are three indices of relative abundance but no reliable catch data. The results confirm anecdotal accounts of a marked decline in abundance during the 1980s followed by a substantial increase after the harvest of goliath grouper was banned in 1990. The ban appears to have reduced fishing pressure to between 10% and 50% of the levels observed during the 1980s. Nevertheless, the predicted fishing mortality rate under the ban appears to remain substantial, perhaps owing to illegal harvest and depth-related release mortality. As a result, the base model predicts that there is less than a 40% chance that the spawning biomass will recover to a level that would produce a 50% spawning potential ratio.
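The abstract's estimator is age-structured; as a much simpler illustration of working "in terms relative to pre-exploitation levels", here is a Schaefer-type production model written for the depletion d_t = B_t / B_0, so no absolute catches are needed. The growth rate, fishing mortalities, and timeline are illustrative assumptions loosely mirroring the decline-then-ban narrative.

```python
def project_depletion(d0, r, f_series):
    """d_{t+1} = d_t + r*d_t*(1 - d_t) - f_t*d_t, with d_t = B_t / B_0."""
    d, path = d0, [d0]
    for f in f_series:
        d = max(d + r * d * (1.0 - d) - f * d, 1e-6)
        path.append(d)
    return path

# Heavy fishing for ten years, then a ban that cuts fishing mortality to
# 20% of its former level (within the 10-50% range cited in the abstract).
f_hist = [0.25] * 10 + [0.05] * 20
path = project_depletion(d0=1.0, r=0.15, f_series=f_hist)
decline, recovery = path[10], path[-1]
```

In the Bayesian version, priors on r and the relative f_t would be combined with the relative-abundance indices to produce posterior recovery probabilities.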
Abstract:
We present a growth analysis model that combines large amounts of environmental data with limited amounts of biological data and apply it to Corbicula japonica. The model uses the maximum-likelihood method with the Akaike information criterion, which provides an objective criterion for model selection. An adequate distribution for describing a single cohort is selected from available probability density functions, which are expressed by location and scale parameters. Daily relative increase rates of the location parameter are expressed by a multivariate logistic function with environmental factors for each day and categorical variables indicating animal ages as independent variables. Daily relative increase rates of the scale parameter are expressed by an equation describing the relationship with the daily relative increase rate of the location parameter. Corbicula japonica grows to a modal shell length of 0.7 mm during the first year in Lake Abashiri. Compared with the attainable maximum size of about 30 mm, the growth of juveniles is extremely slow because their growth is less susceptible to environmental factors until the second winter. The extremely slow growth in Lake Abashiri could be a geographical genetic variation within C. japonica.
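The distribution-selection step can be sketched as follows: fit two candidate location-scale distributions to a single cohort's length sample by maximum likelihood (both happen to have closed-form estimators) and keep the one with the lower AIC. The sample is synthetic, and the candidate pair (normal vs. Laplace) is an illustrative assumption.

```python
import math, random, statistics

def normal_loglik(x, mu, sigma):
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (v - mu) ** 2 / (2 * sigma ** 2) for v in x)

def laplace_loglik(x, mu, b):
    return sum(-math.log(2 * b) - abs(v - mu) / b for v in x)

def aic(loglik, k=2):
    return 2 * k - 2 * loglik

rng = random.Random(0)
lengths = [rng.gauss(0.7, 0.1) for _ in range(300)]  # shell lengths, mm

# Closed-form maximum-likelihood estimates for each candidate.
mu_n = statistics.fmean(lengths)
sigma_n = statistics.pstdev(lengths)
mu_l = statistics.median(lengths)
b_l = statistics.fmean(abs(v - mu_l) for v in lengths)

aic_normal = aic(normal_loglik(lengths, mu_n, sigma_n))
aic_laplace = aic(laplace_loglik(lengths, mu_l, b_l))
best = "normal" if aic_normal < aic_laplace else "laplace"
```

In the full model, the fitted location and scale parameters would then be driven day by day by the environmental covariates.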
Abstract:
A density prediction model for juvenile brown shrimp (Farfantepenaeus aztecus) was developed by using three bottom types, five salinity zones, and four seasons to quantify patterns of habitat use in Galveston Bay, Texas. Sixteen years of quantitative density data were used. Bottom types were vegetated marsh edge, submerged aquatic vegetation, and shallow nonvegetated bottom. Multiple regression was used to develop density estimates, and the resultant formula was then coupled with a geographical information system (GIS) to provide a spatial mosaic (map) of predicted habitat use. Results indicated that juvenile brown shrimp (<100 mm) selected vegetated habitats in salinities of 15−25 ppt and that seagrasses were selected over marsh edge where they co-occurred. Our results provide a spatially resolved estimate of high-density areas that will help designate essential fish habitat (EFH) in Galveston Bay. In addition, using this modeling technique, we were able to provide an estimate of the overall population of juvenile brown shrimp (<100 mm) in shallow water habitats within the bay of approximately 1.3 billion. Furthermore, the geographic range of the model was assessed by plotting observed (actual) versus expected (model) brown shrimp densities in three other Texas bays. Similar habitat-use patterns were observed in all three bays—each having a coefficient of determination >0.50. These results indicate that this model may have a broader geographic application and is a plausible approach to refining current EFH designations for all Gulf of Mexico estuaries with similar geomorphological and hydrological characteristics.
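The prediction-and-mapping pipeline can be sketched as an additive regression on dummy variables for bottom type, salinity zone, and season (on a log scale), with density times area summed over map cells to give a bay-wide total. Every coefficient, category, and cell area below is a made-up placeholder, not a fitted value from the study.

```python
import math

COEF = {"intercept": 0.5,
        "bottom": {"marsh_edge": 1.2, "seagrass": 1.5, "nonvegetated": 0.0},
        "salinity": {"0-5": -0.8, "5-15": 0.2, "15-25": 0.6, ">25": 0.1},
        "season": {"spring": 0.9, "summer": 0.4, "fall": 0.0, "winter": -1.1}}

def predicted_density(bottom, salinity, season):
    """Shrimp per square metre, back-transformed from the log scale."""
    log_d = (COEF["intercept"] + COEF["bottom"][bottom]
             + COEF["salinity"][salinity] + COEF["season"][season])
    return math.exp(log_d)

# A toy "GIS mosaic": (bottom type, salinity zone, area in m^2) per cell.
cells = [("seagrass", "15-25", 4.0e6),
         ("marsh_edge", "15-25", 2.5e6),
         ("nonvegetated", "5-15", 9.0e6)]
total = sum(area * predicted_density(b, s, "spring") for b, s, area in cells)
```

The same predict-then-sum step over a real habitat mosaic is what yields a bay-wide population estimate like the 1.3 billion figure.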
Abstract:
A parallel processing network derived from Kanerva's associative memory theory (Kanerva, 1984) is shown to be able to train rapidly on connected speech data and recognize further speech data with a label error rate of 0.68%. This modified Kanerva model can be trained substantially faster than other networks with comparable pattern discrimination properties. Kanerva presented his theory of a self-propagating search in 1984, and showed theoretically that large-scale versions of his model would have powerful pattern matching properties. This paper describes how the design for the modified Kanerva model is derived from Kanerva's original theory. Several designs are tested to discover which form may be implemented fastest while still maintaining versatile recognition performance. A method is developed to deal with the time-varying nature of the speech signal by recognizing static patterns together with a fixed quantity of contextual information. In order to recognize speech features in different contexts it is necessary for a network to be able to model disjoint pattern classes. This type of modelling cannot be performed by a single layer of links. Network research was once held back by the inability of single-layer networks to solve this sort of problem, and the lack of a training algorithm for multi-layer networks. Rumelhart, Hinton & Williams (1985) provided one solution by demonstrating the "back propagation" training algorithm for multi-layer networks. A second alternative is used in the modified Kanerva model. A non-linear fixed transformation maps the pattern space into a space of higher dimensionality in which the speech features are linearly separable. A single-layer network may then be used to perform the recognition. The advantage of this solution over the other using multi-layer networks lies in the greater power and speed of the single-layer network training algorithm. © 1989.
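The two-stage idea can be sketched as follows: a fixed, untrained random transform (hard address decoders that fire when the input lies within a Hamming radius) followed by a single trainable layer. The unit counts, learning rate, and the toy task (XOR, which a single layer cannot solve on raw inputs) are illustrative assumptions, not the paper's speech configuration.

```python
import random

class ModifiedKanerva:
    def __init__(self, n_in, n_hidden, radius, n_out, seed=0):
        rng = random.Random(seed)
        # Fixed random address decoders: the non-linear expansion.
        self.decoders = [[rng.randint(0, 1) for _ in range(n_in)]
                         for _ in range(n_hidden)]
        self.radius = radius
        self.w = [[0.0] * n_hidden for _ in range(n_out)]

    def hidden(self, x):
        """Decoder j fires iff x lies within its Hamming radius."""
        return [1 if sum(a != b for a, b in zip(d, x)) <= self.radius else 0
                for d in self.decoders]

    def predict(self, x):
        h = self.hidden(x)
        return [sum(w * a for w, a in zip(row, h)) for row in self.w]

    def train(self, data, epochs=40, lr=0.05):
        """Delta rule applied to the single trainable layer only."""
        for _ in range(epochs):
            for x, y in data:
                h, out = self.hidden(x), self.predict(x)
                for row, t, o in zip(self.w, y, out):
                    for j, hj in enumerate(h):
                        if hj:
                            row[j] += lr * (t - o)

# XOR becomes linearly separable after the fixed expansion to 64 units.
xor = [((0, 0), (0,)), ((0, 1), (1,)), ((1, 0), (1,)), ((1, 1), (0,))]
net = ModifiedKanerva(n_in=2, n_hidden=64, radius=0, n_out=1)
net.train(xor)
preds = [round(net.predict(x)[0]) for x, _ in xor]
```

Only the output weights are ever updated, which is where the speed advantage over back-propagated multi-layer networks comes from.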
Abstract:
Effective dialogue management is critically dependent on the information that is encoded in the dialogue state. In order to deploy reinforcement learning for policy optimization, dialogue must be modeled as a Markov Decision Process. This requires that the dialogue state encode all relevant information obtained during the dialogue prior to that state. This can be achieved by combining the user goal, the dialogue history, and the last user action to form the dialogue state. In addition, to gain robustness to input errors, dialogue must be modeled as a Partially Observable Markov Decision Process (POMDP) and hence, a distribution over all possible states must be maintained at every dialogue turn. This poses a potential computational limitation since there can be a very large number of dialogue states. The Hidden Information State model provides a principled way of ensuring tractability in a POMDP-based dialogue model. The key feature of this model is the grouping of user goals into partitions that are dynamically built during the dialogue. In this article, we extend this model further to incorporate the notion of complements. This allows for a more complex user goal to be represented, and it enables an effective pruning technique to be implemented that preserves the overall system performance within a limited computational resource more effectively than existing approaches. © 2011 ACM.
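The partition mechanism can be sketched as follows: the belief starts as a single partition covering all user goals and splits lazily as slot values are mentioned, so the distribution never enumerates the full goal space. The toy domain, split prior, and confusion probability are illustrative; the unsplit sibling below loosely stands in for the complement ("slot != value"), which the article's extension represents explicitly.

```python
class Partition:
    def __init__(self, constraints, prob):
        self.constraints = dict(constraints)   # slot -> committed value
        self.prob = prob

def split(partitions, slot, value, prior):
    """Split every partition that has not yet committed to `slot`."""
    out = []
    for p in partitions:
        if slot in p.constraints:
            out.append(p)
        else:
            out.append(Partition({**p.constraints, slot: value},
                                 p.prob * prior))
            # Sibling partition: the complement, i.e. `slot != value`.
            out.append(Partition(p.constraints, p.prob * (1.0 - prior)))
    return out

def update(partitions, slot, value, p_correct=0.8):
    """Reweight after a (possibly misrecognized) observation of slot=value."""
    for p in partitions:
        match = p.constraints.get(slot) == value
        p.prob *= p_correct if match else (1.0 - p_correct)
    total = sum(p.prob for p in partitions)
    for p in partitions:
        p.prob /= total
    return partitions

belief = [Partition({}, 1.0)]                     # one root partition
belief = split(belief, "food", "chinese", prior=1 / 3)
belief = update(belief, "food", "chinese")        # user said "chinese"
top = max(belief, key=lambda p: p.prob)
```

Pruning then amounts to discarding or merging partitions whose probability falls below a threshold, which is where the complement representation pays off.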
Abstract:
Forest mapping over mountainous terrain is difficult because of high relief. Although digital elevation models (DEMs) are often useful for improving mapping accuracy, high-quality DEMs are seldom available over large areas, especially in developing countries.
Abstract:
Any linearised theory of the initiation of friction-excited vibration via instability of the state of steady sliding requires information about the dynamic friction force in the form of a frequency response function for sliding friction. Recent measurements of this function for an interface consisting of a nylon pin against a glass disc are used to probe the underlying constitutive law. Results are compared to linearised predictions from the simplest rate-state model of friction, and a rate-temperature model. In both cases the observed variation with frequency is not compatible with the model predictions, although there are some significant points of similarity. The most striking result relates to variation of the normal load: any theory embodying the Coulomb relation F∝N would predict behaviour entirely at variance with the measurements, even though the steady friction force obtained during the same measurements does follow the Coulomb law. © 2011 Elsevier Ltd. All rights reserved.
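For concreteness, here is the linearised frequency response of the classic Dieterich rate-and-state law about steady sliding, the kind of prediction the measurements are compared against. The parameter values are illustrative placeholders, not the nylon-on-glass fit.

```python
def rate_state_response(omega, a, b, v0, d_c, normal_load):
    """Linearised friction-force response to velocity at frequency omega.

    mu(v, theta) = mu0 + a*ln(v/v0) + b*ln(v0*theta/d_c),
    d(theta)/dt  = 1 - v*theta/d_c,  linearised about steady sliding at v0.
    """
    s = 1j * omega
    # Transfer function d(mu)/d(v) from the linearised state equation.
    h = (a / v0) - (b / v0) / (1.0 + s * d_c / v0)
    return normal_load * h

# At low frequency the state term dominates: velocity weakening when b > a.
low = rate_state_response(1e-4, a=0.01, b=0.02, v0=1.0, d_c=1e-2,
                          normal_load=10.0)
# At high frequency the state variable cannot follow, and only the direct
# effect `a` survives.
high = rate_state_response(1e4, a=0.01, b=0.02, v0=1.0, d_c=1e-2,
                           normal_load=10.0)
```

The sign change of the real part between the two limits is the qualitative signature such models predict, and it is this frequency dependence that the measured response is tested against.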
Abstract:
Recent studies examining adaptation to unexpected changes in the mechanical environment highlight the use of position error in the adaptation process. However, force information is also available. In this chapter, we examine adaptation processes in three separate studies where the mechanical environment was changed intermittently. We compare the expected consequences of using position error and force information in the changes to motor commands following a change in the mechanical environment. In general, our results support the use of position error over force information and are consistent with current computational models of motor learning. However, in situations where the change in the mechanical environment eliminates position error the central nervous system does not necessarily respond as would be predicted by these models. We suggest that it is necessary to take into account the statistics of prior experience to account for our observations. Another deficiency in these models is the absence of a mechanism for modulating limb mechanical impedance during adaptation. We propose a relatively simple computational model based on reflex responses to perturbations which is capable of accounting for iterative changes in temporal patterns of muscle co-activation.
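The proposed mechanism can be sketched as two coupled updates across trials: an error-driven change to the feedforward command, and a reflex-driven co-activation (impedance) term that rises after a perturbation and decays when errors shrink. All gains, the decay constant, and the perturbation schedule are illustrative assumptions.

```python
def simulate(trials, perturbation, lr=0.5, coact_gain=0.8, decay=0.6):
    """Returns per-trial (position error, co-activation level)."""
    command, coact, history = 0.0, 0.0, []
    for t in range(trials):
        load = perturbation(t)
        # Higher impedance (co-activation) attenuates the effect of a
        # mismatch between the load and the feedforward command.
        error = (load - command) / (1.0 + coact)
        history.append((error, coact))
        command += lr * error                             # error-driven
        coact = decay * coact + coact_gain * abs(error)   # reflex-driven
    return history

# A force field switched on at trial 5 and off again at trial 15.
hist = simulate(20, lambda t: 1.0 if 5 <= t < 15 else 0.0)
first_error = hist[5][0]    # large error on first exposure
later_error = hist[9][0]    # smaller error after several exposures
```

The model reproduces the qualitative pattern described in the chapter: a transient rise in co-activation after each environmental change, decaying as the feedforward command adapts.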
Abstract:
The Internet of Things (IOT) concept and enabling technologies such as RFID offer the prospect of linking the real world of physical objects with the virtual world of information technology to improve visibility and traceability information within supply chains and across the entire lifecycles of products, as well as enabling more intuitive interactions and greater automation possibilities. There is a huge potential for savings through process optimization and profit generation within the IOT, but the sharing of financial benefits across companies remains an unsolved issue. Existing approaches towards sharing of costs and benefits have failed to scale so far. The integration of payment solutions into the IOT architecture could solve this problem. We have reviewed different possible levels of integration, and multiple payment solutions have been researched. Finally, we have developed a model that meets the requirements of the IOT in relation to openness and scalability. It supports both hardware-centric and software-centric approaches to integration of payment solutions with the IOT. Different requirements concerning payment solutions within the IOT have been defined and considered in the proposed model. Possible solution providers include telcos, e-payment service providers and new players, as well as banks and standardization bodies. The proposed model of integrating the Internet of Things with payment solutions will lower the barrier to invoicing for the more granular visibility information generated using the IOT. Thus, it has the potential to enable recovery of the necessary investments in IOT infrastructure and accelerate adoption of the IOT, especially for projects that are only viable when multiple benefits throughout the supply chain must be accumulated in order to achieve a Return on Investment (ROI). In the long term, it may enable IT departments to become profit centres instead of cost centres. © 2010 - IOS Press and the authors. All rights reserved.