954 results for Model information
Abstract:
Primates must navigate complex social landscapes in their daily lives: gathering information from and about others, competing with others for food and mates, and cooperating to obtain rewards as well. Gaze-following often provides important clues as to what others see, know, or will do; using information about social attention is thus crucial for primates to be competent social actors. However, the cognitive bases of the gaze-following behaviors that primates exhibit appear to vary widely across species. The ultimate challenge of such analyses will therefore be to understand why such different cognitive mechanisms have evolved across species.
Abstract:
An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.
This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.
On-demand digital-print service is a representative enterprise requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital-print-service provider (PSP), to evaluate our optimization algorithms.
In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable for a high volume of orders and it provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.
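The dispatch-sequence optimization described above can be illustrated with a plain (non-incremental) permutation genetic algorithm; the thesis's IGA adds incremental re-scheduling on top of this idea. The greedy `makespan` fitness, the order-crossover operator, and the population parameters below are illustrative assumptions, not RPI's actual production model.

```python
import random

def makespan(sequence, durations, n_machines):
    """Dispatch orders in the given sequence to the earliest-free machine."""
    machines = [0.0] * n_machines
    for order in sequence:
        i = machines.index(min(machines))
        machines[i] += durations[order]
    return max(machines)

def order_crossover(p1, p2):
    """OX: copy a slice from p1, fill the remaining genes in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[a:b])
    rest = [g for g in p2 if g not in hole]
    return rest[:a] + p1[a:b] + rest[a:]

def evolve(durations, n_machines, pop_size=30, generations=100):
    """Evolve a dispatch sequence that minimizes makespan."""
    n = len(durations)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: makespan(s, durations, n_machines))
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            child = order_crossover(*random.sample(elite, 2))
            if random.random() < 0.2:          # occasional swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda s: makespan(s, durations, n_machines))
```

An incremental variant would seed the initial population with the previous schedule instead of random permutations when new orders arrive, which is where the scalability claim above comes from.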
We next discuss analysis and prediction of different attributes involved in hierarchical components of an enterprise. We start from a study of the fundamental processes related to real-time prediction. Our process-execution time and process-status prediction models integrate statistical methods with machine-learning algorithms. In addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, these models also perform a probabilistic estimation of the predicted status. An order generally consists of multiple serial and parallel processes. We next introduce an order-fulfillment prediction model that combines the advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce the enterprise's late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis,
and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.
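The component-wise strategy above can be sketched with a toy additive decomposition: estimate a periodic component from phase means, model the remainder separately, and aggregate the two forecasts. The phase-mean seasonal estimate and the naive mean model for the remainder are simplifying stand-ins for the univariate and multivariate models described in the abstract.

```python
def decompose(series, period):
    """Split a series into a periodic component (per-phase means) and a remainder."""
    sums = [0.0] * period
    counts = [0] * period
    for i, v in enumerate(series):
        sums[i % period] += v
        counts[i % period] += 1
    seasonal = [s / c for s, c in zip(sums, counts)]
    remainder = [v - seasonal[i % period] for i, v in enumerate(series)]
    return seasonal, remainder

def forecast(series, period, steps):
    """Predict each component separately, then aggregate the predictions."""
    seasonal, remainder = decompose(series, period)
    level = sum(remainder) / len(remainder)   # naive model for the remainder
    n = len(series)
    return [level + seasonal[(n + k) % period] for k in range(steps)]
```

In the thesis's setting each component would get its own tuned model (and correlated components a multivariate one); the aggregation step is the part this sketch reflects faithfully.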
In summary, this thesis research has led to a set of characterization, optimization, and prediction tools for an EIS to derive insightful knowledge from data and use it as guidance for production management. It is expected to provide solutions for enterprises to increase reconfigurability, accomplish more automated procedures, and obtain data-driven recommendations or effective decisions.
Abstract:
In the mnemonic model of posttraumatic stress disorder (PTSD), the current memory of a negative event, not the event itself, determines symptoms. The model is an alternative to the current event-based etiology of PTSD represented in the Diagnostic and Statistical Manual of Mental Disorders (4th ed., text rev.; American Psychiatric Association, 2000). The model accounts for important and reliable findings that are often inconsistent with the current diagnostic view and that have been neglected by theoretical accounts of the disorder, including the following observations. The diagnosis needs objective information about the trauma and peritraumatic emotions but uses retrospective memory reports that can have substantial biases. Negative events and emotions that do not satisfy the current diagnostic criteria for a trauma can be followed by symptoms that would otherwise qualify for PTSD. Predisposing factors that affect the current memory have large effects on symptoms. The inability-to-recall-an-important-aspect-of-the-trauma symptom does not correlate with other symptoms. Loss or enhancement of the trauma memory affects PTSD symptoms in predictable ways. Special mechanisms that apply only to traumatic memories are not needed, increasing parsimony and the knowledge that can be applied to understanding PTSD.
Abstract:
BACKGROUND: A hierarchical taxonomy of organisms is a prerequisite for semantic integration of biodiversity data. Ideally, there would be a single, expansive, authoritative taxonomy that includes extinct and extant taxa, information on synonyms and common names, and monophyletic supraspecific taxa that reflect our current understanding of phylogenetic relationships. DESCRIPTION: As a step towards development of such a resource, and to enable large-scale integration of phenotypic data across vertebrates, we created the Vertebrate Taxonomy Ontology (VTO), a semantically defined taxonomic resource derived from the integration of existing taxonomic compilations, and freely distributed under a Creative Commons Zero (CC0) public domain waiver. The VTO includes both extant and extinct vertebrates and currently contains 106,947 taxonomic terms, 22 taxonomic ranks, 104,736 synonyms, and 162,400 cross-references to other taxonomic resources. Key challenges in constructing the VTO included (1) extracting and merging names, synonyms, and identifiers from heterogeneous sources; (2) structuring hierarchies of terms based on evolutionary relationships and the principle of monophyly; and (3) automating this process as much as possible to accommodate updates in source taxonomies. CONCLUSIONS: The VTO is the primary source of taxonomic information used by the Phenoscape Knowledgebase (http://phenoscape.org/), which integrates genetic and evolutionary phenotype data across both model and non-model vertebrates. The VTO is useful for inferring phenotypic changes on the vertebrate tree of life, which enables queries for candidate genes for various episodes in vertebrate evolution.
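The first construction challenge named above (merging names, synonyms, and identifiers from heterogeneous sources) can be sketched as a union over source records keyed by canonical name. The record layout and the source identifiers below are hypothetical, not the VTO's actual pipeline.

```python
def merge_taxonomies(sources):
    """Merge taxon records from several source taxonomies.

    `sources` maps a source id to {canonical name: set of synonyms}.
    Synonyms are unioned per name; each source contributes a cross-reference.
    """
    merged = {}
    for source_id, records in sources.items():
        for name, synonyms in records.items():
            entry = merged.setdefault(name, {"synonyms": set(), "xrefs": set()})
            entry["synonyms"].update(synonyms)
            entry["xrefs"].add(f"{source_id}:{name}")
    return merged
```

A real pipeline must additionally resolve homonyms and re-attach merged terms under monophyletic parents, which is where the hierarchy-structuring challenge (2) comes in.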
Abstract:
Understanding tumor vascular dynamics through parameters such as blood flow and oxygenation can yield insight into tumor biology and therapeutic response. Hyperspectral microscopy enables optical detection of hemoglobin saturation or blood velocity either by acquiring multiple spectrally distinct images or by rapid acquisition at a single wavelength over time. However, the serial acquisition of spectral images over time precludes monitoring rapid changes in vascular dynamics and cannot capture concurrent changes in oxygenation and flow rate. Here, we introduce snapshot multispectral imaging (SS-MSI) for use in imaging the microvasculature in mouse dorsal-window chambers. By spatially multiplexing spectral information into a single-image capture, simultaneous acquisition of dynamic hemoglobin saturation and blood flow over time is achieved down to the capillary level, providing an improved optical tool for monitoring rapid in vivo vascular dynamics.
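The per-pixel hemoglobin-saturation estimate underlying such imaging can be sketched as Beer-Lambert spectral unmixing at two wavelengths. The extinction coefficients in the test are made-up illustrative numbers, not real Hb/HbO2 spectra, and real pipelines fit many wavelengths in a least-squares sense.

```python
def solve2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

def saturation(absorbance, eps_hb, eps_hbo2):
    """Unmix absorbances at two wavelengths into [Hb] and [HbO2].

    Beer-Lambert (path length folded into the coefficients):
      A(lambda_i) = eps_hb(lambda_i)*C_hb + eps_hbo2(lambda_i)*C_hbo2
    Returns the oxygen saturation SO2 = C_hbo2 / (C_hb + C_hbo2).
    """
    c_hb, c_hbo2 = solve2x2(eps_hb[0], eps_hbo2[0],
                            eps_hb[1], eps_hbo2[1],
                            absorbance[0], absorbance[1])
    return c_hbo2 / (c_hb + c_hbo2)
```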
Abstract:
BACKGROUND: Patients, clinicians, researchers and payers are seeking to understand the value of using genomic information (as reflected by genotyping, sequencing, family history or other data) to inform clinical decision-making. However, challenges exist to widespread clinical implementation of genomic medicine, a prerequisite for developing evidence of its real-world utility. METHODS: To address these challenges, the National Institutes of Health-funded IGNITE (Implementing GeNomics In pracTicE; www.ignite-genomics.org) Network, comprised of six projects and a coordinating center, was established in 2013 to support the development, investigation and dissemination of genomic medicine practice models that seamlessly integrate genomic data into the electronic health record and that deploy tools for point-of-care decision making. IGNITE site projects are aligned in their purpose of testing these models, but individual projects vary in scope and design, including exploring genetic markers for disease risk prediction and prevention, developing tools for using family history data, incorporating pharmacogenomic data into clinical care, refining disease diagnosis using sequence-based mutation discovery, and creating novel educational approaches. RESULTS: This paper describes the IGNITE Network and member projects, including network structure, collaborative initiatives, clinical decision support strategies, methods for return of genomic test results, and educational initiatives for patients and providers. Clinical and outcomes data from individual sites and network-wide projects are anticipated to begin being published over the next few years.
CONCLUSIONS: The IGNITE Network is an innovative series of projects and pilot demonstrations aiming to enhance translation of validated actionable genomic information into clinical settings and develop and use measures of outcome in response to genome-based clinical interventions using a pragmatic framework to provide early data and proofs of concept on the utility of these interventions. Through these efforts and collaboration with other stakeholders, IGNITE is poised to have a significant impact on the acceleration of genomic information into medical practice.
Abstract:
Computer-based mathematical models describing the aircraft evacuation process have a vital role to play in the design and development of safer aircraft, in the implementation of safer and more rigorous certification criteria, in cabin crew training, and in post mortem accident investigation. As the risk of personal injury and the costs involved in performing large-scale evacuation experiments for the next generation of 'Ultra High Capacity Aircraft' (UHCA) are expected to be high, the development and use of these evacuation modelling tools may become essential if these aircraft are to prove a viable reality. In this paper the capabilities and limitations of the airEXODUS evacuation model are described. Its successful application to the prediction of a recent certification trial, prior to the actual trial taking place, is described. Also described is a newly defined parameter known as OPS, which can be used as a measure of evacuation trial optimality. In addition, sample evacuation simulations in the presence of fire atmospheres are described. Finally, the data requirements of the airEXODUS evacuation model are discussed, along with several projects currently underway at the University of Greenwich designed to obtain these data. Included in this discussion is a description of the AASK (Aircraft Accident Statistics and Knowledge) database, which contains detailed information from aircraft accident survivors.
Abstract:
Belief revision is a well-researched topic within AI. We argue that the new model of distributed belief revision as discussed here is suitable for general modelling of judicial decision making, along with the extant approach as known from jury research. The new approach to belief revision is of general interest whenever attitudes to information are to be simulated within a multi-agent environment, with agents holding local beliefs yet interacting with, and influencing, other agents who are deliberating collectively. In the approach proposed, it is the entire group of agents, not an external supervisor, that integrates the different opinions. This is achieved through an election mechanism. The principle of "priority to the incoming information", as known from AI models of belief revision, is problematic when applied to factfinding by a jury. The present approach incorporates a computable model for local belief revision, such that a principle of recoverability is adopted. By this principle, any previously held belief must belong to the current cognitive state if consistent with it. For the purposes of jury simulation such a model calls for refinement. Yet, we claim, it constitutes a valid basis for an open system where other AI functionalities (or outer stimuli) could attempt to handle other aspects of the deliberation which are more specific to legal narratives, to argumentation in court, and then to the debate among the jurors.
Abstract:
For the purposes of starting to tackle, within artificial intelligence (AI), the narrative aspects of legal narratives in a criminal evidence perspective, traditional AI models of narrative understanding can arguably supplement extant models of legal narratives from the scholarly literature of law, jury studies, or the semiotics of law. Moreover, the literary (or cinematic) models prominent in a given culture impinge, with their poetic conventions, on the way members of the culture make sense of the world. This shows glaringly in the sample narrative from the Continent (the Jama murder, the inquiry, and the public outcry) that we analyse in this paper. Apparently in the same racist-crime category as the case of Stephen Lawrence's murder (in Greenwich on 22 April 1993), with the ensuing still-current controversy in the UK, the Jama case (some 20 years ago) stood apart because of a very unusual element: the eyewitnesses identifying the suspects were a group of football referees and linesmen eating together at a restaurant, who saw the sleeping man as he was set ablaze in a nearby public park. Their professional background as witnesses-cum-factfinders in a mass sport, and public perceptions of their required characteristics, could not but feature prominently in the public perception of the case, even more so as the suspects were released by the magistrate conducting the inquiry. There are sides to this case that involve different expected effects in an inquisitorial criminal procedure system from the Continent, where an investigating magistrate leads the inquiry and prepares the prosecution case, as opposed to trial by jury under the Anglo-American adversarial system. In the JAMA prototype, we tried to approach the given case from the coign of vantage of narrative models from AI.
Abstract:
Belief revision is a well-researched topic within Artificial Intelligence (AI). We argue that the new model of belief revision as discussed here is suitable for general modelling of judicial decision making, along with the extant approach as known from jury research. The new approach to belief revision is of general interest whenever attitudes to information are to be simulated within a multi-agent environment, with agents holding local beliefs yet interacting with, and influencing, other agents who are deliberating collectively. The principle of 'priority to the incoming information', as known from AI models of belief revision, is problematic when applied to factfinding by a jury. The present approach incorporates a computable model for local belief revision, such that a principle of recoverability is adopted. By this principle, any previously held belief must belong to the current cognitive state if consistent with it. For the purposes of jury simulation such a model calls for refinement. Yet, we claim, it constitutes a valid basis for an open system where other AI functionalities (or outer stimuli) could attempt to handle other aspects of the deliberation which are more specific to legal narratives, to argumentation in court, and then to the debate among the jurors.
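The recoverability principle stated above can be sketched on propositional literals: after revising by an incoming item, every previously held belief that is consistent with the new state is restored. The string encoding of negation and the set representation are toy assumptions for illustration, not the paper's computable model.

```python
def consistent(belief, beliefs):
    """A literal 'X' contradicts 'not X' (and vice versa); nothing else conflicts."""
    negation = belief[4:] if belief.startswith("not ") else "not " + belief
    return negation not in beliefs

def revise(current, history, incoming):
    """Revise `current` by `incoming`, then apply recoverability:
    restore every previously held belief still consistent with the state."""
    negation = incoming[4:] if incoming.startswith("not ") else "not " + incoming
    current = {b for b in current if b != negation} | {incoming}
    for past in history:               # recoverability sweep
        if consistent(past, current):
            current.add(past)
    return current
```

Note that if the jury later revises back ('guilty' after 'not guilty'), the earlier belief returns, which is exactly the behaviour the recoverability principle demands and which plain priority-to-incoming revision does not guarantee.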
Abstract:
The recent history and current trends in the collection and archiving of forest information and models are reviewed. The question is posed as to whether the community of forest modellers ought to take some action in setting up a Forest Model Archive (FMA) as a means of conserving and sharing the heritage of forest models that have been developed over several decades. The paper discusses the various alternatives of what an FMA could be, and should be. It then goes on to formulate a conceptual model as the basis for the construction of an FMA. Finally, the question of software architecture is considered. Again there are a number of possible solutions. We discuss the alternatives, some in considerable detail, but leave the final decisions on these issues to the forest modelling community. This paper has spawned the “Greenwich Initiative” on the FMA. An internet discussion group on the topic will be started and launched by the “Trafalgar Group”, which will span both IUFRO 4.1 and 4.11, and further discussion is planned to take place at the Forest Modelling Conference in Portugal, June 2002.
Abstract:
We consider the optimum design of pilot-symbol-assisted modulation (PSAM) schemes with feedback. The received signal is periodically fed back to the transmitter through a noiseless delayed link and the time-varying channel is modeled as a Gauss-Markov process. We optimize a lower bound on the channel capacity which incorporates the PSAM parameters and Kalman-based channel estimation and prediction. The parameters available for the capacity optimization are the data power adaptation strategy, pilot spacing and pilot power ratio, subject to an average power constraint. Compared to the optimized open-loop PSAM (i.e., the case where no feedback is provided from the receiver), our results show that even in the presence of feedback delay, the optimized power adaptation provides higher information rates at low signal-to-noise ratios (SNR) in medium-rate fading channels. However, in fast fading channels, even the presence of modest feedback delay dissipates the advantages of power adaptation.
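The channel model and estimator described above can be sketched in a few lines: a Gauss-Markov fading process tracked by a scalar Kalman filter that updates at pilot instants and predicts through the data instants in between. The correlation coefficient, noise level, and pilot spacing are illustrative parameters, and the capacity-bound optimization itself is omitted.

```python
import random

def simulate_and_track(a=0.95, obs_noise=0.1, steps=200, pilot_every=5):
    """Gauss-Markov channel h[k] = a*h[k-1] + w[k]; scalar Kalman tracking.

    Pilots give a noisy observation every `pilot_every` symbols; in between,
    the filter only runs its time update (prediction). Returns the empirical
    mean-squared channel estimation error.
    """
    q = 1.0 - a * a               # process-noise variance keeps unit channel power
    h, est, p = 0.0, 0.0, 1.0     # true channel, estimate, error covariance
    errors = []
    for k in range(steps):
        h = a * h + random.gauss(0.0, q ** 0.5)
        est, p = a * est, a * a * p + q            # time update (prediction)
        if k % pilot_every == 0:                   # pilot instant: measurement update
            y = h + random.gauss(0.0, obs_noise ** 0.5)
            gain = p / (p + obs_noise)
            est, p = est + gain * (y - est), (1 - gain) * p
        errors.append((h - est) ** 2)
    return sum(errors) / len(errors)
```

Widening `pilot_every` or raising the fading rate (lowering `a`) increases the prediction error between pilots, which is the trade-off the pilot-spacing optimization in the paper balances against the power spent on pilots.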
Abstract:
Hurricanes are destructive storms with strong winds, intense storm surges, and heavy rainfall. The resulting impact from a hurricane can include structural damage to buildings and infrastructure, flooding, and ultimately loss of human life. This paper seeks to identify the impact of Hurricane Ivan on the affected population of Grenada, one of the Caribbean islands. Hurricane Ivan made landfall on 7th September 2004 and resulted in 80% of the population being adversely affected. The methods that were used to model these impacts involved performing hazard and risk assessments using GIS and remote sensing techniques. Spatial analyses were used to create a hazard map and a risk map. Hazards were identified initially as those caused by storm surges, severe wind speeds, and flooding events related to Hurricane Ivan. These estimated hazards were then used to create a risk map. An innovative approach was adopted, including the use of hillshading to assess the damage caused by high wind speeds. This paper explains in detail the methodology used and the results produced.
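The step of combining individual hazard layers into a risk surface is typically a weighted overlay of co-registered grids; a minimal sketch follows. The weights and the assumption that each hazard layer is already normalized to [0, 1] are illustrative, not the paper's calibrated values.

```python
def risk_map(surge, wind, flood, weights=(0.4, 0.35, 0.25)):
    """Weighted overlay of three co-registered hazard grids (values in [0, 1])
    into a single risk score per cell; higher means more at risk."""
    rows = []
    for s_row, w_row, f_row in zip(surge, wind, flood):
        rows.append([weights[0] * s + weights[1] * w + weights[2] * f
                     for s, w, f in zip(s_row, w_row, f_row)])
    return rows
```

In a GIS this is raster map algebra; exposure layers (population, building footprints) would then be intersected with the risk surface to estimate the affected population.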
Abstract:
This paper provides a mutual information performance analysis of multiple-symbol differential MPSK (M-ary phase-shift keying) over time-correlated, time-varying flat-fading communication channels. A state-space approach is used to model the time correlation of the time-varying channel phase. This approach captures the dynamics of time-correlated, time-varying channels and enables exploitation of the forward-backward algorithm for mutual information performance analysis. It is shown that differential decoding implicitly uses a sequence of innovations of the channel process time correlation, and that this sequence is essentially uncorrelated. This enables utilization of multiple-symbol differential detection, as a form of block-by-block maximum-likelihood sequence detection, for capacity-achieving mutual information performance. It is shown that multiple-symbol differential ML detection of BPSK and QPSK practically achieves the channel information capacity with observation times only on the order of a few symbol intervals.
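Why differential detection tolerates an unknown channel phase can be seen in the simplest two-symbol (conventional) case for binary signalling, sketched below; the multiple-symbol ML detector of the paper generalizes this comparison to longer observation windows. The encoding conventions here are the standard textbook ones, not the paper's exact formulation.

```python
import cmath

def diff_encode(bits):
    """Differential BPSK: a phase change of pi between symbols encodes a 1."""
    symbols = [1 + 0j]                       # reference symbol
    for b in bits:
        symbols.append(symbols[-1] * (-1 if b else 1))
    return symbols

def diff_detect(received):
    """Two-symbol differential detection: compare adjacent samples, so any
    constant unknown channel phase cancels in cur * conj(prev)."""
    out = []
    for prev, cur in zip(received, received[1:]):
        out.append(1 if (cur * prev.conjugate()).real < 0 else 0)
    return out
```

Because each decision uses only the phase difference of adjacent samples, a constant rotation of the whole received block leaves the decisions unchanged; extending the decision window over several symbols recovers most of the SNR loss relative to coherent detection, which is the paper's point.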
Abstract:
This chapter focuses on what the key decision makers in organizations decide after having received information on the current state of organizational performance. Because of strong attributions to success and failure, it is impossible to predict in advance which concrete actions will occur. We can, however, find out what kinds of actions are decided upon by means of an organizational learning model that focuses on the hastenings and delays after performance feedback. As an illustration, the responses to performance signals by trainers and club owners in Dutch soccer clubs are analyzed.