954 results for Model information
Abstract:
This work demonstrates the importance of an adequate method for sub-sampling model results when comparing them with in situ measurements. A test of model skill was performed by employing a point-to-point method to compare a multi-decadal hindcast against a sparse, unevenly distributed historic in situ dataset. The point-to-point method masked out all hindcast cells that did not have a corresponding in situ measurement, so that each in situ measurement was matched against its most similar cell from the model. The application of the point-to-point method showed that the model was successful at reproducing the inter-annual variability of the in situ datasets. However, this success was not immediately apparent when the measurements were aggregated to regional averages. Time series, data density and target diagrams were employed to illustrate the impact of switching from the regional average method to the point-to-point method. The comparison based on regional averages gave significantly different and sometimes contradictory results that could lead to erroneous conclusions about model performance. Furthermore, the point-to-point technique is a more appropriate method for exploiting sparse, uneven in situ data while compensating for the variability of its sampling. We therefore recommend that researchers take into account the limitations of the in situ datasets and process the model output to resemble the data as closely as possible.
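A minimal sketch of the masking and matching step that the point-to-point method describes, using hypothetical array names and placeholder values rather than the authors' code:

```python
import numpy as np

# Hypothetical inputs: a model hindcast on a (time, lat, lon) grid and a list of
# in situ samples, each with a time index, latitude, longitude and value.
model = np.random.rand(120, 50, 80)                    # placeholder hindcast field
lats = np.linspace(45.0, 60.0, 50)
lons = np.linspace(-15.0, 10.0, 80)
obs = [(3, 51.2, -4.7, 0.42), (17, 55.9, 2.3, 0.61)]   # (t, lat, lon, value), illustrative

matched_model, matched_obs = [], []
for t, lat, lon, value in obs:
    j = np.abs(lats - lat).argmin()        # nearest model cell in latitude
    i = np.abs(lons - lon).argmin()        # nearest model cell in longitude
    matched_model.append(model[t, j, i])   # keep only cells with a measurement
    matched_obs.append(value)

matched_model = np.array(matched_model)
matched_obs = np.array(matched_obs)

# Target-diagram style statistics on the matched pairs
bias = matched_model.mean() - matched_obs.mean()
unbiased_rmsd = np.sqrt(np.mean(((matched_model - matched_model.mean())
                                 - (matched_obs - matched_obs.mean())) ** 2))
print(bias, unbiased_rmsd)
```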
Abstract:
In this paper we demonstrate that changes in oceanic nutrients are a first-order factor in determining changes in the primary production of the northwest European continental shelf on time scales of 5–10 yr. We present a series of coupled hydrodynamic–ecosystem modelling simulations, using the POLCOMS-ERSEM system. These are forced by both reanalysis data and a single example of a coupled ocean-atmosphere general circulation model (OA-GCM) representative of possible conditions in 2080–2100 under an SRES A1B emissions scenario, along with the corresponding present-day control. The OA-GCM-forced simulations show a substantial reduction in surface nutrients in the open-ocean regions of the model domain when comparing the future and present-day time slices. This arises from a large increase in oceanic stratification. Tracer transport experiments identify that a substantial fraction of on-shelf water originates from the open-ocean region to the south of the domain, where this increase is largest, and on-shelf nutrients and primary production are indeed reduced as this water is transported on-shelf. This relationship is confirmed quantitatively by comparing changes in winter nitrate with total annual nitrate uptake. The reduction in primary production caused by the reduced nutrient transport is mitigated by on-shelf processes relating to temperature, stratification (length of growing season) and recycling. Regions less exposed to ocean-shelf exchange in this model (Celtic Sea, Irish Sea, English Channel, and Southern North Sea) show a modest increase in primary production (of 5–10%), compared with a decrease of 0–20% in the outer shelf, Central and Northern North Sea. These findings are supported by a boundary condition perturbation experiment and a simple mixing model.
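As a rough illustration of the quantitative check mentioned above (relating changes in winter nitrate to changes in total annual nitrate uptake), one could regress one change against the other. The variable names and values below are placeholders for illustration, not results from the paper:

```python
import numpy as np

# Hypothetical per-region changes (future minus present); placeholder values only
delta_winter_nitrate = np.array([-1.8, -1.2, -0.4, 0.1, -2.3])   # mmol N m-3
delta_annual_uptake  = np.array([-14., -9.5, -3.1, 0.8, -17.2])  # g C m-2 yr-1

# Least-squares slope and correlation between the two sets of changes
slope, intercept = np.polyfit(delta_winter_nitrate, delta_annual_uptake, 1)
r = np.corrcoef(delta_winter_nitrate, delta_annual_uptake)[0, 1]
print(f"slope = {slope:.2f}, r = {r:.2f}")
```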
Abstract:
The ocean plays an important role in regulating the climate, acting as a sink for carbon dioxide, which perturbs the carbonate system and results in a slow decrease of seawater pH. Understanding the dynamics of the carbonate system in shelf sea regions is necessary to evaluate the impact of Ocean Acidification (OA) in these societally important ecosystems. Complex coupled hydrodynamic–ecosystem models provide a method of capturing the significant heterogeneity of these areas. However, rigorous validation is essential to properly assess the reliability of such models. The coupled model POLCOMS–ERSEM has been implemented for the north-western European shelf with a new parameterization for alkalinity that explicitly accounts for riverine inputs and the influence of biological processes. The model has been validated in a like-with-like comparison with North Sea data from the CANOBA dataset. The model shows good to reasonable agreement for the principal variables, physical (temperature and salinity), biogeochemical (nutrients) and carbonate system (dissolved inorganic carbon and total alkalinity), but simulation of the derived variables, pH and pCO2, is not yet fully satisfactory. This high uncertainty is attributed mostly to riverine forcing and primary production. This study suggests that the model is a useful tool to provide information on Ocean Acidification scenarios, but the uncertainty in pH and pCO2 needs to be reduced, particularly when impacts of OA on ecosystem functions are included in the model systems.
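Derived variables such as pH and pCO2 are typically computed from modelled DIC and total alkalinity with a carbonate-system solver. A minimal sketch of that step, assuming the PyCO2SYS package and its pyco2.sys interface (an assumption for illustration; the abstract does not say which solver the authors used), with placeholder input values:

```python
# Sketch: deriving pH and pCO2 offline from modelled DIC and total alkalinity.
# Not the POLCOMS-ERSEM code; input values are placeholders.
import PyCO2SYS as pyco2

results = pyco2.sys(
    par1=2320.0, par1_type=1,   # total alkalinity (umol/kg), placeholder
    par2=2120.0, par2_type=2,   # dissolved inorganic carbon (umol/kg), placeholder
    salinity=35.0, temperature=10.0, pressure=0.0,
)
print(results["pH"], results["pCO2"])
```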
Abstract:
Estimating a time interval and temporally coordinating movements in space are fundamental skills, but the relationships between these different forms of timing, and the neural processes that they involve, are not well understood. While different theories have been proposed to account for time perception, time estimation, and the temporal patterns of coordination, there is no general mechanism that unifies these various timing skills. This study considers whether a model of perceptuo-motor timing, the tau(GUIDE), can also describe how certain judgements of elapsed time are made. To evaluate this, an equation for determining interval estimates was derived from the tau(GUIDE) model and tested in a task where participants had to throw a ball and estimate when it would hit the floor. The results showed that, in accordance with the model, very accurate judgements could be made without vision (mean timing error -19.24 msec), and the model was a good predictor of skilled participants' estimation timing. It was concluded that since the tau(GUIDE) principle provides temporal information in a generic form, it could be a unitary process that links different forms of timing.
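For context only, the actual fall time of the ball follows from constant-acceleration kinematics. The sketch below is that generic calculation, not the interval-estimation equation derived from the tau(GUIDE) model in the paper:

```python
import math

def time_to_floor(height_m: float, downward_speed_ms: float = 0.0, g: float = 9.81) -> float:
    """Time for an object to fall height_m under gravity, from h = v*t + 0.5*g*t**2."""
    return (-downward_speed_ms + math.sqrt(downward_speed_ms**2 + 2.0 * g * height_m)) / g

# Example: a ball released from 1.5 m with no initial downward speed
print(f"{time_to_floor(1.5):.3f} s")   # about 0.553 s
```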
Abstract:
Fifty-two CFLP mice had an open femoral diaphyseal osteotomy held in compression by a four-pin external fixator. The movement of 34 of the mice in their cages was quantified before and after operation, until sacrifice at 4, 8, 16 or 24 days. Thirty-three specimens underwent histomorphometric analysis and 19 specimens underwent torsional stiffness measurement. The expected combination of intramembranous and endochondral bone formation was observed, and the model was shown to be reliable in that variation in the histological parameters of healing was small between animals at the same time point, compared to the variation between time-points. There was surprisingly large individual variation in the amount of animal movement about the cage, which correlated with both histomorphometric and mechanical measures of healing. Animals that moved more had larger external calluses containing more cartilage and demonstrated lower torsional stiffness at the same time point. Assuming that movement of the whole animal predicts, at least to some extent, movement at the fracture site, this correlation is what would be expected in a model that involves similar processes to those in human fracture healing. Models such as this, employed to determine the effect of experimental interventions, will yield more information if the natural variation in animal motion is measured and included in the analysis.
Abstract:
An analogy is established between the syntagm and paradigm from Saussurean linguistics and the message and messages for selection from the information theory initiated by Claude Shannon. The analogy is pursued both as an end in itself and for its analytic value in understanding patterns of retrieval from full text systems. The multivalency of individual words when isolated from their syntagm is contrasted with the relative stability of meaning of multi-word sequences, when searching ordinary written discourse. The syntagm is understood as the linear sequence of oral and written language. Saussure's understanding of the word, as a unit which compels recognition by the mind, is endorsed, although not regarded as final. The lesser multivalency of multi-word sequences is understood as the greater determination of signification by the extended syntagm. The paradigm is primarily understood as the network of associations a word acquires when considered apart from the syntagm. The restriction of information theory to expression or signals, and its focus on the combinatorial aspects of the message, is sustained. The message in the model of communication in information theory can include sequences of written language. Shannon's understanding of the written word, as a cohesive group of letters, with strong internal statistical influences, is added to the Saussurean conception. Sequences of more than one word are regarded as weakly correlated concatenations of cohesive units.
Abstract:
This paper provides a summary of our studies on robust speech recognition based on a new statistical approach – the probabilistic union model. We consider speech recognition when part of the acoustic features may be corrupted by noise. The union model is a method for basing the recognition on the clean part of the features, thereby reducing the effect of the noise on recognition. In this respect, the union model is similar to the missing-feature method. However, the two methods achieve this end through different routes. The missing-feature method usually requires knowledge of which data are noisy in order to remove the noise, while the union model combines the local features based on the union of random events, to reduce the dependence of the model on information about the noise. We previously investigated the application of the union model to speech recognition involving unknown partial corruption in frequency bands, in time duration, and in feature streams. Additionally, a combination of the union model with conventional noise-reduction techniques was studied, as a means of dealing with a mixture of known or trainable noise and unknown, unexpected noise. In this paper, a unified review of each of these applications is provided in the context of dealing with unknown partial feature corruption, giving the appropriate theory and implementation algorithms, along with an experimental evaluation.
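One plausible reading of "combining the local features based on the union of random events" is to sum the joint likelihoods of all feature-stream subsets that leave out up to k potentially corrupted streams. The sketch below implements that reading; it is an assumption made for illustration, not necessarily the authors' exact formulation:

```python
import numpy as np
from itertools import combinations

def union_likelihood(stream_likelihoods, max_corrupted=1):
    """Sum the joint likelihoods of all subsets that leave out up to
    `max_corrupted` streams, treating the left-out streams as corrupted."""
    p = np.asarray(stream_likelihoods, dtype=float)
    n = len(p)
    total = 0.0
    for k in range(max_corrupted + 1):
        for left_out in combinations(range(n), k):
            keep = [i for i in range(n) if i not in left_out]
            total += np.prod(p[keep])
    return total

# Example: three feature streams, one badly mismatched (likelihood near zero)
print(union_likelihood([0.8, 0.7, 1e-6], max_corrupted=1))
```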
Managing expectations and benefits: a model for electronic trading and EDI in the insurance industry
Abstract:
Model Driven Architecture supports the transformation from reusable models to executable software. Business representations, however, cannot be fully and explicitly expressed in such models for direct transformation into running systems. Thus, once business needs change, the language abstractions used by MDA (e.g. Object Constraint Language / Action Semantics), being low level, have to be edited directly. We therefore describe an Agent-oriented Model Driven Architecture (AMDA) that uses a set of business models under continuous maintenance by business people, reflecting the current business needs and associated with adaptive agents that interpret the captured knowledge to behave dynamically. Three contributions of the AMDA approach are identified: 1) to Agent-oriented Software Engineering, a method of building adaptive Multi-Agent Systems; 2) to MDA, a means of abstracting high-level business-oriented models to align executable systems with their requirements at runtime; 3) to distributed systems, the interoperability of disparate components and services via the agent abstraction.
Abstract:
This paper discusses the approaches and techniques used to build a realistic numerical model for analysing the cooling phase of the injection moulding process. The procedures employed to select an appropriate mesh and the boundary and initial conditions for the problem are discussed and justified. The final model is validated using direct comparisons with experimental results generated in an earlier study. The model is shown to be a useful tool for further studies aimed at optimising the cooling phase of the injection moulding process. Using the numerical model provides additional information about changes in conditions throughout the process which could not otherwise be deduced or assessed experimentally. These results, and other benefits related to the use of the model, are also discussed in the paper.
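As a generic illustration of the kind of transient cooling calculation such a model performs, here is a one-dimensional explicit finite-difference sketch of a polymer slab cooling between mould walls. The material values, geometry and boundary conditions are placeholders, not those of the study:

```python
import numpy as np

# 1-D transient conduction in a polymer slab, explicit finite differences.
alpha = 1.0e-7            # thermal diffusivity, m^2/s (placeholder)
thickness = 2.0e-3        # slab thickness, m (placeholder)
nx = 41
dx = thickness / (nx - 1)
dt = 0.4 * dx**2 / alpha  # within the explicit stability limit dt <= dx^2 / (2*alpha)

T = np.full(nx, 230.0)    # initial melt temperature, deg C (placeholder)
T_mould = 50.0            # mould wall temperature, deg C (placeholder)
T[0] = T[-1] = T_mould

for _ in range(2000):     # march forward in time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[0] = T[-1] = T_mould

print(f"centre-line temperature after {2000 * dt:.1f} s: {T[nx // 2]:.1f} deg C")
```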
Abstract:
The use of image processing techniques to assess the performance of airport landing lighting, using images collected from an aircraft-mounted camera, is documented. In order to assess the performance of the lighting, it is necessary to uniquely identify each luminaire within an image and then track the luminaires through the entire sequence, storing the relevant information for each luminaire, that is, the total number of pixels that each luminaire covers and the total grey level of these pixels. This pixel grey level can then be used for performance assessment. The authors propose a robust model-based (MB) feature-matching technique by which the performance is assessed. The development of this matching technique is the key to the automated performance assessment of airport lighting. The MB matching technique utilises projective geometry in addition to an accurate template of the 3D model of a landing-lighting system. The template is projected onto the image data and an optimum match is found using nonlinear least-squares optimisation. The MB matching software is compared with standard feature extraction and tracking techniques known within the community, namely the Kanade–Lucas–Tomasi (KLT) and scale-invariant feature transform (SIFT) techniques. The new MB matching technique compares favourably with the SIFT and KLT feature-tracking alternatives. As such, it provides a solid foundation for achieving the central aim of this research, which is to automatically assess the performance of airport lighting.
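A minimal sketch of the model-based idea: project a known 3D template of luminaire positions through a pinhole camera and refine the camera pose by nonlinear least squares. The template, intrinsics and poses below are hypothetical, and scipy's least_squares is assumed as the optimiser rather than the authors' implementation:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Hypothetical 3D template of luminaire positions (metres) and camera intrinsics
template = np.array([[0.0, 0.0, 50.0], [3.0, 0.0, 60.0], [-3.0, 0.0, 60.0], [0.0, 0.0, 80.0]])
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

def project(pose, points):
    """Project 3D points with pose = (rx, ry, rz, tx, ty, tz) into pixel coordinates."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    cam = points @ R.T + pose[3:]
    uvw = cam @ K.T
    return (uvw[:, :2] / uvw[:, 2:3]).ravel()

def residuals(pose, observed_px):
    return project(pose, template) - observed_px

# Synthetic "observed" image positions from a known pose, then refit from a rough guess
true_pose = np.array([0.02, -0.01, 0.0, 0.5, -0.2, 1.0])
observed = project(true_pose, template)
fit = least_squares(residuals, x0=np.zeros(6), args=(observed,))
print(fit.x.round(3))
```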
Abstract:
Face recognition with unknown, partial distortion and occlusion is a practical problem, and has a wide range of applications, including security and multimedia information retrieval. The authors present a new approach to face recognition subject to unknown, partial distortion and occlusion. The new approach is based on a probabilistic decision-based neural network, enhanced by a statistical method called the posterior union model (PUM). PUM is an approach for ignoring severely mismatched local features and focusing the recognition mainly on the reliable local features. It thereby improves the robustness while assuming no prior information about the corruption. We call the new approach the posterior union decision-based neural network (PUDBNN). The new PUDBNN model has been evaluated on three face image databases (XM2VTS, AT&T and AR) using testing images subjected to various types of simulated and realistic partial distortion and occlusion. The new system has been compared to other approaches and has demonstrated improved performance.
Abstract:
Annotation of programs using embedded Domain-Specific Languages (embedded DSLs), such as the program annotation facility of the Java programming language, is a well-known practice in computer science. In this paper we argue for and propose a specialized approach to the use of embedded Domain-Specific Modelling Languages (embedded DSMLs) in Model-Driven Engineering (MDE) processes that, in particular, supports automated many-step model transformation chains. Information defined at one point using an embedded DSML may not be required in the immediately following transformation step, but only in a later one. We propose a new approach to model annotation that enables flexible many-step transformation chains. The approach utilizes a combination of embedded DSMLs, trace models and a megamodel. We demonstrate our approach based on an example MDE process and an industrial case study.
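As a toy illustration of how an annotation attached in an early step can be recovered in a later step via trace links kept in a megamodel, consider the sketch below; all model, element and annotation names are invented for illustration and do not come from the paper:

```python
# Toy illustration: an annotation made on model A survives two transformation steps
# because each step records a trace model and a megamodel keeps them all reachable.
annotations = {"A.Order": {"persistence": "event-sourced"}}      # embedded-DSML-style annotation

trace_a_to_b = {"A.Order": "B.OrderEntity"}        # recorded by step 1 (A -> B)
trace_b_to_c = {"B.OrderEntity": "C.OrderTable"}   # recorded by step 2 (B -> C)

megamodel = {"traces": [trace_a_to_b, trace_b_to_c], "annotations": annotations}

def annotation_for(element: str) -> dict:
    """Walk trace links backwards from a late-step element to the original annotation."""
    current = element
    for trace in reversed(megamodel["traces"]):
        inverse = {target: source for source, target in trace.items()}
        current = inverse.get(current, current)
    return megamodel["annotations"].get(current, {})

print(annotation_for("C.OrderTable"))   # {'persistence': 'event-sourced'}
```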