57 results for Process-based model


Relevance: 100.00%

Publisher:

Abstract:

The environmental fate of polybrominated diphenyl ethers (PBDEs), a group of flame retardants considered to be persistent organic pollutants (POPs), was assessed around the Zhuoshui River and Changhua County regions of Taiwan. An investigation into the emissions, partitioning, and fate of selected PBDEs was conducted using the EQuilibrium Criterion (EQC) fugacity model developed at Trent University, Canada. Emissions of congeners PBDE 47, PBDE 99, and PBDE 209 to air (4.9–92 × 10⁻³ kg/h), soil (0.91–17.4 × 10⁻³ kg/h), and water (0.21–4.04 × 10⁻³ kg/h) were estimated by modifying previous PBDE emission-rate models to account for both industrial and domestic sources. Fugacity modeling was found to give a reasonable estimate of the behavior, partitioning, and concentrations of PBDE congeners in and around Taiwan. Results indicate that PBDE congeners have the highest affinity for partitioning into sediments, followed by soils. As congener number (degree of bromination) decreases, PBDEs partition more readily into air; as the degree of bromination increases, congeners partition more readily to sediments. Sediments may therefore act as a long-term source of PBDEs, which can be released back into the water column through resuspension during storm events.
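To make the fugacity idea concrete, the following is a minimal sketch of a generic Level I fugacity calculation of the kind underlying the EQC approach: a fixed chemical inventory distributes among well-mixed compartments at a common fugacity. The compartment volumes, Z values, and inventory below are illustrative placeholders, not the values used in the study.

```python
# Minimal Level I fugacity sketch: a fixed chemical inventory distributes
# among well-mixed compartments so that the fugacity f is equal everywhere.
# Volumes (m^3) and fugacity capacities Z (mol m^-3 Pa^-1) are illustrative
# placeholders, not the Taiwan-specific values from the study.

compartments = {
    # name:      (volume_m3, Z)
    "air":       (1.0e10, 4.0e-4),
    "water":     (1.0e7,  1.0e-1),
    "soil":      (1.0e5,  1.0e2),
    "sediment":  (1.0e4,  5.0e2),
}

total_mol = 100.0  # assumed chemical inventory (mol)

# Common fugacity: f = M_total / sum_i(V_i * Z_i)
f = total_mol / sum(v * z for v, z in compartments.values())

for name, (v, z) in compartments.items():
    conc = f * z        # concentration in the compartment, C_i = f * Z_i (mol/m^3)
    amount = conc * v   # amount held in the compartment (mol)
    print(f"{name:9s} C = {conc:.3e} mol/m^3, share = {100.0 * amount / total_mol:5.1f} %")
```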

Relevance: 100.00%

Publisher:

Abstract:

The channel-based model of duration perception postulates the existence of neural mechanisms that respond selectively to a narrow range of stimulus durations centred on their preferred duration (Heron et al., Proceedings of the Royal Society B, 279, 690–698). In principle the channel-based model could explain recent reports of adaptation-induced visual duration compression (Johnston et al., Current Biology, 16, 472–479; Curran and Benton, Cognition, 122, 252–257); from this perspective, duration compression is a consequence of the adapting stimuli being presented for a longer duration than the test stimuli. In the current experiment observers adapted to a sequence of moving random-dot patterns at the same retinal position, each 340 ms in duration and separated by a variable (500–1000 ms) interval. Following adaptation, observers judged the duration of a 600 ms test stimulus at the same location. The test stimulus moved in the same, or opposite, direction as the adaptor. Contrary to the channel-based model's prediction, test stimulus duration appeared compressed, rather than expanded, when it moved in the same direction as the adaptor. That test stimulus duration was not distorted when moving in the opposite direction further suggests that visual timing mechanisms are influenced by additional neural processing associated with the stimulus being timed.

Relevance: 100.00%

Publisher:

Abstract:

Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of design variables [1]. The approach in this paper links the adjoint surface sensitivities (gradient of the objective function with respect to surface movement) with the parametric design velocities (movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to CAD variables.
For a successful implementation of shape optimisation strategies in practical industrial cases, the choice of design variables or parameterisation scheme used for the model to be optimised plays a vital role. Where the goal is to base the optimisation on a CAD model, the choices are to use a NURBS geometry generated from CAD modelling software, where the positions of the NURBS control points are the optimisation variables [2], or to use the feature-based CAD model with all of its construction history, which preserves the design intent [3]. The main advantage of using the feature-based model is that the optimised model produced can be directly used for downstream applications, including manufacturing and process planning.
This paper presents an approach to optimisation based on the feature-based CAD model, which uses the CAD parameters defining the features in the model geometry as the design variables. In order to capture the CAD surface movement with respect to a change in a design variable, the “parametric design velocity” is calculated, defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.
The approach presented here for calculating the design velocities represents an advance in capability and robustness over that described by Robinson et al. [3]. The process can be easily integrated into most industrial optimisation workflows and is immune to the topology and labelling issues highlighted by other CAD-based optimisation processes. It considers every continuous (“real-valued”) parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, as long as the software has an API which provides access to the values of the parameters controlling the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology employs finite differences on the shape of the 3D CAD models before and after the parameter perturbation. The implementation procedure involves calculating the geometrical movement along the normal direction between two discrete representations of the original and perturbed geometry, respectively. Parametric design velocities can then be linked directly with adjoint surface sensitivities to extract the gradients used in a gradient-based optimisation algorithm.
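A minimal sketch of this gradient assembly is given below, assuming the original and perturbed surfaces are discretised with matching point correspondence (the process described above projects between two independent discretisations); the array names, perturbation step, and random data are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def parametric_gradient(surf_base, surf_perturbed, normals, areas,
                        adjoint_sensitivity, dp):
    """Combine adjoint surface sensitivities with a finite-difference
    parametric design velocity to obtain dJ/d(parameter).

    surf_base, surf_perturbed : (N, 3) surface points before/after perturbing
                                one CAD parameter by dp (assumed to correspond
                                point by point)
    normals                   : (N, 3) unit outward normals at those points
    areas                     : (N,)   surface area associated with each point
    adjoint_sensitivity       : (N,)   dJ per unit normal surface displacement
    dp                        : scalar parameter perturbation
    """
    # Design velocity: normal component of boundary movement per unit
    # change of the CAD parameter (finite difference).
    vn = np.einsum("ij,ij->i", surf_perturbed - surf_base, normals) / dp
    # Discrete surface integral of sensitivity times design velocity.
    return np.sum(adjoint_sensitivity * vn * areas)

# Illustrative call with random placeholder data for a single CAD parameter.
rng = np.random.default_rng(0)
n = 500
base = rng.normal(size=(n, 3))
pert = base + 1e-3 * rng.normal(size=(n, 3))
nrm = rng.normal(size=(n, 3))
nrm /= np.linalg.norm(nrm, axis=1, keepdims=True)
dJ_dp = parametric_gradient(base, pert, nrm, np.full(n, 1e-2),
                            rng.normal(size=n), dp=1e-3)
print("dJ/dp ≈", dJ_dp)
```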
A flow optimisation problem is presented in which the power dissipation of the flow in an automotive air duct is reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for laminar and turbulent flow [4] and are combined with the parametric design velocities to compute the cost-function gradients. A line-search algorithm is then used to update the design variables and proceed further with the optimisation process.

Relevance: 90.00%

Publisher:

Abstract:

The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameter models with universal approximation capabilities has been intensively studied and widely used due to the availability of many linear-learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameter models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best generalisation performance from observational data only. The important concepts for achieving good model generalisation used in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means of identifying kernel models based on the structural risk minimisation principle. Developments in convex-optimisation-based model construction algorithms, including support vector regression, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
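As a concrete instance of the linear-in-the-parameters setting, the sketch below grows a model term by term with greedy forward selection, scoring candidate terms by leave-one-out cross-validation (the PRESS statistic). The toy data, candidate basis functions, and stopping rule are illustrative assumptions, not any specific algorithm from the review.

```python
import numpy as np

def press(Phi, y):
    """Leave-one-out residual sum of squares (PRESS) for a model y ~ Phi @ theta,
    computed from the hat matrix of the least-squares fit."""
    H = Phi @ np.linalg.pinv(Phi)                    # hat matrix
    resid = y - H @ y
    loo = resid / (1.0 - np.clip(np.diag(H), None, 1.0 - 1e-12))
    return float(loo @ loo)

def forward_select(candidates, y, max_terms=5):
    """Greedy forward selection of basis-function columns scored by PRESS."""
    chosen, best = [], np.inf
    remaining = list(range(candidates.shape[1]))
    while remaining and len(chosen) < max_terms:
        score, j = min((press(candidates[:, chosen + [j]], y), j) for j in remaining)
        if score >= best:                            # stop once LOO error stops improving
            break
        chosen.append(j)
        remaining.remove(j)
        best = score
    theta = np.linalg.lstsq(candidates[:, chosen], y, rcond=None)[0]
    return chosen, theta, best

# Toy non-linear system and a small dictionary of candidate basis functions.
rng = np.random.default_rng(1)
u = rng.uniform(-1.0, 1.0, 200)
y = 0.8 * u + 0.3 * u**2 + 0.05 * rng.normal(size=200)
dictionary = np.column_stack([u, u**2, u**3, np.sin(u), np.ones_like(u)])
print(forward_select(dictionary, y))
```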

Relevance: 90.00%

Publisher:

Abstract:

Data identification is a key task for any Internet Service Provider (ISP) or network administrator. As port fluctuation and encryption become more common in P2P traffic designed to avoid identification, new strategies must be developed to detect and classify such flows. This paper introduces a new method of separating P2P and standard web traffic that can be applied as part of a data mining process, based on the activity of the hosts on the network. Unlike other research, our method is aimed at classifying individual flows rather than just identifying P2P hosts or ports. Heuristics are analysed and a classification system is proposed. The accuracy of the system is then tested using real network traffic from a core Internet router, showing over 99% accuracy in some cases. We expand on this proposed strategy to investigate its application to real-time, early classification problems. New proposals are made and the results of real-time experiments are compared with those obtained in the data mining research. To the best of our knowledge this is the first research to use host-based flow identification to determine a flow's application within the early stages of the connection.
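The following toy sketch illustrates the general idea of host-based flow classification: per-host behavioural counters are maintained and consulted when a new flow from that host must be labelled. The feature names, thresholds, and scoring rule are invented for illustration and are not the heuristics proposed in the paper.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class HostStats:
    peers: set = field(default_factory=set)       # distinct remote IPs contacted
    dst_ports: set = field(default_factory=set)   # distinct destination ports used
    failed: int = 0                               # connection attempts with no reply

stats = defaultdict(HostStats)

def observe(src_ip, dst_ip, dst_port, replied):
    """Update the behavioural counters for src_ip from one observed flow."""
    s = stats[src_ip]
    s.peers.add(dst_ip)
    s.dst_ports.add(dst_port)
    if not replied:
        s.failed += 1

def classify_flow(src_ip):
    """Label a new flow from src_ip as 'p2p' or 'web' using host behaviour.
    Thresholds are illustrative assumptions, not the paper's heuristics."""
    s = stats[src_ip]
    score = 0
    if len(s.peers) > 20:        # talks to many peers concurrently
        score += 1
    if len(s.dst_ports) > 15:    # spreads across many (ephemeral) ports
        score += 1
    if s.failed > 5:             # many dead peers, typical of P2P overlays
        score += 1
    return "p2p" if score >= 2 else "web"

# Example: a host contacting many peers on many ports looks P2P-like.
for i in range(30):
    observe("10.0.0.5", f"192.0.2.{i}", 50000 + i, replied=(i % 3 != 0))
print(classify_flow("10.0.0.5"))
```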

Relevance: 90.00%

Publisher:

Abstract:

In a recently published study, Sloutsky and Fisher [Sloutsky, V. M., & Fisher, A. V. (2004a). When development and learning decrease memory: Evidence against category-based induction in children. Psychological Science, 15, 553-558; Sloutsky, V. M., & Fisher, A. V. (2004b). Induction and categorization in young children: A similarity-based model. Journal of Experimental Psychology: General, 133, 166-188] demonstrated that children have better memory than adults for the items that they generalise to. On the basis of this finding, they claim that children and adults use different mechanisms for inductive generalisation; whereas adults focus on shared category membership, children project properties on the basis of perceptual similarity. Sloutsky and Fisher attribute children's enhanced recognition memory to the more detailed processing required by this similarity-based mechanism. In Experiment 1 we show that children look at the stimulus items for longer than adults. In Experiment 2 we demonstrate that although children remain capable of making accurate inferences when given just 250 ms to inspect the items, their subsequent memory for those items decreases significantly. These findings suggest that no conclusions about developmental differences in generalisation strategy necessarily follow from Sloutsky and Fisher's results.

Relevance: 90.00%

Publisher:

Abstract:

Homology modeling was used to build 3D models of the N-methyl-D-aspartate (NMDA) receptor glycine binding site on the basis of an X-ray structure of the water-soluble AMPA-sensitive receptor. The docking of agonists and antagonists to these models was used to reveal the binding modes of ligands and to explain known structure-activity relationships. Two types of quantitative models, 3D-QSAR/CoMFA and a regression model based on docking energies, were built for antagonists (derivatives of 4-hydroxy-2-quinolone, quinoxaline-2,3-dione, and related compounds). The CoMFA steric and electrostatic maps were superimposed on the homology-based model, and a close correspondence was observed. The derived computational models have permitted the evaluation of the structural features crucial for high glycine binding site affinity and are important for the design of new ligands.
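As a schematic of the second kind of quantitative model mentioned, a regression on docking energies, the snippet below fits a simple linear relation between computed docking energy and measured binding affinity; the numbers are synthetic placeholders, not data from the study.

```python
import numpy as np

# Synthetic placeholder data: docking energy (kcal/mol, more negative = stronger
# predicted binding) versus experimental affinity expressed as pKi.
docking_energy = np.array([-9.8, -8.5, -7.9, -7.1, -6.4, -5.8])
pKi            = np.array([ 8.2,  7.4,  7.0,  6.1,  5.5,  5.0])

# Least-squares fit: pKi ≈ a * E_dock + b
a, b = np.polyfit(docking_energy, pKi, deg=1)
pred = a * docking_energy + b
ss_res = np.sum((pKi - pred) ** 2)
ss_tot = np.sum((pKi - pKi.mean()) ** 2)
print(f"pKi = {a:.2f}*E + {b:.2f},  R^2 = {1 - ss_res / ss_tot:.3f}")
```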

Relevance: 90.00%

Publisher:

Abstract:

Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which are often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been done, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where the development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and assessing the significance of model covariates.
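To illustrate one simple way of attaching uncertainty to a niche-based SDM prediction, the sketch below fits a logistic occurrence model and bootstraps it to obtain an interval around a predicted probability; the covariate, data, and bootstrap approach are illustrative assumptions and only one of many options discussed in the review.

```python
import numpy as np

def fit_logistic(X, y, iters=300, lr=0.5):
    """Plain gradient-ascent logistic regression (first column of X is the intercept)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

# Synthetic occurrence data driven by a single (standardised) climate covariate.
rng = np.random.default_rng(2)
temp = rng.uniform(0.0, 30.0, 300)                    # mean temperature, degC
z = (temp - 15.0) / 10.0                              # standardised covariate
X = np.column_stack([np.ones_like(z), z])
true_p = 1.0 / (1.0 + np.exp(-2.0 * z))               # assumed "true" response
y = rng.binomial(1, true_p)

# Bootstrap the fit to express uncertainty around one site's prediction.
x_new = np.array([1.0, (18.0 - 15.0) / 10.0])         # site with 18 degC mean temperature
preds = []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))
    w = fit_logistic(X[idx], y[idx])
    preds.append(1.0 / (1.0 + np.exp(-x_new @ w)))
lo, hi = np.percentile(preds, [2.5, 97.5])
print(f"P(occurrence) ≈ {np.mean(preds):.2f} (95% bootstrap interval {lo:.2f}-{hi:.2f})")
```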

Relevance: 90.00%

Publisher:

Abstract:

This paper presents the first systematic chronostratigraphic study of the river terraces of the Exe catchment in South West England and a new conceptual model for terrace formation in unglaciated basins, with applicability to terrace staircase sequences elsewhere. The Exe catchment lay beyond the maximum extent of Pleistocene ice sheets, and the drainage pattern evolved from the Tertiary to the Middle Pleistocene, by which time the major valley systems were in place and downcutting began to create a staircase of strath terraces. The higher terraces (8-6) typically exhibit altitudinal overlap or appear to be draped over the landscape, whilst the middle terraces show greater altitudinal separation and the lowest terraces are of a cut-and-fill form. The terrace deposits investigated in this study were laid down during cold phases of the glacial-interglacial Milankovitch climatic cycles, with the lowest four deposited in the Devensian Marine Isotope Stages (MIS) 4-2. A new cascade process-response model of basin terrace evolution in the Exe valley is proposed, which emphasises the role of lateral erosion in the creation of strath terraces and the reworking of inherited resistant lithological components down through the staircase. The resultant emergent valley topography, and the reworking of artefacts along with gravel clasts, have important implications for the dating of hominin presence and the local landscapes they inhabited. Whilst the terrace chronology suggested here is still not as detailed as that for the Thames or the Solent System, it does indicate a Middle Palaeolithic hominin presence in the region, probably prior to the late Wolstonian Complex or MIS 6. This supports existing data from cave sites in South West England.

Relevance: 90.00%

Publisher:

Abstract:

Since the first launch of the New Engineering Contract (NEC) in 1993, early warning of problems has been widely recognized as an important approach to proactive management during a construction or engineering project. Is early warning really effective in improving problem solving and project performance? This research question still lacks a good answer. For this reason, an empirical investigation was carried out in the United Kingdom (U.K.). This study adopts a combination of literature review, expert interviews, and a questionnaire survey. Nearly 100 questionnaire responses were collected from the U.K. construction industry; based on these, the use of early warning under different forms of contract is compared in this paper. Problem solving and project performance are further compared between projects that used early warning and projects that did not. The comparison provides clear evidence of the significant effect of early warning on problem solving and project performance in terms of time, cost, and quality. Subsequently, an input-process-output model is developed in this paper to explore the relationship among early warning, problem solving, and project performance. All of this helps construction researchers and practitioners to better understand the role of early warning in ensuring project success.

Relevance: 90.00%

Publisher:

Abstract:

Background: Over one billion children worldwide are exposed to political violence and armed conflict. Currently, conclusions about the bases of adjustment problems are qualified by the limited longitudinal research conducted from a process-oriented, social-ecological perspective. In this study, we examined a theoretically based model of the impact of multiple levels of the social ecology (family, community) on adolescent delinquency. Specifically, this study explored the impact of children's emotional insecurity about both the family and the community on youth delinquency in Northern Ireland. Methods: In the context of a five-wave longitudinal research design, participants included 999 mother-child dyads in Belfast (482 boys, 517 girls), drawn from socially deprived, ethnically homogeneous areas that had experienced political violence. Youth ranged in age from 10 to 20 years and were 12.18 (SD = 1.82) years old on average at Time 1. Findings: The longitudinal analyses were conducted using hierarchical linear modeling (HLM), allowing for the modeling of inter-individual differences in intra-individual change. Intra-individual trajectories of emotional insecurity about the family related to children's delinquency. Greater insecurity about the community worsened the impact of family conflict on youth's insecurity about the family, consistent with the notion that insecurity about the community sensitizes youth to exposure to family conflict in the home. Conclusions: The results suggest that ameliorating children's insecurity about family and community in contexts of political violence is an important goal for improving adolescents' well-being, including reducing the risk for delinquency.
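For readers unfamiliar with the HLM set-up, a generic two-level growth model of the kind used for such trajectory analyses can be written as follows; the predictor shown (insecurity) is a placeholder standing in for the study's variables rather than its exact specification.

```latex
% Level 1 (within person): delinquency of youth i at wave t follows an
% individual growth trajectory.
Y_{ti} = \pi_{0i} + \pi_{1i}\,\mathrm{Time}_{ti} + e_{ti}

% Level 2 (between persons): intercepts and slopes vary with person-level
% predictors, e.g. emotional insecurity about family or community.
\pi_{0i} = \beta_{00} + \beta_{01}\,\mathrm{Insecurity}_{i} + r_{0i}
\pi_{1i} = \beta_{10} + \beta_{11}\,\mathrm{Insecurity}_{i} + r_{1i}
```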

Relevance: 90.00%

Publisher:

Abstract:

Active radio-frequency identification systems used for the localisation and tracking of people will be subject to the same body-centric processes that affect other forms of wearable communications. To achieve the goal of creating body-worn tags with multi-year life spans, it will be necessary to understand the channel conditions likely to affect the reader-tag interrogation process. In this paper we present the preliminary results of an indoor channel measurement campaign conducted at 868 MHz, aimed at understanding and modelling signal characteristics for a wrist-worn tag. Using a model selection process based on the Akaike Information Criterion, the lognormal distribution was selected most often to describe the received signal amplitude. Parameter estimates are provided so that the channels investigated in this study may be readily simulated.
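A brief sketch of AIC-based distribution selection of the kind described, fitting a few candidate amplitude distributions to synthetic received-signal data and ranking them by Akaike Information Criterion; the candidate set and data are illustrative assumptions rather than the measured channel data.

```python
import numpy as np
from scipy import stats

# Synthetic received-signal amplitudes standing in for measured channel data.
rng = np.random.default_rng(3)
amplitudes = rng.lognormal(mean=0.0, sigma=0.5, size=1000)

candidates = {
    "lognormal": stats.lognorm,
    "rayleigh":  stats.rayleigh,
    "rician":    stats.rice,
    "nakagami":  stats.nakagami,
}

results = []
for name, dist in candidates.items():
    params = dist.fit(amplitudes)                    # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(amplitudes, *params))
    k = len(params)                                  # number of fitted parameters
    aic = 2 * k - 2 * loglik                         # Akaike Information Criterion
    results.append((aic, name))

# Lowest AIC indicates the preferred distribution for these data.
for aic, name in sorted(results):
    print(f"{name:9s} AIC = {aic:8.1f}")
```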