900 results for Field-based model
Abstract:
Since field determinations take much effort, it would be useful to be able to predict easily the coefficients describing the functional response of free-living predators, the function relating food intake rate to the abundance of food organisms in the environment. As a means of easily parameterising an individual-based model of shorebird Charadriiformes populations, we attempted this for shorebirds eating macro-invertebrates. Intake rate is measured as the ash-free dry mass (AFDM) per second of active foraging, i.e. excluding time spent on digestive pauses and other activities, such as preening. The present and previous studies show that the general shape of the functional response in shorebirds eating approximately the same size of prey across the full range of prey density is a decelerating rise to a plateau, thus approximating the Holling type II ('disc equation') formulation. But field studies confirmed that the asymptote was not set by handling time, as assumed by the disc equation, because only about half the foraging time was spent in successfully or unsuccessfully attacking and handling prey, the rest being devoted to searching. A review of 30 functional responses showed that intake rate in free-living shorebirds varied independently of prey density over a wide range, with the asymptote being reached at very low prey densities (< 150 m^-2). Accordingly, most of the many studies of shorebird intake rate have probably been conducted at or near the asymptote of the functional response, suggesting that equations that predict intake rate should also predict the asymptote. A multivariate analysis of 468 'spot' estimates of intake rates from 26 shorebirds identified ten variables, representing prey and shorebird characteristics, that accounted for 81% of the variance in logarithm-transformed intake rate.
But four variables accounted for almost as much (77.3%), these being bird size, prey size, whether the bird was an oystercatcher Haematopus ostralegus eating mussels Mytilus edulis, and whether it was breeding. The four-variable equation under-predicted the 30 observed estimates of the asymptote by 11.6% on average, but this discrepancy was reduced to 0.2% when two suspect estimates from one early study in the 1960s were removed. The equation therefore predicted the observed asymptote very successfully in 93% of cases. We conclude that the asymptote can be reliably predicted from just four easily measured variables. Indeed, if the birds are not breeding and are not oystercatchers eating mussels, reliable predictions can be obtained using just two variables, bird and prey sizes. A multivariate analysis of 23 estimates of the half-asymptote constant suggested they were smaller when prey were small but greater when the birds were large, especially in oystercatchers. The resulting equation could be used to predict the half-asymptote constant, but its predictive power has yet to be tested. As well as predicting the asymptote of the functional response, the equations will enable research workers engaged in many areas of shorebird ecology and behaviour to estimate intake rate without the need for conventional time-consuming field studies, including for species in which it has not yet proved possible to measure intake rate in the field.
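The Holling type II functional response discussed above can be sketched numerically. This is a minimal illustration of the disc equation itself, not the study's regression equations; the parameter values are invented for the example.

```python
# Sketch of the Holling type II ('disc equation') functional response.
# Parameter values are illustrative only, not taken from the study.

def holling_type_ii(density, attack_rate, handling_time):
    """Intake rate as a function of prey density.

    The curve rises steeply at low density and saturates at the
    asymptote 1 / handling_time; the half-asymptote constant (the
    density giving half the maximum intake) is
    1 / (attack_rate * handling_time).
    """
    return attack_rate * density / (1.0 + attack_rate * handling_time * density)

a, h = 0.05, 2.0           # search efficiency and handling time (illustrative)
asymptote = 1.0 / h        # maximum intake rate
half_k = 1.0 / (a * h)     # half-asymptote constant

print(asymptote, half_k)
print(holling_type_ii(half_k, a, h))   # half the asymptote, by construction
```

The point the abstract makes is that in shorebirds the plateau is reached at such low densities that most field estimates effectively sample the asymptote directly.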
Abstract:
The particle-based lattice solid model developed to study the physics of rocks and the nonlinear dynamics of earthquakes is refined by incorporating intrinsic friction between particles. The model provides a means for studying the causes of seismic wave attenuation, as well as frictional heat generation, fault zone evolution, and localisation phenomena. A modified velocity-Verlet scheme that allows friction to be precisely modelled is developed. This is a difficult computational problem given that a discontinuity (i.e., the transition from static to dynamic frictional behaviour) must be accurately simulated by the numerical approach. This is achieved using a half-time-step integration scheme. At each half time step, a nonlinear system is solved to compute the static frictional forces and states of touching particle pairs. Improved efficiency is achieved by adaptively adjusting the time step increment, depending on the particle velocities in the system. The total energy is calculated and verified to remain constant to a high precision during simulations. Numerical experiments show that the model can be applied to the study of earthquake dynamics, the stick-slip instability, heat generation, and fault zone evolution. Such experiments may lead to a conclusive resolution of the heat flow paradox and improved understanding of earthquake precursory phenomena and dynamics. (C) 1999 Academic Press.
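The half-step structure of the velocity-Verlet scheme the authors refine can be sketched as below. For clarity this integrates a frictionless harmonic oscillator; the paper's nonlinear solve for static frictional forces at each half step is omitted, and the energy check mirrors the validation the authors describe.

```python
# Minimal velocity-Verlet integrator with explicit half-step velocity
# updates. The frictional nonlinear solve described in the abstract
# would be inserted at the half steps; it is omitted in this sketch.

def velocity_verlet(x, v, accel, dt, steps):
    a = accel(x)
    for _ in range(steps):
        v += 0.5 * dt * a      # first half-step velocity update
        x += dt * v            # full-step position update
        a = accel(x)           # forces at the new positions
        v += 0.5 * dt * a      # second half-step velocity update
    return x, v

# Harmonic oscillator: total energy should stay (nearly) constant,
# the same check the authors use to validate their scheme.
k = 1.0
x, v = velocity_verlet(1.0, 0.0, lambda x: -k * x, 0.01, 1000)
energy = 0.5 * v * v + 0.5 * k * x * x
print(energy)   # close to the initial energy of 0.5
```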
Abstract:
Domain-specific information retrieval is increasingly in demand. Not only domain experts, but also average non-expert users are interested in searching for domain-specific (e.g., medical and health) information from online resources. However, a typical problem for average users is that the search results are always a mixture of documents with different levels of readability. Non-expert users may want to see documents with higher readability at the top of the list, so the search results need to be re-ranked in descending order of readability. It is often not practical for domain experts to manually label the readability of documents in large databases, so computational models of readability need to be investigated. However, traditional readability formulas are designed for general-purpose text and are insufficient for the technical materials encountered in domain-specific information retrieval, while more advanced approaches such as textual coherence models are computationally expensive for re-ranking a large number of retrieved documents. In this paper, we propose an effective and computationally tractable concept-based model of text readability. In addition to the textual genres of a document, our model also takes into account domain-specific knowledge, i.e., how the domain-specific concepts contained in the document affect the document's readability. Three major readability formulas are proposed and applied to health and medical information retrieval. Experimental results show that our proposed readability formulas lead to remarkable improvements in terms of correlation with users' readability ratings over four traditional readability measures.
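As a point of reference for the "traditional readability formulas" the paper compares against, the classic Flesch Reading Ease score can be computed as below. The syllable counter is a crude vowel-group heuristic, for illustration only; production readability tools use dictionary-based counters.

```python
# Flesch Reading Ease, one of the traditional general-purpose
# readability formulas of the kind the paper benchmarks against.
# Higher scores indicate easier text.

import re

def count_syllables(word):
    # Rough heuristic: count contiguous vowel groups, minimum one.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * len(words) / len(sentences)
            - 84.6 * syllables / len(words))

print(round(flesch_reading_ease("The cat sat on the mat. It was warm."), 1))
```

Formulas of this shape use only surface statistics (sentence and word length), which is precisely why they struggle on technical material whose difficulty comes from domain concepts rather than long words.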
Abstract:
In this paper we propose an alternative method for measuring the efficiency of Decision Making Units which allows the presence of variables with both negative and positive values. The model is applied to data on a notional effluent processing system to compare the results with recently developed methods: the Modified Slacks-Based Model suggested by Sharp et al. (2007) and the Range Directional Measure developed by Silva Portela et al. (2004). A further example explores the advantages of using the new model.
Abstract:
This work introduces a new variational Bayes data assimilation method for the stochastic estimation of precipitation dynamics using radar observations for short-term probabilistic forecasting (nowcasting). A previously developed spatial rainfall model, based on the decomposition of the observed precipitation field using a basis function expansion, captures the precipitation intensity from radar images as a set of 'rain cells'. The prior distributions for the basis function parameters are carefully chosen to have a conjugate structure for the precipitation field model, allowing a novel variational Bayes method to estimate the posterior distributions in closed form by solving an optimisation problem. This is similar in spirit to 3D-Var analysis, but seeks approximations to the posterior distribution rather than simply the most probable state. A hierarchical Kalman filter is used to estimate the advection field based on the assimilated precipitation fields at two times. The model is applied to tracking precipitation dynamics in a realistic setting, using UK Met Office radar data from both a summer convective event and a winter frontal event. The performance of the model is assessed both traditionally and using probabilistic measures of fit based on ROC curves. The model is shown to provide very good assimilation characteristics and promising forecast skill. Improvements to the forecasting scheme are discussed.
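The basis-function decomposition described above can be sketched in simplified form: a rain field represented as a sum of isotropic Gaussian 'rain cells' whose amplitudes are linear in the observations. The cell centres, widths and the variational treatment of the full posterior are all simplified away here; this only illustrates why a linear-in-amplitudes expansion is convenient for closed-form estimation.

```python
# Hedged sketch: a 1-D 'radar scan' modelled as a sum of Gaussian rain
# cells, with cell amplitudes recovered by linear least squares. The
# cell centres and widths are assumed known, purely for illustration.

import numpy as np

def gaussian_cell(grid, centre, width):
    d2 = ((grid - centre) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / width ** 2)

x = np.linspace(0, 10, 200)[:, None]
centres = [np.array([3.0]), np.array([7.0])]          # assumed cell centres
basis = np.stack([gaussian_cell(x, c, 0.8) for c in centres], axis=1)

true_amps = np.array([2.0, 1.5])
rng = np.random.default_rng(0)
observed = basis @ true_amps + 0.01 * rng.standard_normal(200)

amps, *_ = np.linalg.lstsq(basis, observed, rcond=None)
print(amps)   # close to the true amplitudes [2.0, 1.5]
```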
Abstract:
This thesis explores the processes of team innovation. It utilises two studies, an organisationally based pilot and an experimental study, to examine and identify aspects of teams' behaviours that are important for successful innovative outcomes. The pilot study, based in two automotive manufacturers, involved the collection of team members' experiences through semi-structured interviews, and identified a number of factors that affected teams' innovative performance. These included: the application of ideative and dissemination processes; the importance of good team relationships, especially those of a more informal nature, in facilitating information and ideative processes; the role of external linkages in enhancing the quality and radicality of innovations; and the potential attenuation of innovative ideas by time deadlines. This study revealed a number of key team behaviours that may be important in successful innovation outcomes. These included: goal setting, idea generation and development, external contact, task and personal information exchange, leadership, positive feedback and resource deployment. These behaviours formed the basis of a coding system used in the second part of the research. Building on the results from the field-based research, an experimental study was undertaken to examine the behavioural differences between three groups of sixteen teams undertaking an innovative task: producing an anti-drugs poster. Teams were randomly assigned to one of the three innovation category conditions suggested by King and Anderson (1990): emergent, imported and imposed. These conditions determined the teams' level of access to additional information on previously successful campaigns and the degree of freedom they had with regard to the design of the poster. In addition, a further experimental condition was imposed on half of the teams per category, which involved a formal time deadline for task completion.
The teams were videotaped for the duration of their innovation and their behaviours analysed and coded in five main aspects, including ideation, external focus, goal setting, interpersonal, directive and resource-related activities. A panel of experts, utilising five scales developed from West and Anderson's (1996) innovation outcome measures, assessed the teams' outputs. ANOVAs and repeated-measures ANOVAs were deployed to identify whether there were significant differences between the conditions. The results indicated that there were some behavioural differences between the categories and that behavioural changes occurred over the duration of the task. The results, however, revealed a complex picture and suggested only limited support for three distinctive innovation categories. There were many differences in behaviours, but rarely between more than two of the categories. A main finding was the impact that different levels of constraint had in changing teams' focus of attention. For example, emergent teams were found to use both their own team and external resources, whilst those who could import information about other successful campaigns were likely to concentrate outside the team and pay limited attention to the internal resources available within the team. In contrast, those operating under task constraints, with aspects of the task imposed onto them, were more likely to attend to internal team resources and pay limited attention to the external world. As indicated by the earlier field study, time deadlines did significantly change teams' behaviour, reducing ideative and information-exchange behaviours. The model shows an important behavioural progression related to innovative teams. This progression involved the teams' openness initially to external sources, and then to the intra-team environment. Premature closure on the final idea before the mid-point was found to have a detrimental impact on teams' innovation.
Ideative behaviour per se was not significant for innovation outcome; instead, the development of intra-team support and trust emerged as crucial. Analysis of variance revealed some limited differentiation between the behaviours of teams operating under the aforementioned three innovation categories. There were also distinct detrimental differences in the behaviour of those operating under a time deadline. Overall, the study identified the complex interrelationships between team behaviours and outcomes, and between teams and their context.
Abstract:
Due to the failure of PRARE, the orbital accuracy of ERS-1 is typically 10-15 cm radially, as compared to 3-4 cm for TOPEX/Poseidon. To gain the most from these simultaneous datasets it is necessary to improve the orbital accuracy of ERS-1 so that it is commensurate with that of TOPEX/Poseidon. For the integration of these two datasets it is also necessary to determine the altimeter and sea state biases for each of the satellites. Several models for the sea state bias of ERS-1 are considered by analysis of the ERS-1 single-satellite crossovers. The model adopted expresses the sea state bias as a percentage of the significant wave height, namely 5.95%. The removal of ERS-1 orbit error and the recovery of a relative bias between ERS-1 and TOPEX/Poseidon are both achieved by analysis of dual crossover residuals. The gravitational-field-based radial orbit error is modelled by a finite Fourier expansion series, with the dominant frequencies determined by analysis of the JGM-2 covariance matrix. Periodic and secular terms modelling the errors due to atmospheric density, solar radiation pressure and initial state vector mis-modelling are also solved for. Validation of the dataset unification consists of comparing the mean sea surface topographies and annual variabilities derived from both the corrected and uncorrected ERS-1 orbits with those derived from TOPEX/Poseidon. The global and regional geographically fixed/variable orbit errors are also analysed pre- and post-correction, and a significant reduction is noted. Finally, the use of dual/single-satellite crossovers and repeat-pass data for the calibration of ERS-2 with respect to ERS-1 and TOPEX/Poseidon is shown by calculating the ERS-1/2 sea state and relative biases.
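The finite Fourier expansion used to model the radial orbit error can be sketched as a least-squares fit of harmonic terms at an assumed dominant frequency. Here a single once-per-revolution harmonic plus a constant is fitted to a synthetic error signal; in the study the frequencies come from the JGM-2 covariance analysis, not from an assumption.

```python
# Sketch: least-squares recovery of Fourier coefficients of a radial
# orbit-error signal. The signal and its single dominant frequency are
# synthetic, for illustration only.

import numpy as np

t = np.linspace(0, 4 * np.pi, 500)                   # orbital phase
error = 0.12 * np.cos(t) + 0.05 * np.sin(t) + 0.02   # metres (synthetic)

# Design matrix for  a0 + a1*cos(t) + b1*sin(t)
A = np.column_stack([np.ones_like(t), np.cos(t), np.sin(t)])
coeffs, *_ = np.linalg.lstsq(A, error, rcond=None)
print(coeffs)   # recovers the constant, cosine and sine amplitudes
```

In practice more harmonics (and the periodic/secular drag and radiation-pressure terms mentioned above) are stacked as additional columns of the same design matrix.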
Abstract:
Physically based distributed models of catchment hydrology are likely to be made available as engineering tools in the near future. Although these models are based on theoretically acceptable equations of continuity, there are still limitations in the present modelling strategy. Of interest to this thesis are the current modelling assumptions made concerning the effects of soil spatial variability, including formations producing distinct zones of preferential flow. The thesis contains a review of current physically based modelling strategies and a field-based assessment of soil spatial variability. In order to investigate the effects of soil nonuniformity, a fully three-dimensional model of variably saturated flow in porous media is developed. The model is based on a Galerkin finite element approximation to Richards' equation. Access to a vector processor permits numerical solutions on grids containing several thousand node points. The model is applied to a single hillslope segment under various degrees of soil spatial variability. Such variability is introduced by generating random fields of saturated hydraulic conductivity using the turning bands method. Similar experiments are performed under conditions of preferred soil moisture movement. The results show that the influence of soil variability on subsurface flow may be less significant than suggested in the literature, due to the integrating effects of three-dimensional flow. Under conditions of widespread infiltration excess runoff, the results indicate a greater significance of soil nonuniformity. The recognition of zones of preferential flow is also shown to be an important factor in accurate rainfall-runoff modelling. Using the results of various fields of soil variability, experiments are carried out to assess the validity of the commonly used concept of 'effective parameters'. The results of these experiments suggest that such a concept may be valid in modelling subsurface flow. However, the effective parameter is observed to be event-dependent when the dominating mechanism is infiltration excess runoff.
Abstract:
This reported work significantly extends the reach of 10 Gbit/s on-off keying single-mode fibre (SMF) transmission using full-field-based electronic dispersion compensation (EDC) to 900 km. In addition, the EDC balances complexity against adaptation capability by employing a simple dispersive transmission line with static parameters for coarse dispersion compensation, and 16-state maximum likelihood sequence estimation with Gaussian-approximation-based channel training for adaptive impairment trimming. Improved adaptation times of less than 400 ns for a bit error rate target of 10^-3 over distances ranging from 0 to 900 km are reported.
Abstract:
We have attempted to bring together two areas which are challenging for both IS research and practice: forms of coordination and management of knowledge in the context of global, virtual software development projects. We developed a more comprehensive, knowledge-based model of how coordination can be achieved, and illustrated the heuristic and explanatory power of the model when applied to global software projects experiencing different degrees of success. We first reviewed the literature on coordination and determined what is known about coordination of knowledge in global software projects. From this we developed a new, distinctive knowledge-based model of coordination, which was then employed to analyze two case studies of global software projects, at SAP and Baan, to illustrate the utility of the model.
Abstract:
Purpose: Short product life cycles and/or mass customization necessitate reconfiguration of the operational enablers of a supply chain (SC) from time to time in order to harness high levels of performance. The purpose of this paper is to identify the key operational enablers under a stochastic environment on which practitioners should focus while reconfiguring an SC network. Design/methodology/approach: The paper uses the interpretive structural modeling (ISM) approach, which presents a hierarchy-based model and the mutual relationships among the enablers. The contextual relationships needed for developing the structural self-interaction matrix (SSIM) among the various enablers are realized by conducting experiments through simulation of a hypothetical SC network. Findings: The research identifies various operational enablers having high driving power towards the assumed performance measures. These enablers therefore require maximum attention and are of strategic importance while reconfiguring the SC. Practical implications: ISM provides a useful tool for SC managers to strategically adopt and focus on the key enablers which have comparatively greater potential to enhance SC performance under the given operational settings. Originality/value: The present research recognizes the importance of SC flexibility under the premise of reconfiguring the operational units in order to harness high SC performance. Given the digraph resulting from ISM, the decision maker can focus on the key enablers for effective reconfiguration. The study is one of the first efforts to develop contextual relations among operational enablers for the SSIM matrix through the integration of discrete event simulation with ISM. © Emerald Group Publishing Limited.
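The core ISM computation referred to above, deriving driving power and dependence from a binary self-interaction relation, can be sketched as follows. The enabler adjacency matrix here is hypothetical, purely to show the mechanics.

```python
# Sketch of the ISM step: from a binary influence relation, compute the
# transitive closure (reachability matrix) and each enabler's driving
# power (row sum) and dependence (column sum). The relation is invented.

import numpy as np

# adjacency[i][j] = 1 means enabler i influences enabler j
adjacency = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
])

def transitive_closure(m):
    r = m.copy()
    n = len(r)
    for k in range(n):              # Warshall's algorithm
        for i in range(n):
            for j in range(n):
                r[i, j] = r[i, j] or (r[i, k] and r[k, j])
    return r

reach = transitive_closure(adjacency)
driving_power = reach.sum(axis=1)   # high values mark key enablers
dependence = reach.sum(axis=0)
print(driving_power)                # [4 3 2 1]: enabler 0 drives everything
print(dependence)                   # [1 2 3 4]
```

Enablers with high driving power and low dependence sit at the bottom of the ISM digraph and are the ones the paper flags as strategically important.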
Abstract:
The concept of random lasers, which exploit multiple scattering of photons in an amplifying disordered medium to generate coherent light without a traditional laser resonator, has attracted a great deal of attention in recent years. This research area lies at the interface of the fundamental theory of disordered systems and laser science. The idea was originally proposed in the context of astrophysics in the 1960s by V.S. Letokhov, who studied scattering with "negative absorption" in interstellar molecular clouds. Research on random lasers has since developed into a mature experimental and theoretical field. A simple design of such lasers would be promising for potential applications. However, in traditional random lasers the properties of the output radiation are typically characterized by complex features in the spatial, spectral and time domains, making them less attractive than standard laser systems for practical applications. Recently, an interesting and novel type of one-dimensional random laser that operates in a conventional telecommunication fibre without any pre-designed resonator mirrors, the random distributed feedback fibre laser, was demonstrated. The positive feedback required for laser generation in random fibre lasers is provided by Rayleigh scattering from the inhomogeneities of the refractive index that are naturally present in silica glass. In the proposed laser concept, the randomly backscattered light is amplified through the Raman effect, providing distributed gain over distances of up to 100 km. Although the effective reflection due to Rayleigh scattering is extremely small (~0.1%), the lasing threshold may be exceeded when a sufficiently large distributed Raman gain is provided. Such a random distributed feedback fibre laser has a number of interesting and attractive features.
The fibre waveguide geometry provides transverse confinement, and effectively one-dimensional random distributed feedback leads to the generation of a stationary near-Gaussian beam with a narrow spectrum. A random distributed feedback fibre laser has efficiency and performance that are comparable to, and even exceed, those of similar conventional fibre lasers. The key features of the radiation generated by random distributed feedback fibre lasers include: a stationary narrow-band continuous modeless spectrum that is free of mode competition, nonlinear power broadening, and an output beam with a Gaussian profile in the fundamental transverse mode (generated in both single-mode and multi-mode fibres). This review presents the current status of research in the field of random fibre lasers and shows their potential and prospects. We start with an introductory overview of conventional distributed feedback lasers and traditional random lasers to set the stage for the discussion of random fibre lasers. We then present a theoretical analysis and experimental studies of various random fibre laser configurations, including widely tunable, multi-wavelength and narrow-band generation, and random fibre lasers operating in different spectral bands in the 1-1.6 μm range. Then we discuss existing and future applications of random fibre lasers, including telecommunications and distributed long-reach sensor systems. A theoretical description of random lasers is very challenging and is strongly linked with the theory of disordered systems and kinetic theory. We outline two key models governing the generation of random fibre lasers: the average power balance model and the nonlinear Schrödinger equation based model. The recently invented random distributed feedback fibre lasers represent a new and exciting field of research that brings together such diverse areas of science as laser physics, the theory of disordered systems, fibre optics and nonlinear science.
Stable random generation in optical fibre opens up new possibilities for research on wave transport and localization in disordered media. We hope that this review will provide background information for research in various fields and will stimulate cross-disciplinary collaborations on random fibre lasers. © 2014 Elsevier B.V.
Abstract:
Epilepsy is one of the most common neurological disorders, a large fraction of which is resistant to pharmacotherapy. In this light, understanding the mechanisms of epilepsy, and of its intractable forms in particular, could create new targets for pharmacotherapeutic intervention. The current project explores the dynamic changes in neuronal network function in chronic temporal lobe epilepsy (TLE) in rat and human brain in vitro. I focused on the process of establishment of epilepsy (epileptogenesis) in the temporal lobe. Rhythmic behaviour of the hippocampal neuronal networks in healthy animals was explored using spontaneous oscillations in the gamma frequency band (SγO). The use of an improved brain slice preparation technique resulted in the natural occurrence (in the absence of pharmacological stimulation) of rhythmic activity, which was then pharmacologically characterised and compared to other models of gamma oscillations (KA- and CCh-induced oscillations) using the local field potential recording technique. The results showed that SγO differed from pharmacologically driven models, suggesting higher physiological relevance of SγO. Network activity was also explored in the medial entorhinal cortex (mEC), where spontaneous slow wave oscillations (SWO) were detected. To investigate the course of chronic TLE establishment, a refined Li-pilocarpine-based model of epilepsy (RISE) was developed. The model significantly reduced animal mortality and demonstrated reduced intensity yet high morbidity, with an almost 70% mean success rate of developing spontaneous recurrent seizures. We used SγO to characterize changes in the hippocampal neuronal networks throughout epileptogenesis. The results showed that the network remained largely intact, demonstrating the subtle nature of the RISE model.
Despite this, a reduction in network activity was detected during the so-called latent (seizure-free) period, which was hypothesized to occur due to network fragmentation and abnormal function of kainate receptors (KAr). We therefore explored the function of KAr by challenging SγO with kainic acid (KA). The results demonstrated a remarkable decrease in the KAr response during the latent period, suggesting KAr dysfunction or altered expression, which will be further investigated using a variety of electrophysiological and immunocytochemical methods. The entorhinal cortex, together with the hippocampus, is known to play an important role in TLE. Considering this, we investigated neuronal network function of the mEC during epileptogenesis using SWO. The results demonstrated a striking difference in AMPAr function, with possible receptor upregulation or abnormal composition in the early development of epilepsy. Alterations in receptor function inevitably lead to changes in network function, which may play an important role in the development of epilepsy. Preliminary investigations were made using slices of human brain tissue taken following surgery for intractable epilepsy. Initial results showed that oscillogenesis could be induced in human brain slices and that such network activity was pharmacologically similar to that observed in rodent brain. Overall, our findings suggest that excitatory glutamatergic transmission is heavily involved in the process of epileptogenesis. Together with other types of receptors, KAr and AMPAr contribute to epilepsy establishment and may be the key to uncovering its mechanism.
Abstract:
Linguistic theory and cognitive, information, and mathematical modeling are all useful as we attempt to achieve a better understanding of the Language Faculty (LF). This cross-disciplinary approach will eventually lead to the identification of the key principles applicable in systems for Natural Language Processing. The present work concentrates on the syntax-semantics interface. We start from recursive definitions and the application of optimization principles, and gradually develop a formal model of syntactic operations. The result, a Fibonacci-like syntactic tree, is in fact an argument-based variant of natural language syntax. This representation (argument-centered model, ACM) is derived by a recursive calculus that generates a node which connects arguments and expresses relations between them. The reiterative operation assigns the primary role to entities as the key components of syntactic structure. We provide experimental evidence in support of the argument-based model. We also show that mental computation of syntax is influenced by the inter-conceptual relations between the images of entities in a semantic space.
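As a toy illustration of how a recursive syntactic calculus can grow "Fibonacci-like" (this construction is ours, not the thesis's): if each level of the derivation contains the argument nodes introduced one step earlier plus the relation nodes introduced two steps earlier, the node counts per level follow the Fibonacci sequence.

```python
# Hypothetical two-rule recursion whose per-level node counts follow
# the Fibonacci numbers, illustrating Fibonacci-like tree growth.

from functools import lru_cache

@lru_cache(maxsize=None)
def nodes_at_depth(d):
    if d < 2:
        return 1
    # arguments introduced one step ago + relations introduced two steps ago
    return nodes_at_depth(d - 1) + nodes_at_depth(d - 2)

print([nodes_at_depth(d) for d in range(8)])   # 1 1 2 3 5 8 13 21
```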