989 results for Pattern oriented modelling
Abstract:
In mantle convection models it has become common to make use of a modified (pressure sensitive, Boussinesq) von Mises yield criterion to limit the maximum stress the lithosphere can support. This approach allows the viscous, cool thermal boundary layer to deform in a relatively plate-like mode even in a fully Eulerian representation. In large-scale models with embedded continental crust where the mobile boundary layer represents the oceanic lithosphere, the von Mises yield criterion for the oceans ensures that the continents experience a realistic broad-scale stress regime. In detailed models of crustal deformation it is, however, more appropriate to choose a Mohr-Coulomb yield criterion based upon the idea that frictional slip occurs on whichever one of many randomly oriented planes happens to be favorably oriented with respect to the stress field. As coupled crust/mantle models become more sophisticated it is important to be able to use whichever failure model is appropriate to a given part of the system. We have therefore developed a way to represent Mohr-Coulomb failure within a code which is suited to mantle convection problems coupled to large-scale crustal deformation. Our approach uses an orthotropic viscous rheology (a different viscosity for pure shear than for simple shear) to define a preferred plane for slip to occur given the local stress field. The simple-shear viscosity and the deformation can then be iterated to ensure that the yield criterion is always satisfied. We again assume the Boussinesq approximation, neglecting any effect of dilatancy on the stress field. An additional criterion is required to ensure that deformation occurs along the plane aligned with maximum shear strain-rate rather than the perpendicular plane, which is formally equivalent in any symmetric formulation. It is also important to allow strain-weakening of the material. The material should remember both the accumulated failure history and the direction of failure. 
We have included this capacity in a Lagrangian-Integration-point finite element code and will show a number of examples of extension and compression of a crustal block with a Mohr-Coulomb failure criterion, and comparisons between mantle convection models using the von Mises versus the Mohr-Coulomb yield criteria. The formulation itself is general and applies to 2D and 3D problems, although it is somewhat more complicated to identify the slip plane in 3D.
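As a rough illustration of the two criteria contrasted above, the sketch below compares a pressure-dependent von Mises-type yield stress with a Mohr-Coulomb check on the most favourably oriented slip plane, together with the kind of viscosity rescaling such a yielding iteration might use. All function names and parameter values are hypothetical stand-ins, not the paper's code.

```python
import math

def yield_stress_von_mises(pressure, cohesion, friction_coeff):
    # Pressure-sensitive ("Boussinesq") von Mises limit: tau_y = C + mu * p
    return cohesion + friction_coeff * pressure

def mohr_coulomb_plane(sigma1, sigma3, cohesion, friction_angle):
    """Return (yield_margin, plane_angle) for the most favourably oriented
    slip plane under principal stresses sigma1 >= sigma3 (compression
    positive). The critical plane lies at 45 - phi/2 degrees from sigma1."""
    phi = math.radians(friction_angle)
    # Shear and normal stress on the critical plane (from the Mohr circle)
    tau = 0.5 * (sigma1 - sigma3) * math.cos(phi)
    sigma_n = 0.5 * (sigma1 + sigma3) - 0.5 * (sigma1 - sigma3) * math.sin(phi)
    strength = cohesion + math.tan(phi) * sigma_n
    plane_angle = 45.0 - 0.5 * friction_angle   # degrees from sigma1 axis
    return tau - strength, plane_angle

def yielded_viscosity(eta, stress, tau_yield):
    # Viscosity rescaled so the stress cannot exceed the yield stress;
    # in practice this is iterated with the flow solver until converged.
    return eta * min(1.0, tau_yield / stress)
```

A positive yield margin means the material fails and the viscosity is reduced until the stress sits on the yield surface, mirroring the iteration described in the abstract.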
Abstract:
In this thesis work we develop a new generative model of social networks belonging to the family of Time Varying Networks. Correctly modelling the mechanisms shaping the growth of a network and the dynamics of edge activation and inactivation is of central importance in network science. Indeed, by means of generative models that mimic the real-world dynamics of contacts in social networks it is possible to forecast the outcome of an epidemic process, optimize an immunization campaign or optimally spread information among individuals. This task can now be tackled taking advantage of the recent availability of large-scale, high-quality and time-resolved datasets. This wealth of digital data has allowed us to deepen our understanding of the structure and properties of many real-world networks. Moreover, the empirical evidence of a temporal dimension in networks prompted the switch of paradigm from a static representation of graphs to a time varying one. In this work we exploit the Activity-Driven paradigm (a modeling tool belonging to the family of Time-Varying-Networks) to develop a general dynamical model that encodes fundamental mechanisms shaping social networks' topology and temporal structure: social capital allocation and burstiness. The former accounts for the fact that individuals do not invest their time and social interactions at random but rather allocate them toward already known nodes of the network. The latter accounts for the heavy-tailed distributions of inter-event times in social networks. We then empirically measure the properties of these two mechanisms from seven real-world datasets and develop a data-driven model, which we solve analytically. We then check the results against numerical simulations and test our predictions with real-world datasets, finding good agreement between the two. 
Moreover, we find and characterize a non-trivial interplay between burstiness and social capital allocation in the parameter phase space. Finally, we present a novel approach to the development of a complete generative model of Time-Varying-Networks. This model is inspired by Kauffman's theory of the adjacent possible and is based on a generalized version of Pólya's urn. Remarkably, most of the complex and heterogeneous features of real-world social networks are naturally reproduced by this dynamical model, together with many higher-order topological properties (clustering coefficient, community structure, etc.).
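The social capital allocation mechanism can be caricatured in a few lines: an active node re-contacts an already known neighbour with some reinforcement probability, and otherwise explores a new random tie. The generator below is a minimal sketch under these assumptions; the thesis model, its measured parameters, and its bursty activation are considerably more elaborate, and all names and values here are illustrative.

```python
import random

def activity_driven_with_memory(n_nodes, n_steps, activity=0.1,
                                reinforcement=0.6, seed=0):
    """Toy activity-driven network with a memory ('social capital') rule:
    an active node re-contacts a previously contacted neighbour with
    probability `reinforcement`, otherwise contacts a new random node."""
    rng = random.Random(seed)
    known = {i: set() for i in range(n_nodes)}   # each node's contact memory
    events = []                                  # (time, caller, callee)
    for t in range(n_steps):
        for i in range(n_nodes):
            if rng.random() < activity:          # node i activates this step
                if known[i] and rng.random() < reinforcement:
                    j = rng.choice(sorted(known[i]))   # reinforce an old tie
                else:
                    j = rng.randrange(n_nodes)         # explore a new tie
                    while j == i:
                        j = rng.randrange(n_nodes)
                known[i].add(j)
                known[j].add(i)
                events.append((t, i, j))
    return events, known
```

A bursty variant would replace the Bernoulli activation with waiting times drawn from a heavy-tailed distribution, giving the heterogeneous inter-event times the abstract describes.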
Abstract:
We studied the visual mechanisms that serve to encode spatial contrast at threshold and supra-threshold levels. In a 2AFC contrast-discrimination task, observers had to detect the presence of a vertical 1 cycle deg-1 test grating (of contrast dc) that was superimposed on a similar vertical 1 cycle deg-1 pedestal grating, whereas in pattern masking the test grating was accompanied by a very different masking grating (horizontal 1 cycle deg-1, or oblique 3 cycles deg-1). When expressed as threshold contrast (dc at 75% correct) versus mask contrast (c), our results confirm previous ones in showing a characteristic 'dipper function' for contrast discrimination but a smoothly increasing threshold for pattern masking. However, fresh insight is gained by analysing and modelling performance (p; percent correct) as a joint function of (c, dc), the performance surface. In contrast discrimination, psychometric functions (p versus log dc) are markedly less steep when c is above threshold, but in pattern masking this reduction of slope does not occur. We explored a standard gain-control model with six free parameters. Three parameters control the contrast response of the detection mechanism and one parameter weights the mask contrast in the cross-channel suppression effect. We assume that signal-detection performance (d') is limited by additive noise of constant variance. Noise level and lapse rate are also fitted parameters of the model. We show that this model accounts very accurately for the whole performance surface in both types of masking, and thus explains the threshold functions and the pattern of variation in psychometric slopes. The cross-channel weight is about 0.20. The model shows that the mechanism response to contrast increment (dc) is linearised by the presence of pedestal contrasts but remains nonlinear in pattern masking.
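The model class described above can be sketched numerically: a divisive gain-control transducer whose denominator includes a weighted cross-channel mask term, with 2AFC performance derived from a d' limited by additive noise of constant variance. The transducer form is the standard one from this literature, but the parameter values below are illustrative stand-ins, not the paper's fitted values.

```python
import math

def response(c, mask, p=2.4, q=2.0, z=0.01, w=0.20):
    """Divisive gain-control transducer: excitation c**p divided by a
    suppressive pool including the cross-channel term w * mask**q."""
    return c ** p / (z + c ** q + w * mask ** q)

def percent_correct(c, dc, mask=0.0, noise=0.1, lapse=0.01):
    """2AFC performance surface p(c, dc): d' is the response difference
    between pedestal-plus-increment and pedestal alone, divided by the
    additive noise; Pc = Phi(d'/sqrt(2)), with a small lapse rate."""
    d_prime = (response(c + dc, mask) - response(c, mask)) / noise
    pc = 0.5 * (1.0 + math.erf(d_prime / 2.0))   # = Phi(d'/sqrt(2))
    return (1.0 - lapse) * pc + lapse * 0.5
```

Sweeping (c, dc) over a grid reproduces the qualitative behaviour described in the abstract: a dipper for same-orientation pedestals and a monotone threshold rise as the cross-channel mask contrast grows.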
Abstract:
Benchmarking techniques have evolved over the years since Xerox’s pioneering visits to Japan in the late 1970s. The focus of benchmarking has also shifted during this period. Tracing in detail the evolution of benchmarking in one specific area of business activity, supply and distribution management, as seen by the participants in that evolution, creates a picture of a movement from single function, cost-focused, competitive benchmarking, through cross-functional, cross-sectoral, value-oriented benchmarking, to process benchmarking. As process efficiency and effectiveness become the primary foci of benchmarking activities, the measurement parameters used to benchmark performance converge with the factors used in business process modelling. The possibility is therefore emerging of modelling business processes and then feeding the models with actual data from benchmarking exercises. This would overcome the most common criticism of benchmarking, namely that it intrinsically lacks the ability to move beyond current best practice. In fact the combined power of modelling and benchmarking may prove to be the basic building block of informed business process re-engineering.
Abstract:
This thesis presents the results of numerical modelling of the propagation of dispersion managed solitons. The theory of optical pulse propagation in single mode optical fibre is introduced, specifically looking at the use of optical solitons for fibre communications. The numerical technique used to solve the nonlinear Schrödinger equation is also introduced. The recent developments in the use of dispersion managed solitons are reviewed before the numerical results are presented. The work in this thesis covers two main areas: (i) the use of a saturable absorber to control the propagation of dispersion managed solitons and (ii) the upgrade of the installed standard fibre network to higher data rates through the use of solitons and dispersion management. Saturable absorbers can be used to suppress the build-up of noise and dispersive radiation in soliton transmission lines. The use of saturable absorbers in conjunction with dispersion management has been investigated both for a single pulse and for the transmission of a 10 Gbit/s data pattern. It is found that this system supports a new regime of stable soliton pulses with significantly increased powers. The upgrade of the installed standard fibre network to higher data rates through the use of fibre amplifiers and dispersion management is of increasing interest. In this thesis the propagation of data at both 10 Gbit/s and 40 Gbit/s is studied. Propagation over transoceanic distances is shown to be possible for 10 Gbit/s transmission and for more than 2000 km at 40 Gbit/s. The contribution of dispersion managed solitons in the future of optical communications is discussed in the thesis conclusions.
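The standard numerical technique for the nonlinear Schrödinger equation in fibre modelling is the split-step Fourier method, sketched below in normalised soliton units. The grid sizes and step counts are illustrative, not those of the thesis, and a dispersion-managed line would additionally alternate the sign of the local dispersion along the fibre.

```python
import numpy as np

def split_step_nlse(u0, dz, n_steps, t_span):
    """Split-step Fourier integration of the normalised NLSE
        i u_z + (1/2) u_tt + |u|^2 u = 0,
    alternating an exact linear (dispersion) step in the Fourier domain
    with an exact nonlinear phase rotation in the time domain."""
    n = u0.size
    w = 2.0 * np.pi * np.fft.fftfreq(n, d=t_span / n)   # angular frequencies
    disp = np.exp(-0.5j * w ** 2 * dz)                  # dispersion factor
    u = u0.astype(complex)
    for _ in range(n_steps):
        u = np.fft.ifft(disp * np.fft.fft(u))           # linear step
        u *= np.exp(1j * np.abs(u) ** 2 * dz)           # nonlinear step
    return u
```

A convenient correctness check is that the fundamental soliton u(0, t) = sech(t) propagates with its envelope essentially unchanged, while the scheme conserves pulse energy to round-off.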
Abstract:
Analyzing geographical patterns by collocating events, objects or their attributes has a long history in surveillance and monitoring, and is particularly applied in environmental contexts, such as ecology or epidemiology. The identification of patterns or structures at some scales can be addressed using spatial statistics, particularly marked point processes methodologies. Classification and regression trees are also related to this goal of finding "patterns" by deducing the hierarchy of influence of variables on a dependent outcome. Such variable selection methods have been applied to spatial data, but often without explicitly acknowledging the spatial dependence. Many methods routinely used in exploratory point pattern analysis are 2nd-order statistics, used in a univariate context, though there is also a wide literature on modelling methods for multivariate point pattern processes. This paper proposes an exploratory approach for multivariate spatial data using higher-order statistics built from co-occurrences of events or marks given by the point processes. A spatial entropy measure, derived from these multinomial distributions of co-occurrences at a given order, constitutes the basis of the proposed exploratory methods. © 2010 Elsevier Ltd.
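As a toy version of such an entropy-based statistic, the sketch below forms order-2 co-occurrences from each point and its k nearest neighbours and computes the Shannon entropy of the resulting distribution of mark pairs. The paper's construction is more general; this is an illustrative sketch, not its code.

```python
import math
from collections import Counter

def spatial_cooccurrence_entropy(points, marks, k=1):
    """Shannon entropy of the distribution of mark pairs formed by each
    point and its k nearest neighbours (order-2 co-occurrences)."""
    pairs = Counter()
    for i, (x, y) in enumerate(points):
        # Brute-force k nearest neighbours by squared Euclidean distance
        dists = sorted(((x - px) ** 2 + (y - py) ** 2, j)
                       for j, (px, py) in enumerate(points) if j != i)
        for _, j in dists[:k]:
            pairs[tuple(sorted((marks[i], marks[j])))] += 1
    total = sum(pairs.values())
    return -sum((n / total) * math.log(n / total) for n in pairs.values())
```

Low entropy indicates spatial segregation of marks (neighbours tend to share a mark), while high entropy indicates well-mixed marks, which is the exploratory contrast the measure is designed to expose.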
Abstract:
The literature pertaining to the key stages of spray drying has been reviewed in the context of the mathematical modelling of drier performance. A critical review is also presented of previous spray drying models. A new mathematical model has been developed for prediction of spray drier performance. This is applicable to slurries of rigid, porous crust-forming materials to predict trajectories and drying profiles for droplets with a distribution of sizes sprayed from a centrifugal pressure nozzle. The model has been validated by comparing model predictions to experimental data from a pilot-scale counter-current drier and from a full-scale co-current drier. For the latter, the computed product moisture content was within 2%, and the computed air exit temperature within 10 °C, of experimental data. Air flow patterns have been investigated in a 1.2 m diameter transparent counter-current spray tower by flow visualisation. Smoke was introduced into various zones within the tower to trace the direction, and gauge the intensity, of the air flow. By means of a set of variable-angle air inlet nozzles, a variety of air entry configurations was investigated. The existence of a core of high rotational and axial velocity channelling up the axis of the tower was confirmed. The stability of flow within the core was found to be strongly dependent upon the air entry arrangement. A probe was developed for the measurement of air temperature and humidity profiles. This was employed for studying evaporation of pure water drops in a 1.2 m diameter pilot-scale counter-current drier. A rapid approach to the exit air properties was detected within 1 m of the air entry ports. Measured radial profiles were found to be virtually flat but, from the axial profiles, the existence of plug-flow, well-mixed-flow and some degree of air short-circuiting can be inferred. The model and conclusions should assist in the improved design and optimum operation of industrial spray driers.
Abstract:
Researchers are beginning to recognise that organisations often have different levels of market orientation across different aspects of their operations. Focusing on firms involved in export marketing, this study examines how market-oriented behaviour differs across firms' domestic and export marketing operations. In this respect, the study is the first of its kind since it investigates three main issues: (1) to what extent do differences exist in firms' levels of market-oriented behaviour in their domestic markets (i.e., their domestic market-oriented behaviour) and in their export markets (i.e., their export market-oriented behaviour), (2) what are the key drivers of such differences, and (3) what are the performance implications for firms of having different levels of domestic and export market-oriented behaviour. To shed light on these research questions, data were collected from 225 British exporting firms using a mail questionnaire. Structural equation modelling techniques were used to develop and purify measures of all constructs of interest, and to test the theoretical models developed. The results indicate that many of the businesses sampled have very different levels of market orientation in their domestic and exporting operations: typically, firms tend to be more market-oriented in their domestic markets relative to their export markets. Several key factors were identified as drivers of differences in market orientation levels across firms' domestic and export markets. 
In particular, it was found that differences were more pronounced: (i) when interfunctional interactions between domestic marketing and export marketing are rare, (ii) when domestic and export marketing follow asymmetric business strategies, (iii) when mutual dependence between the functions is low, (iv) when one or other of the functions dominates the firm's sales, and (v) when there are pronounced differences in the degree to which the domestic and the export markets are experiencing environmental turbulence. The consequences of differences in market-oriented behaviour across firms' domestic and export markets were also studied. The results indicate that overall sales performance of firms (as determined by the composite of firms' domestic sales and export sales performance) is positively related to levels of domestic market-oriented behaviour under high levels of environmental turbulence in firms' domestic markets. However, as domestic market turbulence decreases, so too does the strength of this positive relationship. On the other hand, export market-oriented behaviour provides a positive contribution to firms' overall sales success under conditions of relatively low export market turbulence. As the turbulence in export markets increases, this positive relationship becomes weaker. These findings indicate that there are numerous situations in which it is sub-optimal for firms to have identical levels of market-oriented behaviour in their domestic and exporting operations. The theoretical and practical implications of these findings are discussed.
Abstract:
A generalized systematic description of the Two-Wave Mixing (TWM) process in sillenite crystals allowing for arbitrary orientation of the grating vector is presented. An analytical expression for the TWM gain is obtained for the special case of plane waves in a thin crystal (|g|d«1) with large optical activity (|g|/ρ«1, where g is the coupling constant, ρ the rotatory power, and d the crystal thickness). Using a two-dimensional formulation, the scope of the nonlinear equations describing TWM can be extended to finite beams in arbitrary geometries and to any crystal parameters. Two promising applications of this formulation are proposed. The polarization dependence of the TWM gain is used for the flattening of Gaussian beam profiles without expanding them. The dependence of the TWM gain on the interaction length is used for the determination of the crystal orientation. Experiments carried out on Bi12GeO20 crystals of a non-standard cut are in good agreement with the results of modelling.
Abstract:
In analysing manufacturing systems, for either design or operational reasons, failure to account for the potentially significant dynamics could produce invalid results. There are many analysis techniques that can be used; however, simulation is unique in its ability to assess detailed, dynamic behaviour. The use of simulation to analyse manufacturing systems would therefore seem appropriate if not essential. Many simulation software products are available but their ease of use and scope of application vary greatly. This is illustrated at one extreme by simulators, which offer rapid but limited application, and at the other by simulation languages, which are extremely flexible but tedious to code. Given that a typical manufacturing engineer does not possess in-depth programming and simulation skills, the use of simulators over simulation languages would seem a more appropriate choice. Whilst simulators offer ease of use, their limited functionality may preclude their use in many applications. The construction of current simulators makes it difficult to amend or extend the functionality of the system to meet new challenges. Some simulators could even become obsolete as users demand modelling functionality that reflects the latest manufacturing system design and operation concepts. This thesis examines the deficiencies in current simulation tools and considers whether they can be overcome by the application of object-oriented principles. Object-oriented techniques have gained in popularity in recent years and are seen as having the potential to overcome many of the problems traditionally associated with software construction. There are a number of key concepts that are exploited in the work described in this thesis: the use of object-oriented techniques to act as a framework for abstracting engineering concepts into a simulation tool and the ability to reuse and extend object-oriented software. 
It is argued that current object-oriented simulation tools are deficient and that in designing such tools, object-oriented techniques should be used not just for the creation of individual simulation objects but for the creation of the complete software. This results in the ability to construct an easy to use simulator that is not limited by its initial functionality. The thesis presents the design of an object-oriented data driven simulator which can be freely extended. Discussion and work is focused on discrete parts manufacture. The system developed retains the ease of use typical of data driven simulators whilst removing any limitation on its potential range of applications. Reference is given to additions made to the simulator by other developers not involved in the original software development. Particular emphasis is put on the requirements of the manufacturing engineer and the need for the engineer to carry out dynamic evaluations.
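The central design argument, that a data-driven simulator can stay easy to use while being extended by subclassing rather than by modifying the engine, can be caricatured in a few lines. Everything below (the class names, the simple flow-shop logic with unlimited buffering) is an illustrative sketch, not the thesis software.

```python
class Resource:
    """Base class for anything a part visits. Subclasses override
    service_time() to extend the simulator without changing the engine."""
    def __init__(self, name, cycle_time):
        self.name = name
        self.cycle_time = cycle_time
        self.free_at = 0.0              # time the resource next becomes free
    def service_time(self, part):
        return self.cycle_time

class Machine(Resource):
    pass

class SetupMachine(Resource):
    """Extension added purely by subclassing: a one-off setup time on the
    first part, with no change to the simulate() engine."""
    def __init__(self, name, cycle_time, setup=1.0):
        super().__init__(name, cycle_time)
        self.setup = setup
        self.warm = False
    def service_time(self, part):
        extra = 0.0 if self.warm else self.setup
        self.warm = True
        return self.cycle_time + extra

def simulate(route, n_parts, release_interval):
    """Data-driven engine: the model is just the `route` list. Parts are
    released at fixed intervals and processed in order, waiting for each
    resource to become free; completion times are returned."""
    done = []
    for part in range(n_parts):
        t = part * release_interval
        for r in route:
            start = max(t, r.free_at)
            t = start + r.service_time(part)
            r.free_at = t
        done.append(t)
    return done
```

New resource behaviours (batching, breakdowns, inspection) would slot in as further subclasses, which is the extensibility-by-inheritance point the thesis makes.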
Abstract:
Visual perception begins by dissecting the retinal image into millions of small patches for local analyses by local receptive fields. However, image structures extend well beyond these receptive fields and so further processes must be involved in sewing the image fragments back together to derive representations of higher order (more global) structures. To investigate the integration process, we also need to understand the opposite process of suppression. To investigate both processes together, we measured triplets of dipper functions for targets and pedestals involving interdigitated stimulus pairs (A, B). Previous work has shown that summation and suppression operate over the full contrast range for the domains of ocularity and space. Here, we extend that work to include orientation and time domains. Temporal stimuli were 15-Hz counter-phase sine-wave gratings, where A and B were the positive and negative phases of the oscillation, respectively. For orientation, we used orthogonally oriented contrast patches (A, B) whose sum was an isotropic difference of Gaussians. Results from all four domains could be understood within a common framework in which summation operates separately within the numerator and denominator of a contrast gain control equation. This simple arrangement of summation and counter-suppression achieves integration of various stimulus attributes without distorting the underlying contrast code.
Abstract:
The processing conducted by the visual system requires the combination of signals that are detected at different locations in the visual field. The processes by which these signals are combined are explored here using psychophysical experiments and computer modelling. Most of the work presented in this thesis is concerned with the summation of contrast over space at detection threshold. Previous investigations of this sort have been confounded by the inhomogeneity in contrast sensitivity across the visual field. Experiments performed in this thesis find that the decline in log contrast sensitivity with eccentricity is bilinear, with an initial steep fall-off followed by a shallower decline. This decline is scale-invariant for spatial frequencies of 0.7 to 4 c/deg. A detailed map of the inhomogeneity is developed, and applied to area summation experiments both by incorporating it into models of the visual system and by using it to compensate stimuli in order to factor out the effects of the inhomogeneity. The results of these area summation experiments show that the summation of contrast over area is spatially extensive (occurring over 33 stimulus carrier cycles), and that summation behaviour is the same in the fovea, parafovea, and periphery. Summation occurs according to a fourth-root summation rule, consistent with a “noisy energy” model. This work is extended to investigate the visual deficit in amblyopia, finding that area summation is normal in amblyopic observers. Finally, the methods used to study the summation of threshold contrast over area are adapted to investigate the integration of coherent orientation signals in a texture. The results of this study are described by a two-stage model, with a mandatory local combination stage followed by flexible global pooling of these local outputs. In each study, the results suggest a more extensive combination of signals in vision than has been previously understood.
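The fourth-root summation rule mentioned above has a compact numerical form: sensitivities combine as a Minkowski sum with exponent 4, so thresholds for n identical stimulus regions fall as n^(1/4). The sketch below is illustrative only; the thesis model also incorporates the sensitivity map and noise.

```python
def minkowski_sum(sensitivities, m=4.0):
    """Minkowski summation of channel sensitivities; m = 4 gives the
    fourth-root rule consistent with a 'noisy energy' model."""
    return sum(s ** m for s in sensitivities) ** (1.0 / m)

def area_summation_threshold(base_threshold, n_regions, m=4.0):
    """Detection threshold predicted when n_regions identical stimulus
    regions are combined under an m-th root summation rule."""
    return base_threshold / (n_regions ** (1.0 / m))
```

For example, quadrupling the stimulus area under this rule improves sensitivity by a factor of sqrt(2), a much shallower gain than linear summation would predict.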
Abstract:
Over the last few years Data Envelopment Analysis (DEA) has been gaining increasing popularity as a tool for measuring efficiency and productivity of Decision Making Units (DMUs). Conventional DEA models assume non-negative inputs and outputs. However, in many real applications, some inputs and/or outputs can take negative values. Recently, Emrouznejad et al. [6] introduced a Semi-Oriented Radial Measure (SORM) for modelling DEA with negative data. This paper points out some issues in target setting with SORM models and introduces a modified SORM approach. An empirical study in the banking sector demonstrates the applicability of the proposed model. © 2014 Elsevier Ltd. All rights reserved.