926 results for Business Value Two-Layer Model


Relevance: 100.00%

Abstract:

Using the equivalent-layer technique to interpolate potential-field data makes it possible to take into account that the anomaly to be interpolated, gravimetric or magnetic, is a harmonic function. However, the technique's computational application is restricted to surveys with a small number of data, since it requires solving a least-squares problem whose order equals that number. To make the equivalent-layer technique applicable to surveys with a large number of data, we developed the concept of equivalent observations and the EGTG method, which, respectively, reduce computer memory demand and optimize the evaluation of the inner products inherent in solving the least-squares problems. In essence, the concept of equivalent observations consists in selecting some observations, from among all the original ones, such that the least-squares fit to the selected observations automatically fits (within a pre-established tolerance criterion) all the remaining, unselected observations. The selected observations are called equivalent observations and the remaining ones redundant observations. This corresponds to partitioning the original linear system into two linear systems of lower order, the first containing only the equivalent observations and the second only the redundant ones, such that the least-squares solution obtained from the first system is also a solution of the second. This procedure makes it possible to fit all the sampled data using only the equivalent observations (rather than all the original observations), which reduces the number of operations and the computer memory required.
The EGTG method consists, first, in recognizing the inner product as a discrete integration of a known analytical integral and, second, in replacing the discrete integration with the evaluation of that analytical integral. The method should be applied whenever evaluating the analytical integral requires fewer computations than evaluating the discrete one. To determine the equivalent observations, we developed two iterative algorithms, DOE and DOEg. The first identifies the equivalent observations of the linear system as a whole, while the second identifies them in disjoint subsystems of the original linear system; each iteration of DOEg consists of one application of DOE to a partition of the original system. In interpolation, the DOE algorithm yields an interpolating surface that fits all the data, allowing global interpolation. The DOEg algorithm, on the other hand, optimizes local interpolation, since it uses only the equivalent observations, in contrast with existing local-interpolation algorithms, which use all observations. Interpolation based on the equivalent-layer technique and on the minimum-curvature method were compared with respect to their ability to recover the true anomaly values during interpolation. The tests used synthetic data (produced by prismatic source models) from which interpolated values on a regular grid were obtained. These interpolated values were compared with the theoretical values computed from the source model on the same grid, allowing the efficiency of each interpolation method in recovering the true anomaly values to be assessed. In all tests the equivalent-layer method recovered the true anomaly more faithfully than the minimum-curvature method.
In undersampled situations, in particular, the minimum-curvature method proved unable to recover the true anomaly where its curvature was most pronounced. For data acquired at different heights, the minimum-curvature method showed its worst performance, in contrast to the equivalent-layer method, which performed interpolation and levelling simultaneously. Using the DOE algorithm, the equivalent-layer technique could be applied to the (global) interpolation of the 3137 free-air anomaly data from part of the Equant-2 marine survey and the 4941 total-field magnetic anomaly data from part of the Carauari-Norte aeromagnetic survey. The numbers of equivalent observations identified in each case were 294 and 299, respectively. Using the DOEg algorithm, we optimized the (local) interpolation of the complete data sets of both surveys. None of these interpolations would have been feasible without the equivalent-observations concept. The ratio of CPU time (running the programs in the same memory space) spent by the minimum-curvature method to that spent by the equivalent layer (global interpolation) was 1:31; for local interpolation the ratio was practically 1:1.
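The DOE/DOEg algorithms themselves are not reproduced in the abstract; as a minimal sketch of the equivalent-observations idea, the following greedy loop (function names, the selection rule, and the synthetic model are illustrative, not the authors' actual implementation) grows a subset of observations until the least-squares fit to the subset also fits every remaining, redundant observation within tolerance:

```python
import numpy as np

def select_equivalent_observations(A, d, tol, n_start=3, seed=0):
    """Greedy sketch of the equivalent-observations idea: grow a subset of
    rows until the least-squares fit to the subset also fits every remaining
    (redundant) observation within `tol`."""
    rng = np.random.default_rng(seed)
    selected = list(rng.choice(len(d), size=n_start, replace=False))
    while True:
        p, *_ = np.linalg.lstsq(A[selected], d[selected], rcond=None)
        residuals = np.abs(A @ p - d)          # misfit on ALL observations
        worst = int(np.argmax(residuals))
        if residuals[worst] <= tol:            # redundant data already fit
            return np.array(selected), p
        selected.append(worst)                 # promote the worst-fit observation

# Synthetic test: 200 samples of a smooth "anomaly" spanned by a small model
x = np.linspace(0.0, 1.0, 200)
A = np.vander(x, 4)                            # 4-parameter model matrix
d = A @ np.array([1.0, -2.0, 0.5, 3.0])        # noise-free synthetic data
eq, p = select_equivalent_observations(A, d, tol=1e-8)
print(f"{len(eq)} equivalent observations fit all {len(d)} data")
```

As in the abstract, only the small selected system is ever solved, yet the returned parameters fit the full data set within the tolerance.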

Relevance: 100.00%

Abstract:

We consider a generalized two-species population-dynamics model and solve it analytically for the amensalism and commensalism ecological interactions. These two-species models can be reduced to a one-species model with a time-dependent extrinsic growth factor. With a one-species model with an effective carrying capacity, one can retrieve the steady-state solutions of the previous one-species model. The equivalence between the effective carrying capacity and the extrinsic growth factor is complete only in a particular case, the Gompertz model. Here we unveil important aspects of sigmoid growth curves, which are relevant to growth processes and population dynamics. (C) 2011 Elsevier B.V. All rights reserved.
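The Gompertz case singled out above admits a closed form: for dN/dt = a N ln(K/N) with effective carrying capacity K, the solution is N(t) = K (N0/K)^exp(-a t). A quick numeric check (parameter values are arbitrary, chosen only for illustration) compares it against direct integration:

```python
import math

def gompertz_exact(t, n0, K, a):
    # Closed form of dN/dt = a*N*ln(K/N): N(t) = K * (n0/K)**exp(-a*t)
    return K * (n0 / K) ** math.exp(-a * t)

def gompertz_euler(t, n0, K, a, steps=20000):
    # Forward-Euler integration of the same ODE, for comparison
    dt = t / steps
    n = n0
    for _ in range(steps):
        n += dt * a * n * math.log(K / n)
    return n

n0, K, a, t = 10.0, 1000.0, 0.5, 8.0
print(gompertz_exact(t, n0, K, a))   # sigmoid approach toward K
print(gompertz_euler(t, n0, K, a))   # numerically close to the exact value
```

The substitution u = ln(N/K) turns the ODE into du/dt = -a u, which is why the exponent exp(-a t) appears and why the curve saturates at K.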

Relevance: 100.00%

Abstract:

We report self-similar properties of periodic structures remarkably organized in the two-parameter space of a two-gene system described by a two-dimensional symmetric map. The map consists of difference equations derived from the chemical reactions for gene expression and regulation. We characterize the system using Lyapunov exponents and isoperiodic diagrams, identifying periodic windows known as Arnold tongues and shrimp-shaped structures. Period-adding sequences are observed for both types of periodic window. We also identify Fibonacci-type series and the Golden ratio for the Arnold tongues, and period multiple-of-three windows for the shrimps. (C) 2012 Elsevier B.V. All rights reserved.
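The abstract does not reproduce the gene-expression map itself, so as a generic stand-in the Lyapunov-exponent computation used to locate such periodic windows can be sketched on the Henon map, a standard two-dimensional map: propagate a tangent vector through the Jacobian at each iterate and average the log of its growth. A negative exponent flags a periodic window, a positive one chaos.

```python
import numpy as np

def largest_lyapunov(a, b, n_iter=20000, n_skip=500):
    """Largest Lyapunov exponent of the Henon map
    (x, y) -> (1 - a*x^2 + y, b*x), estimated from the average
    log-growth of a tangent vector propagated by the Jacobian."""
    x, y = 0.1, 0.1
    v = np.array([1.0, 0.0])
    total = 0.0
    for i in range(n_iter):
        J = np.array([[-2.0 * a * x, 1.0], [b, 0.0]])  # Jacobian at current point
        x, y = 1.0 - a * x * x + y, b * x              # iterate the map
        v = J @ v                                      # propagate tangent vector
        norm = np.linalg.norm(v)
        v = v / norm                                   # renormalize to avoid overflow
        if i >= n_skip:                                # discard the transient
            total += np.log(norm)
    return total / (n_iter - n_skip)

print(largest_lyapunov(1.4, 0.3))  # positive: chaotic regime
print(largest_lyapunov(1.0, 0.3))  # negative: stable periodic orbit
```

Scanning this quantity over a grid of (a, b) values is exactly how the isoperiodic diagrams that reveal Arnold tongues and shrimps are built.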

Relevance: 100.00%

Abstract:

Wave breaking is an important coastal process, influencing hydro-morphodynamic processes such as turbulence generation and wave energy dissipation, run-up on the beach, and overtopping of coastal defence structures. During breaking, waves are complex mixtures of air and water ("white water") whose properties affect the velocity and pressure fields near the free surface and, depending on the breaker characteristics, different air-entrainment mechanisms are observed. Several laboratory experiments have investigated the role of air bubbles in the wave-breaking process (Chanson & Cummings, 1994, among others) and in wave loading on vertical walls (Oumeraci et al., 2001; Peregrine et al., 2006, among others), showing that the air phase is not negligible, since the turbulent energy dissipation involves the air-water mixture. Recent advances in numerical models have given valuable insight into wave transformation and interaction with coastal structures. Among these models, some solve the RANS equations coupled with a free-surface tracking algorithm and describe the velocity, pressure, turbulence and vorticity fields (Lara et al., 2006a-b; Clementi et al., 2007). A single-phase numerical model, in which the constitutive equations are solved only for the liquid phase, neglects the effects induced by air movement and by air bubbles trapped in the water. Numerical approximations at the free surface may introduce errors in predicting the breaking point and wave height; moreover, entrapped air bubbles and water splashing into the air are not properly represented. The aim of the present thesis is to develop a new two-phase model, COBRAS2 (Cornell Breaking waves And Structures, 2 phases), an enhancement of the single-phase code COBRAS0 originally developed at Cornell University (Lin & Liu, 1998).
In the first part of the work both fluids are treated as incompressible, while the second part addresses the modelling of air compressibility. The mathematical formulation and numerical solution of the governing equations of COBRAS2 are derived, and several model-experiment comparisons are shown. In particular, validation tests are performed to demonstrate model stability and accuracy. The simulation of a large air bubble rising in an otherwise quiescent water pool shows the model's ability to reproduce the physics of the process realistically. Analytical solutions for stationary and internal waves are compared with the corresponding numerical results, in order to test processes involving a wide range of density differences. Waves induced by dam break in different scenarios (on dry and wet beds, as well as on a ramp) are studied, focusing on the role of air as the medium through which the water wave propagates and on the numerical representation of bubble dynamics. Simulations of solitary and regular waves, featuring both spilling and plunging breakers, are analysed and compared with experimental data and another numerical model, in order to investigate the influence of air on wave-breaking mechanisms and to demonstrate the model's capability and accuracy. Finally, the modelling of air compressibility is included in the new model and validated, showing accurate reproduction of the processes involved. Some preliminary tests of wave impact on vertical walls are performed: since modelling the air flow allows a more realistic reproduction of breaking-wave propagation, the dependence of impact pressure on breaker shape and aeration characteristics is studied and, on the basis of a qualitative comparison with experimental observations, the numerical simulations achieve good results.

Relevance: 100.00%

Abstract:

During my PhD, starting from the original formulations proposed by Bertrand et al. (2000) and Emolo & Zollo (2005), I developed inversion methods and applied them to different earthquakes. In particular, large efforts were devoted to studying the model resolution and estimating the errors of the model parameters. To study the kinematic source characteristics of the Christchurch earthquake, we performed a joint inversion of strong-motion, GPS and InSAR data using a non-linear inversion method. Given the complexity highlighted by the surface deformation data, we adopted a fault model consisting of two partially overlapping segments, with dimensions 15 x 11 km and 7 x 7 km and different faulting styles. This two-fault model better reconstructs the complex shape of the surface deformation. The total seismic moment resulting from the joint inversion is 3.0 x 10^25 dyne.cm (Mw = 6.2), with an average rupture velocity of 2.0 km/s. Errors associated with the kinematic model are estimated at around 20-30%. The 2009 Aquila earthquake was followed by an intense aftershock sequence that lasted several months. In this study we applied an inversion method that takes the apparent Source Time Functions (aSTFs) as data to a Mw 4.0 aftershock of the Aquila sequence. The aSTFs were estimated using the deconvolution method proposed by Vallée et al. (2004). The inversion results show a heterogeneous slip distribution, characterized by two main slip patches located NW of the hypocenter, and a variable rupture-velocity distribution (mean value 2.5 km/s) showing a rupture-front acceleration between the two high-slip zones. Errors of about 20% characterize the final estimated parameters.
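The moment-to-magnitude conversion implicit in the abstract can be checked with the standard Hanks-Kanamori relation; the exact constant the authors used is not stated, so differences at the second decimal place come from rounding.

```python
import math

def moment_magnitude(m0_dyne_cm):
    # Hanks & Kanamori (1979) relation, with M0 in dyne.cm:
    # Mw = (2/3) * log10(M0) - 10.7
    return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7

# Joint-inversion moment quoted above: 3.0 x 10^25 dyne.cm
print(moment_magnitude(3.0e25))  # ~6.28, consistent with the quoted Mw 6.2
```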

Relevance: 100.00%

Abstract:

Driven by privacy-related fears, users of Online Social Networks may start to reduce their network activities. This trend can have a negative impact on network sustainability and its business value. Nevertheless, very little is understood about the privacy-related concerns of users and the impact of those concerns on identity performance. To close this gap, we take a systematic view of user privacy concerns on such platforms. Based on insights from focus groups and an empirical study with 210 subjects, we find that (i) Organizational Threats and (ii) Social Threats stemming from the user environment constitute two underlying dimensions of the construct “Privacy Concerns in Online Social Networks”. Using a Structural Equation Model, we examine the impact of the identified dimensions of concern on the Amount, Honesty, and Conscious Control of individual self-disclosure on these sites. We find that users tend to reduce the Amount of information disclosed as a response to their concerns regarding Organizational Threats. Additionally, users become more conscious about the information they reveal as a result of Social Threats. Network providers may want to develop specific mechanisms to alleviate identified user concerns and thereby ensure network sustainability.

Relevance: 100.00%

Abstract:

In many field or laboratory situations, well-mixed reservoirs such as injection or detection wells and gas distribution or sampling chambers define the boundaries of transport domains. Exchange of solutes or gases across such boundaries can occur through advective or diffusive processes. First, we systematically analyzed situations where the inlet region consists of a well-mixed reservoir, interpreting them in terms of injection type. Second, we discussed the mass balance errors that seem to appear in the case of resident injections. Mixing cells (MC) can be coupled mathematically in different ways to a domain in which advective-dispersive transport occurs: by assuming a continuous solute flux at the interface (flux injection, MC-FI), or by assuming a continuous resident concentration (resident injection). In the latter case, the flux leaving the mixing cell can be defined in two ways: either as the value obtained when the interface is approached from the mixing-cell side (MC-RI-), or as the value obtained when it is approached from the column side (MC-RI+). Solutions for these injection types with constant or, in one case, distance-dependent transport parameters were compared with each other, as well as with the solution for a two-layer system in which the first layer is characterized by a large dispersion coefficient. These solutions differ mainly at small Peclet numbers. For most real situations, the model for resident injection MC-RI+ is considered relevant. This type of injection was modeled with a constant or an exponentially varying dispersion coefficient within the porous medium. A constant dispersion coefficient is appropriate for gases because of the Eulerian nature of the usually dominant gaseous diffusion coefficient, whereas an asymptotically growing dispersion coefficient is more appropriate for solutes, owing to the Lagrangian nature of mechanical dispersion, which evolves only with the fluid flow.
Assuming a continuous resident concentration at the interface between a mixing cell and a column, as in the MC-RI+ model, entails a flux discontinuity. This discontinuity arises inherently from the definition of a mixing cell: the mixing process is included in the balance equation but does not appear in the description of the flux through the mixing cell, where only convection appears because of the homogeneous concentration within the cell. Thus, the solute flux through a mixing cell in close contact with a transport domain is generally underestimated. This leads to (apparent) mass balance errors, which are often reported for similar situations and erroneously used to judge the validity of such models. Finally, the mixing-cell model MC-RI+ defines a universal basis regarding the type of solute injection at a boundary: depending on the mixing-cell parameters, it represents, in its limits, flux as well as resident injections. (C) 1998 Elsevier Science B.V. All rights reserved.
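The balance equation for a well-mixed cell mentioned above has a simple closed form: for a cell of volume V flushed at flow rate q, V dC/dt = q (C_in - C), so the resident concentration relaxes exponentially toward the inflow value with residence time V/q. A minimal numeric sketch (symbols and values are illustrative, not taken from the paper):

```python
import math

def mixing_cell(c_in, c0, q, V, t):
    """Resident concentration in a well-mixed cell of volume V flushed at
    flow rate q: V*dC/dt = q*(c_in - C), i.e. exponential relaxation
    toward c_in with residence time tau = V/q."""
    tau = V / q
    return c_in + (c0 - c_in) * math.exp(-t / tau)

# After a few residence times the cell concentration approaches the inflow value
q, V = 2.0, 10.0          # flow rate and cell volume (arbitrary units), tau = 5
for t in (0.0, 5.0, 25.0):
    print(t, mixing_cell(c_in=1.0, c0=0.0, q=q, V=V, t=t))
```

This smoothing of the injected signal is exactly why coupling the cell to a column behaves like an inlet layer with a large dispersion coefficient at small Peclet numbers.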

Relevance: 100.00%

Abstract:

The Princeton Ocean Model is used to study the circulation features in the Pearl River Estuary and their responses to tide, river discharge, wind, and heat flux in the winter dry and summer wet seasons. The model has an orthogonal curvilinear grid in the horizontal plane, with variable spacing from 0.5 km in the estuary to 1 km on the shelf, and 15 sigma levels in the vertical direction. The initial conditions and the subtidal open-boundary forcing are obtained from an associated larger-scale model of the northern South China Sea. Buoyancy forcing uses the climatological monthly heat fluxes and river discharges, and both the climatological monthly wind and the realistic wind are used in the sensitivity experiments. The tidal forcing is represented by sinusoidal functions with the observed amplitudes and phases. In this paper, the simulated tide is first examined. The simulated seasonal distributions of salinity, as well as the temporal variations of salinity and velocity over a tidal cycle, are described and then compared with in situ survey data from July 1999 and January 2000. The model successfully reproduces the main hydrodynamic processes, such as stratification, mixing, frontal dynamics, summer upwelling, and two-layer gravitational circulation, and the distributions of hydrodynamic parameters in the Pearl River Estuary and coastal waters for both the winter and summer seasons.

Relevance: 100.00%

Abstract:

The most liquid stocks in the IBOVESPA index reflect the behaviour of stocks in general, as well as the relationship of macroeconomic variables to that behaviour, and are among the most traded in the Brazilian capital market. One can therefore expect that factors affecting the most liquid companies shape the behaviour of macroeconomic variables, and that the converse also holds: fluctuations in macroeconomic factors such as IPCA, GDP (PIB), the SELIC rate, and the exchange rate also affect the most liquid stocks. This study analyses the relationship between macroeconomic variables and the behaviour of the most liquid stocks of the IBOVESPA index, corroborating studies that seek to understand the influence of macroeconomic factors on stock prices and contributing empirically to the construction of investment portfolios. The study covers the period from 2008 to 2014. The results indicate that portfolios aimed at protecting invested capital should contain assets negatively correlated with the variables studied, which makes it possible to compose a portfolio with reduced risk.
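The selection criterion described, keeping only assets whose returns correlate negatively with the macro variables, can be sketched with NumPy on synthetic series; the actual study used IPCA, PIB, SELIC and exchange-rate data for 2008-2014, so the series, loadings and asset names below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 72                                   # roughly six years of monthly observations
macro = rng.normal(size=T)               # stand-in for one macro factor (e.g. SELIC)
# Two synthetic stocks: one loads positively on the factor, one negatively
stock_a = 0.8 * macro + 0.3 * rng.normal(size=T)
stock_b = -0.8 * macro + 0.3 * rng.normal(size=T)

def corr(x, y):
    # Pearson correlation between two return series
    return float(np.corrcoef(x, y)[0, 1])

# Keep for the protective portfolio only assets negatively correlated with the factor
candidates = {"A": stock_a, "B": stock_b}
selected = [name for name, r in candidates.items() if corr(r, macro) < 0]
print(selected)   # only the negatively correlated asset survives the screen
```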

Relevance: 100.00%

Abstract:

A “most probable state” equilibrium statistical theory for random distributions of hetons in a closed basin is developed here in the context of two-layer quasigeostrophic models for the spreading phase of open-ocean convection. The theory depends only on bulk conserved quantities such as energy, circulation, and the range of values of potential vorticity in each layer. The simplest theory is formulated for a uniform cooling event over the entire basin that triggers a homogeneous random distribution of convective towers. For a small Rossby deformation radius typical for open-ocean convection sites, the most probable states that arise from this theory strongly resemble the saturated baroclinic states of the spreading phase of convection, with a stabilizing barotropic rim current and localized temperature anomaly.

Relevance: 100.00%

Abstract:

"First Printing: January 2000."--p. [ii].

Relevance: 100.00%

Abstract:

We study the exact solution of a two-mode model describing coherent coupling between atomic and molecular Bose-Einstein condensates (BEC), in the context of the Bethe ansatz. By combining asymptotic and numerical analyses, we identify the scaling behaviour of the model and determine the zero-temperature expectation values of the coherence and the average atomic occupation. The threshold coupling for production of the molecular BEC is identified as the point at which the energy gap is minimal. Our numerical results indicate a parity effect in the energy gap between the ground and first excited states, depending on whether the total atomic number is odd or even. The numerical calculations for the quantum dynamics reveal a smooth transition from the atomic to the molecular BEC.

Relevance: 100.00%

Abstract:

In this paper, we present a top-down approach to integrated process modelling and distributed process execution. The integrated process model can be used for global monitoring and visualization, and the distributed process models for local execution. Our main focus in this paper is to present an approach that supports the automatic generation and linking of distributed process models from an integrated process definition.

Relevance: 100.00%

Abstract:

The starting point of this research was the belief that manufacturing and similar industries need help with the concept of e-business, especially in assessing the relevance of possible e-business initiatives. The research hypothesis was that it should be possible to produce a systematic model that defines, at a useful level of detail, the probable e-business requirements of an organisation based on objective criteria, with an accuracy of 85-90%. This thesis describes the development and validation of such a model. A preliminary model was developed from a variety of sources, including a survey of current and planned e-business activity and representative examples of e-business material produced by e-business solution providers. The model was subjected to a process of testing and refinement based on recursive case studies, with controls over the improving accuracy and stability of the model. Useful conclusions were also possible as to the relevance of e-business functions to the case-study participants themselves. Techniques were evolved to synthesise the e-business requirements of an organisation and present them at a management-summary level of detail. The results of applying these techniques to all the case studies used in this research are discussed. The conclusion of the research is that the case-study methodology employed was successful: a model was achieved that is suitable for practical application in a manufacturing organisation requiring help with a requirements-definition process.