938 results for Product-specific model
Abstract:
The aim of this scientific work is to study the vertical transport of horizontal momentum (convective momentum transport, CMT) carried out by deep convective cloud systems over the tropical ocean. For this purpose, three-dimensional simulations produced by a cloud-resolving model (CRM) were used for the four-month duration of the TOGA COARE observational campaign, which took place over the warm waters of the western Pacific. The study focuses essentially on the statistical and cloud-scale characteristics of CMT during an episode of strong westerly winds and during a longer period that includes this deep convection event. The vertical and time-height distributions of atmospheric fields related to CMT are evaluated against the available observational fields and show good agreement with the results of previous studies, confirming the quality of the simulations and providing the confidence needed to pursue the investigation. The sensitivity of CMT to the model's spatial domain is analysed using two sets of three-dimensional simulations run on horizontal domains of different size, suggesting that CMT does not depend on the size of the horizontal domain chosen to simulate it. The ability of the mixing-length parameterization to simulate CMT is tested, highlighting the tropospheric regions where the horizontal momentum fluxes are downgradient or countergradient. The downgradient fluxes are associated with a weak correlation between the atmospheric fields that characterize this parameterization, suggesting that the formulations of the in-cloud mass fluxes and of the entrainment of air into the cloud should be revised. The importance of saturated and unsaturated air for CMT is studied with the aim of reaching a better understanding of the physical mechanisms responsible for it. Unsaturated and saturated air in the form of downdraughts contribute decisively to CMT and should be considered in future parameterizations of CMT and of cumulus convection. Clustering methods were applied to the saturated and unsaturated air contributions, analysing the buoyancy and air-parcel vertical velocity fields, and internal gravity waves were identified as the mechanism responsible for the unsaturated-air contribution. The in-cloud pressure gradient force is also evaluated, using the theoretical formula proposed by Gregory et al. (1997). A good correlation between this force and the product of the wind shear and the vertical velocity perturbation is found, mainly for in-cloud updraughts during the deep convection episode. However, the ideal value of the empirical coefficient c*, which characterizes the influence of the in-cloud pressure gradient force on the vertical variation of the in-cloud horizontal velocity, is not satisfactorily recovered. Good results are obtained when testing the mass-flux approximation proposed by Kershaw and Gregory (1997) for the calculation of the total CMT, once again revealing the importance of unsaturated air for CMT.
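For reference, the Gregory et al. (1997) relation tested above links the in-cloud pressure gradient force to the product of the vertical wind shear and the vertical velocity perturbation; schematically, with notation and sign convention assumed here rather than taken from the thesis,

    F_p \;\approx\; -\,c^{*}\, w'\, \frac{\partial \bar{u}}{\partial z},

where w' is the in-cloud vertical velocity perturbation, \bar{u} the mean horizontal wind, and c^{*} the empirical coefficient whose ideal value the study reports as not satisfactorily recovered.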
Abstract:
Recent developments in high-end processors make temperature monitoring and tuning one of the main challenges on the path to higher performance, given growing power and temperature constraints. To address this challenge, one needs both a suitable thermal energy abstraction and corresponding instrumentation. Our model is based on application-specific parameters, such as power consumption, execution time, and asymptotic temperature, as well as hardware-specific parameters, such as the half time for thermal rise or fall. As observed with our out-of-band instrumentation and monitoring infrastructure, temperature changes follow a relatively slow, capacitor-style charge-discharge process. Therefore, we use a lumped thermal model that initiates an exponential process whenever there is a change in the processor's power consumption. Initial experiments with two codes, Firestarter and Nekbone, validate our thermal energy model and demonstrate its use for analyzing and potentially improving the application-specific balance between temperature, power, and performance.
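As a minimal sketch of such a lumped model (illustrative names and values, not the paper's instrumentation API), the temperature response to a power change can be written as a first-order exponential parameterized by the asymptotic temperature and the half time for thermal rise or fall:

    import math

    def lumped_temperature(t, T_start, T_asymptotic, half_time):
        """First-order (capacitor-style) thermal response: after a change in
        power consumption at t = 0, temperature relaxes exponentially from
        T_start toward T_asymptotic with the given half time (seconds, deg C)."""
        decay = math.exp(-math.log(2.0) * t / half_time)
        return T_asymptotic + (T_start - T_asymptotic) * decay

    # Example: a core at 45 C starts a hot loop with a steady state of 80 C
    # and a 20 s half time; after one half time it reaches the midpoint.
    print(lumped_temperature(20.0, 45.0, 80.0, 20.0))  # -> 62.5

A new exponential segment of this form is started whenever the measured power consumption changes, with T_start taken from the current reading and T_asymptotic from the new power level.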
Abstract:
In this paper we study a delay mathematical model for the dynamics of HIV in HIV-specific CD4+ T helper cells. We modify the model presented by Roy and Wodarz in 2012, where the HIV dynamics is studied considering a single CD4+ T cell population. Non-specific helper cells are included as an alternative target cell population, to account for macrophages and dendritic cells. We include two types of delay: (1) a latent period between the time target cells are contacted by the virus particles and the time the virions enter the cells; and (2) a virus production period for new virions to be produced within and released from the infected cells. We compute the reproduction number of the model, R0, and study the local stability of the disease-free equilibrium and of the endemic equilibrium. We find that for values of R0<1 the model approaches the disease-free equilibrium asymptotically, while for values of R0>1 it approaches the endemic equilibrium asymptotically. We observe numerically the phenomenon of backward bifurcation for values of R0⪅1; a proof is left for future work. We also vary the values of the latent period and of the production period of infected cells and free virus, and conclude that increasing these values translates into a decrease of the reproduction number. Thus, a good strategy to control the HIV virus should focus on drugs that prolong the latent period and/or slow down virus production. These results suggest that the model is mathematically and epidemiologically well posed.
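As a minimal sketch of such a two-delay target-cell model (the paper's full system also includes the specific and non-specific helper-cell populations; all symbols below are assumptions, not the authors' notation):

    \dot{T}(t) = \lambda - d\,T(t) - \beta\,T(t)\,V(t),
    \dot{I}(t) = \beta\, e^{-m\tau_1}\, T(t-\tau_1)\, V(t-\tau_1) - \delta\, I(t),
    \dot{V}(t) = N\,\delta\, e^{-n\tau_2}\, I(t-\tau_2) - c\,V(t),
    \qquad
    R_0 = \frac{\lambda\,\beta\,N\, e^{-m\tau_1 - n\tau_2}}{d\,c}.

Here \tau_1 is the latent period, \tau_2 the virus production period, and e^{-m\tau_1}, e^{-n\tau_2} are survival factors over the delays. Because R_0 decays exponentially in both delays, prolonging the latent period or slowing virus production lowers R_0, which is consistent with the abstract's conclusion.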
Abstract:
The following work project illustrates the strategic issues that There App, a mobile application, faces regarding the opportunity to expand from its current state as a product into a multisided platform. Initially, a market analysis is performed to identify the ideal customer groups to be integrated into the platform. Strategic design issues are then discussed concerning how best to match its value proposition with the identified market opportunity. Suggestions on how the company should organize its resources and operational processes to best deliver on its value proposition complete the work.
Abstract:
There are many ways to generate geometrical models for numerical simulation, and most of them start with a segmentation step to extract the boundaries of the regions of interest. This paper presents an algorithm to generate a patient-specific three-dimensional geometric model, based on a tetrahedral mesh, without an initial extraction of contours from the volumetric data. Using the information directly available in the data, such as gray levels, we built a metric to drive a mesh adaptation process. The metric is used to specify the size and orientation of the tetrahedral elements everywhere in the mesh. Our method, which produces anisotropic meshes, gives good results with synthetic and real MRI data. The resulting model quality has been evaluated qualitatively and quantitatively by comparing it with an analytical solution and with a segmentation made by an expert. Results show that, in 90% of the cases, our method gives meshes as good as or better than a similar isotropic method, based on the accuracy of the volume reconstruction for a given mesh size. Moreover, a comparison of the Hausdorff distances between the adapted meshes of both methods and ground-truth volumes shows that our method decreases reconstruction errors faster.
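One common way to build such a gray-level-driven metric, shown here as a hedged sketch (the paper's exact construction may differ, and all names are illustrative), is to take the Hessian of the intensity field, bound the element size implied by each eigenvalue, and reassemble an anisotropic metric tensor per voxel:

    import numpy as np

    def metric_from_gray_levels(gray, eps, h_min, h_max):
        """Hedged sketch: build a per-voxel anisotropic metric tensor from
        the Hessian of the gray-level field; element size along each
        eigendirection scales with an interpolation-error estimate."""
        gx, gy, gz = np.gradient(gray.astype(float))
        H = np.empty(gray.shape + (3, 3))
        H[..., 0, :] = np.stack(np.gradient(gx), axis=-1)
        H[..., 1, :] = np.stack(np.gradient(gy), axis=-1)
        H[..., 2, :] = np.stack(np.gradient(gz), axis=-1)
        H = 0.5 * (H + np.swapaxes(H, -1, -2))   # enforce symmetry
        lam, vec = np.linalg.eigh(H)             # per-voxel eigendecomposition
        # Target size h satisfies h^2 * |lambda| = eps, clipped to [h_min, h_max].
        size = np.clip(np.sqrt(eps / np.maximum(np.abs(lam), 1e-12)), h_min, h_max)
        # Metric M = V diag(1/h^2) V^T: edges of unit length in metric space.
        return np.einsum('...ij,...j,...kj->...ik', vec, 1.0 / size**2, vec)

The eigenvectors orient the elements along the directions in which the gray levels vary most, while the clipped eigenvalues keep sizes between h_min and h_max, which is what lets the adaptation stretch tetrahedra along region boundaries.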
Abstract:
The aim of this study is to investigate the role of operational flexibility in effective project management in the construction industry. The specific objectives are to: a) identify the determinants of operational flexibility potential in construction project management; b) investigate the contribution of each of the determinants to operational flexibility potential in the construction industry; c) investigate the moderating factors of operational flexibility potential in a construction project environment; d) investigate whether moderated operational flexibility potential mediates the path between predictors and effective construction project management; and e) develop and test a conceptual model of achieving operational flexibility for effective project management. The purpose of this study is to find out ways to utilize flexibility in order to manage an uncertain project environment and ultimately achieve effective project management, and to establish in what configuration these operational flexibility determinants are demanded by the construction project environment in order to achieve project success. This research was conducted in three phases, namely: (i) an exploratory phase; (ii) a questionnaire development phase; and (iii) a data collection and analysis phase. The study requires firm-level analysis, and therefore real estate developers who are members of CREDAI, Kerala Chapter, were considered. This study provides a framework for the functioning of operational flexibility, offering guidance to researchers and practitioners for discovering means to gain operational flexibility in construction firms. The findings provide an empirical understanding of the kinds of resources and capabilities a construction firm must accumulate to respond flexibly to a changing project environment, offering practitioners insights into practices that build a firm's operational flexibility potential. Firms are dealing with complex, continuously changing, and uncertain environments due to trends of globalization, technical changes and innovations, and changes in customers' needs and expectations. To cope with this increasingly uncertain and quickly changing environment, firms strive for flexibility. To achieve the level of flexibility that adds value for customers, firms should look at flexibility from a day-to-day operational perspective. Each dimension of operational flexibility is derived from competences and capabilities. In this thesis, only the influence on customer satisfaction and learning exploitation of the flexibility dimensions that directly add value in the customers' eyes is studied, to answer the following research questions: "What is the impact of operational flexibility on customer satisfaction?" and "What are the predictors of operational flexibility in the construction industry?" These questions can only be answered after answering questions such as "Why do firms need operational flexibility?" and "How can firms achieve operational flexibility?" in the context of the construction industry. The need for construction firms to be flexible, via the effective utilization of organizational resources and capabilities for improved responsiveness, is important because of the increasing rate of change in the business environment within which they operate. Achieving operational flexibility is also important because it has a significant correlation with project effectiveness and hence a firm's turnover.
It is essential for academics and practitioners to recognize that the attainment of operational flexibility involves different types, namely (i) modification, (ii) new product development, and (iii) demand management, each requiring different configurations of predictors (i.e., resources, capabilities, and strategies). Construction firms should consider these relationships and implement appropriate management practices for developing and configuring the right kinds of resources, capabilities, and strategies towards achieving the different operational flexibility types.
Abstract:
"What is value in product development?" is the key question of this paper. The answer is critical to the creation of lean in product development. By knowing how much value is added by product development (PD) activities, decisions can be made more rationally about how to allocate resources, such as time and money. In order to apply the principles of Lean Thinking and remove waste from the product development system, value must be precisely defined. Unfortunately, value is a complex entity composed of many dimensions, and it has thus far eluded definition on a local level. For this reason, research has been initiated on "Measuring Value in Product Development." This paper serves as an introduction to this research. It presents the current understanding of value in PD, the critical questions involved, and a specific research design to guide the development of a methodology for measuring value. Work on PD value currently focuses either on high-level perspectives on value or on detailed looks at the attributes that value might have locally in the PD process. Models that attempt to capture value in PD are reviewed. These methods, however, do not capture the depth necessary to allow for application. A methodology is needed to evaluate activities on a local level to determine the amount of value they add and their sensitivity with respect to performance, cost, time, and risk. Two conceptual tools are proposed. The first is a conceptual framework for value creation in PD, referred to here as the Value Creation Model. The second tool is the Value-Activity Map, which shows the relationships between specific activities and value attributes. These maps will allow a better understanding of the development of value in PD, will facilitate comparison of value development between separate projects, and will provide the information necessary to adapt process analysis tools (such as DSM) to consider value. The key questions that this research entails are:
· What are the primary attributes of lifecycle value within PD?
· How can one model the creation of value in a specific PD process?
· Can a useful methodology be developed to quantify value in PD processes?
· What are the tools necessary for application?
· What PD metrics will be integrated with the necessary tools?
The research milestones are:
· Collection of value attributes and activities (September 2000)
· Development of a methodology of value-activity association (October 2000)
· Testing and refinement of the methodology (January 2001)
· Tool development (March 2001)
· Present findings at the July INCOSE conference (April 2001)
· Deliver a thesis that captures a formalized methodology for defining value in PD, including LEM data sheets (June 2001)
The research design aims for the development of two primary deliverables: a methodology to guide the incorporation of value, and a product development tool that will allow direct application.
Abstract:
The major component of skeletal muscle is the myofibre. Genetic intervention inducing over-enlargement of myofibres beyond a certain threshold through acellular growth causes a reduction in the specific tension generating capacity of the muscle. However, the physiological parameters of a genetic model that harbours reduced skeletal muscle mass have yet to be analysed. Genetic deletion of Meox2 in mice leads to reduced limb muscle size and causes some patterning defects. The loss of Meox2 is not embryonically lethal, and a small percentage of animals survive to adulthood, making it an excellent model with which to investigate how skeletal muscle responds to reductions in mass. In this study we have performed a detailed analysis of both late foetal and adult muscle development in the absence of Meox2. In the adult, we show that the loss of Meox2 results in smaller limb muscles that harbour reduced numbers of myofibres. However, these fibres are enlarged. The myofibres display a molecular and metabolic fibre-type switch towards a more oxidative phenotype that is induced through abnormalities in foetal fibre formation. In spite of these changes, the muscle from Meox2 mutant mice is able to generate increased levels of specific tension compared with that of the wild type.
Abstract:
We study cartel stability in a differentiated price-setting duopoly with returns to scale. We show that a cartel may be equally stable under lower differentiation, provided that the decreasing-returns parameter is sufficiently high. In addition, we demonstrate that, for a given discount factor, there are decreasing-returns-to-scale technologies for which the cartel is always stable, regardless of the degree of differentiation.
Abstract:
Land cover data derived from satellites are commonly used to prescribe inputs to models of the land surface. Since such data inevitably contain errors, it is important to quantify how uncertainties in the data affect a model's output. To do so, a spatial distribution of possible land cover values is required to propagate through the model's simulation. However, at large scales, such as those required for climate models, such spatial modelling can be difficult. Computer models also often require, as inputs, land cover proportions at sites larger than the original map scale, and it is the uncertainty in these proportions that this article discusses. This paper describes a Monte Carlo sampling scheme that generates realisations of land cover proportions from the posterior distribution implied by a Bayesian analysis combining the spatial information in the land cover map and its associated confusion matrix. The technique is computationally simple and has previously been applied to the Land Cover Map 2000 for the region of England and Wales. This article demonstrates the ability of the technique to scale up to large (global) satellite-derived land cover maps and reports its application to the GlobCover 2009 data product. The results show that, in general, the GlobCover data possess only small biases, with the largest belonging to non-vegetated surfaces. Among vegetated surfaces, the most prominent area of uncertainty is Southern Africa, which represents a complex heterogeneous landscape. It is also clear from this study that greater resources need to be devoted to the construction of comprehensive confusion matrices.
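As a hedged sketch of the kind of confusion-matrix-driven Monte Carlo described (ignoring the spatial component of the analysis; the function and variable names are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(42)

    def sample_proportions(mapped_counts, confusion, n_samples):
        """Hedged sketch: draw Monte Carlo realisations of the true land
        cover proportions at a site from the posterior implied by a
        confusion matrix (uniform Dirichlet prior, no spatial term).
        mapped_counts[k] -- pixels mapped as class k at the site
        confusion[k, j]  -- reference pixels of true class j among pixels
                            mapped as class k (rows: mapped class)"""
        n_classes = len(mapped_counts)
        out = np.zeros((n_samples, n_classes))
        for k, n_k in enumerate(mapped_counts):
            if n_k == 0:
                continue
            # Posterior over P(true class | mapped class k).
            p = rng.dirichlet(confusion[k] + 1.0, size=n_samples)
            for s in range(n_samples):
                # Relabel the n_k pixels mapped as k with sampled true classes.
                out[s] += rng.multinomial(n_k, p[s])
        return out / np.asarray(mapped_counts).sum()

Each realisation of the proportions can then be propagated through the land surface model, translating map uncertainty into uncertainty in the model's output.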
Abstract:
The impact of two different coupled cirrus microphysics-radiation parameterizations on the zonally averaged temperature and humidity biases in the tropical tropopause layer (TTL) of a Met Office climate model configuration is assessed. One parameterization is based on a linear coupling between a model prognostic variable, the ice mass mixing ratio, qi, and the integral optical properties. The second is based on the integral optical properties being parameterized as functions of qi and temperature, Tc, where the mass coefficients (i.e. scattering and extinction) are parameterized as nonlinear functions of the ratio between qi and Tc. The cirrus microphysics parameterization is based on a moment estimation parameterization of the particle size distribution (PSD), which relates the mass moment of the PSD (i.e. the second moment, if mass is proportional to size raised to the power of 2) to all other PSD moments through the magnitude of the second moment and Tc. This same microphysics PSD parameterization is applied to calculate the integral optical properties used in both radiation parameterizations, thus ensuring PSD and mass consistency between the cirrus microphysics and radiation schemes. In this paper, the temperature-non-dependent and temperature-dependent parameterizations are shown to increase and decrease, respectively, the zonally averaged temperature biases in the TTL by about 1 K. The temperature-dependent radiation parameterization is further demonstrated to have a positive impact on the specific humidity biases in the TTL, as well as decreasing the shortwave and longwave biases in the cloudy radiative effect. The temperature-dependent radiation parameterization is shown to be more consistent with TTL and global radiation observations.
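Schematically, and with symbols assumed here rather than drawn from the paper, the two couplings can be contrasted as

    \beta_{\mathrm{ext}} = k\, q_i
    \qquad \text{versus} \qquad
    \beta_{\mathrm{ext}} = q_i\, K_{\mathrm{ext}}\!\left(q_i / T_c\right),

where \beta_{\mathrm{ext}} is the bulk extinction, k a fixed mass extinction coefficient, and K_{\mathrm{ext}} a nonlinear function of q_i / T_c; temperature enters only in the second, temperature-dependent form, and the scattering coefficient is treated analogously.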
Abstract:
Agent-oriented software engineering and software product lines are two promising software engineering techniques. Recent research work has been exploring their integration, namely multi-agent systems product lines (MAS-PLs), to promote reuse and variability management in the context of complex software systems. However, current product derivation approaches do not provide specific mechanisms to deal with MAS-PLs. This is essential because they typically encompass several concerns (e.g., trust, coordination, transaction, state persistence) that are constructed on the basis of heterogeneous technologies (e.g., object-oriented frameworks and platforms). In this paper, we propose the use of multi-level models to support the configuration knowledge specification and automatic product derivation of MAS-PLs. Our approach provides an agent-specific architecture model that uses abstractions and instantiation rules that are relevant to this application domain. In order to evaluate the feasibility and effectiveness of the proposed approach, we have implemented it as an extension of an existing product derivation tool, called GenArch. The approach has also been evaluated through the automatic instantiation of two MAS-PLs, demonstrating its potential and benefits to product derivation and configuration knowledge specification.
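As a purely hypothetical sketch, not GenArch's actual API, of how an instantiation rule can map a feature selection onto agent-specific architecture elements during product derivation:

    from dataclasses import dataclass, field

    @dataclass
    class AgentArchitecture:
        roles: list = field(default_factory=list)
        plans: list = field(default_factory=list)

    # Hypothetical rules: feature name -> agent elements it instantiates.
    RULES = {
        "trust":        {"roles": ["TrustEvaluator"], "plans": ["AssessPeer"]},
        "coordination": {"roles": ["Coordinator"],    "plans": ["NegotiateTask"]},
    }

    def derive(selected_features):
        """Instantiate an agent-specific architecture from a feature selection."""
        arch = AgentArchitecture()
        for f in selected_features:
            rule = RULES.get(f, {})
            arch.roles.extend(rule.get("roles", []))
            arch.plans.extend(rule.get("plans", []))
        return arch

    print(derive(["trust", "coordination"]))
    # AgentArchitecture(roles=['TrustEvaluator', 'Coordinator'], plans=[...])

In a multi-level approach of the kind the paper describes, such rules would live in the agent-specific architecture model, so that the derivation tool can resolve heterogeneous concerns (trust, coordination, and so on) from a single configuration knowledge specification.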