938 results for "Product-specific model"
Abstract:
The work begins by reconstructing, from a historical point of view, the problem of dualism and monism in Psychology. The reconstruction starts from an origin located in Socrates and proceeds from there to the present day, where, it is shown, the question persists. That origin was chosen because it was there that the problem first received systematic study and was characterized as metaphysical, meaning that the question to be resolved concerned what exists as an autonomous entity: only the "body", only the "mind", or both. The solutions proposing the existence of the mind alone (mind monists), or of mind and body as distinct entities (dualists), would therefore prove decisive for the very conception of Psychology. Since it is argued that a scientific Psychology must be developed from a decision on this problem, chapter II establishes the conceptions adopted for Science, scientific knowledge, and scientific method. There, the opportunity is taken to justify why part of the study must fall under the domain of the Philosophy of Science in general and of the Philosophy of Psychology in particular. Chapter III demonstrates again, with greater emphasis, that the "mind-body" problem is still metaphysical today and requires a decision in those terms. It is shown that the decision is always made, at the very least as a presupposition, if not explicit then at least implicit. Once it has been demonstrated that only the body can be affirmed as representing something that exists, in real terms, in the metaphysical sense, the work proceeds to establish what would be the authentic object of study of Psychology. Using a "critical-realist" Philosophical Semantics, it is demonstrated that this object turns out to be: the set of properties of the real object represented by the body that are responsible for the manifestations in which Psychology, by a tradition of investigation, has always been interested. Finally, in chapter IV, a systemic model is conceived to represent Nature. In it, the 'law' of transformation holds, resulting from the interaction between subsystems. Among the subsystems are those that represent real objects and are designated as "Human Body"; these are subject to the same 'law'. Starting from the transformation of ²³⁵U into ²⁰⁷Pb, two mathematical functions are constructed, based on set theory, to demonstrate how the law of transformation, or the transformation function, operates; it applies to all subsystems, which are elements of the System that represents Nature. From these constructions and a few others, when applied to the subsystems that represent the real objects denoted as Human Body, a large number of consequences for Psychology are drawn. The work ends by presenting a specific model to represent the real object denoted by Human Body. This, as a subsystem, is also a system, composed of four subsystems: Motor, Emotional, Perceptive, and Cognitive. The whole and the parts come to function governed by the law of Transformation.
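A minimal sketch of how such a transformation function could be expressed, using the ²³⁵U → ²⁰⁷Pb decay cited in the abstract as the driving example. The exponential decay law and the half-life are standard physics; the function name, the two-state simplification (which ignores the intermediate nuclides of the decay chain), and the numbers are illustrative assumptions, not the thesis's actual set-theoretic construction.

```python
import math

# Half-life of U-235 in years (standard physical constant).
U235_HALF_LIFE_YEARS = 7.04e8

def transform(n0: float, t_years: float,
              half_life: float = U235_HALF_LIFE_YEARS) -> tuple[float, float]:
    """Illustrative 'transformation function': maps an initial state
    (n0 parent atoms) to the state at time t, returned as the pair
    (parent remaining, daughter produced). Intermediate nuclides of
    the real decay chain are ignored in this sketch."""
    decay_constant = math.log(2) / half_life
    parent = n0 * math.exp(-decay_constant * t_years)
    daughter = n0 - parent  # the end of the U-235 chain is Pb-207
    return parent, daughter

# After one half-life, half of the parent has become daughter.
u, pb = transform(1e6, U235_HALF_LIFE_YEARS)
print(f"U-235 left: {u:.0f}, Pb-207 formed: {pb:.0f}")
```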
Abstract:
Based on three versions of a small macroeconomic model for Brazil, this paper presents empirical evidence on the effects of parameter uncertainty on monetary policy rules and on the robustness of optimal and simple rules over different model specifications. By comparing the optimal policy rule under parameter uncertainty with the rule calculated under purely additive uncertainty, we find that parameter uncertainty should make policymakers react less aggressively to the economy's state variables, as suggested by Brainard's "conservatism principle", although this effect seems to be relatively small. We then informally investigate each rule's robustness by analyzing the performance of policy rules derived from each model under each one of the alternative models. We find that optimal rules derived from each model perform very poorly under alternative models, whereas a simple Taylor rule is relatively robust. We also find that even within a specific model, the Taylor rule may perform better than the optimal rule under particularly unfavorable realizations from the policymaker's loss distribution function.
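For reference, a minimal sketch of the kind of simple Taylor rule the paper benchmarks against. The 0.5 response coefficients are Taylor's classic illustrative values, not the coefficients estimated for Brazil in the paper.

```python
def taylor_rule(inflation: float, output_gap: float,
                r_star: float = 2.0, pi_target: float = 2.0) -> float:
    """Classic Taylor (1993) rule for the nominal policy rate (percent):
    i = r* + pi + 0.5*(pi - pi*) + 0.5*y.
    The 0.5 coefficients are Taylor's original illustrative values."""
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

# 4% inflation and a -1% output gap imply a 6.5% policy rate.
print(taylor_rule(inflation=4.0, output_gap=-1.0))
```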
Abstract:
This thesis examines the creation of value in private equity and, in particular, analyzes value creation in 3G Capital's acquisition of Burger King. To this end, a specific model is applied that decomposes value creation into several drivers, in order to answer the question of how value creation can be addressed in private equity investments. Although previous research by Achleitner et al. (2010) introduced a specific model that addresses value creation in private equity, that model was neither applied to an individual company nor linked to the indirect drivers that explain the dynamics and rationales behind the creation of value. This paper, in turn, applies the quantitative model to an ongoing private equity investment and thereby provides several extensions that turn the model into a better forecasting tool for ongoing investments, instead of only analyzing an already divested deal from an ex post perspective. The chosen research approach is a case study of the Burger King buyout that comprises, first, an extensive review of the current state of the academic literature; second, a quantitative calculation and qualitative interpretation of different direct value drivers; third, a qualitative breakdown of indirect drivers; and, lastly, a recapitulating discussion of value creation and value drivers. By presenting a very successful private equity investment and demonstrating in detail the dynamics and mechanisms that drive value creation in this case, the thesis provides important implications for other private equity firms as well as public firms seeking to develop their own approach to value creation.
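A stylized sketch of the kind of value-bridge decomposition such models perform, splitting equity value growth into EBITDA growth, multiple expansion, and debt paydown. This is generic buyout arithmetic, not Achleitner et al.'s exact formulation, and the driver names are assumptions of this sketch.

```python
def value_bridge(ebitda_entry, ebitda_exit,
                 multiple_entry, multiple_exit,
                 net_debt_entry, net_debt_exit):
    """Decompose equity value growth into standard buyout drivers.
    Generic value-bridge arithmetic (illustrative decomposition)."""
    equity_entry = ebitda_entry * multiple_entry - net_debt_entry
    equity_exit = ebitda_exit * multiple_exit - net_debt_exit
    ebitda_effect = (ebitda_exit - ebitda_entry) * multiple_entry
    multiple_effect = (multiple_exit - multiple_entry) * ebitda_exit
    debt_effect = net_debt_entry - net_debt_exit
    total = equity_exit - equity_entry
    # The three drivers sum exactly to the total equity value created.
    assert abs(total - (ebitda_effect + multiple_effect + debt_effect)) < 1e-6
    return {"EBITDA growth": ebitda_effect,
            "multiple expansion": multiple_effect,
            "debt paydown": debt_effect,
            "total equity value created": total}

# Hypothetical figures, not Burger King data.
print(value_bridge(500, 700, 8.0, 9.0, 2000, 1500))
```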
Abstract:
Currently, because much of the world's attention is focused on petroleum, much research on this theme has been carried out to make production viable in reservoirs previously classified as unviable. Given the geological and operational challenges posed by oil recovery, increasingly efficient and economically successful methods have been sought. Against this background, steam flooding stands out, especially when combined with other procedures, for offering low costs and high recovery factors. This work used nitrogen as an alternative fluid after steam flooding, in order to find the best combination of alternation between these fluids in terms of injection time and rate. To describe a simplified economic profile, several analyses based on cumulative liquid production were performed. The completion interval and injection fluid rates were fixed, and the oil viscosity was varied over 300 cP, 1,000 cP, and 3,000 cP. For each viscosity, the results defined a specific model indicating the best moment to stop the injection of steam and introduce nitrogen, namely when the first injected fluid reached its economic limit. Simulations were performed on a physical model based on one-eighth of an inverted nine-spot pattern, using the commercial simulator STARS (Steam, Thermal and Advanced Processes Reservoir Simulator) from the Computer Modelling Group (CMG).
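A minimal sketch of the switching criterion described above, under the assumption that the economic limit is expressed as a minimum acceptable monthly production rate; the threshold and the production series are invented for illustration, not simulation results from the study.

```python
def switch_month(monthly_oil_rate: list[float], economic_limit: float):
    """Return the first month in which production under the current
    injection scheme falls below its economic limit, i.e. the moment
    to stop steam and start nitrogen injection. The limit is assumed
    here to be a minimum monthly rate (an illustrative convention)."""
    for month, rate in enumerate(monthly_oil_rate):
        if rate < economic_limit:
            return month
    return None  # steam never reaches its economic limit in the series

# Hypothetical declining production under steam flooding (m3/month).
rates = [900, 760, 640, 530, 430, 340, 260, 190]
print(switch_month(rates, economic_limit=300))  # month 6
```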
Abstract:
Research on Wireless Sensor Networks (WSN) has evolved, with potential applications in several domains. However, the building of WSN applications is hampered by the need to program in the low-level abstractions provided by sensor operating systems and by the need for specific knowledge about each application domain and each sensor platform. We propose an MDA approach to develop WSN applications. This approach allows domain experts to contribute directly to the development of applications without needing low-level knowledge of WSN platforms and, at the same time, allows network experts to program WSN nodes to meet application requirements without specific knowledge of the application domain. Our approach also promotes the reuse of the developed software artifacts, allowing an application model to be reused across different sensor platforms and a platform model to be reused for different applications.
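A toy sketch of the MDA idea described above: a platform-independent application model is combined with a platform model to generate node code, so the same application targets different sensor platforms. All class names, platform names, and API strings are hypothetical, not the paper's metamodels.

```python
from dataclasses import dataclass

@dataclass
class SensingTask:           # platform-independent application model (hypothetical)
    phenomenon: str          # e.g. "temperature"
    period_s: int            # sampling period in seconds

@dataclass
class Platform:              # platform model (hypothetical)
    name: str
    read_call: str           # platform-specific sampling call

def generate_node_code(task: SensingTask, platform: Platform) -> str:
    """Model-to-text transformation: the same application model is
    combined with different platform models to target different nodes."""
    return (f"// generated for {platform.name}\n"
            f"every {task.period_s}s: send({platform.read_call})  "
            f"// samples {task.phenomenon}")

task = SensingTask("temperature", 60)
print(generate_node_code(task, Platform("TelosB/TinyOS", "TempSensor.read()")))
print(generate_node_code(task, Platform("MicaZ/Contiki", "temp_sensor.value(0)")))
```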
Abstract:
Tracing between the models of the requirements and architecture activities is a strategy that aims to prevent loss of information, reducing the gap between these two initial activities of the software life cycle. In the context of Software Product Lines (SPL), it is important to have this support, which allows the correspondence between these two activities together with the management of variability. In order to address this issue, this paper presents a bidirectional mapping process, defining transformation rules between elements of a goal-oriented requirements model (described in PL-AOVgraph) and elements of an architectural description (defined in PL-AspectualACME). These mapping rules are evaluated using a case study: the GingaForAll SPL. To automate the transformation, we developed the MaRiPLA tool (Mapping Requirements to Product Line Architecture) using MDD (Model-Driven Development) techniques, including the Atlas Transformation Language (ATL) with Ecore metamodel specifications, together with Xtext, a DSL definition framework, and Acceleo, a code generation tool, in the Eclipse environment. Finally, the generated models are evaluated with respect to quality attributes such as variability, derivability, reusability, correctness, traceability, completeness, evolvability and maintainability, extracted from the CAFÉ Quality Model.
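A toy sketch of the kind of element-to-element rule such a bidirectional mapping encodes, written here in plain Python rather than ATL; the element classes are hypothetical simplifications of the PL-AOVgraph and PL-AspectualACME metamodels.

```python
from dataclasses import dataclass

@dataclass
class Goal:                 # simplified PL-AOVgraph element (hypothetical)
    name: str
    variability: str        # e.g. "mandatory", "optional", "alternative"

@dataclass
class Component:            # simplified PL-AspectualACME element (hypothetical)
    name: str
    variability: str

def goal_to_component(goal: Goal) -> Component:
    """One forward rule of the bidirectional mapping: a requirements
    goal becomes an architectural component, preserving its
    variability annotation."""
    return Component(name=goal.name, variability=goal.variability)

def component_to_goal(comp: Component) -> Goal:
    """The reverse direction of the same rule."""
    return Goal(name=comp.name, variability=comp.variability)

g = Goal("DisplayEPG", "optional")
print(goal_to_component(g))
```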
Abstract:
We have searched for a heavy resonance decaying into a Z+jet final state in pp̄ collisions at a center-of-mass energy of 1.96 TeV at the Fermilab Tevatron collider using the D0 detector. No indication for such a resonance was found in a data sample corresponding to an integrated luminosity of 370 pb⁻¹. We set upper limits on the cross section times branching fraction for heavy resonance production at the 95% C.L. as a function of the resonance mass and width. The limits are interpreted within the framework of a specific model of excited quark production.
Abstract:
The assessment of welfare issues has been a challenge for poultry producers, and lately welfare standards need to be met in order to comply with international market demand. This research proposes the use of continuous behavior monitoring as a contribution to welfare assessment. Software was developed using the Clarium language; it managed both the recording of data and data searching in a Firebird database. Both the software and the observational methodology were tested in a trial conducted inside an environmental chamber, using three genetic strains of broiler breeders. Behavioral patterns were recorded and correlated with variations in the thermal and aerial environment. Monitoring video cameras were placed on the roof for registering the birds' behavior. Video images were recorded during the entire period in which the environment was bright, and for analysis a sample of 15 min of observation in the morning and 15 min in the afternoon was used, adding up to 30 min of daily observation. A specific model, called "behavior", was developed inside the software for counting specific behaviors, their frequency of occurrence, and their duration. Electronic identification was recorded over 24 h periods, and the behavioral video recordings were related to the data recorded through electronic identification. Statistical analysis of the data made it possible to identify behavioral differences related to changes in the thermal environment, ultimately indicating thermal stress and departure from welfare conditions.
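A minimal sketch of the tallying such a "behavior" module performs: given a time-stamped sequence of observed behaviors, count each behavior's frequency and total duration. The labels and timings below are invented for illustration, not the study's data.

```python
from collections import defaultdict

def summarize(events: list) -> dict:
    """Tally each behavior's frequency of occurrence and total duration.
    Each event is (behavior label, start time in s, end time in s)."""
    summary = defaultdict(lambda: [0, 0.0])
    for label, start, end in events:
        summary[label][0] += 1            # frequency of occurrence
        summary[label][1] += end - start  # accumulated duration
    return {k: (f, d) for k, (f, d) in summary.items()}

# Illustrative observation sample (seconds within a 15 min window).
sample = [("drinking", 0, 12), ("panting", 20, 95), ("drinking", 130, 141)]
print(summarize(sample))  # {'drinking': (2, 23.0), 'panting': (1, 75.0)}
```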
Abstract:
In this letter we consider a specific model of braneworld with nonstandard dynamics discussed in the literature; specifically, we focus our attention on the matter energy density, the energy of the system, the Ricci scalar and the thin-brane limit. As the model is classically stable and capable of localizing gravity, as a natural extension we address the issue of the localization of fermions on a thick brane constructed out of one scalar field with nonstandard kinetic terms coupled with gravity. The contribution of the nonstandard kinetic terms to the problem of fermion localization is analyzed. It is found that the simplest Yukawa coupling ηΨ̄φΨ supports the localization of fermions on the thick brane. It is shown that the zero mode for left-handed fermions can be localized on the thick brane depending on the values of the coupling constant η. Copyright © EPLA, 2013.
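For context, the generic form such a coupling takes in thick-brane fermion localization studies: a 5D fermion coupled to the background scalar through a Yukawa term. This is a standard textbook expression, not necessarily the letter's exact action.

```latex
% Generic 5D fermion action with Yukawa coupling to the background scalar phi:
S_\Psi = \int d^5x\,\sqrt{-g}\,\left(\bar{\Psi}\Gamma^M D_M \Psi
         - \eta\,\bar{\Psi}\,\phi\,\Psi\right)
```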
Abstract:
The Amazon has immense forest resources, sheltering one third of the world's tropical forests. The Brazilian Amazon covers an area of more than 5 million km², corresponding to 61% of the Brazilian territory. The northern region produces 72.45% of Brazil's roundwood, with the state of Pará contributing 55.47%, according to IBMA (2007). Logging in the Amazon is characterized as "forest mining": loggers enter the forest, select the logs of commercial value, and remove them. After a certain time they return to the same area and log it again, and this cycle of exploitation is happening at ever shorter intervals. The legal Amazon held 833 circular sawmills in 1998, located mainly in the Amazon estuary (71%), in the channels and tributaries of the Amazonas, Xingu, Tocantins and Pará rivers. Together, these family-run processors consumed 1.3 million cubic meters of roundwood (5% of the Amazon's production). In this work, the carbon balance of sawmills in the Amazon River estuary was estimated, and the carbon life cycle was developed for a sawmill in the Amazon estuary. It was identified that the community's production process follows a well-defined path for the natural resource (biomass/wood): forest exploitation, biomass transport, transformation (timber companies)/production processes, generation and use of residues, transport of processed wood, and commercialization/market. The objective of this work was to evaluate the energy resources through the flow (inputs and outputs) of wood and energy in the process. To this end, a model was developed that simulated the carbon and wood flows and the area affected by logging. A specific model was created to evaluate the carbon flow for the studied scenario; the environmental impact assessment yielded a positive value, a carbon capture of about 55 tCO2/month, even with the low efficiency of the production system, around 36%. It is concluded that the current exploitation system does not pollute, but could be improved in order to achieve greater efficiency in the production process. As for the residue generated, approximately 64% of the wood volume entering the sawmill, it could generate approximately 1,240 kW of electric power per month.
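A minimal sketch of the mass-balance arithmetic behind such an assessment, using the 36% efficiency and 64% residue fractions quoted in the abstract; the input volume is an invented placeholder, since the abstract does not give it.

```python
def sawmill_balance(input_wood_m3: float, efficiency: float = 0.36):
    """Split a sawmill's monthly wood input into sawn product and
    residue, per the ~36% efficiency quoted in the abstract.
    The input volume passed in is an invented placeholder."""
    product = input_wood_m3 * efficiency
    residue = input_wood_m3 * (1 - efficiency)  # ~64% of the input
    return product, residue

product, residue = sawmill_balance(500.0)  # hypothetical 500 m3/month
print(f"sawn wood: {product:.0f} m3, "
      f"residue available for energy generation: {residue:.0f} m3")
```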
Abstract:
This simulator is formed by combining Virtual Reality techniques with propagation models, developed through radio link studies, which describe the loss the transmitted signal suffers along its path through the environment. The simulator has two modules. The first allows the creation of the virtual environment by positioning, on a terrain, buildings, trees, cars, antennas and other primitives that allow the construction of a customizable three-dimensional environment. The second module allows the configuration of parameters related to antenna signal propagation, such as power, frequency, gain, etc., and also the selection of the propagation model for the execution of the simulation. Within this second module there is a submodule responsible for studying the planning of the coverage area composed by the antennas; in other words, this submodule simulates the distance each antenna in the scenario can reach and generates the corresponding coverage area. To demonstrate the efficiency of the simulator, two virtual environments were created for testing: a scenario representing an urban environment, in which a classical propagation model was employed, Okumura-Hata for small and medium cities, and a three-dimensional wooded environment using a specific model for simulating propagation in densely wooded regions, developed at the Universidade Federal do Pará and called Lyra-Castro-UFPA.
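A minimal sketch of the classical Okumura-Hata path-loss formula for small and medium cities mentioned above; the formula is standard, but the parameter values in the example are arbitrary, and the simulator's Lyra-Castro-UFPA model is not reproduced here.

```python
import math

def okumura_hata_small_medium(f_mhz: float, h_b: float,
                              h_m: float, d_km: float) -> float:
    """Okumura-Hata median path loss (dB) for small/medium cities.
    Valid roughly for f = 150-1500 MHz, base antenna height
    h_b = 30-200 m, mobile height h_m = 1-10 m, distance d = 1-20 km."""
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_m
            - (1.56 * math.log10(f_mhz) - 0.8))  # mobile antenna correction
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(h_b)
            - a_hm + (44.9 - 6.55 * math.log10(h_b)) * math.log10(d_km))

# 900 MHz, 30 m base station, 1.5 m mobile, 5 km link (arbitrary values).
print(f"{okumura_hata_small_medium(900, 30, 1.5, 5):.1f} dB")
```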
Abstract:
The Th1/Th2 balance represents an important factor in the pathogenesis of renal ischemia-reperfusion injury (IRI). In addition, IRI causes a systemic inflammation that can affect other tissues, such as the lungs. To investigate the ability of renal IRI to modulate pulmonary function in a specific model of allergic inflammation, C57Bl/6 mice were immunized with ovalbumin/albumen on days 0 and 7 and challenged with an ovalbumin (OA) aerosol on days 14 and 21. Twenty-four hours after the second antigen challenge, the animals were subjected to 45 minutes of ischemia. After 24 h of reperfusion, the bronchoalveolar lavage (BAL) fluid, blood and lung tissue were collected for analysis. Serum creatinine levels increased in both allergic and non-immunized animals subjected to IRI. However, BAL analysis showed a reduction in total cells (46%) and neutrophils (58%) compared with control allergic animals not subjected to IRI. In addition, the OA challenge induced the phosphorylation of ERK and Akt and the expression of inducible nitric oxide synthase (iNOS) and cyclooxygenase-2 (COX-2) in lung homogenates. After renal IRI, the phosphorylation of ERK and the expression of COX-2 and iNOS were markedly reduced; however, there was no difference in the phosphorylation of Akt between sham and ischemic OA-challenged animals. Mucus production was also reduced in allergic mice after renal IRI. IL-4, IL-5 and IL-13 were markedly down-regulated in immunized/challenged mice subjected to IRI. These results suggest that renal IRI can modulate lung allergic inflammation, probably by altering the Th1/Th2 balance and, at least in part, by changing cellular signal transduction factors. Copyright (C) 2012 S. Karger AG, Basel
Abstract:
The knee joint is a key structure of the human locomotor system. Knowledge of how each anatomical structure of the knee contributes to determining its physiological function is of fundamental importance for the development of new prostheses and of novel clinical, surgical, and rehabilitative procedures. In this context, a modelling approach is necessary to estimate the biomechanical function of each anatomical structure during daily living activities. The main aim of this study was to obtain a subject-specific model of the knee joint of a selected healthy subject. In particular, 3D models of the cruciate ligaments and of the tibio-femoral articular contact were proposed and developed using accurate bony geometries and kinematics reliably recorded from the selected subject by means of nuclear magnetic resonance and 3D video-fluoroscopy. Regarding the model of the cruciate ligaments, each ligament was modelled with 25 linear-elastic elements, paying particular attention to the anatomical twisting of the fibres. The devised model was as subject-specific as possible: the geometrical parameters were estimated directly from the experimental measurements, whereas the only mechanical parameter of the model, the elastic modulus, had to be taken from the literature because of the invasiveness of the measurements that would otherwise be required. The developed model was then employed for simulations of stability tests and of living activities, and physiologically meaningful results were always obtained. Nevertheless, the lack of subject-specific mechanical characterization led us to design and partially develop a novel experimental method to characterize the mechanics of the human cruciate ligaments in living healthy subjects. Moreover, using the same subject-specific data, the tibio-femoral articular interaction was modelled, investigating the location of the contact point during the execution of daily motor tasks and the contact area at full extension with and without the whole body weight of the subject. Two different approaches were implemented and their efficiency was evaluated; the pros and cons of each approach were discussed in order to suggest future improvements of these methodologies. The final results of this study will contribute useful methodologies for the investigation of the in-vivo function and pathology of the knee joint during the execution of daily living activities. The developed methodologies will thus be useful tools for the development of new prostheses, tools and procedures both in the research field and in the diagnostic, surgical and rehabilitative fields.
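A minimal sketch of the fibre-bundle idea described above: a ligament as a set of parallel linear-elastic elements that bear load only when stretched beyond their reference length. The stiffness and lengths below are invented, and the real model's 25 fibres carry anatomical geometry and twisting that this sketch omits.

```python
def ligament_force(lengths: list, rest_lengths: list, k: float) -> float:
    """Total tensile force of a ligament modelled as parallel
    linear-elastic elements: each fibre contributes k*(L - L0)
    when taut (L > L0) and nothing when slack.
    k and the lengths below are illustrative, not the study's values."""
    return sum(k * (L - L0) for L, L0 in zip(lengths, rest_lengths) if L > L0)

# Three of the fibres, two taut and one slack (lengths in mm, k in N/mm).
print(ligament_force([31.0, 30.4, 29.0], [30.0, 30.0, 30.0], k=20.0))  # 28.0 N
```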
Abstract:
The Assimilation in the Unstable Subspace (AUS) was introduced by Trevisan and Uboldi in 2004, and developed by Trevisan, Uboldi and Carrassi, to minimize the analysis and forecast errors by exploiting the flow-dependent instabilities of the forecast-analysis cycle system, which may be thought of as a system forced by observations. In the AUS scheme the assimilation is obtained by confining the analysis increment to the unstable subspace of the forecast-analysis cycle system, so that it has the same structure as the dominant instabilities of the system. The unstable subspace is estimated by Breeding on the Data Assimilation System (BDAS). AUS-BDAS has already been tested in realistic models and observational configurations, including a Quasi-Geostrophic model and a high-dimensional, primitive-equation ocean model; the experiments include both fixed and "adaptive" observations. In these contexts, the AUS-BDAS approach greatly reduces the analysis error, with reasonable computational costs for data assimilation compared, for example, to a prohibitive full Extended Kalman Filter. This is a follow-up study in which we revisit the AUS-BDAS approach in the more basic, highly nonlinear Lorenz 1963 convective model. We run observing system simulation experiments in a perfect-model setting, and with two types of model error as well: random and systematic. In the different configurations examined, and in a perfect-model setting, AUS once again shows better efficiency than other advanced data assimilation schemes. In the present study, we develop an iterative scheme that leads to a significant improvement of the overall assimilation performance with respect to standard AUS as well. In particular, it boosts the efficiency of tracking regime changes, at a low computational cost. Other data assimilation schemes need estimates of ad hoc parameters, which have to be tuned for the specific model at hand. In Numerical Weather Prediction models, the tuning of parameters, and in particular the estimation of the model error covariance matrix, may turn out to be quite difficult. Our proposed approach, instead, may be easier to implement in operational models.
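For reference, a minimal sketch of the Lorenz 1963 convective model used as the testbed above, with the classical chaotic parameter values (sigma = 10, rho = 28, beta = 8/3); the crude forward-Euler integrator and step size are illustrative choices, not those of the study.

```python
def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 1963 system:
    dx/dt = sigma*(y - x), dy/dt = x*(rho - z) - y, dz/dt = x*y - beta*z.
    Classical chaotic parameters; the integrator is for illustration only."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

state = (1.0, 1.0, 1.0)
for _ in range(1000):  # integrate 10 time units
    state = lorenz63_step(state)
print(state)
```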