996 results for Precede-Proceed model
Abstract:
We construct a utility-based model of fluctuations, with nominal rigidities and unemployment, and draw its implications for the unemployment-inflation trade-off and for the conduct of monetary policy. We proceed in two steps. We first leave nominal rigidities aside. We show that, under a standard utility specification, productivity shocks have no effect on unemployment in the constrained efficient allocation. We then focus on the implications of alternative real wage setting mechanisms for fluctuations in unemployment. We show the role of labor market frictions and real wage rigidities in determining the effects of productivity shocks on unemployment. We then introduce nominal rigidities in the form of staggered price setting by firms. We derive the relation between inflation and unemployment and discuss how it is influenced by the presence of labor market frictions and real wage rigidities. We show the nature of the tradeoff between inflation and unemployment stabilization, and its dependence on labor market characteristics. We draw the implications for optimal monetary policy.
Abstract:
This work was carried out for a global electronics company. The thesis addresses the challenge created by increased globalization and intensifying competition: the case company must determine how it can meet its growth targets in the future by acquiring new customers and by having an ever greater worldwide presence. The aim of the study was to find a suitable model for identifying and selecting potential key accounts, and to test and modify the chosen model according to the case company's needs. In particular, the collection of raw data, customer attractiveness criteria, and the target market niche were issues that required attention in the study. The literature review focused on business-to-business markets, different approaches to customer relationship management, and the definition of key accounts. The fundamentals of CRM, KAM, and Customer Insight thinking were presented together with different models for identifying key accounts. The chosen Cheverton model was tested and modified in the empirical part of the work. The empirical contribution of the study is a modified model for identifying potential key accounts. It helps decision makers proceed systematically and in an organized, step-by-step manner toward a list of potential customers in a given market area. The work offers a tool for this process and lays a foundation for future research and action.
Abstract:
Liver fibrosis occurring as an outcome of non-alcoholic steatohepatitis (NASH) can precede the development of cirrhosis. We investigated the effects of sorafenib in preventing liver fibrosis in a rodent model of NASH. Adult Sprague-Dawley rats were fed a choline-deficient high-fat diet and exposed to diethylnitrosamine for 6 weeks. The NASH group (n=10) received vehicle and the sorafenib group (n=10) received 2.5 mg·kg-1·day-1 by gavage. A control group (n=4) received only standard diet and vehicle. Following treatment, animals were sacrificed and liver tissue was collected for histologic examination, mRNA isolation, and analysis of mitochondrial function. Genes related to fibrosis (MMP9, TIMP1, TIMP2), oxidative stress (HSP60, HSP90, GST), and mitochondrial biogenesis (PGC1α) were evaluated by real-time quantitative polymerase chain reaction (RT-qPCR). Liver mitochondrial oxidation activity was measured by a polarographic method, and cytokines by enzyme-linked immunosorbent assay (ELISA). Sorafenib treatment restored mitochondrial function and reduced collagen deposition by nearly 63% compared to the NASH group. Sorafenib upregulated PGC1α and MMP9 and reduced TIMP1 and TIMP2 mRNA and IL-6 and IL-10 protein expression. There were no differences in HSP60, HSP90 and GST expression. Sorafenib modulated PGC1α expression, improved mitochondrial respiration and prevented collagen deposition. It may, therefore, be useful in the treatment of liver fibrosis in NASH.
Local attractors, degeneracy and analyticity: Symmetry effects on the locally coupled Kuramoto model
Abstract:
In this work we study the locally coupled Kuramoto model with periodic boundary conditions. Our main objective is to show how analytical solutions may be obtained from symmetry assumptions. Along the way we show, apart from the existence of local attractors, some unexpected features resulting from the symmetry properties, such as intermittent and chaotic phase slips, degeneracy of stable solutions and double bifurcation composition. As a result of our analysis, we show that stable fixed points in the synchronized region may be obtained from just a small fraction of the existing solutions, and for a class of natural frequency configurations we give analytical expressions for the critical synchronization coupling as a function of the number of oscillators, both exact and asymptotic. © 2013 Elsevier Ltd. All rights reserved.
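The model discussed above can be simulated directly. Below is a minimal sketch (parameter values are illustrative, not taken from the paper): a ring of Kuramoto oscillators coupled only to their nearest neighbours, with periodic boundary conditions, integrated by forward Euler, together with the standard order parameter used to quantify synchrony.

```python
import numpy as np

def kuramoto_local(omega, K, theta0, dt=0.01, steps=20000):
    """Integrate the locally coupled Kuramoto model on a ring:
    each oscillator couples only to its two nearest neighbours,
    with periodic boundary conditions (forward Euler)."""
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        left = np.roll(theta, 1)    # neighbour i-1 (wraps around)
        right = np.roll(theta, -1)  # neighbour i+1 (wraps around)
        theta += dt * (omega + K * (np.sin(left - theta) + np.sin(right - theta)))
    return theta % (2 * np.pi)

def order_parameter(theta):
    """Kuramoto order parameter |r|; |r| = 1 means full phase synchrony."""
    return abs(np.exp(1j * theta).mean())

# Identical natural frequencies: the ring settles onto a phase-locked
# state (the in-phase state or a "twisted" attractor, one of the
# degenerate stable solutions the abstract refers to).
rng = np.random.default_rng(0)
theta = kuramoto_local(omega=np.zeros(10), K=1.0,
                       theta0=rng.uniform(0, 2 * np.pi, 10))
```

With identical frequencies, any phase-locked state makes the coupling term vanish, which gives a simple numerical check of convergence.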
Abstract:
Transient inflammation is known to alter visceral sensory function and frequently precedes the onset of symptoms in a subgroup of patients with irritable bowel syndrome (IBS). Duration and severity of the initial inflammatory stimulus appear to be risk factors for the manifestation of symptoms. Therefore, we aimed to characterize dose-dependent effects of trinitrobenzenesulfonic acid (TNBS)/ethanol on: (1) colonic mucosa, (2) cytokine release and (3) visceral sensory function in a rat model. Acute inflammation was induced in male Lewis rats by single administration of various doses of TNBS/ethanol (total of 0.8, 0.4 or 0.2 ml) in test animals or saline in controls. Assessment of visceromotor response (VMR) to colorectal distensions, histological evaluation of severity of inflammation, and measurement of pro-inflammatory cytokine levels (IL-2, IL-6) using enzyme-linked immunosorbent assay (ELISA) were performed 2 h and 3, 14, 28, 31 and 42 days after induction. Increased serum IL-2 and IL-6 levels were evident prior to mucosal lesions 2 h after induction of colitis and persisted up to 14 days (p<0.05 vs. saline), although no histological signs of inflammation were detected at 14 days. In the acute phase, VMR was only significantly increased after 0.8 ml and 0.4 ml TNBS/ethanol (p<0.05 vs. saline). After 28 days, distension-evoked responses were persistently elevated (p<0.05 vs. saline) in 0.8 and 0.4 ml TNBS/ethanol-treated rats. In the 0.2 ml TNBS/ethanol group, VMR was only enhanced after repeated visceral stimulation. Visceral hyperalgesia occurs after a transient colitis. Moreover, even a mild acute but asymptomatic colitis can induce long-lasting visceral hyperalgesia in the presence of additional stimuli.
Abstract:
The 14.5 kDa (galectin-1) and 31 kDa (galectin-3) lectins are the most well characterized members of a family of vertebrate carbohydrate-binding proteins known as the galectins. Evidence has been obtained implicating these galectins in events as diverse as cell-cell and cell-extracellular matrix interactions, growth regulation, transformation, differentiation, and programmed cell death. In the present study, sodium butyrate was found to be a potent inducer of galectin-1 in the KM12 human colon carcinoma cell line. Prior to treatment with butyrate this cell line expresses only galectin-3. These cells were utilized as an in vitro model system to study galectin expression as well as that of their endogenous ligands. The initial phase of this project involved the examination of the induction of galectin-1 by butyrate at the protein level. These studies indicated that galectin-1 induction by butyrate was relatively rapid, reaching nearly maximal levels after only 24 hours. Additionally, the induction was found to be reversible upon the removal of butyrate and to precede the increase in expression of the well characterized differentiation marker, carcinoembryonic antigen (CEA). The second phase of this project involved the characterization of potential glycoprotein ligands for galectin-1 and galectin-3. This work demonstrated that the polylactosaminoglycan-containing glycoproteins laminin, CEA, and the lysosome-associated glycoproteins-1 and -2 (LAMPs-1 and -2) are capable of serving as ligands for both galectin-1 and -3. The third phase of this project involved the analysis of the induction of the galectin-1 promoter by butyrate. Through the analysis of deletion constructs transiently transfected into KM12 cells, the region of the galectin-1 promoter mediating a high level of induction by butyrate was localized primarily within a proximal portion of the promoter containing a CCAAT element and an Sp1 binding site.
The CCAAT-binding activity in the KM12 nuclear extracts was subsequently identified as NF-Y by gel shift analysis. These studies suggest that: (1) the galectins may be involved in modulating adhesive interactions in human colon carcinoma cells through the binding of several polylactosaminoglycans shown to play a role in adhesion and (2) high-level induction of the galectin-1 promoter by butyrate can proceed through a discrete, proximal element containing an NF-Y-binding CCAAT box and an Sp1 site.
Abstract:
A multi-model analysis of Atlantic multidecadal variability is performed with the following aims: to investigate the similarities to observations; to assess the strength and relative importance of the different elements of the mechanism proposed by Delworth et al. (J Clim 6:1993–2011, 1993) (hereafter D93) among coupled general circulation models (CGCMs); and to relate model differences to mean systematic error. The analysis is performed with long control simulations from ten CGCMs, with lengths ranging between 500 and 3600 years. In most models the variations of sea surface temperature (SST) averaged over the North Atlantic show considerable power on multidecadal time scales, but with different periodicity. The SST variations are largest in the mid-latitude region, consistent with the short instrumental record. Despite large differences in model configurations, we find a fair degree of consistency among the models in terms of processes. In eight of the ten models the mid-latitude SST variations are significantly correlated with fluctuations in the Atlantic meridional overturning circulation (AMOC), suggesting a link to northward heat transport changes. Consistent with this link, the three models with the weakest AMOC have the largest cold SST bias in the North Atlantic. There is no linear relationship on decadal timescales between AMOC and the North Atlantic Oscillation in the models. Analysis of the key elements of the D93 mechanism revealed the following: most models present strong evidence that high-latitude winter mixing precedes AMOC changes. However, the regions of wintertime convection differ among models. In most models salinity-induced density anomalies in the convective region tend to lead AMOC, while temperature-induced density anomalies lead AMOC in only one model. However, analysis shows that salinity may play an overly important role in most models, because of cold temperature biases in their relevant convective regions.
In most models subpolar gyre variations tend to lead AMOC changes, and this relation is strong in more than half of the models.
Abstract:
Open innovation is increasingly being adopted in business and describes a situation in which firms exchange ideas and knowledge with external participants, such as customers, suppliers, partner firms, and universities. This article extends the concept of open innovation with a push model of open innovation: knowledge is voluntarily created outside a firm by individuals and organisations who proceed to push knowledge into a firm’s open innovation project. For empirical analysis, we examine source code and newsgroup data on the Eclipse Development Platform. We find that outsiders invest as much in the firm’s project as the founding firm itself. Based on the insights from Eclipse, we develop four propositions: ‘preemptive generosity’ of a firm, ‘continuous commitment’, ‘adaptive governance structure’, and ‘low entry barrier’ are contexts that enable the push model of open innovation.
Abstract:
Nowadays robots have made their way into real applications that were prohibitive and unthinkable thirty years ago. This is mainly due to the increase in computational power and the evolution of the theoretical fields of robotics and control. Even though there is plenty of information in the current literature on these topics, it is not easy to find clear guidance on how to proceed in order to design and implement a controller for a robot. In general, the design of a controller requires a complete understanding and knowledge of the system to be controlled. Therefore, for advanced control techniques the system must first be identified. This particular objective is cumbersome and never straightforward, requiring great expertise, and some criteria must be adopted. On the other hand, the problem of designing a controller is even more complex when dealing with Parallel Manipulators (PMs), since their closed-loop structures give rise to a highly nonlinear system. On this basis the current work is developed; it intends to summarize and gather all the concepts and experience involved in the control of a Hydraulic Parallel Manipulator. The main objective of this thesis is to provide a guide covering all the steps involved in designing an advanced control technique for PMs. The analysis of the PM under study is broken down to the core of the mechanism: the hydraulic actuators. The actuators are modeled and experimentally identified. Additionally, some considerations regarding traditional PID controllers are presented and an adaptive controller is finally implemented. From a macro perspective, the kinematic and dynamic models of the PM are presented. Based on the model of the system, and extending the adaptive controller of the actuator, a control strategy for the PM is developed and its performance is analyzed in simulation.
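The identification step described above (fitting a model to the actuator before designing a controller) can be illustrated generically. This is a minimal sketch under made-up values, not the thesis's hydraulic actuator model: least-squares identification of a first-order discrete-time system from input-output data.

```python
import numpy as np

# Least-squares identification of a first-order discrete-time model
#   x[k+1] = a*x[k] + b*u[k] + noise
# (illustrative stand-in for an experimentally identified actuator;
# a_true and b_true are assumed values, not taken from the thesis).
rng = np.random.default_rng(2)
a_true, b_true = 0.9, 0.5
u = rng.normal(size=200)          # excitation input
x = np.zeros(201)
for k in range(200):
    x[k + 1] = a_true * x[k] + b_true * u[k] + 1e-3 * rng.normal()

# Stack regressors [x[k], u[k]] and solve min ||Phi @ theta - x[k+1]||
Phi = np.column_stack([x[:-1], u])
a_hat, b_hat = np.linalg.lstsq(Phi, x[1:], rcond=None)[0]
```

With a persistently exciting input, the estimated parameters converge to the true ones as the noise averages out, which is the premise the subsequent controller design rests on.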
Abstract:
Recombinational repair of double-stranded DNA gaps was investigated in Ustilago maydis. The experimental system was designed for analysis of repair of an autonomously replicating plasmid containing a cloned gene disabled by an internal deletion. It was discovered that crossing over rarely accompanied gap repair. The strong bias against crossing over was observed in three different genes regardless of gap size. These results indicate that gap repair in U. maydis is unlikely to proceed by the mechanism envisioned in the double-stranded break repair model of recombination, which was developed to account for recombination in Saccharomyces cerevisiae. Experiments aimed at exploring processing of DNA ends were performed to gain understanding of the mechanism responsible for the observed bias. A heterologous insert placed within a gap in the coding sequence of two different marker genes strongly inhibited repair if the DNA was cleaved at the promoter-proximal junction joining the insert and coding sequence but had little effect on repair if the DNA was cleaved at the promoter-distal junction. Gene conversion of plasmid restriction fragment length polymorphism markers engineered in sequences flanking both sides of a gap accompanied repair but was directionally biased. These results are interpreted to mean that the DNA ends flanking a gap are subject to different types of processing. A model featuring a single migrating D-loop is proposed to explain the bias in gap repair outcome based on the observed asymmetry in processing the DNA ends.
Abstract:
Model Hamiltonians have been, and still are, a valuable tool for investigating the electronic structure of systems for which mean field theories work poorly. This review will concentrate on the application of Pariser–Parr–Pople (PPP) and Hubbard Hamiltonians to investigate some relevant properties of polycyclic aromatic hydrocarbons (PAH) and graphene. When presenting these two Hamiltonians we will resort to second quantisation which, although not the formalism chosen in the original proposal of the former, is much clearer. We will not attempt to be comprehensive; rather, our objective will be to provide the reader with information on what kinds of problems they will encounter and what tools they will need to solve them. One of the key issues concerning model Hamiltonians that will be treated in detail is the choice of model parameters. Although model Hamiltonians reduce the complexity of the original Hamiltonian, in most cases they cannot be solved exactly. So, we shall first consider the Hartree–Fock approximation, still the only tool for handling large systems besides density functional theory (DFT) approaches. We proceed by discussing to what extent model Hamiltonians may be solved exactly, and describe the Lanczos approach. We shall describe the configuration interaction (CI) method, a common technology in quantum chemistry but one rarely used to solve model Hamiltonians. In particular, we propose a variant of the Lanczos method, inspired by CI, that has the novelty of using as the seed of the Lanczos process a mean field (Hartree–Fock) determinant (the method will be named LCI). Two questions of interest related to model Hamiltonians will be discussed: (i) when including long-range interactions, how crucial is including in the Hamiltonian the electronic charge that compensates ion charges? (ii) Is it possible to reduce a Hamiltonian incorporating Coulomb interactions (PPP) to an 'effective' Hamiltonian including only on-site interactions (Hubbard)?
The performance of CI will be checked on small molecules. The electronic structure of azulene and fused azulene will be used to illustrate several aspects of the method. As regards graphene, several questions will be considered: (i) paramagnetic versus antiferromagnetic solutions, (ii) forbidden gap versus dot size, (iii) graphene nano-ribbons, and (iv) optical properties.
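To make the Lanczos discussion concrete, here is a minimal sketch: generic textbook Lanczos, not the authors' LCI implementation, applied to the two-site Hubbard model at half filling in the Sz = 0 sector, whose ground-state energy (U − √(U² + 16t²))/2 is known in closed form. The seed vector is a singlet-like guess, merely in the spirit of seeding with a physically motivated determinant.

```python
import numpy as np

def lanczos_ground_state(H, v0, m=10):
    """Plain Lanczos with full reorthogonalization: build an m-step
    Krylov tridiagonalization of the symmetric matrix H from seed v0
    and return the lowest Ritz value (ground-state energy estimate)."""
    n = len(v0)
    V = np.zeros((m, n))
    alpha, beta = [], []
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros(n)
    b = 0.0
    for j in range(m):
        V[j] = v
        w = H @ v - b * v_prev
        a = v @ w
        w -= a * v
        w -= V[:j + 1].T @ (V[:j + 1] @ w)   # full reorthogonalization
        alpha.append(a)
        b = np.linalg.norm(w)
        beta.append(b)
        if b < 1e-12:                        # invariant subspace reached
            break
        v_prev, v = v, w / b
    T = (np.diag(alpha)
         + np.diag(beta[:len(alpha) - 1], 1)
         + np.diag(beta[:len(alpha) - 1], -1))
    return np.linalg.eigvalsh(T)[0]

# Two-site Hubbard model at half filling, Sz = 0 sector, hopping t and
# on-site repulsion U; basis: |up,dn;0>, |up;dn>, |dn;up>, |0;up,dn>.
t, U = 1.0, 4.0
H = np.array([[ U, -t,  t,  0],
              [-t,  0,  0, -t],
              [ t,  0,  0,  t],
              [ 0, -t,  t,  U]])
seed = np.array([0.0, 1.0, -1.0, 0.0])   # singlet-like seed vector
e0 = lanczos_ground_state(H, seed, m=4)
exact = (U - np.sqrt(U**2 + 16 * t**2)) / 2
```

For this seed the Krylov space closes after two steps, so the lowest Ritz value already matches the exact singlet ground-state energy.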
Abstract:
Many papers claim that a Log Periodic Power Law (LPPL) model fitted to financial market bubbles that precede large market falls or 'crashes' contains parameters that are confined within certain ranges. Further, it is claimed that the underlying model is based on influence percolation and a martingale condition. This paper examines these claims and their validity for capturing large price falls in the Hang Seng stock market index over the period 1970 to 2008. The fitted LPPLs have parameter values within the ranges specified post hoc by Johansen and Sornette (2001) for only seven of the 11 crashes considered. Interestingly, the LPPL fit could have predicted the substantial fall in the Hang Seng index during the recent global downturn. Overall, the mechanism posited as underlying the LPPL model does not do so, and the data used to support the fit of the LPPL model to bubbles does so only partially. © 2013.
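For reference, the LPPL functional form being fitted can be written down in a few lines. This is a minimal sketch; the parameter values below are illustrative only (not fitted to Hang Seng data), though they respect the kind of post-hoc constraints discussed above, such as B < 0 and 0 < m < 1.

```python
import numpy as np

def lppl_log_price(t, tc, m, omega, A, B, C, phi):
    """Log Periodic Power Law for a bubble ending at critical time tc:
        ln p(t) = A + B*(tc-t)^m + C*(tc-t)^m * cos(omega*ln(tc-t) - phi)
    valid for t < tc; B < 0 gives the super-exponential run-up."""
    dt = tc - t
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) - phi)

# Illustrative parameters only (hypothetical, not from the paper).
tc, m, omega, A, B, C, phi = 100.0, 0.5, 8.0, 1.0, -1.0, 0.05, 0.0
early = lppl_log_price(0.0, tc, m, omega, A, B, C, phi)
late = lppl_log_price(99.0, tc, m, omega, A, B, C, phi)
```

With B < 0 the power-law term vanishes as t approaches tc, so the log-price rises toward A while the log-periodic oscillations accelerate, which is the signature the fitting procedure looks for.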
Abstract:
Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of design variables [1]. The approach in this paper links the adjoint surface sensitivities (gradient of the objective function with respect to surface movement) with the parametric design velocities (movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to CAD variables.
For a successful implementation of shape optimization strategies in practical industrial cases, the choice of design variables or parameterisation scheme used for the model to be optimized plays a vital role. Where the goal is to base the optimization on a CAD model the choices are to use a NURBS geometry generated from CAD modelling software, where the position of the NURBS control points are the optimisation variables [2] or to use the feature based CAD model with all of the construction history to preserve the design intent [3]. The main advantage of using the feature based model is that the optimized model produced can be directly used for the downstream applications including manufacturing and process planning.
This paper presents an approach for optimization based on the feature based CAD model, which uses CAD parameters defining the features in the model geometry as the design variables. In order to capture the CAD surface movement with respect to the change in design variable, the “Parametric Design Velocity” is calculated, which is defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.
The approach presented here for calculating the design velocities represents an advancement in capability and robustness over that described by Robinson et al. [3]. The process can be easily integrated into most industrial optimisation workflows and is immune to the topology and labelling issues highlighted by other CAD-based optimisation processes. It considers every continuous (“real value”) parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, as long as it has an API which provides access to the values of the parameters which control the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology employs finite differences on the shape of the 3D CAD models before and after the parameter perturbation. The implementation procedure involves calculating the geometric movement along the normal direction between two discrete representations of the original and perturbed geometry. Parametric design velocities can then be directly linked with adjoint surface sensitivities to extract the gradients used in a gradient-based optimisation algorithm.
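The finite-difference design velocity described above can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: it assumes the original and perturbed surfaces are sampled at corresponding points with known outward normals, and uses a hypothetical test case (a sphere whose radius is the CAD parameter) where the correct design velocity is known analytically.

```python
import numpy as np

def design_velocity(points, normals, perturbed_points, dparam):
    """Finite-difference parametric design velocity: the normal
    component of the boundary movement per unit change in a
    CAD parameter."""
    disp = perturbed_points - points              # boundary movement
    return np.einsum('ij,ij->i', disp, normals) / dparam

# Hypothetical check case (not from the paper): a sphere whose radius r
# is the CAD parameter. Perturbing r by dr moves every surface point
# exactly dr along the outward normal, so the design velocity is 1
# everywhere on the boundary.
rng = np.random.default_rng(1)
dirs = rng.normal(size=(100, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit normals
r, dr = 2.0, 1e-3
vel = design_velocity(r * dirs, dirs, (r + dr) * dirs, dr)
```

In a real workflow the two point sets would come from exporting the CAD geometry before and after the parameter perturbation, and the per-point velocities would then be multiplied with the adjoint surface sensitivities and integrated to give the gradient with respect to that CAD parameter.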
A flow optimisation problem is presented, in which the power dissipation of the flow in an automotive air duct is to be reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for laminar and turbulent flow [4] and are combined with the parametric design velocities to compute the cost function gradients. A line-search algorithm is then used to update the design variables and proceed further with the optimisation process.
Abstract:
In the first part of this thesis we search for beyond the Standard Model physics through the search for anomalous production of the Higgs boson using the razor kinematic variables. We search for anomalous Higgs boson production using proton-proton collisions at center of mass energy √s=8 TeV collected by the Compact Muon Solenoid experiment at the Large Hadron Collider corresponding to an integrated luminosity of 19.8 fb-1.
In the second part we present a novel method for using a quantum annealer to train a classifier to recognize events containing a Higgs boson decaying to two photons. We train that classifier using simulated proton-proton collisions at √s=8 TeV producing either a Standard Model Higgs boson decaying to two photons or a non-resonant Standard Model process that produces a two photon final state.
The production mechanisms of the Higgs boson are precisely predicted by the Standard Model based on its association with the mechanism of electroweak symmetry breaking. We measure the yield of Higgs bosons decaying to two photons in kinematic regions predicted to have very little contribution from a Standard Model Higgs boson and search for an excess of events, which would be evidence of either non-standard production or non-standard properties of the Higgs boson. We divide the events into disjoint categories based on kinematic properties and the presence of additional b-quarks produced in the collisions. In each of these disjoint categories, we use the razor kinematic variables to characterize events with topological configurations incompatible with typical configurations found from standard model production of the Higgs boson.
We observe an excess of events with di-photon invariant mass compatible with the Higgs boson mass and localized in a small region of the razor plane. We observe 5 events with a predicted background of 0.54 ± 0.28, an observation with a p-value of 10-3 and a local significance of 3.35σ. This background prediction comes from 0.48 predicted non-resonant background events and 0.07 predicted SM Higgs boson events. We proceed to investigate the properties of this excess, finding that it provides a very compelling peak in the di-photon invariant mass distribution and is physically separated in the razor plane from the predicted background. Using another method of measuring the background and the significance of the excess, we find a 2.5σ deviation from the Standard Model hypothesis over a broader range of the razor plane.
In the second part of the thesis we transform the problem of training a classifier to distinguish events with a Higgs boson decaying to two photons from events with other sources of photon pairs into the Hamiltonian of a spin system, the ground state of which is the best classifier. We then use a quantum annealer to find the ground state of this Hamiltonian and train the classifier. We find that we are able to do this successfully in less than 400 annealing runs for a problem of median difficulty at the largest problem size considered. The networks trained in this manner exhibit good classification performance, competitive with the more complicated machine learning techniques, and are highly resistant to overtraining. We also find that the nature of the training gives access to additional solutions that can be used to improve the classification performance by up to 1.2% in some regions.