Abstract:
There is currently an increasing demand for robots able to acquire the sequential organization of tasks from social learning interactions with ordinary people. Interactive learning-by-demonstration and communication is a promising topic in current robotics research. However, the efficient acquisition of generalized task representations that allow the robot to adapt to different users and contexts is a major challenge. In this paper, we present a dynamic neural field (DNF) model that is inspired by the hypothesis that the nervous system uses the off-line re-activation of initial memory traces to incrementally incorporate new information into structured knowledge. To achieve this, the model combines fast activation-based learning, to robustly represent sequential information from single task demonstrations, with slower, weight-based learning during internal simulations, to establish longer-term associations between neural populations representing individual subtasks. The efficiency of the learning process is tested in an assembly paradigm in which the humanoid robot ARoS learns to construct a toy vehicle from its parts. User demonstrations with different serial orders, together with the correction of initial prediction errors, allow the robot to acquire generalized task knowledge about possible serial orders and the longer-term dependencies between subgoals in very few social learning interactions. This success is shown in a joint action scenario in which ARoS uses the newly acquired assembly plan to construct the toy together with a human partner.
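The abstract does not spell out the model equations; for orientation, the activation dynamics of a dynamic neural field are typically written as an Amari-type equation (a generic illustration, not necessarily the exact formulation used by the authors):

```latex
% Generic Amari-type dynamic neural field equation (illustration only; the
% paper's exact kernel, inputs and learning terms are not given in the abstract).
\[
  \tau\,\frac{\partial u(x,t)}{\partial t}
    = -u(x,t) + h + \int w(x - x')\, f\bigl(u(x',t)\bigr)\, dx' + S(x,t)
\]
```

In this picture, the fast activation-based learning mentioned above corresponds to sustained supra-threshold activation u(x,t) built up from a single demonstration, while the slower weight-based learning corresponds to gradual changes in the connection weights between subtask populations during internal simulations.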
Abstract:
Inductive learning aims at finding general rules that hold true in a database. Targeted learning seeks rules for predicting the value of one variable based on the values of others, as in linear or non-parametric regression analysis. Non-targeted learning finds regularities without a specific prediction goal. We model the product of non-targeted learning as rules that state that a certain phenomenon never happens, or that certain conditions necessitate another. For all types of rules, there is a trade-off between a rule's accuracy and its simplicity, so rule selection can be viewed as a choice problem among pairs of accuracy and complexity levels. However, one cannot in general tell what the feasible set in the accuracy-complexity space is. Formally, we show that determining whether a point belongs to this set is computationally hard. In particular, in the context of linear regression, finding a small set of variables that attains a certain value of R² is computationally hard. Computational complexity may explain why a person is not always aware of rules that, if asked, she would find valid. This, in turn, may explain why one can change other people's minds (opinions, beliefs) without providing new information.
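The hardness claim concerns a decision problem of the form "is there a subset of at most k regressors whose fit attains R² ≥ c?". The minimal Python sketch below (with made-up data and variable names) makes the combinatorial nature of that question concrete by checking it exhaustively:

```python
# Brute-force check of the decision problem behind the hardness result:
# "is there a subset of at most k variables whose linear regression on y
#  attains R^2 >= target?"  (Illustrative sketch only; data are simulated.)
from itertools import combinations
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def small_subset_reaches(X, y, k, target):
    """True if some subset of <= k columns of X attains R^2 >= target."""
    p = X.shape[1]
    for size in range(1, k + 1):
        for cols in combinations(range(p), size):   # C(p, size) subsets
            if r_squared(X[:, list(cols)], y) >= target:
                return True, cols
    return False, None

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = X[:, 2] - 0.5 * X[:, 7] + rng.normal(scale=0.3, size=200)
print(small_subset_reaches(X, y, k=2, target=0.9))
```

Exhaustive search enumerates on the order of C(p, k) subsets, which grows rapidly with the number of candidate variables; this is the intuition the complexity result makes precise.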
Abstract:
This work implements an analytical model of the DC characteristics of the double-gate MOSFET (DG-MOSFET), based on the solution of Poisson's equation and on drift-diffusion theory [1]. The asymmetric double-gate MOSFET offers great flexibility in the design of the threshold voltage and the OFF current. The analytical model reproduces the DC characteristics of the long-channel DG-MOSFET and is the basis for building SPICE-type circuit models.
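For orientation, the two ingredients named in the abstract have the following generic one-dimensional forms (standard textbook expressions, assumed here for illustration; the paper's own closed-form solution is not reproduced):

```latex
% Generic one-dimensional building blocks (standard textbook forms, shown for
% illustration; the paper's closed-form solution is not given in the abstract).
% Poisson's equation in the silicon film with only mobile electron charge:
\[
  \frac{d^{2}\psi}{dx^{2}} = \frac{q\,n(x)}{\varepsilon_{\mathrm{Si}}},
  \qquad n(x) = n_i\, e^{\,q(\psi - V)/kT}
\]
% Drift-diffusion electron current density with the Einstein relation:
\[
  J_n = q\,\mu_n\, n\, E + q\, D_n \frac{dn}{dx},
  \qquad D_n = \frac{kT}{q}\,\mu_n
\]
```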
Abstract:
This paper provides a modelling framework for evaluating the exchange rate dynamics of a target zone regime with undisclosed bands. We generalize the literature to allow for asymmetric one-sided regimes. Market participants' beliefs concerning an undisclosed band change as they learn more about central bank intervention policy. We apply the model to Hong Kong's one-sided currency board mechanism. In autumn 2003, the Hong Kong dollar appreciated from close to 7.80 per US dollar to 7.70, as investors feared that the currency board would be abandoned. In the wake of this appreciation, the monetary authorities finally revamped the regime as a symmetric two-sided system with a narrow exchange rate band.
Abstract:
This paper studies party discipline in a congress within a political agency framework with retrospective voting. Party discipline serves as an incentive device to induce office-motivated congress members to perform in line with the party leadership's objective of controlling both the executive and the legislative branches of government. I show first that the same party is more likely to control both branches of government (i.e., unified government) the stronger the party discipline in the congress is. Second, the leader of the governing party imposes more party discipline under unified government than does the opposition leader under divided government. Moreover, the incumbents' aggregate performance increases with party discipline, so a representative voter becomes better off. JEL classification: D72. Keywords: Party discipline; Political agency; Retrospective voting; Office-motivated politicians.
Abstract:
In 2009, the Sheffield Alcohol Research Group (SARG) at Sheffield University developed the Sheffield Alcohol Policy Model version 2.0 (SAPM) to appraise the potential impact of alcohol policies, including different levels of MUP, for the population of England. In 2013, SARG were commissioned by the DHSSPS and the Department for Social Development to adapt the Sheffield Model to NI in order to appraise the potential impact of a range of alcohol pricing policies. The present report represents the results of this work. Estimates from the Northern Ireland (NI) adaptation of the Sheffield Alcohol Policy Model, version 3 (SAPM3), suggest:
1. Minimum Unit Pricing (MUP) policies would be effective in reducing alcohol consumption, alcohol-related harms (including alcohol-related deaths, hospitalisations, crimes and workplace absences) and the costs associated with those harms.
2. A ban on below-cost selling (implemented as a ban on selling alcohol for below the cost of duty plus the VAT payable on that duty) would have a negligible impact on alcohol consumption or related harms.
3. A ban on price-based promotions in the off-trade, either alone or in tandem with an MUP policy, would be effective in reducing alcohol consumption, related harms and associated costs.
4. MUP and promotion ban policies would only have a small impact on moderate drinkers at all levels of income. Somewhat larger impacts would be experienced by increasing-risk drinkers, with the most substantial effects being experienced by high-risk drinkers.
5. MUP and promotion ban policies would have larger impacts on those in poverty, particularly high-risk drinkers, than those not in poverty. However, those in poverty also experience larger relative gains in health and are estimated to marginally reduce their spending due to their reduced drinking under the majority of policies.
Abstract:
BACKGROUND: The reverse transcription quantitative real-time polymerase chain reaction (RT-qPCR) is a widely used, highly sensitive laboratory technique to rapidly and easily detect, identify and quantify gene expression. Reliable RT-qPCR data necessitate accurate normalization with validated control genes (reference genes) whose expression is constant in all studied conditions; this stability has to be demonstrated. We performed a literature search for studies using quantitative or semi-quantitative PCR in the rat spared nerve injury (SNI) model of neuropathic pain to verify whether any reference genes had previously been validated. We then analyzed the stability over time of 7 commonly used reference genes in the nervous system - specifically in the spinal cord dorsal horn and the dorsal root ganglion (DRG). These were: Actin beta (Actb), Glyceraldehyde-3-phosphate dehydrogenase (GAPDH), ribosomal proteins 18S (18S), L13a (RPL13a) and L29 (RPL29), hypoxanthine phosphoribosyltransferase 1 (HPRT1) and hydroxymethylbilane synthase (HMBS). We compared the candidate genes and established a stability ranking using the geNorm algorithm. Finally, we assessed the number of reference genes necessary for accurate normalization in this neuropathic pain model. RESULTS: We found GAPDH, HMBS, Actb, HPRT1 and 18S cited as reference genes in the literature on studies using the SNI model. Only HPRT1 and 18S had previously been demonstrated as stable in RT-qPCR arrays. All the genes tested in this study, using the geNorm algorithm, presented gene stability values (M-value) acceptable enough for them to qualify as potential reference genes in both DRG and spinal cord. Using the coefficient of variation, 18S failed the 50% cut-off with a value of 61% in the DRG. The two most stable genes in the dorsal horn were RPL29 and RPL13a; in the DRG they were HPRT1 and Actb. Using a 0.15 cut-off for pairwise variations, we found that any pair of stable reference genes was sufficient for the normalization process. CONCLUSIONS: In the rat SNI model, we validated and ranked Actb, RPL29, RPL13a, HMBS, GAPDH, HPRT1 and 18S as good reference genes in the spinal cord. In the DRG, 18S did not fulfill stability criteria. The combination of any two stable reference genes was sufficient to provide an accurate normalization.
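As a point of reference, the geNorm stability measure used for the ranking can be sketched as follows; the expression data here are simulated placeholders, not the study's measurements:

```python
# Minimal sketch of the geNorm gene-stability measure (M-value) used to rank
# candidate reference genes.  `expr` is assumed to be a (samples x genes)
# array of relative expression quantities (e.g. 2^-dCq); values are simulated.
import numpy as np

def genorm_m_values(expr, gene_names):
    """M_j = mean, over all other genes k, of the standard deviation of
    log2(expr_j / expr_k) across samples (Vandesompele et al., geNorm)."""
    log_expr = np.log2(expr)
    n_genes = expr.shape[1]
    m = np.empty(n_genes)
    for j in range(n_genes):
        sds = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
               for k in range(n_genes) if k != j]
        m[j] = np.mean(sds)
    return sorted(zip(gene_names, m), key=lambda t: t[1])  # most stable first

rng = np.random.default_rng(1)
genes = ["Actb", "GAPDH", "18S", "RPL13a", "RPL29", "HPRT1", "HMBS"]
expr = rng.lognormal(mean=0.0, sigma=0.2, size=(12, len(genes)))
for name, m in genorm_m_values(expr, genes):
    print(f"{name}: M = {m:.3f}")
```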
Abstract:
This paper presents a model of electoral competition focusing on the formation of the public agenda. An incumbent government and a challenger party in opposition compete in elections by choosing the issues that will define their campaigns. Giving salience to an issue implies proposing an innovative policy proposal, alternative to the status quo. Parties trade off the issues with high salience in voters' concerns against those with broad agreement on some alternative policy proposal. Each party expects a higher probability of victory if the issue it chooses becomes salient in the voters' decision. Remarkably, however, the issues considered most important by a majority of voters may not be given salience during the electoral campaign. An incumbent government may survive in spite of its bad policy performance if there is no sufficiently broad agreement on a policy alternative. We illustrate the analytical potential of the model with the case of the United States presidential election of 2004.
Abstract:
This study investigated fatigue-induced changes in spring-mass model characteristics during repeated running sprints. Sixteen active subjects performed 12 × 40 m sprints interspersed with 30 s of passive recovery. Vertical and anterior-posterior ground reaction forces were measured at 5-10 m and 30-35 m and used to determine spring-mass model characteristics. Contact (P < 0.001), flight (P < 0.05) and swing times (P < 0.001), together with braking, push-off and total stride durations (P < 0.001), lengthened across repetitions. Stride frequency (P < 0.001) and push-off forces (P < 0.05) decreased with fatigue, whereas changes in stride length (P = 0.06), braking forces (P = 0.08) and peak vertical forces (P = 0.17) approached significance. Center of mass vertical displacement (P < 0.001), but not leg compression (P > 0.05), increased with time. As a result, vertical stiffness decreased (P < 0.001) from the first to the last repetition, whereas leg stiffness changes across sprint trials were not significant (P > 0.05). Changes in vertical stiffness were correlated (r > 0.7; P < 0.001) with changes in stride frequency. When compared to 5-10 m, most ground reaction force-related parameters were higher (P < 0.05) at 30-35 m, whereas contact time, stride frequency, vertical and leg stiffness were lower (P < 0.05). Vertical stiffness deteriorates when 40 m run-based sprints are repeated, which alters impact parameters. Maintaining faster stride frequencies through retaining higher vertical stiffness is a prerequisite for improving performance during repeated sprinting.
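For orientation, the stiffness quantities reported above follow their standard definitions; the short sketch below uses hypothetical numbers, not the study's data:

```python
# Minimal sketch of the spring-mass quantities referred to in the abstract,
# computed from their standard definitions (the study's own processing details
# are not given here; the numbers below are made-up examples).
def vertical_stiffness(f_max_n, dy_com_m):
    """Vertical stiffness K_vert = peak vertical GRF / CoM vertical displacement."""
    return f_max_n / dy_com_m

def leg_stiffness(f_max_n, dl_leg_m):
    """Leg stiffness K_leg = peak vertical GRF / leg-spring compression."""
    return f_max_n / dl_leg_m

f_max = 1800.0   # peak vertical ground reaction force (N), hypothetical
dy    = 0.045    # centre-of-mass vertical displacement (m), hypothetical
dl    = 0.12     # leg-spring compression (m), hypothetical
print(vertical_stiffness(f_max, dy) / 1000, "kN/m vertical stiffness")
print(leg_stiffness(f_max, dl) / 1000, "kN/m leg stiffness")
```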
Abstract:
Most central banks perceive a trade-off between stabilizing inflation and stabilizing the gap between output and desired output. However, the standard new Keynesian framework implies no such trade-off. In that framework, stabilizing inflation is equivalent to stabilizing the welfare-relevant output gap. In this paper, we argue that this property of the new Keynesian framework, which we call the divine coincidence, is due to a special feature of the model: the absence of non-trivial real imperfections. We focus on one such real imperfection, namely, real wage rigidities. When the baseline new Keynesian model is extended to allow for real wage rigidities, the divine coincidence disappears, and central banks indeed face a trade-off between stabilizing inflation and stabilizing the welfare-relevant output gap. We show that not only does the extended model have more realistic normative implications, but it also has appealing positive properties. In particular, it provides a natural interpretation for the dynamic inflation-unemployment relation found in the data.
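A textbook way to see the point (an illustration, not the paper's own derivation) contrasts the baseline New Keynesian Phillips curve with a version augmented by an endogenous cost-push-type term arising from real wage rigidity:

```latex
% Baseline New Keynesian Phillips curve: stabilizing inflation also stabilizes
% the welfare-relevant output gap x_t (the "divine coincidence"):
\[
  \pi_t = \beta\,\mathbb{E}_t \pi_{t+1} + \kappa\, x_t
\]
% With a real imperfection such as real wage rigidity, an endogenous
% cost-push-type term u_t appears and the coincidence breaks down:
\[
  \pi_t = \beta\,\mathbb{E}_t \pi_{t+1} + \kappa\, x_t + u_t
\]
```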
Abstract:
We construct a utility-based model of fluctuations, with nominal rigidities and unemployment, and draw its implications for the unemployment-inflation trade-off and for the conduct of monetary policy. We proceed in two steps. We first leave nominal rigidities aside. We show that, under a standard utility specification, productivity shocks have no effect on unemployment in the constrained efficient allocation. We then focus on the implications of alternative real wage setting mechanisms for fluctuations in unemployment. We show the role of labor market frictions and real wage rigidities in determining the effects of productivity shocks on unemployment. We then introduce nominal rigidities in the form of staggered price setting by firms. We derive the relation between inflation and unemployment and discuss how it is influenced by the presence of labor market frictions and real wage rigidities. We show the nature of the trade-off between inflation and unemployment stabilization, and its dependence on labor market characteristics. We draw the implications for optimal monetary policy.
Abstract:
We present a new unifying framework for investigating throughput-WIP (Work-in-Process) optimal control problems in queueing systems, based on reformulating them as linear programming (LP) problems with special structure: we show that if a throughput-WIP performance pair in a stochastic system satisfies the Threshold Property we introduce in this paper, then we can reformulate the problem of optimizing a linear objective of throughput-WIP performance as a (semi-infinite) LP problem over a polygon with special structure (a threshold polygon). The strong structural properties of such polygons explain the optimality of threshold policies for optimizing linear performance objectives: their vertices correspond to the performance pairs of threshold policies. We analyze in this framework the versatile input-output queueing intensity control model introduced by Chen and Yao (1990), obtaining a variety of new results, including (a) an exact reformulation of the control problem as an LP problem over a threshold polygon; (b) an analytical characterization of the Min WIP function (giving the minimum WIP level required to attain a target throughput level); (c) an LP Value Decomposition Theorem that relates the objective value under an arbitrary policy with that of a given threshold policy (thus revealing the LP interpretation of Chen and Yao's optimality conditions); (d) diminishing returns and invariance properties of throughput-WIP performance, which underlie threshold optimality; (e) a unified treatment of the time-discounted and time-average cases.
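The Chen-Yao intensity-control model itself is not reproduced here; as a simpler stand-in, the sketch below uses an M/M/1 queue with an admission threshold K to show how threshold policies generate (throughput, WIP) performance pairs over which a linear objective is then optimized:

```python
# Illustrative only: not the Chen-Yao (1990) intensity-control model analyzed
# in the paper, but a simple M/M/1/K admission-threshold queue showing how
# threshold policies produce (throughput, WIP) performance pairs and how a
# linear objective of those two quantities is optimized over them.
def mm1k_performance(lam, mu, K):
    """Throughput and mean WIP of an M/M/1 queue that admits at most K jobs."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        p_block = 1.0 / (K + 1)          # uniform stationary distribution
        wip = K / 2.0
    else:
        p_block = (1 - rho) * rho**K / (1 - rho**(K + 1))
        wip = rho / (1 - rho) - (K + 1) * rho**(K + 1) / (1 - rho**(K + 1))
    throughput = lam * (1 - p_block)
    return throughput, wip

lam, mu = 0.9, 1.0
reward_per_job, holding_cost = 5.0, 1.0   # linear objective coefficients
pairs = [(K, *mm1k_performance(lam, mu, K)) for K in range(1, 21)]
best = max(pairs, key=lambda t: reward_per_job * t[1] - holding_cost * t[2])
print("best threshold K, its throughput and WIP:", best)
```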
Abstract:
This paper presents a test of the predictive validity of various classes of QALY models (i.e., linear, power and exponential models). We first estimated TTO utilities for 43 EQ-5D chronic health states, and next these states were embedded in health profiles. The chronic TTO utilities were then used to predict the responses to TTO questions with health profiles. We find that the power QALY model clearly outperforms the linear and exponential QALY models. The optimal power coefficient is 0.65. Our results suggest that TTO-based QALY calculations may be biased. This bias can be avoided by using a power QALY model.
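For orientation, commonly used parameterizations of the three model classes are shown below (assumed standard forms; the abstract does not state the exact specifications estimated):

```latex
% Common parameterizations of the three QALY model classes (assumed standard
% forms; the abstract does not give the exact specifications estimated).
% For a chronic health state q held for duration t, with utility v(q):
\[
  U_{\mathrm{linear}}(q,t) = v(q)\,t, \qquad
  U_{\mathrm{power}}(q,t)  = v(q)\,t^{\,r}, \qquad
  U_{\mathrm{exp}}(q,t)    = v(q)\,\frac{1 - e^{-\beta t}}{\beta}
\]
% The reported optimal power coefficient is r = 0.65, i.e. additional duration
% receives diminishing weight.
```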
Abstract:
The remarkable growth of the older population has moved long-term care to the front ranks of the social policy agenda. Understanding the factors that determine the type and amount of formal care is important for predicting use in the future and developing long-term policy. In this context, we jointly analyze the choice of care (formal, informal, both together or none) as well as the number of hours of care received. Given that the number of hours of care is not independent of the type of care received, we estimate, for the first time in this area of research, a sample selection model with the particularity that the first step is a multinomial logit model. With regard to the debate about complementarity or substitutability between formal and informal care, our results indicate that formal care acts as a reinforcement of family care in certain cases: for very old care receivers, in those cases in which the individual has multiple disabilities, when many care hours are provided, and in case of mental illness and/or dementia. There exist substantial differences in long-term care addressed to younger and older dependent people, and dependent women are at risk of becoming more vulnerable to the shortage of informal caregivers in the future. Finally, we have documented that there are great disparities in the availability of public social care across regions.
Abstract:
Aim: When planning SIRT using 90Y microspheres, the partition model is used to refine the activity calculated by the body surface area (BSA) method, to potentially improve the safety and efficacy of treatment. For this partition model dosimetry, accurate determination of the mean tumor-to-normal liver ratio (TNR) is critical since it directly impacts absorbed dose estimates. This work aimed at developing and assessing a reliable methodology for the calculation of 99mTc-MAA SPECT/CT-derived TNR ratios based on phantom studies. Materials and methods: IQ NEMA (6 hot spheres) and Kyoto liver phantoms with different hot/background activity concentration ratios were imaged on a SPECT/CT (GE Infinia Hawkeye 4). For each reconstruction with the IQ phantom, TNR quantification was assessed in terms of relative recovery coefficients (RC) and image noise was evaluated in terms of the coefficient of variation (COV) in the filled background. RCs were compared using OSEM with Hann, Butterworth and Gaussian filters, as well as FBP reconstruction algorithms. Regarding OSEM, RCs were assessed by varying different parameters independently, such as the number of iterations (i) and subsets (s) and the cut-off frequency of the filter (fc). The influence of the attenuation and scatter corrections was also investigated. Furthermore, 2D-ROI and 3D-VOI contouring were compared. For this purpose, dedicated Matlab© routines were developed in-house for automatic 2D-ROI/3D-VOI determination to reduce intra-user and intra-slice variability. The best reconstruction parameters and RCs obtained with the IQ phantom were used to recover corrected TNR for the Kyoto phantom for arbitrary hot-lesion size. In addition, we computed TNR volume histograms to better assess uptake heterogeneity. Results: The highest RCs were obtained with OSEM (i=2, s=10) coupled with the Butterworth filter (fc=0.8). Indeed, we observed a global 20% RC improvement over other OSEM settings and a 50% increase as compared to the best FBP reconstruction. In any case, both attenuation and scatter corrections must be applied, thus improving RC while preserving good image noise (COV<10%). Both 2D-ROI and 3D-VOI analyses lead to similar results. Nevertheless, we recommend using 3D-VOIs since tumor uptake regions are intrinsically 3D. RC-corrected TNR values lie within 17% of the true value, substantially improving the evaluation of small-volume (<15 mL) regions. Conclusions: This study reports the multi-parameter optimization of 99mTc-MAA SPECT/CT image reconstruction for planning 90Y dosimetry for SIRT. In phantoms, accurate quantification of TNR was obtained using OSEM coupled with the Butterworth filter and RC correction.
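The recovery-coefficient correction of TNR can be sketched as follows; the RC curve and lesion values below are hypothetical placeholders, not the phantom results reported above:

```python
# Minimal sketch of the recovery-coefficient (RC) correction applied to
# SPECT-derived tumor-to-normal-liver ratios (TNR).  The RC values, sphere
# volumes and lesion numbers are hypothetical, not the study's measurements.
import numpy as np

# RCs measured on hot spheres of known volume (mL) in an IQ-type phantom
# (hypothetical numbers): small objects recover only part of their activity.
sphere_volumes_ml = np.array([0.5, 1.2, 2.6, 5.6, 11.5, 26.5])
recovery_coeffs   = np.array([0.25, 0.40, 0.55, 0.70, 0.82, 0.90])

def recovery_coefficient(volume_ml):
    """Interpolate the RC curve at the lesion volume (clamped to the range)."""
    return np.interp(volume_ml, sphere_volumes_ml, recovery_coeffs)

def corrected_tnr(measured_tnr, lesion_volume_ml):
    """Partial-volume-corrected TNR: divide the measured ratio by the RC."""
    return measured_tnr / recovery_coefficient(lesion_volume_ml)

print(corrected_tnr(measured_tnr=3.1, lesion_volume_ml=8.0))
```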