939 results for Dynamic Threshold Algorithm
Abstract:
The aim of this paper is to analyse the impact of university knowledge and technology transfer activities on academic research output. Specifically, we study whether researchers with collaborative links to the private sector publish less than their peers without such links, after controlling for other sources of heterogeneity. We report findings from a longitudinal dataset on researchers from two engineering departments in the UK between 1985 and 2006. Our results indicate that researchers with industrial links publish significantly more than their peers. Academic productivity, though, is higher at low levels of industry involvement than at high levels.
Abstract:
Purpose: To examine the relationship between functional measurements and structural measures. Methods: 146 eyes of 83 test subjects underwent Heidelberg Retinal Tomography (HRT III) (disc area < 2.43, mphsd < 40) and perimetry testing with Octopus (SAP; Dynamic), Pulsar (PP; TOP) and Moorfields MDT (ESTA). Glaucoma was defined as progressive structural or functional loss (20 eyes). Perimetry test points were grouped into 6 sectors based on the estimated optic nerve head angle into which the associated nerve fiber bundle enters (Garway-Heath map). Perimetry summary measures (PSM) (MD for SAP, MD for PP, PTD for MDT) were calculated for each sector as the average total deviation of each measured threshold from normal. We calculated the 95% significance level of the sectorial PSM from the respective normative data. We calculated the percentage agreement with group 1 (G1), healthy on HRT and within normal perimetric limits, and group 2 (G2), abnormal on HRT and outside normal perimetric limits. We also examined the relationship of PSM and rim area (RA) in those sectors classified as abnormal by the Moorfields Regression Analysis (MRA) of HRT. Results: The mean age was 65 years (range 37-89). The global sensitivity versus specificity of each instrument in detecting glaucomatous eyes was: MDT 80% vs. 88%, SAP 80% vs. 80%, PP 70% vs. 89% and HRT 80% vs. 79%. The highest percentage agreement of HRT with PSM (G1, G2, sector, respectively) was MDT (89%, 57%, nasal superior), SAP (83%, 74%, temporal superior) and PP (74%, 63%, nasal superior). Global percentage agreement (G1, G2, respectively) was MDT (92%, 28%), SAP (87%, 40%) and PP (77%, 49%). Linear regression showed no significant global trend associating RA and PSM. Sectorally, however, the supero-nasal sector showed a statistically significant (p<0.001) trend with each instrument; the associated r2 coefficients were MDT 0.38, SAP 0.56 and PP 0.39. Conclusions: There were no significant differences in global sensitivity or specificity between instruments. Structure-function relationships varied significantly between instruments and were consistently strongest supero-nasally. Further studies are required to investigate these relationships in detail.
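A minimal sketch of the sectoral PSM calculation described above: per-point total deviations are averaged within each Garway-Heath sector, and a sector is flagged as outside normal limits when its PSM falls below the 5% significance limit derived from normative data. The function names, data layout, and example numbers are illustrative assumptions, not the study's data or software.

from collections import defaultdict

def sectoral_psm(total_deviation, sector_of_point):
    """Average total deviation (dB) per Garway-Heath sector: the PSM."""
    sums, counts = defaultdict(float), defaultdict(int)
    for point, td in enumerate(total_deviation):
        sector = sector_of_point[point]
        sums[sector] += td
        counts[sector] += 1
    return {s: sums[s] / counts[s] for s in sums}

def flag_abnormal_sectors(psm, normative_5th_percentile):
    """A sector is 'outside normal limits' when its PSM falls below the
    5th percentile of the sector's normative distribution."""
    return {s: psm[s] < normative_5th_percentile[s] for s in psm}

# Illustrative usage with made-up numbers
psm = sectoral_psm([-1.2, -6.4, -0.5], ["nasal superior", "nasal superior", "temporal"])
print(flag_abnormal_sectors(psm, {"nasal superior": -2.0, "temporal": -2.0}))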
Abstract:
Protective adaptive immune responses rely on TCR-mediated recognition of Ag-derived peptides presented by self-MHC molecules. However, self-Ag (tumor)-specific TCRs are often of too low affinity to achieve best functionality. To precisely assess the relationship between TCR-peptide-MHC binding parameters and T cell function, we tested a panel of sequence-optimized HLA-A*0201/NY-ESO-1(157-165)-specific TCR variants with affinities lying within physiological boundaries to preserve antigenic specificity and avoid cross-reactivity, as well as two outliers (i.e., a very high- and a low-affinity TCR). Primary human CD8 T cells transduced with these TCRs demonstrated robust correlations between binding measurements of TCR affinity and avidity and the biological response of the T cells, such as TCR cell-surface clustering, intracellular signaling, proliferation, and target cell lysis. Strikingly, above a defined TCR-peptide-MHC affinity threshold (K_D below approximately 5 μM), T cell function could not be further enhanced, revealing a plateau of maximal T cell function, compatible with the notion that multiple TCRs with slightly different affinities participate equally (codominantly) in immune responses. We propose that rational design of improved self-specific TCRs may not need to be optimized beyond a given affinity threshold to achieve both optimal T cell function and avoidance of the unpredictable risk of cross-reactivity.
Abstract:
Secondary accident statistics can be useful for studying the impact of traffic incident management strategies. An easy-to-implement methodology is presented for classifying secondary accidents using data fusion of a police accident database with intranet incident reports. A current method for classifying secondary accidents uses a static threshold that represents the spatial and temporal region of influence of the primary accident, such as two miles and one hour. An accident is considered secondary if it occurs upstream from the primary accident and is within the duration and queue of the primary accident. However, using the static threshold may result in both false positives and false negatives because accident queues are constantly varying. The methodology presented in this report seeks to improve upon this existing method by making the threshold dynamic. An incident progression curve is used to mark the end of the queue throughout the entire incident. Four steps in the development of incident progression curves are described. Step one is the processing of intranet incident reports. Step two is the filling in of incomplete incident reports. Step three is the nonlinear regression of incident progression curves. Step four is the merging of individual incident progression curves into one master curve. To illustrate this methodology, 5,514 accidents from Missouri freeways were analyzed. The results show that secondary accidents identified by dynamic versus static thresholds can differ by more than 30%.
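A minimal sketch of the dynamic-threshold classification idea described above: the spatial threshold is not fixed at, say, two miles, but follows an incident progression curve that predicts queue length as a function of time since the primary accident. The logistic form of the curve, the parameter values, and the field names are illustrative assumptions, not the report's fitted model.

import math

def queue_length(t_minutes, max_queue_miles=3.0, growth_rate=0.1, midpoint=45.0):
    # Illustrative incident progression curve: queue length (miles) as a
    # logistic function of time since the primary accident. The report fits
    # such curves by nonlinear regression; this particular form is assumed here.
    return max_queue_miles / (1.0 + math.exp(-growth_rate * (t_minutes - midpoint)))

def is_secondary(primary, candidate, clearance_minutes=90.0):
    """Dynamic-threshold test: the candidate accident is secondary if it
    occurs after the primary accident, before the incident clears, and
    upstream within the queue predicted by the progression curve."""
    dt = candidate["time_min"] - primary["time_min"]
    if dt <= 0 or dt > clearance_minutes:
        return False
    upstream_distance = primary["milepost"] - candidate["milepost"]  # assumes travel toward increasing mileposts
    return 0 < upstream_distance <= queue_length(dt)

# Example: an accident 30 minutes later, 0.4 miles upstream
primary = {"time_min": 0.0, "milepost": 102.0}
candidate = {"time_min": 30.0, "milepost": 101.6}
print(is_secondary(primary, candidate))   # True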
Abstract:
The standard one-machine scheduling problem consists in scheduling a set of jobs on one machine, which can handle only one job at a time, minimizing the maximum lateness. Each job is available for processing at its release date, requires a known processing time and, after its processing is finished, is delivered after a certain time. There can also exist precedence constraints between pairs of jobs, requiring that the first job be completed before the second job can start. An extension of this problem consists in assigning a time interval between the processing of the jobs associated with the precedence constraints, known as finish-start time-lags. In the presence of these constraints, the problem is NP-hard even if preemption is allowed. In this work, we consider a special case of the one-machine preemptive scheduling problem with time-lags, where the time-lags have a chain form, and propose a polynomial algorithm to solve it. The algorithm consists of a polynomial number of calls to the preemptive version of the Longest Tail Heuristic. One application of the method is to obtain lower bounds for NP-hard one-machine and job-shop scheduling problems. We present some computational results of this application, followed by some conclusions.
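A minimal sketch of the preemptive largest-delivery-time ("longest tail") rule that the heuristic referred to above builds on, ignoring precedence constraints and time-lags: whenever the machine is free or a new job is released, run the available unfinished job with the largest delivery time. The data layout and the example instance are illustrative assumptions.

def preemptive_longest_tail(jobs):
    """Preemptive largest-delivery-time rule on one machine.
    jobs: list of (release_time, processing_time, delivery_time) tuples.
    Returns the maximum of completion time + delivery time over all jobs."""
    remaining = {j: p for j, (_, p, _) in enumerate(jobs)}
    unfinished = set(remaining)
    t, objective = 0.0, 0.0
    while unfinished:
        available = [j for j in unfinished if jobs[j][0] <= t]
        if not available:
            t = min(jobs[j][0] for j in unfinished)        # idle until the next release
            continue
        j = max(available, key=lambda i: jobs[i][2])       # largest delivery time (longest tail) first
        next_release = min((jobs[i][0] for i in unfinished if jobs[i][0] > t), default=float("inf"))
        run = min(remaining[j], next_release - t)          # re-decide (possibly preempt) at the next release
        t += run
        remaining[j] -= run
        if remaining[j] == 0:
            unfinished.remove(j)
            objective = max(objective, t + jobs[j][2])
    return objective

# Example instance: (release, processing, delivery) triples
print(preemptive_longest_tail([(0, 3, 5), (1, 2, 7), (4, 1, 2)]))   # 10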
Abstract:
We discuss some practical issues related to the use of the Parameterized Expectations Approach (PEA) for solving non-linear stochastic dynamic models with rational expectations. This approach has been applied in models of macroeconomics, financial economics, economic growth, contract theory, etc. It turns out to be a convenient algorithm, especially when there is a large number of state variables and stochastic shocks in the conditional expectations. We address practical issues arising in the application of the algorithm, and we describe a Fortran program, available through the internet, that implements it. We discuss these issues in a battery of six examples.
Abstract:
We analyze the impact of countercyclical capital buffers held by banks on the supply of credit to firms and their subsequent performance. Spain introduced dynamic provisioning unrelated to specific bank loan losses in 2000 and modified its formula parameters in 2005 and 2008. In each case, individual banks were impacted differently. The resultant bank-specific shocks to capital buffers, coupled with comprehensive bank-, firm-, loan-, and loan application-level data, allow us to identify its impact on the supply of credit and on real activity. Our estimates show that countercyclical dynamic provisioning smooths cycles in the supply of credit and in bad times upholds firm financing and performance.
Abstract:
Summary: Throughout my thesis, I elaborate on how real and financing frictions affect corporate decision making under uncertainty, and I explore how firms time their investment and financing decisions given such frictions. While the macroeconomics literature has focused on the impact of real frictions on investment decisions, assuming all-equity-financed firms, the financial economics literature has mainly focused on the study of financing frictions. My thesis therefore assesses the joint interaction of real and financing frictions in firms' dynamic investment and financing decisions. My work provides a rationale for the documented poor empirical performance of neoclassical investment models based on the joint effect of real and financing frictions on investment. A major observation lies in how the infrequency of corporate decisions may affect standard empirical tests. My thesis suggests that the book-to-market sorts commonly used in the empirical asset pricing literature have economic content, as they control for the lumpiness in firms' optimal investment policies. My work also elaborates on the effects of asymmetric information and strategic interaction on firms' investment and financing decisions. I study how firms time their decision to raise public equity when outside investors lack information about their future investment prospects. I derive a real-options model that predicts either cold or hot markets for new stock issues conditional on adverse selection, and I provide a rational approach to studying jointly the market timing of corporate decisions and announcement effects in stock returns. My doctoral dissertation therefore contributes to our understanding of how real and financing frictions may bias standard empirical tests, elaborates on how adverse selection may induce hot and cold markets for new issues, and suggests how the underlying economic behaviour of firms may induce alternative patterns in stock prices.
Abstract:
Most research on single machine scheduling has assumed the linearity of job holding costs, which is arguably not appropriate in some applications. This motivates our study of a model for scheduling n classes of stochastic jobs on a single machine, with the objective of minimizing the total expected holding cost (discounted or undiscounted). We allow general holding cost rates that are separable, nondecreasing and convex in the number of jobs in each class. We formulate the problem as a linear program over a certain greedoid polytope, and establish that it is solved optimally by a dynamic (priority) index rule, which extends the classical Smith's rule (1956) for the linear case. Unlike Smith's indices, defined for each class, our new indices are defined for each extended class, consisting of a class and a number of jobs in that class, and yield an optimal dynamic index rule: work at each time on a job whose current extended class has the largest index. We further show that the indices possess a decomposition property, as they are computed separately for each class, and interpret them in economic terms as marginal expected cost rate reductions per unit of expected processing time. We establish the results by deploying a methodology recently introduced by us [J. Niño-Mora (1999), "Restless bandits, partial conservation laws, and indexability", forthcoming in Advances in Applied Probability, Vol. 33, No. 1, 2001], based on the satisfaction by performance measures of partial conservation laws (PCL), which extend the generalized conservation laws of Bertsimas and Niño-Mora (1996): PCL provide a polyhedral framework for establishing the optimality of index policies with special structure in scheduling problems under admissible objectives, which we apply to the model of concern.
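A minimal sketch of the dynamic index rule under the economic interpretation given above: the index of an extended class (i, k) is the marginal holding cost rate reduction from having one fewer class-i job, divided by the class's expected processing time, and the rule serves the class whose current extended class has the largest index. The data structures and cost functions are illustrative assumptions; the paper's PCL-based derivation is not reproduced here.

def extended_class_index(holding_cost, mean_processing_time, k):
    """Index of extended class (i, k): marginal expected holding cost rate
    reduction per unit of expected processing time.
    holding_cost: function h_i(n), nondecreasing and convex in n; k >= 1."""
    return (holding_cost(k) - holding_cost(k - 1)) / mean_processing_time

def next_class_to_serve(state, classes):
    """Dynamic priority rule: serve the class whose current extended class
    has the largest index.
    state: dict class_id -> number of jobs currently present.
    classes: dict class_id -> (holding cost function, mean processing time)."""
    candidates = [c for c, n in state.items() if n > 0]
    return max(candidates,
               key=lambda c: extended_class_index(classes[c][0], classes[c][1], state[c]))

# Illustrative usage with quadratic (convex) holding costs
classes = {
    "A": (lambda n: 2.0 * n * n, 1.5),   # h_A(n) = 2 n^2, E[S_A] = 1.5
    "B": (lambda n: 0.5 * n * n, 0.4),   # h_B(n) = 0.5 n^2, E[S_B] = 0.4
}
print(next_class_to_serve({"A": 3, "B": 5}, classes))   # "B"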
Abstract:
A new algorithm called the parameterized expectations approach (PEA) for solving dynamic stochastic models under rational expectations is developed, and its advantages and disadvantages are discussed. This algorithm can, in principle, approximate the true equilibrium arbitrarily well. Also, the algorithm works from the Euler equations, so that the equilibrium does not have to be cast in the form of a planner's problem. Monte Carlo integration and the absence of grids on the state variables cause the computation costs not to go up exponentially when the number of state variables or exogenous shocks in the economy increases. As an application, we analyze an asset pricing model with endogenous production. We analyze its implications for the time dependence of the volatility of stock returns and the term structure of interest rates. We argue that this model can generate hump-shaped term structures.
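A minimal sketch of the PEA iteration described above, applied to a textbook stochastic growth model with log utility and full depreciation, chosen because its conditional expectation is exactly log-linear and so an exponentiated-polynomial parameterization can represent it. The model, the parameterization, and all parameter values are illustrative assumptions, not the asset pricing application of the paper.

import numpy as np

# Model and algorithm parameters (illustrative)
alpha, delta = 0.33, 0.95              # capital share, discount factor
rho, sigma = 0.90, 0.02                # log-productivity AR(1)
T, damping, tol = 5_000, 0.5, 1e-6

rng = np.random.default_rng(0)
log_theta = np.zeros(T)
for t in range(1, T):
    log_theta[t] = rho * log_theta[t - 1] + rng.normal(0.0, sigma)
theta = np.exp(log_theta)

# Parameterize E_t[alpha * theta' * k'**(alpha - 1) / c'] ~ exp(b0 + b1*ln k_t + b2*ln theta_t)
beta = np.array([0.4, -0.3, -1.0])     # rough starting guess; PEA needs a sensible one in practice

for iteration in range(500):
    k, c = np.empty(T + 1), np.empty(T)
    k[0] = 0.2
    for t in range(T):
        psi = np.exp(beta[0] + beta[1] * np.log(k[t]) + beta[2] * log_theta[t])
        c[t] = 1.0 / (delta * psi)                              # Euler equation: 1/c_t = delta * E_t[...]
        k[t + 1] = max(theta[t] * k[t] ** alpha - c[t], 1e-8)   # budget constraint; crude floor for early iterations
    # Realized value inside the conditional expectation, paired with the time-t state
    y = alpha * theta[1:] * k[1:T] ** (alpha - 1.0) / c[1:]
    X = np.column_stack([np.ones(T - 1), np.log(k[:T - 1]), log_theta[:T - 1]])
    beta_hat = np.linalg.lstsq(X, np.log(y), rcond=None)[0]     # log-linear case, so OLS suffices
    if np.max(np.abs(beta_hat - beta)) < tol:
        break
    beta = (1.0 - damping) * beta + damping * beta_hat

print(beta)   # fixed point: [-log(delta * (1 - alpha*delta)), -alpha, -1]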
Abstract:
Several patient-related variables have already been investigated as predictors of change in psychodynamic psychotherapy. Defensive functioning is one of them. However, few studies have investigated adaptational processes, encompassing defence mechanisms and coping, from an integrative or comparative viewpoint. This study includes 32 patients, mainly diagnosed with adjustment disorder and undergoing time-limited psychodynamic psychotherapy lasting up to 40 sessions, and focuses on early change in defence and coping. Observer-rater methodology was applied to the transcripts of two sessions from the first part of the psychotherapeutic process. It is assumed that the contextual-relational variable of therapeutic alliance acts as a moderator of change in adaptational processes. Results corroborated the hypothesis, but only for coping, whereas for defences, overall functioning remained stable over the first 20 sessions of psychotherapy. These results are discussed within the framework of disentangling the processes underlying adaptation, i.e., issues related to trait and state aspects, as well as the role of the therapeutic alliance.
Abstract:
Many revenue management (RM) industries are characterized by (a) fixed capacities in the short term (e.g., hotel rooms, seats on an airline flight), (b) homogeneous products (e.g., two airline flights between the same cities at similar times), and (c) customer purchasing decisions largely influenced by price. Competition in these industries is also very high, even with just two or three direct competitors in a market. However, RM competition is not well understood, and practically all known implementations of RM software and most published models of RM do not explicitly model competition. For this reason, there has been considerable recent interest and research activity to understand RM competition. In this paper we study price competition for an oligopoly in a dynamic setting, where each of the sellers has a fixed number of units available for sale over a fixed number of periods. Demand is stochastic, and depending on how it evolves, sellers may change their prices at any time. This reflects the fact that firms constantly, and almost costlessly, change their prices (alternately, allocations at a price in quantity-based RM), reacting either to updates in their estimates of market demand, competitor prices, or inventory levels. We first prove existence of a unique subgame-perfect equilibrium for a duopoly. In equilibrium, in each state sellers engage in Bertrand competition, so that the seller with the lowest reservation value ends up selling a unit at a price that is equal to the equilibrium reservation value of the competitor. This structure hence extends the marginal-value concept of bid-price control, used in many RM implementations, to a competitive model. In addition, we show that the seller with the lowest capacity sells all its units first. Furthermore, we extend the results transparently to n firms and perform a number of numerical comparative statics exploiting the uniqueness of the subgame-perfect equilibrium.
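A minimal numerical sketch of the state-by-state Bertrand structure described above for a duopoly: in each state, the seller with the lower reservation (marginal) value sells, at a price equal to the competitor's reservation value. The single-customer-per-period arrival model, the willingness-to-pay cap p_max, the tie-breaking rule, and the treatment of stocked-out sellers are simplifying assumptions of this sketch, not the paper's demand model.

from functools import lru_cache

def duopoly_values(horizon, x1, x2, lam, p_max):
    """Backward induction over states (t, n1, n2); returns the pair of
    equilibrium expected revenues (V1, V2) at the initial state."""
    @lru_cache(maxsize=None)
    def V(t, n1, n2):
        if t == horizon or (n1 == 0 and n2 == 0):
            return (0.0, 0.0)
        cont = V(t + 1, n1, n2)
        # Reservation (marginal) values of one unit, from the next-period value functions
        rv1 = cont[0] - V(t + 1, n1 - 1, n2)[0] if n1 > 0 else float("inf")
        rv2 = cont[1] - V(t + 1, n1, n2 - 1)[1] if n2 > 0 else float("inf")
        # Bertrand outcome if a customer arrives: the lower-reservation-value seller
        # sells one unit at min(competitor's reservation value, p_max), provided that
        # price covers its own reservation value. Ties go to seller 1.
        if rv1 <= rv2:
            price = min(rv2, p_max)
            if n1 > 0 and price >= rv1:
                nxt = V(t + 1, n1 - 1, n2)
                sale = (price + nxt[0], nxt[1])
            else:
                sale = cont
        else:
            price = min(rv1, p_max)
            if n2 > 0 and price >= rv2:
                nxt = V(t + 1, n1, n2 - 1)
                sale = (nxt[0], price + nxt[1])
            else:
                sale = cont
        return (lam * sale[0] + (1.0 - lam) * cont[0],
                lam * sale[1] + (1.0 - lam) * cont[1])
    return V(0, x1, x2)

# Example: 20 periods, asymmetric capacities, 60% arrival probability, price cap 10
print(duopoly_values(horizon=20, x1=3, x2=5, lam=0.6, p_max=10.0))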