809 results for Performance model


Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: To determine and compare the diagnostic performance of magnetic resonance imaging (MRI) and computed tomography (CT) for the diagnosis of tumor extent in advanced retinoblastoma, using histopathologic analysis as the reference standard. DESIGN: Systematic review and meta-analysis. PARTICIPANTS: Patients with advanced retinoblastoma who underwent MRI, CT, or both for the detection of tumor extent from published diagnostic accuracy studies. METHODS: Medline and Embase were searched for literature published through April 2013 assessing the diagnostic performance of MRI, CT, or both in detecting intraorbital and extraorbital tumor extension of retinoblastoma. Diagnostic accuracy data were extracted from included studies. Summary estimates were based on a random effects model. Intrastudy and interstudy heterogeneity were analyzed. MAIN OUTCOME MEASURES: Sensitivity and specificity of MRI and CT in detecting tumor extent. RESULTS: Data on the following tumor-extent parameters were extracted: anterior eye segment involvement and ciliary body, optic nerve, choroidal, and (extra)scleral invasion. Articles on MRI reported results of 591 eyes from 14 studies, and articles on CT yielded 257 eyes from 4 studies. The summary estimates with their 95% confidence intervals (CIs) of the diagnostic accuracy of conventional MRI at detecting postlaminar optic nerve, choroidal, and scleral invasion showed sensitivities of 59% (95% CI, 37%-78%), 74% (95% CI, 52%-88%), and 88% (95% CI, 20%-100%), respectively, and specificities of 94% (95% CI, 84%-98%), 72% (95% CI, 31%-94%), and 99% (95% CI, 86%-100%), respectively. Magnetic resonance imaging with a high (versus a low) image quality showed higher diagnostic accuracies for detection of prelaminar optic nerve and choroidal invasion, but these differences were not statistically significant. Studies reporting the diagnostic accuracy of CT did not provide enough data to perform any meta-analyses. CONCLUSIONS: Magnetic resonance imaging is an important diagnostic tool for the detection of local tumor extent in advanced retinoblastoma, although its diagnostic accuracy shows room for improvement, especially with regard to sensitivity. With only a few, mostly old, studies, there is very little evidence on the diagnostic accuracy of CT, and generally these studies show low diagnostic accuracy. Future studies assessing the role of MRI in clinical decision making in terms of prognostic value for advanced retinoblastoma are needed.
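
As a rough illustration of how such summary estimates can be pooled, the sketch below applies a DerSimonian-Laird random-effects model to logit-transformed sensitivities. The counts, function names and the univariate pooling approach are assumptions for illustration only and do not reproduce the study's analysis (which may well use a bivariate model).

```python
import numpy as np

def pool_random_effects(tp, fn):
    """DerSimonian-Laird random-effects pooling of logit(sensitivity).

    tp, fn: true-positive and false-negative counts per study
    (0.5 continuity correction applied). Illustrative sketch only."""
    tp = np.asarray(tp, dtype=float) + 0.5
    fn = np.asarray(fn, dtype=float) + 0.5
    logit = np.log(tp / fn)                  # per-study logit(sensitivity)
    var = 1.0 / tp + 1.0 / fn                # within-study variances
    w = 1.0 / var                            # fixed-effect weights
    fixed = np.sum(w * logit) / np.sum(w)
    q = np.sum(w * (logit - fixed) ** 2)     # Cochran's Q
    df = len(tp) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (var + tau2)                # random-effects weights
    mu = np.sum(w_re * logit) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    ci = mu + np.array([-1.96, 1.96]) * se
    expit = lambda v: 1.0 / (1.0 + np.exp(-v))
    return expit(mu), expit(ci)              # pooled sensitivity and 95% CI

# Hypothetical counts from five studies:
sens, ci = pool_random_effects(tp=[10, 7, 12, 5, 9], fn=[6, 5, 4, 6, 7])
print(f"pooled sensitivity {sens:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```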

Relevance:

30.00%

Publisher:

Abstract:

In the framework of the classical compound Poisson process in collective risk theory, we study a modification of the horizontal dividend barrier strategy by introducing random observation times at which dividends can be paid and ruin can be observed. This model contains both the continuous-time and the discrete-time risk model as a limit and represents a certain type of bridge between them which still enables the explicit calculation of moments of total discounted dividend payments until ruin. Numerical illustrations for several sets of parameters are given and the effect of random observation times on the performance of the dividend strategy is studied.
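
The following sketch illustrates the kind of quantity studied here by brute force: a Monte Carlo estimate of the expected discounted dividends in a compound Poisson surplus model where dividends and ruin are only assessed at exponentially spaced observation times. All parameter values are arbitrary, and the paper itself derives these moments explicitly rather than by simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def discounted_dividends(u=10.0, b=15.0, c=2.0, lam=1.0, mu=1.5,
                         gamma=0.5, delta=0.03, horizon=100.0):
    """One path of a compound Poisson surplus process (premium rate c,
    Poisson(lam) claims with Exp(mean mu) sizes) under a barrier b that is
    only enforced at Exp(gamma) observation times, at which ruin is also
    checked.  Returns the discounted dividends paid before observed ruin.
    The finite horizon is a simulation convenience, not part of the model."""
    x, t, paid = u, 0.0, 0.0
    t_obs = rng.exponential(1.0 / gamma)        # next observation time
    t_claim = rng.exponential(1.0 / lam)        # next claim arrival
    while t < horizon:
        t_next = min(t_obs, t_claim)
        x += c * (t_next - t)                   # premiums accrue continuously
        t = t_next
        if t == t_claim:                        # a claim occurs
            x -= rng.exponential(mu)
            t_claim = t + rng.exponential(1.0 / lam)
        else:                                   # an observation time
            if x < 0:
                return paid                     # ruin observed: stop paying
            if x > b:
                paid += np.exp(-delta * t) * (x - b)   # dividend = excess over b
                x = b
            t_obs = t + rng.exponential(1.0 / gamma)
    return paid

est = np.mean([discounted_dividends() for _ in range(5_000)])
print(f"estimated expected discounted dividends: {est:.3f}")
```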

Relevance:

30.00%

Publisher:

Abstract:

The most evident symptoms of schizophrenia are severe impairments of cognitive functions such as attention, abstract reasoning and working memory. The latter has been defined as the ability to maintain and manipulate on-line a limited amount of information. Whereas several studies show that working memory processes are impaired in schizophrenia, the specificity of this deficit is still unclear. Results obtained with a new paradigm, involving visuospatial, dynamic and static working memory processing, suggest that schizophrenic patients rely on a specific compensatory strategy. An animal model of schizophrenia with a transient glutathione deficit during development reveals similar substitutive processing, masking the impairment in working memory functions in specific test conditions only. Taken together, these results show coherence between working memory deficits in schizophrenic patients and in animal models. More generally, the pathological state may be interpreted as a reduced homeostatic reserve, which in specific situations can be balanced by efficient allostatic strategies. Thus, the pathological condition would remain latent in several situations, due to such allostatic regulations. However, maintaining a performance based on highly specific strategies in turn requires specific conditions, limiting adaptive resources in humans and in animals. In summary, we suggest that the psychological and physical load required to maintain this rigid allostatic state is very high in patients and animal subjects.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: The aim of this study was to determine whether V˙O(2) kinetics and, specifically, the time constants of transitions from rest to heavy (τ(p)H) and severe (τ(p)S) exercise intensities are related to middle-distance swimming performance. DESIGN: Fourteen highly trained male swimmers (mean ± SD: 20.5 ± 3.0 yr; 75.4 ± 12.4 kg; 1.80 ± 0.07 m) performed a discontinuous incremental test, as well as square-wave transitions at heavy and severe swimming intensities, to determine V˙O(2) kinetics parameters using two exponential functions. METHODS: All the tests involved front-crawl swimming with breath-by-breath analysis using the Aquatrainer swimming snorkel. Endurance performance was recorded as the time taken to complete a 400 m freestyle swim within an official competition (T400), one month from the date of the other tests. RESULTS: T400 (mean ± SD: 251.4 ± 12.4 s) was significantly correlated with τ(p)H (15.8 ± 4.8 s; r=0.62; p=0.02) and τ(p)S (15.8 ± 4.7 s; r=0.61; p=0.02). The best single predictor of 400 m freestyle time, out of the variables that were assessed, was the velocity at V˙O(2max) (vV˙O(2max)), which accounted for 80% of the variation in performance between swimmers. However, τ(p)H and V˙O(2max) were also found to influence the prediction of T400 when they were included in a regression model that involved respiratory parameters only. CONCLUSIONS: Faster kinetics during the primary phase of the V˙O(2) response are associated with better performance during middle-distance swimming. However, vV˙O(2max) appears to be a better predictor of T400.
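
For readers unfamiliar with V˙O(2) kinetics modelling, the sketch below fits a mono-exponential primary-phase model (baseline, amplitude, time delay and time constant τ) to synthetic breath-by-breath data with scipy. The study itself fitted double-exponential functions; all names and values here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def vo2_primary(t, baseline, amplitude, delay, tau):
    """Mono-exponential primary phase: VO2(t) = baseline for t < delay,
    then baseline + amplitude * (1 - exp(-(t - delay)/tau))."""
    return np.where(t < delay,
                    baseline,
                    baseline + amplitude * (1.0 - np.exp(-(t - delay) / tau)))

# Synthetic breath-by-breath data (arbitrary values, ~15 s time constant)
rng = np.random.default_rng(1)
t = np.arange(0, 180, 3.0)                       # seconds
vo2 = vo2_primary(t, 0.8, 2.5, 12.0, 15.0) + rng.normal(0, 0.08, t.size)

p0 = [0.8, 2.0, 10.0, 20.0]                      # initial guesses
params, _ = curve_fit(vo2_primary, t, vo2, p0=p0)
print(f"estimated time constant tau_p = {params[3]:.1f} s")
```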

Relevance:

30.00%

Publisher:

Abstract:

In a series of three experiments, participants made inferences about which of a pair of objects scored higher on a criterion. The first experiment was designed to contrast the prediction of Probabilistic Mental Model theory (Gigerenzer, Hoffrage, & Kleinbölting, 1991) concerning sampling procedure with the hard-easy effect. The experiment failed to support the theory's prediction that a particular pair of randomly sampled item sets would differ in percentage correct; but the observation that German participants performed practically as well on comparisons between U.S. cities (many of which they did not even recognize) as on comparisons between German cities (about which they knew much more) ultimately led to the formulation of the recognition heuristic. Experiment 2 was a second, this time successful, attempt to unconfound item difficulty and sampling procedure. In Experiment 3, participants' knowledge and recognition of each city was elicited, and how often this could be used to make an inference was manipulated. Choices were consistent with the recognition heuristic in about 80% of the cases when it discriminated and people had no additional knowledge about the recognized city (and in about 90% when they had such knowledge). The frequency with which the heuristic could be used affected the percentage correct, mean confidence, and overconfidence as predicted. The size of the reference class, which was also manipulated, modified these effects in meaningful and theoretically important ways.
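
The recognition heuristic referred to above is a one-reason decision rule: if exactly one of the two objects is recognized, infer that it scores higher on the criterion. A minimal sketch of applying the rule and scoring choice consistency, with hypothetical trial data, is shown below.

```python
def recognition_choice(recognized_a: bool, recognized_b: bool):
    """Return 'a' or 'b' if exactly one object is recognized (the heuristic
    discriminates), otherwise None (guess or use further knowledge)."""
    if recognized_a and not recognized_b:
        return "a"
    if recognized_b and not recognized_a:
        return "b"
    return None

# Hypothetical trials: (recognized_a, recognized_b, participant's choice)
trials = [(True, False, "a"), (False, True, "b"), (True, False, "b"),
          (True, True, "a"), (False, True, "b")]

applicable = [(ra, rb, c) for ra, rb, c in trials
              if recognition_choice(ra, rb) is not None]
consistent = sum(c == recognition_choice(ra, rb) for ra, rb, c in applicable)
print(f"{consistent}/{len(applicable)} choices consistent with the heuristic")
```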

Relevance:

30.00%

Publisher:

Abstract:

Doxorubicin is an antineoplastic agent active against sarcoma pulmonary metastasis, but its clinical use is hampered by its myelotoxicity and its cumulative cardiotoxicity when administered systemically. This limitation may be circumvented using the isolated lung perfusion (ILP) approach, wherein a therapeutic agent is infused locoregionally after vascular isolation of the lung. The influence of the mode of infusion (anterograde (AG): through the pulmonary artery (PA); retrograde (RG): through the pulmonary vein (PV)) on doxorubicin pharmacokinetics and lung distribution was unknown. Therefore, a simple, rapid and sensitive high-performance liquid chromatography method has been developed to quantify doxorubicin in four different biological matrices (infusion effluent, serum, tissues with low or high levels of doxorubicin). The related compound daunorubicin was used as internal standard (I.S.). Following a single-step protein precipitation of 500 microl samples with 250 microl acetone and 50 microl zinc sulfate 70% aqueous solution, the obtained supernatant was evaporated to dryness at 60 degrees C for exactly 45 min under a stream of nitrogen and the solid residue was solubilized in 200 microl of purified water. A 100 microl volume was subjected to HPLC analysis on a Nucleosil 100-5 microm C18 AB column equipped with a guard column (Nucleosil 100-5 microm C(6)H(5) (phenyl) end-capped) using a gradient elution of acetonitrile and 1-heptanesulfonic acid 0.2% pH 4: 15/85 at 0 min-->50/50 at 20 min-->100/0 at 22 min-->15/85 at 24 min-->15/85 at 26 min, delivered at 1 ml/min. The analytes were detected by fluorescence detection with excitation and emission wavelengths set at 480 and 550 nm, respectively. The calibration curves were linear over the range of 2-1000 ng/ml for effluent and plasma matrices, and 0.1 microg/g-750 microg/g for tissue matrices. The method is precise, with inter-day and intra-day relative standard deviations between 0.5 and 6.7%, and accurate, with inter-day and intra-day deviations between -5.4 and +7.7%. The in vitro stability in all matrices and in processed samples has been studied at -80 degrees C for 1 month, and at 4 degrees C for 48 h, respectively. During initial studies, heparin used as anticoagulant was found to profoundly influence the measurements of doxorubicin in effluents collected from animals under ILP. Moreover, the strong matrix effect observed with tissue samples indicates that it is mandatory to prepare doxorubicin calibration standard samples in biological matrices that best reflect the composition of the samples to be analyzed. This method was successfully applied in animal studies for the analysis of effluent, serum and tissue samples collected from pigs and rats undergoing ILP.
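
As an illustration of how concentrations are typically read back from such a calibration curve (analyte/internal-standard peak-area ratio against nominal concentration), a minimal sketch follows. The calibration points, the 1/x weighting and all numbers are assumptions and do not come from the study.

```python
import numpy as np

# Hypothetical calibration standards (ng/ml) and doxorubicin/daunorubicin
# peak-area ratios; 1/x weighted least squares is common for wide ranges.
conc = np.array([2, 10, 50, 100, 500, 1000], dtype=float)
ratio = np.array([0.011, 0.052, 0.26, 0.51, 2.55, 5.1])

weights = 1.0 / conc                             # 1/x weighting on squared residuals
slope, intercept = np.polyfit(conc, ratio, 1, w=np.sqrt(weights))

def back_calculate(peak_ratio):
    """Concentration (ng/ml) from a measured analyte/IS peak-area ratio."""
    return (peak_ratio - intercept) / slope

print(f"sample at ratio 1.30 -> {back_calculate(1.30):.0f} ng/ml")
```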

Relevance:

30.00%

Publisher:

Abstract:

This study looks at organizational performance measurement, with special emphasis on the system known as the Balanced Scorecard. This tool, created at the beginning of the 1990s by David Norton and Robert Kaplan, has been gaining the enthusiasm of managers, and at the present time several organizations use it in the search for excellence. The first methodology was presented in 1993 and consisted of eight steps. In 1996, its creators developed an improved version of this methodology, now composed of ten steps. We start with a review of the theoretical concepts related to this tool, its advantages and disadvantages, the stages of its implementation, possible obstacles to its success, and the benefits that can come from its use. Based on an implementation proposal, we chose the First Military Command Region of Cape Verde to study the possible impacts of this system on the management of that institution. From a diagnosis of the existing situation, based on interviews, document analysis and in loco observation, we found that the institution shows some insufficiencies in management performance, mainly due to its logistics and financial situation. In building the strategic map, the main component of the Balanced Scorecard, we had to move the client or market perspective to the top of the configuration, because of the nature of the business object of the institution under study.
The performance measurement model developed clearly showed the importance that the implementation of this system might have for the improvement of military activities, mainly through the increase in the level of communication between the subgroups and top management, in this case the Commander Staff and the lower Units, due to the type and quality of the information provided by the strategic map.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a model of electoral competition focusing on the formation of the public agenda. An incumbent government and a challenger party in opposition compete in elections by choosing the issues that will frame their campaigns. Giving salience to an issue implies proposing an innovative policy proposal, alternative to the status quo. Parties trade off the issues with high salience in voters' concerns against those with broad agreement on some alternative policy proposal. Each party expects a higher probability of victory if the issue it chooses becomes salient in the voters' decision. Remarkably, however, the issues that a majority of voters consider the most important may not be given salience during the electoral campaign. An incumbent government may survive in spite of its bad policy performance if there is no sufficiently broad agreement on a policy alternative. We illustrate the analytical potential of the model with the case of the United States presidential election in 2004.

Relevance:

30.00%

Publisher:

Abstract:

This study investigated fatigue-induced changes in spring-mass model characteristics during repeated running sprints. Sixteen active subjects performed 12 × 40 m sprints interspersed with 30 s of passive recovery. Vertical and anterior-posterior ground reaction forces were measured at 5-10 m and 30-35 m and used to determine spring-mass model characteristics. Contact (P < 0.001), flight (P < 0.05) and swing times (P < 0.001), together with braking, push-off and total stride durations (P < 0.001), lengthened across repetitions. Stride frequency (P < 0.001) and push-off forces (P < 0.05) decreased with fatigue, whereas changes in stride length (P = 0.06), braking (P = 0.08) and peak vertical forces (P = 0.17) approached significance. Center of mass vertical displacement (P < 0.001), but not leg compression (P > 0.05), increased with time. As a result, vertical stiffness decreased (P < 0.001) from the first to the last repetition, whereas leg stiffness changes across sprint trials were not significant (P > 0.05). Changes in vertical stiffness were correlated (r > 0.7; P < 0.001) with changes in stride frequency. When compared to 5-10 m, most ground reaction force-related parameters were higher (P < 0.05) at 30-35 m, whereas contact time, stride frequency, vertical and leg stiffness were lower (P < 0.05). Vertical stiffness deteriorates when 40 m run-based sprints are repeated, which alters impact parameters. Maintaining faster stride frequencies by retaining higher vertical stiffness is a prerequisite for improving performance during repeated sprinting.
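
In this study stiffness was computed from the measured ground reaction forces, but a common field alternative estimates the same spring-mass characteristics from contact time, flight time, running velocity and leg length using a sine-wave approximation of the vertical force (Morin et al., 2005). The sketch below implements that approximation with hypothetical values; it is an illustration, not the computation used in the paper.

```python
import math

def spring_mass(mass, velocity, contact_time, flight_time, leg_length):
    """Estimate spring-mass characteristics from temporal variables using
    the sine-wave approximation of the vertical ground reaction force."""
    g = 9.81
    # Peak vertical force from the sine-wave model of the contact phase
    f_max = mass * g * (math.pi / 2.0) * (flight_time / contact_time + 1.0)
    # Downward displacement of the centre of mass during contact
    dz = f_max * contact_time**2 / (mass * math.pi**2) - g * contact_time**2 / 8.0
    k_vert = f_max / dz                                   # vertical stiffness
    # Leg compression: geometric sweep of the leg plus the vertical drop
    dl = leg_length - math.sqrt(leg_length**2
                                - (velocity * contact_time / 2.0)**2) + dz
    k_leg = f_max / dl                                    # leg stiffness
    return f_max, k_vert, k_leg

# Hypothetical sprint values: 75 kg, 8 m/s, tc = 0.12 s, tf = 0.13 s, leg 0.95 m
f_max, k_vert, k_leg = spring_mass(75, 8.0, 0.12, 0.13, 0.95)
print(f"Fmax {f_max:.0f} N, k_vert {k_vert/1000:.1f} kN/m, k_leg {k_leg/1000:.1f} kN/m")
```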

Relevance:

30.00%

Publisher:

Abstract:

This paper investigates the link between brand performance and cultural primes in high-risk, innovation-based sectors. In the theory section, we propose that the level of cultural uncertainty avoidance embedded in a firm determines its marketing creativity, by increasing the complexity and the broadness of a brand, and also determines the rate of the firm's product innovations. Marketing creativity and product innovation in turn influence the firm's marketing performance. Empirically, we study trademarked promotion in the Software Security Industry (SSI). Our sample consists of 87 firms active in SSI from 11 countries in the period 1993-2000. We use data from the SSI-related trademarks registered by these firms, ending up with 2,911 SSI-related trademarks and a panel of 18,213 observations. We estimate a two-stage model in which we first predict the complexity and the broadness of a trademark, as measures of marketing creativity, and the rate of product innovations. Among several control variables, our variable of theoretical interest is Hofstede's uncertainty avoidance cultural index. We then estimate the trademark duration with a hazard model, using the predicted complexity and broadness as well as the rate of product innovations, along with the same control variables. Our evidence confirms that cultural uncertainty avoidance affects the duration of trademarks through the firm's marketing creativity and product innovation.
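
A rough sketch of such a two-stage estimation is given below: an OLS first stage predicting trademark complexity from the uncertainty-avoidance index, followed by a Cox proportional-hazards duration model on the predicted complexity. The synthetic data, column names and the choice of a Cox specification (the paper only says "hazard model") are assumptions made for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

# Synthetic stand-in for the trademark panel (column names are hypothetical).
rng = np.random.default_rng(5)
n = 500
df = pd.DataFrame({
    "uncertainty_avoidance": rng.uniform(20, 100, n),   # Hofstede index
    "firm_size": rng.lognormal(3, 1, n),                # control variable
})
df["complexity"] = 0.02 * df["uncertainty_avoidance"] + rng.normal(0, 1, n)
df["duration"] = rng.exponential(5 + df["complexity"].clip(lower=0))
df["dead"] = rng.integers(0, 2, n)                      # 1 = trademark lapsed

# Stage 1: marketing creativity (trademark complexity) as a function of
# cultural uncertainty avoidance and controls.
stage1 = smf.ols("complexity ~ uncertainty_avoidance + firm_size", data=df).fit()
df["complexity_hat"] = stage1.fittedvalues

# Stage 2: trademark duration as a function of the predicted complexity.
cph = CoxPHFitter()
cph.fit(df[["duration", "dead", "complexity_hat", "firm_size"]],
        duration_col="duration", event_col="dead")
cph.print_summary()
```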

Relevance:

30.00%

Publisher:

Abstract:

Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: a) those that use weights that involve area-specific estimates of bias and variance; and b) those that use weights that involve a common variance and a common squared bias estimate for all the areas. We assess their precision and discuss alternatives to optimizing composite estimation in applications.
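
A composite estimator shrinks the direct estimator toward the indirect one with an MSE-motivated weight. The sketch below shows the basic calculation for the case of a common squared-bias estimate, with invented area-level inputs and hypothetical variable names; the paper compares several ways of estimating these inputs.

```python
import numpy as np

def composite(direct, indirect, var_direct, sq_bias_indirect):
    """Composite small-area estimate: w*direct + (1-w)*indirect, with the
    weight chosen to minimise an estimated MSE.  Here the indirect
    estimator is treated as (nearly) variance-free, so w = b^2 / (b^2 + V)."""
    w = sq_bias_indirect / (sq_bias_indirect + var_direct)
    return w * direct + (1.0 - w) * indirect

# Hypothetical area-level inputs
direct = np.array([120.0, 95.0, 210.0])          # direct survey estimates
indirect = np.array([110.0, 100.0, 190.0])       # synthetic/model estimates
var_direct = np.array([400.0, 900.0, 250.0])     # sampling variances
sq_bias = np.array([150.0, 150.0, 150.0])        # common squared-bias estimate

print(composite(direct, indirect, var_direct, sq_bias))
```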

Relevance:

30.00%

Publisher:

Abstract:

Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near-optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
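
The sketch below is only a simplified caricature of the outer loop of such a procedure: split the data, pick one candidate rule per model class on one half, and select the class minimizing empirical risk on the other half plus a complexity penalty. The crude parameter-count penalty used here stands in for the paper's data-based covering-number complexity, which is considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + rng.normal(0, 0.3, x.size)

# Model classes F_k = polynomials of degree k.  Simplified illustration of
# complexity-penalized selection; the penalty below is an assumed stand-in,
# not the empirical-cover-based complexity of the paper.
half = x.size // 2
x1, y1, x2, y2 = x[:half], y[:half], x[half:], y[half:]

best = None
for k in range(1, 9):
    coef = np.polyfit(x1, y1, k)                        # candidate rule, first half
    risk = np.mean((np.polyval(coef, x2) - y2) ** 2)    # empirical risk, second half
    penalty = 0.05 * (k + 1) / half                     # assumed complexity penalty
    score = risk + penalty
    if best is None or score < best[0]:
        best = (score, k, coef)

print(f"selected model class: polynomial degree {best[1]}")
```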

Relevance:

30.00%

Publisher:

Abstract:

Revenue management (RM) is a complicated business process that can best be described as control of sales (using prices, restrictions, or capacity), usually using software as a tool to aid decisions. RM software can play a mere informative role, supplying analysts with formatted and summarized data that they use to make control decisions (setting a price or allocating capacity for a price point), or, at the other extreme, play a deeper role, automating the decision process completely. The RM models and algorithms in the academic literature by and large concentrate on the latter, completely automated, level of functionality. A firm considering using a new RM model or RM system needs to evaluate its performance. Academic papers justify the performance of their models using simulations, where customer booking requests are simulated according to some process and model, and the revenue performance of the algorithm is compared to an alternate set of algorithms. Such simulations, while an accepted part of the academic literature, and indeed providing research insight, often lack credibility with management. Even methodologically, they are usually flawed, as the simulations only test "within-model" performance, and say nothing as to the appropriateness of the model in the first place. Even simulations that test against alternate models or competition are limited by their inherent necessity of fixing some model as the universe for their testing. These problems are exacerbated with RM models that attempt to model customer purchase behavior or competition, as the right models for competitive actions or customer purchases remain somewhat of a mystery, or at least there is no consensus on their validity. How then to validate a model? Putting it another way, we want to show that a particular model or algorithm is the cause of a certain improvement to the RM process compared to the existing process. We take care to emphasize that we want to prove the said model is the cause of performance, and to compare against an (incumbent) process rather than against an alternate model. In this paper we describe a "live" testing experiment that we conducted at Iberia Airlines on a set of flights. A set of competing algorithms controlled a set of flights during adjacent weeks, and their behavior and results were observed over a relatively long period of time (9 months). In parallel, a group of control flights were managed using the traditional mix of manual and algorithmic control (the incumbent system). Such "sandbox" testing, while common at many large internet search and e-commerce companies, is relatively rare in the revenue management area. Sandbox testing has an indisputable model of customer behavior, but the experimental design and analysis of results is less clear. In this paper we describe the philosophy behind the experiment, the organizational challenges, the design and setup of the experiment, and outline the analysis of the results. This paper is a complement to a (more technical) related paper that describes the econometrics and statistical analysis of the results.

Relevance:

30.00%

Publisher:

Abstract:

In this paper I explore the issue of nonlinearity (both in the data generation process and in the functional form that establishes the relationship between the parameters and the data) regarding the poor performance of the Generalized Method of Moments (GMM) in small samples. To this purpose I build a sequence of models starting with a simple linear model and enlarging it progressively until I approximate a standard (nonlinear) neoclassical growth model. I then use simulation techniques to find the small sample distribution of the GMM estimators in each of the models.
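
A minimal version of this kind of Monte Carlo exercise is sketched below for the simplest case: a linear model with an endogenous regressor estimated by two-step GMM, repeated over many small samples to trace out the estimator's finite-sample distribution. The data-generating values are arbitrary and the sketch does not reproduce the paper's sequence of models.

```python
import numpy as np

rng = np.random.default_rng(3)

def gmm_iv(y, x, z):
    """Two-step GMM for the moment condition E[z (y - x*beta)] = 0 with a
    single regressor x and instrument matrix z (n x k)."""
    n = z.shape[0]
    zx, zy = z.T @ x, z.T @ y
    W = np.linalg.inv(z.T @ z / n)                  # first-step weight matrix
    beta1 = (zx @ W @ zy) / (zx @ W @ zx)
    u = y - x * beta1
    S = (z * u[:, None]).T @ (z * u[:, None]) / n   # moment covariance estimate
    W2 = np.linalg.inv(S)
    return (zx @ W2 @ zy) / (zx @ W2 @ zx)          # second-step estimate

def simulate(n=50, beta=1.0, reps=2000):
    """Small-sample distribution of the GMM estimator (arbitrary DGP values)."""
    est = np.empty(reps)
    for r in range(reps):
        z = rng.normal(size=(n, 2))                 # two exogenous instruments
        v = rng.normal(size=n)
        x = z @ np.array([0.5, 0.5]) + v            # endogenous regressor
        y = beta * x + 0.8 * v + rng.normal(size=n) # error correlated with x
        est[r] = gmm_iv(y, x, z)
    return est

draws = simulate()
print(f"n=50: mean {draws.mean():.3f}, sd {draws.std():.3f} (true beta = 1.0)")
```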

Relevance:

30.00%

Publisher:

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
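
The maximal-discrepancy penalty is easy to illustrate for a small hypothesis class. The sketch below evaluates it for decision stumps by direct search over the class, taking the maximal difference between the errors on the two halves of the data; for larger classes this search is what the label-flipping empirical risk minimization mentioned above replaces. The data and the stump class are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0, 1, 200)
y = ((x > 0.5) ^ (rng.uniform(size=200) < 0.15)).astype(int)  # noisy labels

def stump_predictions(x, threshold, sign):
    """Decision stump: predict 1 if sign*(x - threshold) > 0, else 0."""
    return (sign * (x - threshold) > 0).astype(int)

def maximal_discrepancy(x, y):
    """Max over the stump class of (error on first half - error on second
    half); equivalent to ERM on the data with the labels of one half
    flipped, as noted in the abstract.  Sketch only."""
    half = x.size // 2
    best = 0.0
    for threshold in np.unique(x):
        for sign in (-1, 1):
            pred = stump_predictions(x, threshold, sign)
            e1 = np.mean(pred[:half] != y[:half])
            e2 = np.mean(pred[half:] != y[half:])
            best = max(best, e1 - e2)
    return best

penalty = maximal_discrepancy(x, y)
print(f"maximal discrepancy penalty for the stump class: {penalty:.3f}")
```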