952 results for user testing, usability testing, system integration, thinking aloud, card sorting
Abstract:
This paper extends previous results on optimal insurance trading in the presence of a stock market that allows continuous asset trading and substantial personal heterogeneity, and applies those results in a context of asymmetric information, with reference to the role of genetic testing in insurance markets. We find a novel and surprising result under symmetric information: agents may optimally prefer to purchase full insurance despite the presence of unfairly priced insurance contracts and other assets which are correlated with insurance. Asymmetric information has a Hirshleifer-type effect which can be solved by suspending insurance trading. Nevertheless, agents can attain their first-best allocations, which suggests that the practice of restricting insurance not to be contingent on genetic tests can be efficient.
Abstract:
This paper illustrates the philosophy which forms the basis of calibration exercises in general equilibrium macroeconomic models and the details of the procedure, the advantages and the disadvantages of the approach, with particular reference to the issue of testing "false" economic models. We provide an overview of the most recent simulation-based approaches to the testing problem and compare them to standard econometric methods used to test the fit of non-linear dynamic general equilibrium models. We illustrate how simulation-based techniques can be used to formally evaluate the fit of a calibrated model to the data and obtain ideas on how to improve the model design, using a standard problem in the international real business cycle literature, i.e. whether a model with complete financial markets and no restrictions to capital mobility is able to reproduce the second-order properties of aggregate saving and aggregate investment in an open economy.
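Since the abstract describes the general logic of simulation-based evaluation of a calibrated model, a minimal Python sketch may help fix ideas: locate the second-order properties of saving and investment in the data within their distribution across repeated model simulations. The simulator, the data, and all names below are hypothetical placeholders, not the paper's procedure.

import numpy as np

def second_order_stats(saving, investment):
    # Second-order properties: volatilities and the saving-investment correlation.
    return np.array([np.std(saving), np.std(investment),
                     np.corrcoef(saving, investment)[0, 1]])

def simulated_moment_pvalues(simulate_model, data_saving, data_investment,
                             n_draws=500, seed=0):
    # Compare the data moments with the distribution of moments implied by the
    # calibrated model across many simulated samples of equal length.
    rng = np.random.default_rng(seed)
    data_m = second_order_stats(data_saving, data_investment)
    sims = np.array([second_order_stats(*simulate_model(len(data_saving), rng))
                     for _ in range(n_draws)])
    lower = (sims <= data_m).mean(axis=0)
    # Two-sided simulation-based "p-value" for each moment.
    return np.minimum(lower, 1.0 - lower) * 2.0

# Placeholder data and model, purely to make the sketch executable.
rng = np.random.default_rng(1)
data_s, data_i = rng.standard_normal(120), rng.standard_normal(120)
toy_model = lambda n, g: (g.standard_normal(n), g.standard_normal(n))
print(simulated_moment_pvalues(toy_model, data_s, data_i))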
Abstract:
Revenue management (RM) is a complicated business process that can best be described as control of sales (using prices, restrictions, or capacity), usually using software as a tool to aid decisions. RM software can play a merely informative role, supplying analysts with formatted and summarized data which they use to make control decisions (setting a price or allocating capacity for a price point), or, at the other extreme, play a deeper role, automating the decision process completely. The RM models and algorithms in the academic literature by and large concentrate on the latter, completely automated, level of functionality.

A firm considering using a new RM model or RM system needs to evaluate its performance. Academic papers justify the performance of their models using simulations, where customer booking requests are simulated according to some process and model, and the revenue performance of the algorithm is compared to an alternate set of algorithms. Such simulations, while an accepted part of the academic literature, and indeed providing research insight, often lack credibility with management. Even methodologically, they are usually flawed, as the simulations only test "within-model" performance, and say nothing as to the appropriateness of the model in the first place. Even simulations that test against alternate models or competition are limited by their inherent reliance on fixing some model as the universe for their testing. These problems are exacerbated with RM models that attempt to model customer purchase behavior or competition, as the right models for competitive actions or customer purchases remain somewhat of a mystery, or at least with no consensus on their validity.

How then to validate a model? Putting it another way, we want to show that a particular model or algorithm is the cause of a certain improvement to the RM process compared to the existing process. We take care to emphasize that we want to prove the said model as the cause of performance, and to compare against an (incumbent) process rather than against an alternate model.

In this paper we describe a "live" testing experiment that we conducted at Iberia Airlines on a set of flights. A set of competing algorithms controlled a set of flights during adjacent weeks, and their behavior and results were observed over a relatively long period of time (9 months). In parallel, a group of control flights was managed using the traditional mix of manual and algorithmic control (the incumbent system). Such "sandbox" testing, while common at many large internet search and e-commerce companies, is relatively rare in the revenue management area. Sandbox testing has an indisputable model of customer behavior, but the experimental design and analysis of results is less clear. In this paper we describe the philosophy behind the experiment, the organizational challenges, the design and setup of the experiment, and outline the analysis of the results. This paper is a complement to a (more technical) related paper that describes the econometrics and statistical analysis of the results.
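For readers unfamiliar with this kind of live experiment, the Python sketch below shows one simple analysis of an alternating-weeks design: a paired comparison of weekly revenue between candidate and incumbent control. The numbers are simulated placeholders; the paper's actual econometric analysis is in the companion paper it mentions.

import numpy as np
from scipy import stats

# Hypothetical weekly revenues (EUR thousands): in an alternating-weeks design,
# the candidate algorithm and the incumbent control comparable flights in
# adjacent weeks, so a paired comparison is natural. Numbers are simulated.
rng = np.random.default_rng(42)
incumbent = rng.normal(100.0, 8.0, size=36)              # roughly 9 months of weeks
candidate = incumbent + rng.normal(2.0, 4.0, size=36)    # candidate-controlled weeks

t_stat, p_val = stats.ttest_rel(candidate, incumbent)
lift = (candidate.mean() / incumbent.mean() - 1.0) * 100.0
print(f"mean revenue lift: {lift:.1f}%, paired t = {t_stat:.2f}, p = {p_val:.3f}")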
Abstract:
BACKGROUND: Since the advent of combined antiretroviral therapy (ART), the incidence of non-AIDS-defining cancers (non-ADCs) among HIV-positive patients is rising. We previously described HIV testing rates of <5% in our oncology centre, against a local HIV prevalence of 0.4% (1). We have since worked with the Service of Oncology to identify how HIV testing can be optimized, and have conducted the Investigating Barriers In HIV-Testing Oncology Patients (IBITOP) study among treating oncologists and their patients. METHODS: After an initial two-month pilot study to examine feasibility (2), we conducted the first phase of the IBITOP study between 1st July and 31st October 2013. Patients of unknown HIV status, newly diagnosed with solid-organ non-AIDS-defining cancer, and treated at Lausanne University Hospital were invited to participate. Patients were offered HIV testing as part of their initial oncology work-up. Oncologist testing proposals and patient acceptance were the primary endpoints. RESULTS: Of 235 patients with a new oncology diagnosis, 10 were excluded (7 with ADCs and 3 of known HIV-positive status). Mean age was 62 years; 48% were men and 71% were Swiss. Of 225 patients, 75 (33%) were offered HIV testing. Of these, 56 (75%) accepted, of whom 52 (93%) were tested. A further ten patients were tested (without documentation of being offered a test), giving a total testing rate of 28% (62/225). Among the 19 patients who declined testing, reasons cited included self-perceived absence of HIV risk, previous testing and palliative care. Of the 140 patients not offered HIV testing and not tested, reasons were documented for 35 (25%), the most common being previous testing and follow-up elsewhere. None of the 62 patients HIV tested had a reactive test. CONCLUSIONS: In this study, one third of patients seen were offered testing, and the HIV testing rate was fivefold higher than that previously observed in this service. Most patients accepted testing when offered. As HIV-positive status impacts on the medical management of cancer patients, we recommend that HIV screening be performed in settings where HIV prevalence is >0.1%. Phase II of the IBITOP study is now underway to explore barriers to HIV screening among oncologists and patients following the updated national HIV testing guidelines, which recommend testing in non-ADC patients undergoing chemotherapy.
Abstract:
One plausible mechanism through which financial market shocks may propagate across countries is through the impact that past gains and losses may have on investors' risk aversion and behavior. This paper presents a stylized model illustrating how heterogeneous changes in investors' risk aversion affect portfolio allocation decisions and stock prices. Our empirical findings suggest that when funds' returns are below average, they adjust their holdings toward the average (or benchmark) portfolio. In so doing, funds tend to sell the assets of countries in which they were overweight, increasing their exposure to countries in which they were underweight. Based on this insight, the paper constructs an index of financial interdependence which reflects the extent to which countries share overexposed funds. The index helps explain the pattern of stock market comovement across countries. Moreover, a comparison of this interdependence measure to indices of trade or commercial bank linkages indicates that our index can improve predictions about which countries are more likely to be affected by contagion from crisis centers.
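As a loose illustration of the idea of shared overexposed funds, the Python sketch below computes fund overweights relative to a benchmark and a pairwise country index from them. Both the holdings data and the formalization are invented for illustration; the paper's exact index construction is not given in the abstract.

import numpy as np

# Hypothetical holdings: weights[f, c] = share of fund f's portfolio in country c.
rng = np.random.default_rng(1)
weights = rng.dirichlet(np.ones(6), size=10)   # 10 funds, 6 countries
benchmark = weights.mean(axis=0)               # average portfolio as the benchmark

overweight = weights - benchmark               # positive entries: fund is overexposed
exposed = np.clip(overweight, 0.0, None)

# Countries i and j are linked to the extent that the same funds are overexposed
# to both; one plausible formalization, not the paper's exact index.
interdependence = exposed.T @ exposed
np.fill_diagonal(interdependence, 0.0)
print(np.round(interdependence, 4))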
Abstract:
Although it is commonly accepted that most macroeconomic variables are nonstationary, it is often difficult to identify the source of the non-stationarity. In particular, it is well known that integrated and short-memory models containing trending components that may display sudden changes in their parameters share some statistical properties that make their identification a hard task. The goal of this paper is to extend the classical testing framework for I(1) versus I(0) + breaks by considering a more general class of models under the null hypothesis: non-stationary fractionally integrated (FI) processes. A similar identification problem holds in this broader setting, which is shown to be a relevant issue from both a statistical and an economic perspective. The proposed test is developed in the time domain and is very simple to compute. The asymptotic properties of the new technique are derived, and it is shown by simulation that it is very well-behaved in finite samples. To illustrate the usefulness of the proposed technique, an application using inflation data is also provided.
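To illustrate the null class the test targets, the following Python sketch simulates a non-stationary fractionally integrated ARFIMA(0, d, 0) process via the standard recursion for its moving-average weights. This is background illustration only, not the paper's proposed test.

import numpy as np

def arfima_0d0(n, d, seed=0):
    # Simulate (1 - L)^d x_t = e_t by truncating the MA(infinity) expansion,
    # whose weights follow the recursion psi_k = psi_{k-1} * (k - 1 + d) / k.
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(n)
    psi = np.empty(n)
    psi[0] = 1.0
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1.0 + d) / k
    return np.convolve(e, psi)[:n]

# A non-stationary FI path (0.5 < d < 1) wanders much like an I(1) series or a
# short-memory series with breaks, which is exactly the identification problem.
x = arfima_0d0(500, d=0.8)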
The economic effects of the Protestant Reformation: Testing the Weber hypothesis in the German Lands
Abstract:
Many theories, most famously Max Weber's essay on the Protestant ethic, have hypothesized that Protestantism should have favored economic development. With their considerable religious heterogeneity and stability of denominational affiliations until the 19th century, the German Lands of the Holy Roman Empire present an ideal testing ground for this hypothesis. Using population figures in a dataset comprising 272 cities in the years 1300-1900, I find no effects of Protestantism on economic growth. The finding is robust to the inclusion of a variety of controls, and does not appear to depend on data selection or small sample size. In addition, Protestantism has no effect when interacted with other likely determinants of economic development. I also analyze the endogeneity of religious choice; instrumental variables estimates of the effects of Protestantism are similar to the OLS results.
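As background for the estimation strategy mentioned in the last sentence, here is a minimal two-stage least squares sketch in Python. The instrument, controls, and data-generating process are hypothetical placeholders, not the paper's data.

import numpy as np

# Simulated placeholder data: 272 "cities", a binary Protestantism indicator,
# two controls, and a hypothetical instrument z.
rng = np.random.default_rng(0)
n = 272
controls = rng.standard_normal((n, 2))
z = rng.standard_normal(n)
protestant = (z + rng.standard_normal(n) > 0).astype(float)
growth = controls @ np.array([0.3, -0.2]) + rng.standard_normal(n)  # true effect: 0

def two_sls(y, x_endog, exog, instr):
    # Stage 1: project the endogenous regressor on the instrument and controls.
    Z = np.column_stack([np.ones(len(y)), instr, exog])
    x_hat = Z @ np.linalg.lstsq(Z, x_endog, rcond=None)[0]
    # Stage 2: regress the outcome on fitted values and controls;
    # return the coefficient on the endogenous regressor.
    X = np.column_stack([np.ones(len(y)), x_hat, exog])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print(two_sls(growth, protestant, controls, z))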
Abstract:
Background: Bacteria form biofilms on the surface of orthopaedic devices, causing persistent infections. Monitoring biofilm formation on bone grafts and bone substitutes is challenging due to heterogeneous surface characteristics. We analyzed various bone grafts and bone substitutes regarding their propensity for in-vitro biofilm formation caused by S. aureus and S. epidermidis. Methods: Beta-tricalcium phosphate (b-TCP, ChronOs™), processed human spongiosa (Tutoplast™) and PMMA (Palacos™) were investigated. PE was added as a growth control. As test strains, S. aureus (ATCC 29213) and S. epidermidis RP62A (ATCC 35984) were used. Test materials were incubated with 10^5 cfu/ml. After 24 h, test materials were removed and washed, followed by a standardised sonication protocol. The resulting sonication fluid was plated, and bacterial counts were enumerated and expressed as cfu/sample. Sonicated samples were transferred to a microcalorimeter (TA Instruments) and heat flow monitored over a 24 h period with a precision of 0.0001°C and a sensitivity of 200 μW. Experiments were performed in triplicate to calculate the mean ± SD. One-way ANOVA was used for statistical analysis. Results: Bacterial counts (log10 cfu/sample) were highest on b-TCP (S. aureus 7.67 ± 0.17; S. epidermidis 8.14 ± 0.05), while bacterial density (log10 cfu/surface) was highest on PMMA (S. aureus 6.12 ± 0.2, S. epidermidis 7.65 ± 0.13). Detection time for S. aureus biofilms was shorter for the porous materials (b-TCP and Tutoplast, p <0.001) compared to the smooth materials (PMMA and PE), with no differences between b-TCP and Tutoplast™ (p >0.05) or PMMA and PE (p >0.05). In contrast, for S. epidermidis biofilms the detection time differed (p <0.001) between all materials except between Tutoplast and PE (p >0.05). Conclusion: Our results demonstrate biofilm formation with both strains on all tested materials. Microcalorimetry was able to quantitatively detect the amount of biofilm. Further studies are needed to see whether calorimetry is also a suitable tool to monitor approaches to prevent and treat infections associated with bone grafts and bone substitutes.
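The statistical analysis named in the abstract (means ± SD from triplicates, one-way ANOVA) can be sketched in a few lines of Python. The triplicate values below are hypothetical draws around the reported S. aureus mean for b-TCP; the other means are invented, since the abstract does not report them all.

import numpy as np
from scipy import stats

# Hypothetical triplicate log10 cfu/sample values per material.
rng = np.random.default_rng(7)
materials = {"b-TCP": (7.67, 0.17), "Tutoplast": (7.40, 0.20),
             "PMMA": (6.90, 0.20), "PE": (6.80, 0.20)}
samples = {m: rng.normal(mu, sd, size=3) for m, (mu, sd) in materials.items()}

f_stat, p_val = stats.f_oneway(*samples.values())
for m, x in samples.items():
    print(f"{m}: {x.mean():.2f} +/- {x.std(ddof=1):.2f}")
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")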
Abstract:
Miniature diffusion size classifiers (miniDiSC) are novel handheld devices to measure ultrafine particles (UFP). UFP have been linked to the development of cardiovascular and pulmonary diseases; thus, detection and quantification of these particles are important for evaluating their potential health hazards. As part of the UFP exposure assessments of highway maintenance workers in western Switzerland, we compared a miniDiSC with a portable condensation particle counter (P-TRAK). In addition, we performed stationary measurements with a miniDiSC and a scanning mobility particle sizer (SMPS) at a site immediately adjacent to a highway. Measurements with the miniDiSC and P-TRAK correlated well (r = 0.84), but average particle numbers of the miniDiSC were 30%–60% higher. This difference was significantly increased for mean particle diameters below 40 nm. The correlation between the miniDiSC and the SMPS during stationary measurements was very high (r = 0.98), although particle numbers from the miniDiSC were 30% lower. Differences between the three devices were attributed to the different cutoff diameters for detection. Correction for this size-dependent effect led to very similar results across all counters. We did not observe any significant influence of other particle characteristics. Our results suggest that the miniDiSC provides accurate particle number concentrations and geometric mean diameters at traffic-influenced sites, making it a useful tool for personal exposure assessment in such settings.
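A minimal sketch of the instrument comparison, assuming hypothetical paired readings (the study's raw data are not reproduced in the abstract): compute the Pearson correlation and the mean relative difference between the two counters.

import numpy as np

# Hypothetical paired particle-number readings (particles/cm^3).
rng = np.random.default_rng(3)
p_trak = rng.lognormal(mean=9.5, sigma=0.4, size=200)
minidisc = p_trak * rng.uniform(1.3, 1.6, size=200)   # 30-60% higher readings

r = np.corrcoef(minidisc, p_trak)[0, 1]
rel_diff = (minidisc / p_trak - 1.0).mean() * 100.0
print(f"r = {r:.2f}, mean relative difference = {rel_diff:.0f}%")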
Abstract:
The aim of the present study was to investigate the relative importance of flooding- and confinement-related environmental features in explaining macroinvertebrate trait structure and diversity in a pool of wetlands located in a Mediterranean river floodplain. To test hypothesized trait-environment relationships, we employed a recently implemented statistical procedure, the fourth-corner method. We found that flooding-related variables, mainly pH and turbidity, were related to traits that confer an ability of the organism to resist flooding (e.g., small body shape, protection of eggs) or recuperate faster after flooding (e.g., short life span, asexual reproduction). In contrast, confinement-related variables, mainly temperature and organic matter, enhanced traits that allow organisms to interact and compete with other organisms (e.g., large size, sexual reproduction) and to efficiently use habitat and resources (e.g., diverse locomotion and feeding strategies). These results are in agreement with predictions made under the River Habitat Templet for lotic ecosystems, and demonstrate the ability of the fourth-corner method to test hypotheses that posit trait-environment relationships. Trait diversity was slightly higher in flooded than in confined sites, whereas trait richness was not significantly different. This suggests that although trait structure may change in response to the main environmental factors, as evidenced by the fourth-corner method, the number of life-history strategies needed to persist in the face of such constraints remains more or less constant; only their relative dominance differs.
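For readers unfamiliar with the fourth-corner method, the Python sketch below shows one common variant: a trait-environment association matrix R'LQ with significance assessed by permuting sites. It is a simplified illustration with random toy matrices; the paper's exact permutation model may differ.

import numpy as np

def fourth_corner(R, L, Q, n_perm=999, seed=0):
    # R: sites x environment, L: sites x species abundances, Q: species x traits.
    # The trait-environment link matrix is R'LQ (standardized); significance is
    # assessed by permuting site labels, one of the standard permutation models.
    rng = np.random.default_rng(seed)
    Rz = (R - R.mean(0)) / R.std(0)
    Qz = (Q - Q.mean(0)) / Q.std(0)
    W = L / L.sum()
    obs = Rz.T @ W @ Qz
    exceed = np.zeros_like(obs)
    for _ in range(n_perm):
        perm = rng.permutation(R.shape[0])
        exceed += np.abs(Rz[perm].T @ W @ Qz) >= np.abs(obs)
    return obs, (exceed + 1.0) / (n_perm + 1.0)

# Toy call with random matrices, just to show the shapes involved.
rng = np.random.default_rng(5)
obs, p = fourth_corner(rng.standard_normal((20, 3)),
                       rng.poisson(2.0, (20, 15)).astype(float),
                       rng.standard_normal((15, 4)))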
The cost of inappropriateness of coagulation testing [I costi dell'inappropriatezza in coagulazione]
Abstract:
Background. Laboratory utilization has steadily increased, with a corresponding increase in overall costs; several authors have attempted to measure the impact of inappropriateness on clinical outcomes, but data are insufficient. The aim of the study is to assess the cost of inappropriateness of test-ordering behaviour for second-level coagulation tests (hemorrhagic diathesis and thrombophilia). Methods. We reviewed all second-level coagulation test requests received by our department during a six-month period. Clinicians must fill out a specific order form for this kind of test, containing all information deemed necessary for the laboratory specialist to evaluate the appropriateness of the request. We identified all inappropriate requests and counted the numbers and types of all coagulation tests that were not performed during the period. An analysis of the laboratory activity costs was done in order to calculate the global cost of each test in our department and to estimate the savings achieved. Results. Of a total of 1664 second-level coagulation test requests, we estimated 150 as completely inappropriate. We found a total of 295 inappropriate tests which were not performed. This resulted in an economic saving of 20,000 euros in 6 months. Conclusions. The analysis of the cost of our intervention shows the urgent need for a definite and sustained reduction in inappropriate requests of second-level coagulation tests. Even though we estimated only the economic aspect of inappropriate testing, it is also associated with the overuse of diagnostic tests, which entails the risk of generating erroneous results with potentially negative consequences on patients' health.
Abstract:
This project introduces GNSS-SDR, an open source Global Navigation Satellite System software-defined receiver. The lack of reconfigurability of current commercial off-the-shelf receivers and the advent of new radionavigation signals and systems make software receivers an appealing approach for designing new architectures and signal processing algorithms. With the aim of exploring the full potential of this forthcoming scenario, with a plurality of new signal structures and frequency bands available for positioning, this paper describes the software architecture design and provides details about its implementation, targeting a multiband, multisystem GNSS receiver. The result is a testbed for GNSS signal processing that allows any kind of customization, including interchangeability of signal sources, signal processing algorithms, interoperability with other systems, output formats, and the offering of interfaces to all the intermediate signals, parameters and variables. The source code, released under the GNU General Public License (GPL), secures practical usability, inspection, and continuous improvement by the research community, allowing discussion based on tangible code and the analysis of results obtained with real signals. The source code is complemented by a development ecosystem, consisting of a website (http://gnss-sdr.org), a revision control system, instructions for users and developers, and communication tools. The project shows in detail the design of the initial blocks of the Signal Processing Plane of the receiver: the signal conditioner, the acquisition block and the receiver channel. The project also extends the functionality of the acquisition and tracking modules of the GNSS-SDR receiver to track the new Galileo E1 signals available. Each section provides a theoretical analysis, implementation details of each block and subsequent testing to confirm the calculations, with both synthetically generated signals and real signals from satellites in space.
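The acquisition block mentioned above is commonly implemented as a parallel code-phase search (PCPS), which correlates the incoming signal against a local code over a grid of Doppler candidates using FFTs. The Python sketch below is a generic illustration of that technique, not GNSS-SDR's actual implementation; the synthetic signal and all names are for demonstration only.

import numpy as np

def pcps_acquire(signal, code, fs, doppler_bins):
    # Parallel code-phase search: for each Doppler candidate, wipe off the
    # carrier and circularly correlate against the local PRN code via FFT.
    n = len(signal)
    t = np.arange(n) / fs
    code_fft = np.conj(np.fft.fft(code, n))
    best = (0.0, None, None)
    for fd in doppler_bins:
        wiped = signal * np.exp(-2j * np.pi * fd * t)          # carrier wipe-off
        corr = np.abs(np.fft.ifft(np.fft.fft(wiped) * code_fft)) ** 2
        if corr.max() > best[0]:
            best = (corr.max(), fd, int(corr.argmax()))        # power, Hz, samples
    return best

# Synthetic check: a code shifted by 1000 samples on a 2 kHz carrier should be
# found at that code phase and Doppler bin.
rng = np.random.default_rng(0)
code = np.sign(rng.standard_normal(4096))
fs, t = 4.096e6, np.arange(4096) / 4.096e6
sig = np.roll(code, 1000) * np.exp(2j * np.pi * 2000.0 * t)
print(pcps_acquire(sig, code, fs, np.arange(-5000.0, 5001.0, 500.0)))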