30 results for cold sensitivity
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Recently there has been renewed research interest in the properties of non-survey updates of input-output tables and social accounting matrices (SAM). Along with the venerable and well-known RAS scaling method, several alternative procedures based on entropy minimization and other metrics have been suggested, tested and used in the literature. Whether these procedures will eventually replace or merely complement the RAS approach is still an open question without a definite answer. The performance of many updating procedures has been tested using some kind of proximity or closeness measure to a reference input-output table or SAM. The first goal of this paper, in contrast, is to propose checking the operational performance of updating mechanisms by comparing the simulation results that ensue from adopting alternative databases for the calibration of a reference applied general equilibrium model. The second goal is to introduce a new updating procedure based on information retrieval principles. This new procedure is then compared, as far as performance is concerned, to two well-known updating approaches: RAS and cross-entropy. The rationale for the suggested cross-validation is that the driving force for having more up-to-date databases is to be able to conduct more current, and hopefully more credible, policy analyses.
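The RAS method named in this abstract is biproportional scaling: the prior table is alternately rescaled by rows and columns until it matches new margin totals. A minimal sketch, with an entirely illustrative 2×2 table (the paper's actual data and targets are not given here):

```python
import numpy as np

def ras_update(A0, row_targets, col_targets, tol=1e-9, max_iter=1000):
    """RAS (biproportional) update: find A = diag(r) @ A0 @ diag(s)
    whose row and column sums match the given targets."""
    A = A0.astype(float).copy()
    for _ in range(max_iter):
        r = row_targets / A.sum(axis=1)   # row scaling factors
        A = A * r[:, None]
        s = col_targets / A.sum(axis=0)   # column scaling factors
        A = A * s[None, :]
        if np.allclose(A.sum(axis=1), row_targets, atol=tol):
            break
    return A

# Illustrative prior table and new margins (must share the same grand total)
A0 = np.array([[10.0, 5.0], [3.0, 7.0]])
A = ras_update(A0, row_targets=np.array([18.0, 12.0]),
               col_targets=np.array([16.0, 14.0]))
```

For strictly positive tables this iteration converges to the same solution as cross-entropy minimization with the prior table as reference, which is why the two approaches are natural competitors.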
Abstract:
We conduct a sensitivity analysis of several estimators related to household income, to explore how some details of the definitions of the variables concerned influence the values of the common estimates, such as the mean, median and (poverty) rates. The purpose of this study is to highlight that some of the operational definitions entail an element of arbitrariness which leaves an undesirable stamp on the inferences made. The analyses use both a cross-sectional and a longitudinal (panel) component of the EU-SILC database.
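The poverty rates discussed with EU-SILC data are of the at-risk-of-poverty type: the share of people below a fraction (conventionally 60%) of the median income. A minimal sketch with entirely illustrative incomes, showing how the estimate moves with one definitional choice, the threshold share:

```python
import statistics

def at_risk_of_poverty_rate(incomes, threshold_share=0.6):
    """Share of individuals below a poverty line set as a fraction
    (conventionally 60%) of the median income."""
    line = threshold_share * statistics.median(incomes)
    return sum(1 for y in incomes if y < line) / len(incomes)

# Hypothetical annual incomes, not EU-SILC data
incomes = [8_000, 11_000, 15_000, 18_000, 21_000, 24_000, 30_000, 45_000]
rate_60 = at_risk_of_poverty_rate(incomes)       # 60%-of-median line
rate_50 = at_risk_of_poverty_rate(incomes, 0.5)  # a stricter line
```

Here the same sample yields a poverty rate of 25% or 12.5% depending only on the threshold convention, which is the kind of definitional arbitrariness the abstract points to.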
Abstract:
This paper analyses the impact of using different correlation assumptions between lines of business when estimating the risk-based capital reserve, the Solvency Capital Requirement (SCR), under Solvency II regulations. A case study is presented and the SCR is calculated according to the Standard Model approach. Alternatively, the requirement is then calculated using an Internal Model based on a Monte Carlo simulation of the net underwriting result at a one-year horizon, with copulas being used to model the dependence between lines of business. To address the impact of these model assumptions on the SCR we conduct a sensitivity analysis. We examine changes in the correlation matrix between lines of business and address the choice of copulas. Drawing on aggregate historical data from the Spanish non-life insurance market between 2000 and 2009, we conclude that modifications of the correlation and dependence assumptions have a significant impact on SCR estimation.
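The Internal Model described simulates per-line underwriting results joined by a copula and reads the SCR off the aggregate distribution. A minimal sketch using a Gaussian copula with normal marginals (the paper also considers other copulas; all parameter values below are hypothetical, not from the Spanish market data):

```python
import numpy as np

rng = np.random.default_rng(42)

def scr_monte_carlo(mu, sigma, corr, n_sims=100_000, alpha=0.995):
    """SCR sketch: simulate one-year net underwriting results per line
    of business under a Gaussian copula with normal marginals, then
    take the 99.5% loss quantile of the aggregate, net of its mean."""
    L = np.linalg.cholesky(corr)                  # imposes the dependence
    z = rng.standard_normal((n_sims, len(mu))) @ L.T
    total = (mu + sigma * z).sum(axis=1)          # aggregate result
    return total.mean() - np.quantile(total, 1 - alpha)

# Two hypothetical lines of business, results in EUR million
mu, sigma = np.array([5.0, 3.0]), np.array([2.0, 1.5])
scr_low  = scr_monte_carlo(mu, sigma, np.array([[1.0, 0.1], [0.1, 1.0]]))
scr_high = scr_monte_carlo(mu, sigma, np.array([[1.0, 0.9], [0.9, 1.0]]))
```

Raising the assumed correlation between the two lines inflates the capital requirement, which is the sensitivity the paper quantifies on real data.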
Abstract:
The electrocardiogram is the first readily available diagnostic tool for detecting infarction in clinical practice. Its value was established mainly by old anatomopathological studies. Cardiac magnetic resonance is currently the technique of choice for infarct detection. This study investigates the value of the electrocardiogram (sensitivity and specificity) for detecting infarcts of the anteroseptal area. Conclusion: the sensitivity and specificity of four electrocardiographic patterns of the anteroseptal area were assessed. Likewise, even when extensive Q waves are observed in the anterior leads, the necrosis is usually limited if VL is not affected.
Abstract:
PURPOSE: To study the effect of LASIK surgery on straylight and contrast sensitivity. METHODS: Twenty-eight patients were treated with LASIK. Visual quality was assessed before the operation and two months afterwards. RESULTS: Mean straylight and contrast sensitivity showed no change two months after the operation. Only one eye had a marked increase in straylight. Nine eyes showed a slight decrease in contrast sensitivity. Two complications were found. CONCLUSION: After LASIK, most patients (80%) had no complications and maintained their visual quality. A few patients (16%) had somewhat diminished visual quality. Very few (4%) had clinical complications with decreased visual quality.
Abstract:
One of the first useful products from the human genome will be a set of predicted genes. Besides its intrinsic scientific interest, the accuracy and completeness of this data set is of considerable importance for human health and medicine. Though progress has been made on computational gene identification in terms of both methods and accuracy evaluation measures, most of the sequence sets in which the programs are tested are short genomic sequences, and there is concern that these accuracy measures may not extrapolate well to larger, more challenging data sets. Given the absence of experimentally verified large genomic data sets, we constructed a semiartificial test set comprising a number of short single-gene genomic sequences with randomly generated intergenic regions. This test set, which should still present an easier problem than real human genomic sequence, mimics the approximately 200kb long BACs being sequenced. In our experiments with these longer genomic sequences, the accuracy of GENSCAN, one of the most accurate ab initio gene prediction programs, dropped significantly, although its sensitivity remained high. Conversely, the accuracy of similarity-based programs, such as GENEWISE, PROCRUSTES, and BLASTX was not affected significantly by the presence of random intergenic sequence, but depended on the strength of the similarity to the protein homolog. As expected, the accuracy dropped if the models were built using more distant homologs, and we were able to quantitatively estimate this decline. However, the specificities of these techniques are still rather good even when the similarity is weak, which is a desirable characteristic for driving expensive follow-up experiments. 
Our experiments suggest that though gene prediction will improve with every new protein that is discovered and through improvements in the current set of tools, we still have a long way to go before we can decipher the precise exonic structure of every gene in the human genome using purely computational methodology.
Abstract:
Functional RNA structures play an important role both in the context of noncoding RNA transcripts and as regulatory elements in mRNAs. Here we present a computational study to detect functional RNA structures within the ENCODE regions of the human genome. Since structural RNAs in general lack characteristic signals in primary sequence, comparative approaches evaluating evolutionary conservation of structures are most promising. We have used three recently introduced programs based either on phylogenetic stochastic context-free grammars (EvoFold) or on energy-directed folding (RNAz and AlifoldZ), yielding several thousand candidate structures (corresponding to ∼2.7% of the ENCODE regions). EvoFold has its highest sensitivity in highly conserved and relatively AU-rich regions, while RNAz favors slightly GC-rich regions, resulting in a relatively small overlap between methods. Comparison with the GENCODE annotation points to functional RNAs in all genomic contexts, with a slightly increased density in 3′-UTRs. While we estimate a significant false discovery rate of ∼50%–70%, many of the predictions can be further substantiated by additional criteria: 248 loci are predicted by both RNAz and EvoFold, and an additional 239 RNAz or EvoFold predictions are supported by the (more stringent) AlifoldZ algorithm. Five hundred seventy RNAz structure predictions fall into regions that show signs of selection pressure also at the sequence level (i.e., conserved elements). More than 700 predictions overlap with noncoding transcripts detected by oligonucleotide tiling arrays. One hundred seventy-five selected candidates were tested by RT-PCR in six tissues, and expression could be verified in 43 cases (24.6%).
Abstract:
In experiments with two-person sequential games we analyze whether responses to favorable and unfavorable actions depend on the elicitation procedure. In our hot treatment the second player responds to the first player's observed action, while in our cold treatment we follow the strategy method and have the second player decide on a contingent action for each and every possible first-player move, without first observing this move. Our analysis centers on the degree to which subjects deviate from the maximization of their pecuniary rewards as a response to others' actions. Our results show no difference in behavior between the two treatments. We also find evidence of the stability of subjects' preferences with respect to their behavior over time and to the consistency of their choices as first and second mover.
Abstract:
Low corporate taxes can help attract new firms. This is the main mechanism underpinning the standard 'race-to-the-bottom' view of tax competition. A recent theoretical literature has qualified this view by formalizing the argument that agglomeration forces can reduce firms' sensitivity to tax differentials across locations. We test this proposition using data on firm startups across Swiss municipalities. We find that, on average, high corporate income taxes do deter new firms, but that this relationship is significantly weaker in the most spatially concentrated sectors. Location choices of firms in sectors with an agglomeration intensity at the twentieth percentile of the sample distribution are estimated to be twice as responsive to a given difference in local corporate tax burdens as firms in sectors with an agglomeration intensity at the eightieth percentile. Hence, our analysis confirms the theoretical prediction: agglomeration economies can neutralize the impact of tax differentials on firms' location choices.
Abstract:
In this paper we propose a Pyramidal Classification Algorithm, which together with an appropriate aggregation index produces an indexed pseudo-hierarchy (in the strict sense) without inversions or crossings. The computer implementation of the algorithm makes it possible to carry out some simulation tests by Monte Carlo methods in order to study the efficiency and sensitivity of the pyramidal methods of the Maximum, Minimum and UPGMA. The results shown in this paper may help to choose between the three classification methods proposed, in order to obtain the classification that best fits the original structure of the population, provided we have a priori information concerning this structure.
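Of the three aggregation indices compared, UPGMA is the most standard: clusters are merged in order of distance, with inter-cluster distances updated as size-weighted averages. A minimal hierarchical (not pyramidal) sketch, with an illustrative 3-point distance matrix:

```python
import numpy as np

def upgma(D):
    """Minimal UPGMA sketch: repeatedly merge the two closest clusters,
    updating distances as size-weighted averages (illustrative only)."""
    n = D.shape[0]
    clusters = {i: [i] for i in range(n)}            # cluster id -> members
    dist = {(i, j): float(D[i, j])
            for i in range(n) for j in range(i + 1, n)}
    merges, next_id = [], n
    while len(clusters) > 1:
        (a, b), d = min(dist.items(), key=lambda kv: kv[1])
        size_a, size_b = len(clusters[a]), len(clusters[b])
        for c in clusters:
            if c in (a, b):
                continue
            dac = dist[(min(a, c), max(a, c))]
            dbc = dist[(min(b, c), max(b, c))]
            # UPGMA update: average distance weighted by cluster sizes
            dist[(c, next_id)] = (size_a * dac + size_b * dbc) / (size_a + size_b)
        clusters[next_id] = clusters.pop(a) + clusters.pop(b)
        dist = {k: v for k, v in dist.items() if a not in k and b not in k}
        merges.append((a, b, d))
        next_id += 1
    return merges

merges = upgma(np.array([[0, 1, 5], [1, 0, 4], [5, 4, 0]], dtype=float))
```

The pyramidal variant studied in the paper relaxes the hierarchy so that clusters may overlap along a linear order; that extension is not captured by this sketch.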
Abstract:
The criterion, based on thermodynamic theory, that the climatic system tends to extremize some function has prompted several studies. In particular, special attention has been devoted to the possibility that the climate reaches an extremal rate of planetary entropy production. Since both radiative and material effects contribute to total planetary entropy production, climatic simulations obtained at the extremal rates of total, radiative or material entropy production are of interest in order to elucidate which of the three extremal assumptions behaves most similarly to current data. In the present paper, these results have been obtained by applying a two-dimensional (2-D) horizontal energy balance box model with a few independent variables (surface temperature, cloud cover and material heat fluxes). In addition, climatic simulations for current conditions assuming a fixed cloud cover have been obtained. Finally, sensitivity analyses for both the variable- and fixed-cloud models have been carried out.
Abstract:
All ontogenetic stages of a life cycle are exposed to environmental conditions so that population persistence depends on the performance of both adults and offspring. Most studies analysing the influence of abiotic conditions on species performance have focussed on adults, while studies covering early life-history stages remain rare. We investigated the responses of early stages of two widely introduced ascidians, Styela plicata and Microcosmus squamiger, to different abiotic conditions. Stressors mimicked conditions in the habitats where both species can be found in their distributional ranges, and responses were related to the selection potential of their populations by analysing their genetic diversity. Four developmental stages (egg fertilisation, larval development, settlement, metamorphosis) were studied after exposure to high temperature (30°C), low salinities (26 and 22) and high copper concentrations (25, 50 and 100 µg/L). Although most stressors effectively led to failure of complete development (fertilisation through metamorphosis), fertilisation and larval development were the most sensitive stages. All the studied stressors affected the development of both species, though responses differed with stage and stressor. S. plicata was overall more resistant to copper, and some stages of M. squamiger to low salinities. No relationship was found between parental genetic composition and responses to stressors. We conclude that successful development can be prevented at several life-history stages, and therefore, it is essential to consider multiple stages when assessing species' abilities to tolerate stress. Moreover, we found that early development of these species cannot be completed under conditions prevailing where adults live. These populations must therefore recruit from elsewhere or reproduce during temporal windows of more benign conditions.
Alternatively, novel strategies or behaviours that increase overall reproductive success might be responsible for ensuring population survival.
Abstract:
Inductive-based devices integrated with Si technology for biodetection applications are characterized using simple resonant differential filter configurations. This has allowed corroboration of the viability of the proposed circuits, which stand out for their great simplicity, for microinductive signal conditioning in high-sensitivity sensor devices. Simulation of these simple circuits predicts sensitivities of the differential output voltage that can reach values in the range of 0.1-1 V/nH, depending on the coil parameters. These very high sensitivity values open the possibility of experimentally detecting extremely small inductance changes in the devices. For real microinductive devices, both the series resistance and parasitic capacitive components contribute to a decrease in the differential circuit sensitivity. Nevertheless, measurements performed using micro-coils fabricated with relatively high series resistance and coupling parasitic effects have allowed detection of changes in the range of 2 nH, which is compatible with biodetection applications with estimated detection limits below the picomolar range.
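The abstract gives no circuit values, so the V/nH figures cannot be reproduced here; still, the underlying effect is that a small inductance change detunes an LC resonance. A back-of-the-envelope sketch with purely hypothetical coil and capacitor values, showing the frequency shift produced by the 2 nH change cited:

```python
import math

def resonant_freq(L, C):
    """Resonant frequency f = 1 / (2*pi*sqrt(L*C)) of an LC tank."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative values only (not from the paper):
L0, C = 100e-9, 10e-12                   # 100 nH micro-coil, 10 pF
f0 = resonant_freq(L0, C)                # baseline resonance, ~159 MHz
df = resonant_freq(L0 + 2e-9, C) - f0    # shift for a 2 nH increase
```

Under these assumed values a 2 nH change shifts the resonance by over 1 MHz, a detuning that a differential filter can convert into the output-voltage change the paper measures.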