993 results for Quantitative estimates
Abstract:
Time perception is studied with subjective or semi-objective psychophysical methods. With subjective methods, observers provide quantitative estimates of duration and data depict the psychophysical function relating subjective duration to objective duration. With semi-objective methods, observers provide categorical or comparative judgments of duration and data depict the psychometric function relating the probability of a certain judgment to objective duration. Both approaches are used to study whether subjective and objective time run at the same pace or whether time flies or slows down under certain conditions. We analyze theoretical aspects affecting the interpretation of data gathered with the most widely used semi-objective methods, including single-presentation and paired-comparison methods. For this purpose, a formal model of psychophysical performance is used in which subjective duration is represented via a psychophysical function and the scalar property. This provides the timing component of the model, which is invariant across methods. A decisional component that varies across methods reflects how observers use subjective durations to make judgments and give the responses requested under each method. Application of the model shows that psychometric functions in single-presentation methods are uninterpretable because the various influences on observed performance are inextricably confounded in the data. In contrast, data gathered with paired-comparison methods permit separating out those influences. Prevalent approaches to fitting psychometric functions to data are also discussed and shown to be inconsistent with widely accepted principles of time perception, implicitly assuming instead that subjective time equals objective time and that observed differences across conditions do not reflect differences in perceived duration but criterion shifts. These analyses prompt evidence-based recommendations for best methodological practice in studies on time perception.
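As a concrete illustration of the kind of psychometric function discussed above, the following is a hedged sketch of fitting a cumulative-Gaussian psychometric function to duration-discrimination data from a paired-comparison task. The durations, response proportions, and the particular functional form are illustrative assumptions, not the paper's model or data.

```python
# Minimal sketch: fitting a psychometric function to paired-comparison data.
# All numbers and the cumulative-Gaussian form are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(d, pse, spread):
    """P("comparison judged longer") as a cumulative Gaussian of comparison duration d."""
    return norm.cdf(d, loc=pse, scale=spread)

# Comparison durations (ms) and observed proportions of "longer" responses (made up).
durations = np.array([400, 500, 600, 700, 800, 900, 1000], dtype=float)
p_longer  = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

(pse, spread), _ = curve_fit(psychometric, durations, p_longer, p0=[700.0, 100.0])
print(f"PSE ~ {pse:.0f} ms, JND-like spread ~ {spread:.0f} ms")
```

The point of estimating the function's location and slope separately is exactly the issue the abstract raises: only designs that disentangle the timing component from the decisional component allow these parameters to be interpreted as perceptual quantities.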
Abstract:
Ground deformation provides valuable insights into subsurface processes, with patterns reflecting the characteristics of the source at depth. In active volcanic sites, displacements can be observed in unrest phases; therefore, a correct interpretation is essential to assess the hazard potential. Inverse modeling is employed to obtain quantitative estimates of parameters describing the source. However, despite the robustness of the available approaches, a realistic imaging of these reservoirs is still challenging. While analytical models return quick but simplistic results, assuming an isotropic and elastic crust, more sophisticated numerical models, accounting for the effects of topographic loads, crust inelasticity and structural discontinuities, require much higher computational effort, and information about the crust rheology may be difficult to infer. All these approaches are based on a priori source shape constraints, which influence the reliability of the solution. In this thesis, we present a new approach aimed at overcoming the aforementioned limitations, modeling sources free of a priori shape constraints with the advantages of FEM simulations, but with a cost-efficient procedure. The source is represented as an assembly of elementary units, consisting of cubic elements of a regular FE mesh loaded with unitary stress tensors. The surface response due to each of the six stress tensor components is computed and linearly combined to obtain the total displacement field. In this way, the source can assume potentially any shape. Our tests prove the equivalence of the deformation fields due to our assembly and those of corresponding cavities with uniform boundary pressure. Our ability to simulate pressurized cavities in a continuum domain makes it possible to pre-compute surface responses, avoiding remeshing. A Bayesian trans-dimensional inversion algorithm implementing this strategy is developed. 3D Voronoi cells are used to sample the model domain, selecting the elementary units contributing to the source solution and those remaining inactive as part of the crust.
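The superposition step lends itself to a compact numerical formulation. The sketch below is a hedged illustration of that idea only; the array names, sizes, and random placeholder responses are assumptions, not the thesis implementation.

```python
# Sketch: total surface displacement as a linear combination of pre-computed
# responses of elementary cubic cells to unit stress components (illustrative).
import numpy as np

n_cells, n_comp, n_obs = 500, 6, 300                       # mesh cells, stress components, surface points
unit_response = np.random.rand(n_cells, n_comp, n_obs, 3)  # pre-computed FEM surface responses (ux, uy, uz)

active = np.zeros(n_cells, dtype=bool)
active[:40] = True                                         # cells selected as the source (e.g. via Voronoi sampling)
stress = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])          # unit isotropic pressure (normal components only)

# Sum over active cells and the six stress components; no remeshing is required
# because the unit responses were computed once on the regular FE mesh.
u_total = np.einsum('j,cjok->ok', stress, unit_response[active])
print(u_total.shape)   # (n_obs, 3)
```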
Abstract:
High-resolution quantitative computed tomography (HRQCT)-based analysis of spinal bone density and microstructure, finite element analysis (FEA), and DXA were used to investigate the vertebral bone status of men with glucocorticoid-induced osteoporosis (GIO). DXA of L1–L3 and total hip, QCT of L1–L3, and HRQCT of T12 were available for 73 men (54.6 ± 14.0 years) with GIO. Prevalent vertebral fracture status was evaluated on radiographs using a semi-quantitative (SQ) score (normal = 0 to severe fracture = 3) and the spinal deformity index (SDI) score (sum of SQ scores of T4 to L4 vertebrae). Thirty-one (42.4%) subjects had prevalent vertebral fractures. Cortical BMD (Ct.BMD) and thickness (Ct.Th), trabecular BMD (Tb.BMD), apparent trabecular bone volume fraction (app.BV/TV), and apparent trabecular separation (app.Tb.Sp) were analyzed by HRQCT. Stiffness and strength of T12 were computed by HRQCT-based nonlinear FEA for axial compression, anterior bending and axial torsion. In logistic regressions adjusted for age, glucocorticoid dose and osteoporosis treatment, Tb.BMD was most closely associated with vertebral fracture status (standardized odds ratio [sOR]: Tb.BMD T12: 4.05 [95% CI: 1.8–9.0], Tb.BMD L1–L3: 3.95 [1.8–8.9]). Strength divided by cross-sectional area for axial compression showed the most significant association with spine fracture status among FEA variables (2.56 [1.29–5.07]). SDI was best predicted by a microstructural model using Ct.Th and app.Tb.Sp (r² = 0.57, p < 0.001). Spinal or hip DXA measurements did not show significant associations with fracture status or severity. In this cross-sectional study of males with GIO, QCT and HRQCT-based measurements and FEA variables were superior to DXA in discriminating between patients of differing prevalent vertebral fracture status. A microstructural model combining aspects of cortical and trabecular bone reflected fracture severity most accurately.
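For readers less familiar with standardized odds ratios, here is a hedged sketch of how a covariate-adjusted sOR per SD decrease can be obtained from a logistic regression. The simulated data, variable names, and effect sizes are placeholders, not the study data or its exact adjustment set.

```python
# Sketch: standardized odds ratio (per SD decrease) from an adjusted logistic
# regression; data below are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 73
tb_bmd = rng.normal(120, 30, n)                               # trabecular BMD (mg/cm^3), simulated
age = rng.normal(55, 14, n)
fracture = (rng.random(n) < 1 / (1 + np.exp(0.03 * (tb_bmd - 110)))).astype(int)

z = -(tb_bmd - tb_bmd.mean()) / tb_bmd.std()                  # standardized, sign flipped -> OR per SD DECREASE
X = sm.add_constant(np.column_stack([z, age]))                # adjust for age (other covariates analogous)
fit = sm.Logit(fracture, X).fit(disp=0)
print("sOR per SD decrease of Tb.BMD:", np.exp(fit.params[1]))
```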
Abstract:
The present study investigated the influence of wrinkles on facial age judgments. In Experiment 1, preadolescents, young adults, and middle-aged adults made categorical age judgments for male and female faces. The qualitative (type of wrinkle) and quantitative (density of wrinkles and depth of furrows) contributions of wrinkles were analyzed. Results indicated that the greater the number of wrinkles and the depth of furrows, the older a face was rated. The roles of the gender of the face and the age of the participants were discussed. In Experiment 2, participants performed relative age judgments by comparing pairs of faces. Results revealed that the number of wrinkles had more influence on the perceived facial age than the type of wrinkle. A MDS analysis showed the main dimensions on which participants based their judgments, namely, the number of wrinkles and the depth of furrows. We conclude that the quantitative component is more likely to increase perceived facial age. Nevertheless, other variables, such as the gender of the face and the age of the participants, also seem to be involved in the age estimation process.
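As a hedged illustration of the MDS step mentioned above, the sketch below embeds a matrix of pairwise dissimilarities (which, in a study like this, could be derived from relative age judgments on face pairs) into two dimensions. The dissimilarity matrix here is random filler, not the experimental data.

```python
# Sketch: classical MDS on a symmetric dissimilarity matrix (illustrative data).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
n_faces = 12
d = rng.random((n_faces, n_faces))
dissim = (d + d.T) / 2                     # symmetrize
np.fill_diagonal(dissim, 0.0)

coords = MDS(n_components=2, dissimilarity='precomputed',
             random_state=0).fit_transform(dissim)
print(coords.shape)                        # (12, 2): each face placed on two perceptual dimensions
```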
Abstract:
Whether contemporary human populations are still evolving as a result of natural selection has been hotly debated. For natural selection to cause evolutionary change in a trait, variation in the trait must be correlated with fitness and be genetically heritable, and there must be no genetic constraints to evolution. These conditions have rarely been tested in human populations. In this study, data from a large twin cohort were used to assess whether selection will cause a change among women in a contemporary Western population for three life-history traits: age at menarche, age at first reproduction, and age at menopause. We control for temporal variation in fecundity (the baby boom phenomenon) and differences between women in educational background and religious affiliation. University-educated women have 35% lower fitness than those with less than seven years of education, and Roman Catholic women have about 20% higher fitness than those of other religions. Although these differences were significant, education and religion only accounted for 2% and 1% of variance in fitness, respectively. Using structural equation modeling, we reveal significant genetic influences on all three life-history traits, with heritability estimates of 0.50, 0.23, and 0.45, respectively. However, strong genetic covariation with reproductive fitness could only be demonstrated for age at first reproduction, with much weaker covariation for age at menopause and no significant covariation for age at menarche. Selection may, therefore, lead to the evolution of earlier age at first reproduction in this population. We also estimate substantial heritable variation in fitness itself, with approximately 39% of the variance attributable to additive genetic effects, the remainder consisting of unique environmental effects and small effects of education and religion. We discuss mechanisms that could be maintaining such a high heritability for fitness. The most likely explanation is that selection is now acting on different traits than it did in pre-industrial human populations.
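As hedged, textbook-level background for the quantitative-genetic claims above (not the paper's exact structural equation model), the following relations summarize how heritability and the additive genetic covariance with relative fitness determine a predicted response to selection:

```latex
% Illustrative background relations only (standard quantitative genetics).
h^2 = \frac{V_A}{V_P} = \frac{V_A}{V_A + V_E},
\qquad
\hat{h}^2_{\text{twin}} \approx 2\,(r_{MZ} - r_{DZ}),
\qquad
\Delta \bar{z} = \sigma_A(z, w),
```

where V_A and V_E are additive genetic and environmental variance components, r_MZ and r_DZ are monozygotic and dizygotic twin correlations, and σ_A(z, w) is the additive genetic covariance between trait z and relative fitness w (Robertson's secondary theorem). This is why only age at first reproduction, the trait with strong genetic covariation with fitness, is predicted to evolve.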
Abstract:
The corporate world is becoming increasingly competitive. This leads organisations to adapt by adopting more efficient processes, which reduce costs and increase product quality. One of these processes consists of preparing proposals for clients, which necessarily include a cost estimation of the project. This estimation is the main focus of this project. In particular, one of the goals is to evaluate which estimation models best fit the Altran Portugal software factory, the organisation where the fieldwork of this thesis will be carried out. There is no broad agreement about which type of estimation model is most suitable for software projects. In contexts where plenty of objective information is available to be used as input to an estimation model, model-based methods usually yield better results than expert judgement. More frequently, however, this volume and quality of information is not available, which has a negative impact on the performance of model-based methods and favours the use of expert judgement. In practice, most organisations use expert judgement, making themselves dependent on the expert. A common problem is that the accuracy of an expert's estimate depends on his or her previous experience with similar projects. This means that when new types of projects arrive, the estimates will have unpredictable accuracy. Moreover, different experts will make different estimates, based on their individual experience. As a result, the company will not directly build up a continuously growing body of knowledge about how estimates should be produced. Estimation models depend on the input information collected from previous projects, the size of the project database and the resources available. Altran currently does not store the input information from previous projects in a systematic way; it has a small project database and a team of experts. Our work is targeted at companies that operate in similar contexts. We start by gathering information from the organisation in order to identify which estimation approaches can be applied considering the organisation's context. A gap analysis is used to understand what type of information the company would have to collect so that other approaches would become available. Based on our assessment, expert judgement is, in our opinion, the most adequate approach for Altran Portugal in the current context. We analysed past development and evolution projects from Altran Portugal and assessed their estimates. This resulted in the identification of common estimation deviations, errors, and patterns, which led to the proposal of metrics to help estimators produce estimates that leverage past projects' quantitative and qualitative information in a convenient way. This dissertation aims to contribute to more realistic estimates by identifying shortcomings in the current estimation process and supporting the self-improvement of the process, by gathering as much relevant information as possible from each finished project.
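One concrete way to compare past estimates with actual outcomes, of the kind such metrics could build on, is sketched below. The effort figures are illustrative placeholders, and the metrics (MRE, MMRE, PRED(25)) are standard accuracy measures from the estimation literature rather than the dissertation's specific proposal.

```python
# Sketch: standard estimation-accuracy metrics over past projects (toy figures).
estimates = [120, 300, 80, 500]     # estimated effort (person-days), illustrative
actuals   = [150, 280, 120, 610]    # actual effort, illustrative

mre = [abs(a - e) / a for e, a in zip(estimates, actuals)]    # magnitude of relative error per project
mmre = sum(mre) / len(mre)                                    # mean MRE across projects
pred25 = sum(m <= 0.25 for m in mre) / len(mre)               # share of estimates within 25% of actual
print(f"MMRE = {mmre:.2f}, PRED(25) = {pred25:.0%}")
```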
Abstract:
It is well known that, unless worker-firm match quality is controlled for, returns to firm tenure (RTT) estimated directly via reduced-form wage (Mincer) equations will be biased. In this paper we show that even if match quality is properly controlled for, there is a further pervasive source of bias, namely the co-movement of firm employment and firm wages. In a simple mechanical model where human capital is absent and separation is exogenous, we show that positively covarying shocks (either aggregate or firm-level) to firm employment and wages cause downward bias in OLS regression estimates of RTT. We show that the long-established procedures for dealing with "traditional" RTT bias do not circumvent the additional problem we have identified. We argue that if a reduced-form estimation of RTT is undertaken, firm-year fixed effects must be added in order to eliminate this bias. Estimates from two large panel datasets from Portugal and Germany show that the bias is empirically important. Adding firm-year fixed effects to the regression increases estimates of RTT in the two respective countries by between 3.5% and 4.5% of wages at 20 years of tenure, over 80% (50%) of the estimated RTT level itself. The results extend to tenure correlates used in macroeconomics, such as the minimum unemployment rate since joining the firm. Adding firm-year fixed effects also changes estimates of these effects.
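To make the argument concrete, here is a hedged sketch of the kind of reduced-form wage equation at issue, augmented with the firm-year fixed effect the paper argues for; the notation is illustrative, not the paper's exact specification:

```latex
% Illustrative reduced-form wage equation with match and firm-year effects.
\ln w_{ijt} \;=\; \beta_1\,\mathrm{Ten}_{ijt} \;+\; \beta_2\,\mathrm{Exp}_{it}
              \;+\; \mu_{ij} \;+\; \theta_{jt} \;+\; \varepsilon_{ijt},
```

where μ_ij absorbs worker-firm match quality, θ_jt absorbs co-moving firm-level employment and wage shocks in year t, and β_1 is the return to tenure. Omitting θ_jt while employment and wages covary positively is the source of the downward bias described above.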
Abstract:
The World Health Organization (WHO) criteria for the diagnosis of osteoporosis are mainly applicable to dual X-ray absorptiometry (DXA) measurements at the spine and hip levels. There is a growing demand for cheaper devices free of ionizing radiation, such as the promising quantitative ultrasound (QUS). In common with many other countries, QUS measurements are increasingly used in Switzerland without adequate clinical guidelines. The T-score approach developed for DXA cannot be applied to QUS, although well-conducted prospective studies have shown that ultrasound could be a valuable predictor of fracture risk. As a consequence, an expert committee named the Swiss Quality Assurance Project (SQAP), whose main mission is the establishment of quality assurance procedures for DXA and QUS in Switzerland, was mandated by the Swiss Association Against Osteoporosis (ASCO) in 2000 to propose operational clinical recommendations for the use of QUS in the management of osteoporosis for two QUS devices sold in Switzerland. Device-specific weighted "T-scores", based on the risk of osteoporotic hip fractures as well as on the prediction of DXA osteoporosis at the hip according to the WHO definition of osteoporosis, were calculated for the Achilles (Lunar, General Electric, Madison, Wis.) and Sahara (Hologic, Waltham, Mass.) ultrasound devices. Several studies (totaling a few thousand subjects) were used to calculate age-adjusted odds ratios (OR) and the area under the receiver operating curve (AUC) for the prediction of osteoporotic fracture (taking into account a weighting score depending on the design of the study involved in the calculation). The ORs were 2.4 (1.9-3.2) and the AUC 0.72 (0.66-0.77) for the Achilles, and 2.3 (1.7-3.1) and 0.75 (0.68-0.82), respectively, for the Sahara device. To translate risk estimates into thresholds for clinical application, 90% sensitivity was used to define low fracture and low osteoporosis risk, and a specificity of 80% was used to define subjects as being at high risk of fracture or having osteoporosis at the hip. From the combination of the fracture model with the hip DXA osteoporosis model, we found T-score thresholds of -1.2 and -2.5 for the stiffness index (Achilles), identifying, respectively, the low- and high-risk subjects. Similarly, we found T-scores of -1.0 and -2.2 for the QUI index (Sahara). A screening strategy combining QUS, DXA, and clinical factors for the identification of women needing treatment was then proposed. The application of this approach will help to minimize the inappropriate use of QUS from which the whole field currently suffers.
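A hedged sketch of how such device-specific thresholds might be applied is given below. The T-score formula is standard, but the young-adult reference mean and SD are made-up placeholders (real values are device- and population-specific), and the triage wording is illustrative rather than the ASCO recommendation.

```python
# Sketch: applying device-specific T-score thresholds (reference values are
# placeholders; thresholds for the Achilles stiffness index come from the text).
def t_score(value, ref_mean, ref_sd):
    """T-score: deviation from a young-adult reference mean in reference SD units."""
    return (value - ref_mean) / ref_sd

def classify_achilles_stiffness(t):
    # Abstract's thresholds: low risk above -1.2, high risk at or below -2.5.
    if t > -1.2:
        return "low risk"
    if t <= -2.5:
        return "high risk"
    return "intermediate: assess with DXA and clinical risk factors"

t = t_score(72.0, ref_mean=93.0, ref_sd=12.0)   # stiffness index with hypothetical reference values
print(round(t, 2), classify_achilles_stiffness(t))
```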
Abstract:
When the Bank of England (and the Federal Reserve Board) introduced their quantitative easing (QE) operations, they emphasised the effects on money and credit, but much of their empirical research on the effects of QE focuses on long-term interest rates. We use a flow of funds matrix with an independent central bank to show the implications of QE and other monetary developments, and argue that the financial crisis, the fiscal expansion and QE are likely to have constituted major exogenous shocks to money and credit in the UK which could not be digested immediately by the usual adjustment mechanisms. We present regressions of a reduced-form model which considers the growth of nominal spending as determined by the growth of nominal money and other variables. These results suggest that money was not important during the Great Moderation but has had a much larger role in the period of the crisis and QE. We then use these estimates to illustrate the effects of the financial crisis and QE. We conclude that it would be useful to incorporate money and/or credit in wider macroeconometric models of the UK economy.
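A hedged sketch of the reduced-form relation described (nominal spending growth driven by nominal money growth and other variables) is given below; the lag structure and symbols are illustrative, not the paper's estimated specification:

```latex
% Illustrative reduced-form nominal spending equation.
\Delta \ln (PY)_t \;=\; \alpha \;+\; \sum_{i=0}^{k} \beta_i\, \Delta \ln M_{t-i}
                   \;+\; \gamma' X_t \;+\; \varepsilon_t ,
```

where PY is nominal spending, M is nominal (broad) money, and X_t collects the other explanatory variables; the finding described above amounts to the β coefficients being negligible during the Great Moderation but sizeable during the crisis and QE period.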
Abstract:
Meta-analysis of prospective studies shows that quantitative ultrasound of the heel using validated devices predicts risk of different types of fracture with similar performance across different devices and in elderly men and women. These predictions are independent of the risk estimates from hip DXA measures. Introduction: Clinical utilisation of heel quantitative ultrasound (QUS) depends on its power to predict clinical fractures. This is particularly important in settings that have no access to DXA-derived bone density measurements. We aimed to assess the predictive power of heel QUS for fractures using a meta-analysis approach. Methods: We conducted an inverse variance random effects meta-analysis of prospective studies with heel QUS measures at baseline and fracture outcomes in their follow-up. Relative risks (RR) per standard deviation (SD) of different QUS parameters (broadband ultrasound attenuation [BUA], speed of sound [SOS], stiffness index [SI], and quantitative ultrasound index [QUI]) for various fracture outcomes (hip, vertebral, any clinical, any osteoporotic and major osteoporotic fractures) were reported based on study questions. Results: Twenty-one studies including 55,164 women and 13,742 men were included in the meta-analysis with a total follow-up of 279,124 person-years. All four QUS parameters were associated with risk of different fractures. For instance, the RR of hip fracture for a 1 SD decrease in BUA was 1.69 (95% CI 1.43-2.00), in SOS 1.96 (95% CI 1.64-2.34), in SI 2.26 (95% CI 1.71-2.99) and in QUI 1.99 (95% CI 1.49-2.67). There was marked heterogeneity among studies on hip and any clinical fractures but no evidence of publication bias amongst them. Validated devices from different manufacturers predicted fracture risks with similar performance (meta-regression p values > 0.05 for difference of devices). QUS measures predicted fracture with a similar performance in men and women. Meta-analysis of studies with QUS measures adjusted for hip BMD showed a significant and independent association with fracture risk (RR/SD for BUA = 1.34 [95% CI 1.22-1.49]). Conclusions: This study confirms that heel QUS, using validated devices, predicts risk of different fracture outcomes in elderly men and women. Further research is needed for more widespread utilisation of heel QUS in clinical settings across the world.
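For readers unfamiliar with the pooling step, the following is a hedged sketch of an inverse-variance random-effects (DerSimonian-Laird) combination of log relative risks per SD. The study-level RRs and confidence limits below are made-up placeholders, not the meta-analysis data.

```python
# Sketch: DerSimonian-Laird random-effects pooling of log relative risks (toy data).
import numpy as np

rr = np.array([1.5, 1.8, 1.6, 2.1])                  # study-level RR per SD, illustrative
ci_upper = np.array([2.0, 2.5, 2.2, 3.0])            # upper 95% limits, illustrative
y = np.log(rr)                                       # log relative risks
se = (np.log(ci_upper) - y) / 1.96                   # SE recovered from the upper CI limit
v = se**2

w = 1 / v                                            # fixed-effect (inverse-variance) weights
q = np.sum(w * (y - np.sum(w * y) / np.sum(w))**2)   # Cochran's Q
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (v + tau2)                                # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
se_pooled = np.sqrt(1 / np.sum(w_re))
print(f"pooled RR/SD = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96*se_pooled):.2f}-{np.exp(pooled + 1.96*se_pooled):.2f})")
```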
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
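A hedged sketch of the general gradual-deformation idea used as an MCMC proposal is shown below: the current Gaussian model realization is mixed with an independent realization so that the prior is preserved while the perturbation strength is tuned by a single parameter. The names and the standardized iid field are illustrative simplifications, not the thesis implementation.

```python
# Sketch: gradual-deformation proposal for a standardized Gaussian model vector.
import numpy as np

def gradual_deformation_proposal(m_current, rng, theta=0.1):
    """Propose m' = m*cos(t) + z*sin(t); preserves an N(0, 1) prior for any theta in (0, 1]."""
    m_indep = rng.standard_normal(m_current.shape)   # independent draw from the same (standardized) prior
    t = theta * np.pi / 2                            # small theta = gentle move, theta = 1 = independent draw
    return m_current * np.cos(t) + m_indep * np.sin(t)

rng = np.random.default_rng(42)
m = rng.standard_normal(10_000)                      # standardized model parameters (e.g. a slowness field)
m_new = gradual_deformation_proposal(m, rng, theta=0.2)
print(round(float(np.std(m_new)), 3))                # prior variance is preserved (~1)
```

In practice the same mixing can be applied to correlated Gaussian fields (the correlation structure is preserved because the combination is a rotation in Gaussian space), which is what makes the proposal attractive for spatially correlated model parameters.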
Abstract:
Despite the central role of quantitative PCR (qPCR) in the quantification of mRNA transcripts, most analyses of qPCR data are still delegated to the software that comes with the qPCR apparatus. This is especially true for the handling of the fluorescence baseline. This article shows that baseline estimation errors are directly reflected in the observed PCR efficiency values and are thus propagated exponentially in the estimated starting concentrations as well as 'fold-difference' results. Because of the unknown origin and kinetics of the baseline fluorescence, the fluorescence values monitored in the initial cycles of the PCR reaction cannot be used to estimate a useful baseline value. An algorithm that estimates the baseline by reconstructing the log-linear phase downward from the early plateau phase of the PCR reaction was developed and shown to lead to very reproducible PCR efficiency values. PCR efficiency values were determined per sample by fitting a regression line to a subset of data points in the log-linear phase. The variability, as well as the bias, in qPCR results was significantly reduced when the mean of these PCR efficiencies per amplicon was used in the calculation of an estimate of the starting concentration per sample.
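A hedged sketch of the general idea of estimating per-sample PCR efficiency from a regression over the log-linear phase of a baseline-corrected amplification curve is shown below. The toy curve, the crude window heuristic, and the assumed-correct baseline are illustrative and do not reproduce the article's baseline-reconstruction algorithm.

```python
# Sketch: PCR efficiency from a linear fit to log fluorescence in the log-linear phase.
import numpy as np

cycles = np.arange(1, 41)
true_eff, n0, baseline = 1.92, 1e-9, 0.05
fluor = baseline + np.minimum(n0 * true_eff**cycles, 1.5)     # toy amplification curve with plateau

corrected = fluor - baseline                                  # assumes the baseline estimate is correct
window = (corrected > 1e-4) & (corrected < 0.5)               # crude selection of the log-linear phase
slope, intercept = np.polyfit(cycles[window], np.log10(corrected[window]), 1)

efficiency = 10**slope                                        # fold increase per cycle (ideal PCR: 2.0)
print(f"estimated efficiency ~ {efficiency:.2f}")
```

The exponential error propagation discussed above follows directly from this construction: a baseline error distorts the slope, and the slope is exponentiated both here and again when efficiencies are used to back-calculate starting concentrations.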
Abstract:
We prove two-sided inequalities between the integral moduli of smoothness of a function on $\mathbb{R}^d$ or $\mathbb{T}^d$ and the weighted tail-type integrals of its Fourier transform or Fourier series. Sharpness of the obtained results is shown, in particular, by equivalence results for functions satisfying certain regularity conditions. Applications include a quantitative form of the Riemann-Lebesgue lemma as well as several other questions in approximation theory and the theory of function spaces.
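As a hedged illustration of the flavour of such results (a classical one-dimensional special case, not the paper's theorems), the first modulus of smoothness controls the decay of the Fourier transform, giving a quantitative form of the Riemann-Lebesgue lemma; with the convention $\widehat{f}(\xi)=\int_{\mathbb{R}} f(x)e^{-i\xi x}\,dx$:

```latex
% Classical special case: Fourier decay bounded by the first L^1 modulus of smoothness.
\omega(f,\delta)_{1} \;=\; \sup_{|h|\le \delta}\bigl\|f(\cdot+h)-f(\cdot)\bigr\|_{L^1(\mathbb{R})},
\qquad
\bigl|\widehat{f}(\xi)\bigr| \;\le\; \tfrac12\,\omega\!\Bigl(f,\tfrac{\pi}{|\xi|}\Bigr)_{1},
\quad \xi \neq 0 .
```

The bound follows from writing $\widehat{f}(\xi)=\tfrac12\int\bigl(f(x)-f(x+\pi/\xi)\bigr)e^{-i\xi x}\,dx$; the paper's two-sided inequalities sharpen this one-directional estimate and extend it to higher moduli, higher dimensions and weighted tail integrals.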
Abstract:
Natural selection is typically exerted at some specific life stages. If natural selection takes place before a trait can be measured, using conventional models can lead to incorrect inference about population parameters. When the missing-data process is related to the trait of interest, valid inference requires explicit modeling of the missingness process. We propose a joint modeling approach, a shared parameter model, to account for nonrandom missing data. It consists of an animal model for the phenotypic data and a logistic model for the missingness process, linked by the additive genetic effects. A Bayesian approach is taken and inference is made using integrated nested Laplace approximations. From a simulation study we find that wrongly assuming that missing data are missing at random can result in severely biased estimates of additive genetic variance. Using real data from a wild population of Swiss barn owls (Tyto alba), our model indicates that the missing individuals would have displayed large black spots, and we conclude that genes affecting this trait are already under selection before it is expressed. Our model is a tool for correctly estimating the magnitude of both natural selection and additive genetic variance.
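A hedged sketch of a shared-parameter formulation of the kind described above is given below; the notation is illustrative rather than the paper's exact parameterization: an animal model for the phenotype and a logistic model for the missingness indicator, linked through the additive genetic (breeding) values a_i.

```latex
% Illustrative shared-parameter model: phenotype submodel and missingness submodel
% share the additive genetic effects a_i.
y_i = \mu + a_i + e_i, \qquad
\mathbf{a} \sim \mathcal{N}\!\bigl(\mathbf{0}, \mathbf{A}\sigma^2_a\bigr), \quad
e_i \sim \mathcal{N}\!\bigl(0, \sigma^2_e\bigr),
\\[4pt]
\operatorname{logit} \Pr\bigl(y_i \text{ is observed}\bigr) = \alpha + \lambda\, a_i ,
```

where A is the additive genetic relationship matrix and λ measures how an individual's genetic merit for the trait affects the chance that the trait is recorded; λ ≠ 0 is precisely the nonrandom missingness that, if ignored, biases the estimate of σ²_a.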