776 results for log measuring
Resumo:
The dearth of knowledge on the load resistance mechanisms of log houses, and the need to develop numerical models capable of simulating the actual behaviour of these structures, have driven research into the relatively unexplored aspects of log house construction. The aim of the research presented in this paper is to build a working model of a log house that will contribute to understanding the behaviour of these structures under seismic loading. The paper presents the results of a series of shaking table tests conducted on a log house and then develops a numerical model of the tested house. The finite element model was created in SAP2000 and validated against the experimental results. The modelling assumptions and the difficulties involved in the process are described and, finally, the effects of varying different physical and material parameters on the results yielded by the model are discussed.
Resumo:
One of the major challenges in the development of an immersive system is handling the delay between the tracking of the user’s head position and the updated projection of a 3D image or auralised sound, also called end-to-end delay. Excessive end-to-end delay can result in a general decrease in the “feeling of presence”, the occurrence of motion sickness, and poor performance in perception-action tasks. These latencies must be known in order to provide insights into the technological (hardware/software optimization) or psychophysical (recalibration sessions) strategies for dealing with them. Our goal was to develop a new measurement method for end-to-end delay that is both precise and easily replicated. We used a Head and Torso Simulator (HATS) as an auditory signal sensor, a fast-response photo-sensor to detect a visual stimulus response from a motion capture system, and a voltage input trigger as a real-time event. The HATS was mounted on a turntable, which allowed us to precisely change the 3D sound relative to the head position. When the virtual sound source was at 90º azimuth, the corresponding HRTF would set all the intensity values to zero; at the same time, a trigger would register the real-time event of turning the HATS to 90º azimuth. Furthermore, with the HATS turned 90º to the left, the motion capture marker visualization would fall exactly on the photo-sensor receptor. This method allowed us to precisely measure the delay from tracking to display. Moreover, our results show that the tracking method, its tracking frequency, and the rendering of sound reflections are the main predictors of end-to-end delay.
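At its core, the measurement reduces to differencing timestamps between the real-time trigger event and the sensor response. A minimal sketch of that computation, with entirely hypothetical timestamps (the paper's measured values are not reproduced here):

```python
# End-to-end delay = time between the real-time trigger event (HATS reaching
# 90º azimuth) and the sensor registering the updated output. All numbers
# below are illustrative placeholders, not measured values.
trigger_times = [0.000, 1.000, 2.000]   # s, voltage-trigger events
sensor_times  = [0.072, 1.068, 2.081]   # s, photo-sensor / HATS responses

delays_ms = [(s - t) * 1000.0 for t, s in zip(trigger_times, sensor_times)]
mean_delay = sum(delays_ms) / len(delays_ms)
print(mean_delay)   # mean end-to-end delay in milliseconds
```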
Resumo:
Purpose – The purpose of this paper is to develop a subjective multidimensional measure of early career success during the university-to-work transition. Design/methodology/approach – The construct of university-to-work success (UWS) was defined in terms of intrinsic and extrinsic career outcomes, and a three-stage study was conducted to create a new scale. Findings – First, a preliminary set of items was developed and tested by judges; results showed the items had good content validity. Second, factor analyses indicated a four-factor structure and a second-order model with subscales to assess career insertion and satisfaction, confidence in the career future, income and financial independence, and adaptation to work. Third, the authors sought to confirm the hypothesized model by examining the comparative fit of the scale and two alternative models; results showed that fits for both the first- and second-order models were acceptable. Research limitations/implications – The proposed model has sound psychometric qualities, although the validated version of the scale was not able to incorporate all constructs envisaged by the initial theoretical model. Results indicated some directions for further refinement. Practical implications – The scale could be used as a tool for self-assessment or as an outcome measure to assess the efficacy of university-to-work programs in applied settings. Originality/value – This study provides a useful single measure for assessing early career success during the university-to-work transition, and might facilitate the testing of causal models that could help identify factors relevant to a successful transition.
Resumo:
The logarithmic transformation of the bivariate ratios used to compute the norms and indices of the Diagnosis and Recommendation Integrated System (DRIS) has been suggested as a way to improve the accuracy of the system, mainly because it reduces the inconsistency in the frequency distribution between the direct and inverse forms of expression of the same ratio. The objective of this work was therefore to evaluate the use of log-transformed ratios across different reference populations. Leaf samples of cupuaçu were collected from 153 commercial orchards, with plant ages ranging from 5 to 18 years, grown in monoculture or in agroforestry systems. For each nutritional ratio among the nutrients N, P, K, Ca, Mg, Fe, Cu, Zn and Mn, log-transformed and untransformed bivariate DRIS norms were obtained, both for the population as a whole and for specific conditions. The results showed that log-transformed ratios contribute to greater consistency between the direct and inverse forms across different DRIS norms.
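The consistency gain from the log transform follows from the identity log(A/B) = -log(B/A): the direct and inverse forms of a nutrient ratio become exact mirror images of each other. A small sketch with hypothetical leaf-analysis values (not data from the study):

```python
import math

# Hypothetical leaf nutrient contents, g/kg (illustrative only).
n_content = [28.0, 31.5, 26.2]   # nitrogen
p_content = [1.8, 2.1, 1.6]      # phosphorus

# Direct form log10(N/P) and inverse form log10(P/N) of the same ratio.
direct  = [math.log10(n / p) for n, p in zip(n_content, p_content)]
inverse = [math.log10(p / n) for n, p in zip(n_content, p_content)]

# After the log transform the two forms are symmetric: direct = -inverse,
# which is the source of the improved consistency between them.
for d, i in zip(direct, inverse):
    assert abs(d + i) < 1e-9
```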
Resumo:
OBJECTIVE: To evaluate the performance of the turbidimetric method of C-reactive protein (CRP) as a measure of low-grade inflammation in patients admitted with non-ST elevation acute coronary syndromes (ACS). METHODS: Serum samples obtained at hospital arrival from 68 patients (66±11 years, 40 men), admitted with unstable angina or non-ST elevation acute myocardial infarction were used to measure CRP by the methods of nephelometry and turbidimetry. RESULTS: The medians of C-reactive protein by the turbidimetric and nephelometric methods were 0.5 mg/dL and 0.47 mg/dL, respectively. A strong linear association existed between the 2 methods, according to the regression coefficient (b=0.75; 95% C.I.=0.70-0.80) and correlation coefficient (r=0.96; P<0.001). The mean difference between the nephelometric and turbidimetric CRP was 0.02 ± 0.91 mg/dL, and 100% agreement between the methods in the detection of high CRP was observed. CONCLUSION: In patients with non-ST elevation ACS, CRP values obtained by turbidimetry show a strong linear association with the method of nephelometry and perfect agreement in the detection of high CRP.
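The agreement statistics reported above (regression coefficient, correlation coefficient, and mean difference between methods) can all be computed from paired measurements. A sketch using hypothetical paired CRP readings, not the study's data:

```python
# Hypothetical paired CRP readings, mg/dL (illustrative only).
nephelometry = [0.30, 0.47, 0.55, 1.20, 2.10, 3.40]
turbidimetry = [0.28, 0.50, 0.52, 1.15, 2.05, 3.30]

n = len(nephelometry)
mx = sum(nephelometry) / n
my = sum(turbidimetry) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(nephelometry, turbidimetry))
sxx = sum((x - mx) ** 2 for x in nephelometry)
syy = sum((y - my) ** 2 for y in turbidimetry)

slope = sxy / sxx                 # regression coefficient b
r = sxy / (sxx * syy) ** 0.5      # Pearson correlation coefficient
mean_diff = mx - my               # mean bias between the two methods
```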
Resumo:
Master's in Finance
Resumo:
Background: In patients with systemic arterial hypertension, microalbuminuria is a marker of endothelial damage and is associated with an increased risk of cardiovascular disease. Objective: The aim of the present study was to determine the factors that influence the occurrence of microalbuminuria in hypertensive patients with serum creatinine below 1.5 mg/dL. Methods: The study included 133 Brazilian patients seen at a multidisciplinary outpatient clinic for hypertensive patients. Patients with serum creatinine above 1.5 mg/dL and those with diabetes mellitus were excluded. Systolic and diastolic blood pressure were measured. Body mass index (BMI) and the glomerular filtration rate estimated by the CKD-EPI formula were calculated. In a cross-sectional study, creatinine, cystatin C, total cholesterol, HDL cholesterol, LDL cholesterol, triglycerides, C-reactive protein (CRP) and glucose were measured in blood samples. Microalbuminuria was determined in 24-hour urine collections. The hypertensive patients were classified by the presence of one or more criteria for metabolic syndrome. Results: In multiple regression analysis, serum cystatin C, CRP, the atherogenic index log TG/HDLc, and the presence of three or more criteria for metabolic syndrome were positively correlated with microalbuminuria (r2: 0.277; p < 0.05). Conclusion: Cystatin C, CRP, log TG/HDLc, and the presence of three or more criteria for metabolic syndrome, independently of serum creatinine, were associated with microalbuminuria, an early marker of renal damage and of cardiovascular risk in patients with essential arterial hypertension.
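The atherogenic index that enters the regression is simply the base-10 logarithm of the triglyceride-to-HDL-cholesterol ratio. A one-line sketch with hypothetical lipid values:

```python
import math

# Hypothetical lipid panel values, mg/dL (illustrative only).
triglycerides = 150.0
hdl_cholesterol = 45.0

# Atherogenic index log TG/HDLc as used in the regression analysis.
atherogenic_index = math.log10(triglycerides / hdl_cholesterol)
```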
Resumo:
Magdeburg, Univ., Faculty of Computer Science, dissertation, 2012
Resumo:
This paper proposes a two-dimensional Strategic Performance Measure (SPM) to evaluate the achievement of sustained superior performance. The proposal builds primarily on the fact that, under the strategic management perspective, a firm's prevalent objective is the pursuit of sustained superior performance. Three basic conceptual dimensions stem from this objective: relativity, sign dependence, and dynamism. These are the foundations of the SPM, which evaluates separately the attained superior performance and its sustainability over time. In contrast to existing measures of performance, the SPM provides: (i) a dynamic approach, by considering the progress or regress in performance over time; and (ii) a cardinal measurement of performance differences and their changes over time. The paper also proposes an axiomatic framework that a measure of strategic performance should comply with in order to be theoretically and managerially sound. Finally, an empirical illustration for the Spanish banking sector during 1987-1999 is provided, discussing some relevant cases.
Resumo:
In a series of papers (Tang, Chin and Rao, 2008; Tang, Petrie and Rao, 2006, 2007), we have tried to improve on a mortality-based health status indicator, namely age-at-death (AAD), and its associated health inequality indicators that measure the distribution of AAD. The main contribution of these papers is to propose a frontier method to separate avoidable and unavoidable mortality risks. This has facilitated the development of a new indicator of health status, namely the Realization of Potential Life Years (RePLY). The RePLY measure is based on the concept of a “frontier country” that, by construction, has the lowest mortality risks for each age-sex group amongst all countries. The mortality rates of the frontier country are used as a proxy for the unavoidable mortality rates, and the residual between the observed mortality rates and the unavoidable mortality rates is treated as the avoidable mortality rate. In this approach, however, countries at different levels of development are benchmarked against the same frontier country without considering their heterogeneity. The main objective of the current paper is to control for national resources in estimating (conditional) unavoidable and avoidable mortality risks for individual countries. This allows us to construct a new indicator of health status, the Realization of Conditional Potential Life Years (RCPLY). The paper presents empirical results from a dataset of life tables for 167 countries from the year 2000, compiled and updated by the World Health Organization. Measures of national average health status and health inequality based on RePLY and RCPLY are presented and compared.
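The frontier construction can be sketched in a few lines: the frontier country's rate serves as the proxy for unavoidable risk, and the excess of a country's observed rate over it is treated as avoidable. All figures below are hypothetical, not values from the WHO life tables:

```python
# Hypothetical age-group mortality risks (illustrative only).
observed = {"0-4": 0.012, "5-14": 0.003, "15-59": 0.040}   # some country
frontier = {"0-4": 0.004, "5-14": 0.001, "15-59": 0.025}   # lowest rate per group

# Avoidable risk = observed risk minus the frontier (unavoidable) risk.
avoidable = {group: observed[group] - frontier[group] for group in observed}
print(avoidable)
```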
Resumo:
Cecchetti et al. (2006) develop a method for allocating macroeconomic performance changes among the structure of the economy, the variability of supply shocks, and monetary policy. We propose a dual approach to their method by borrowing well-known tools from production theory, namely the Farrell measure and the Malmquist index. Following Färe et al. (1994), we propose a decomposition of the efficiency of monetary policy. It is shown that the global efficiency change can be rewritten as the product of the changes in macroeconomic performance, minimum quadratic loss, and the efficiency frontier.
Resumo:
This paper examines both the in-sample and out-of-sample performance of three monetary fundamental models of exchange rates and compares their out-of-sample performance to that of a simple Random Walk model. Using a data set consisting of five currencies at monthly frequency over the period January 1980 to December 2009 and a battery of newly developed performance measures, the paper shows that monetary models do better, both in-sample and in out-of-sample forecasting, than a simple Random Walk model.
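Out-of-sample comparisons of this kind typically pit a model's forecast errors against those of the no-change (random walk) forecast. A sketch with a hypothetical monthly series and plain RMSE (the paper's newly developed performance measures are more elaborate than this):

```python
# Hypothetical monthly log exchange rates (illustrative only).
actual   = [1.10, 1.12, 1.09, 1.15, 1.18, 1.16]   # realized values
model_fc = [1.11, 1.11, 1.10, 1.14, 1.17, 1.17]   # candidate-model forecasts

def rmse(forecast, realized):
    """Root mean squared forecast error over aligned periods."""
    errs = [(f - a) ** 2 for f, a in zip(forecast, realized)]
    return (sum(errs) / len(errs)) ** 0.5

# The random walk forecasts "no change": next period equals the last observation.
rw_fc = actual[:-1]
model_rmse = rmse(model_fc[1:], actual[1:])
rw_rmse = rmse(rw_fc, actual[1:])
print(model_rmse, rw_rmse)   # model beats the benchmark if model_rmse < rw_rmse
```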
Resumo:
Traditionally, it is assumed that the population size of cities in a country follows a Pareto distribution. This assumption is typically supported by finding evidence of Zipf's Law. Recent studies question this finding, highlighting that, while the Pareto distribution may fit reasonably well when the data is truncated at the upper tail, i.e. for the largest cities of a country, the log-normal distribution may apply when all cities are considered. Moreover, conclusions may be sensitive to the choice of a particular truncation threshold, a yet overlooked issue in the literature. In this paper, then, we reassess the city size distribution in relation to its sensitivity to the choice of truncation point. In particular, we look at US Census data and apply a recursive-truncation approach to estimate Zipf's Law and a non-parametric alternative test where we consider each possible truncation point of the distribution of all cities. Results confirm the sensitivity of results to the truncation point. Moreover, repeating the analysis over simulated data confirms the difficulty of distinguishing a Pareto tail from the tail of a log-normal and, in turn, identifying the city size distribution as a false or a weak Pareto law.
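Zipf's Law is commonly checked by regressing log rank on log city size, with a slope near -1 supporting the law; re-estimating at every possible truncation point mirrors the recursive approach described above. A sketch over hypothetical city populations (not Census data):

```python
import math

# Hypothetical city populations, largest first (illustrative only).
populations = sorted([8_400_000, 3_900_000, 2_700_000, 2_300_000,
                      1_600_000, 1_500_000, 1_400_000, 1_300_000], reverse=True)

def zipf_slope(sizes, truncate_at):
    """OLS slope of log(rank) on log(size) for the `truncate_at` largest cities."""
    top = sizes[:truncate_at]
    xs = [math.log(s) for s in top]
    ys = [math.log(rank) for rank in range(1, len(top) + 1)]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx   # Zipf's Law predicts a slope close to -1

# Recursive truncation: re-estimate the slope at each possible cutoff.
slopes = [zipf_slope(populations, k) for k in range(3, len(populations) + 1)]
```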
Resumo:
This paper employs an unobserved component model that incorporates a set of economic fundamentals to obtain the Euro-Dollar permanent equilibrium exchange rates (PEER) for the period 1975Q1 to 2008Q4. The results show that for most of the sample period, the Euro-Dollar exchange rate closely followed the values implied by the PEER. The only significant deviations from the PEER occurred in the years immediately before and after the introduction of the single European currency. The forecasting exercise shows that incorporating economic fundamentals provides a better long-run exchange rate forecasting performance than a random walk process.