Abstract:
Buried heat sources can be investigated by examining thermal infrared images and comparing these with the results of theoretical models which predict the thermal anomaly a given heat source may generate. Key factors influencing surface temperature include the geometry and temperature of the heat source, the surface meteorological environment, and the thermal conductivity and anisotropy of the rock. In general, a geothermal heat flux of greater than 2% of solar insolation is required to produce a detectable thermal anomaly in a thermal infrared image. For typical terrestrial conditions, a heat source of, for example, 2-300 K above the average surface temperature must be at a depth shallower than 50 m for the anomaly to be detectable in a thermal infrared image. Atmospheric factors are of critical importance. While the mean atmospheric temperature has little significance, convection is a dominant factor and can act to swamp the thermal signature entirely. Given a steady-state heat source that produces a detectable thermal anomaly, it is possible to loosely constrain the physical properties of the heat source and surrounding rock, using the surface thermal anomaly as a basis. The success of this technique is highly dependent on the degree to which the physical properties of the host rock are known. Important parameters include the surface thermal properties and thermal conductivity of the rock. Modelling of transient thermal situations was carried out to assess the effect of time-dependent thermal fluxes. One-dimensional finite element models can be readily and accurately applied to the investigation of diurnal heat flow, as with thermal inertia models. Diurnal thermal modelling of environments on Earth, the Moon and Mars was carried out using finite elements and found to be consistent with published measurements. The heat flow from an injection of hot lava into a near-surface lava tube was considered. While this approach was useful for study and for long-term monitoring in inhospitable areas, it was found to have little hazard-warning utility, as the time taken for the thermal energy to propagate to the surface in dry rock (several months) is very long. The resolution of the thermal infrared imaging system is an important factor. Presently available satellite-based systems such as Landsat (resolution of 120 m) are inadequate for detailed study of geothermal anomalies. Airborne systems, such as TIMS (variable resolution of 3-6 m), are much more useful for discriminating small buried heat sources. Planned improvements in the resolution of satellite-based systems will broaden the potential for application of the techniques developed in this thesis. It is important to note, however, that adequate spatial resolution is a necessary but not sufficient condition for successful application of these techniques.
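The thesis describes one-dimensional finite element modelling of diurnal heat flow. As a rough illustration of the same physics, the sketch below integrates the 1-D heat diffusion equation with an explicit finite-difference scheme (a simpler stand-in for finite elements) under a sinusoidal surface temperature forcing; the thermal diffusivity, mean temperature and amplitude are assumed illustrative values, not parameters taken from the thesis.

```python
import numpy as np

# Minimal 1-D explicit finite-difference model of diurnal heat conduction.
# Illustrative values only (assumed, not from the thesis).
kappa = 1.0e-6                    # thermal diffusivity of dry rock, m^2/s (assumed)
depth = 1.0                       # modelled depth, m
nz = 101
dz = depth / (nz - 1)
dt = 0.4 * dz**2 / kappa          # below the explicit stability limit dt <= dz^2 / (2*kappa)
period = 86400.0                  # diurnal period, s
t_mean, t_amp = 290.0, 15.0       # mean and amplitude of surface temperature, K (assumed)

T = np.full(nz, t_mean)           # initially isothermal column
n_steps = int(3 * period / dt)    # run three diurnal cycles to approach a periodic state

for step in range(n_steps):
    t = step * dt
    T[0] = t_mean + t_amp * np.sin(2 * np.pi * t / period)   # surface forcing
    T[-1] = t_mean                                           # fixed deep temperature
    T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

# Diurnal skin depth: the amplitude of the surface wave falls by 1/e at sqrt(kappa*period/pi).
print("diurnal skin depth ~ %.2f m" % np.sqrt(kappa * period / np.pi))
print("temperature at 0.2 m depth: %.2f K" % T[round(0.2 / dz)])
```

For these values the diurnal thermal wave penetrates only a few tens of centimetres, illustrating how strongly periodic surface forcing is damped with depth.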
Abstract:
European badgers (Meles meles) are an important part of the Irish ecosystem; they are a component of Ireland's native fauna and are afforded protection by national and international laws. The species is also a reservoir host for bovine tuberculosis (bTB) and implicated in the epidemiology of bTB in cattle. Due to this latter point, badgers have been culled in the Republic of Ireland (ROI) in areas where persistent cattle bTB outbreaks exist. The population dynamics of badgers are therefore of great pure and applied interest. The studies within this thesis used large datasets and a number of analytical approaches to uncover essential elements of badger populations in the ROI. Furthermore, a review and meta-analysis of all available data on Irish badgers was completed to give a framework from which key knowledge gaps and future directions could be identified (Chapter 1). One main finding suggested that badger densities are significantly reduced in areas of repeated culling, as revealed through declining trends in signs of activity (Chapter 2) and capture numbers (Chapter 2 and Chapter 3). Despite this, the trappability of badgers was shown to be lower than previously thought, indicating that management programmes would require repeated long-term efforts to be effective (Chapter 4). Mark-recapture modelling of a population (sample area: 755 km²) suggested that mean badger density was typical of continental European populations, but substantially lower than British populations (Chapter 4). Badger movement patterns indicated that most of the population exhibited site fidelity. Long-distance movements were also recorded, the longest of which (20.1 km) was the greatest displacement of an Irish badger currently known (Chapter 5). The studies presented in this thesis allow for the development of more robust models of the badger population at national scales (see Future Directions). Through the use of large-scale datasets, future models will facilitate informed, sustainable planning for disease control.
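As a minimal illustration of the mark-recapture idea mentioned above (not the multi-session models actually fitted in the thesis), the sketch below computes Chapman's bias-corrected Lincoln-Petersen estimate of abundance from two capture sessions; the capture numbers are hypothetical.

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimate of population size.

    n1: animals captured and marked in the first session
    n2: animals captured in the second session
    m2: marked animals recaptured in the second session
    Returns the abundance estimate and its approximate standard error.
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
           / ((m2 + 1) ** 2 * (m2 + 2)))
    return n_hat, var ** 0.5

# Hypothetical capture numbers, for illustration only.
n_hat, se = chapman_estimate(n1=120, n2=95, m2=30)
print(f"estimated abundance: {n_hat:.0f} (SE {se:.0f})")
```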
Abstract:
Defects in commercial cheese result in a downgrading of the final cheese and a consequential economic loss to the cheese producer. The development of defects in cheese is often not fully understood and therefore not controllable by the producer. This research investigated the underlying factors in the development of split and secondary fermentation defect and of pinking defects in commercial Irish cheeses. Split defect in Swiss-type cheese is a common defect associated with eye formation and manifests as slits and cracks visible in the cut cheese loaf (Reinbold, 1972; Daly et al., 2010). No consensus exists as to the definitive causes of the defect, and possible factors which may contribute to it were reviewed. Models were derived to describe the relationship between moisture, pH and salt levels and the distance from sample location to the closest external block surface during cheese ripening. Significant gradients within the cheese blocks were observed for all measured parameters in cheeses at 7 days post-manufacture. No significant pH gradient was found within the blocks on exit from hot-room ripening and at three months post exit from the hot-room. Moisture content reached equilibrium within the blocks between exit from the hot-room and 3 months after exit, while salt and salt-in-moisture levels had not reached equilibrium within the cheese blocks even at three months after exit from hot-room ripening. A characterisation of Swiss-type cheeses produced from a seasonal milk supply was undertaken. Cheeses were sampled on two days per month of the production year, at three different times during the manufacturing day, at internal and external regions of the cheese block and at four ripening time points (7 days post-manufacture, post hot-room, 14 days post hot-room and 3 months in a cold room after exit from the hot-room). Compositional, biochemical and microbial indices were determined, and the results were analysed as a split-plot with a factorial arrangement of treatments (season, time of day, area) on the main plot and ripening time on the sub-plot. Season (and its interactions) had a significant effect on pH and salt-in-moisture (S/M) levels, mean viable counts of L. helveticus, propionic acid bacteria and non-starter lactic acid bacteria (NSLAB), levels of primary and secondary proteolysis, and cheese firmness. Levels of proteolysis increased significantly during hot-room ripening but also during cold-room storage, signifying continued development of cheese ripening during cold storage (> 8°C). Rheological parameters (e.g. springiness and cohesiveness) were significantly affected by interactions between ripening and location within cheese blocks. Time of day of manufacture significantly affected mean cheese calcium levels at 7 days post-manufacture, as well as mean arginine levels and mean viable counts of NSLAB. Cheeses produced during the middle of the production day had the best grading scores and were more consistent compared to cheeses produced early or late in the day of manufacture. Cheeses with low levels of S/M and low values of resilience were associated with poor grades at 7 days post-manufacture. Cheeses which had high elastic index values and low values of springiness in the external areas after exit from hot-room ripening also obtained good commercial grades. Development of a pink colour defect is an intermittent problem in ripened cheese, which may or may not contain an added colourant, e.g., annatto. Factors associated with the defect were reviewed.
Attempts at extraction and identification of the pink discolouration were unsuccessful. The pink colour partitioned with the water-insoluble protein fraction. No significant difference was observed between ripened control and defect cheese for oxygen levels and redox potential, or for the results of elemental analysis. A possible relationship between starter activity and defect development was established in cheeses with added colourant, as lower levels of residual galactose and lactose were observed in defective cheese compared to control cheese free of the defect. Swiss-type cheese without added colourant had significantly higher levels of arginine and significantly lower lactate levels. Flow cytometry indicated that levels of bacterial cell viability and metabolic state differed between control and defect cheeses (without added colourant). Pyrosequencing analysis of cheese samples with and without the defect detected a bacterium not previously reported in cheese, Deinococcus thermus (a potential carotenoid producer). Defective Swiss-type cheeses had elevated levels of Deinococcus thermus compared to control cheeses; however, the direct cause of the pink defect was not linked to this bacterium alone. Overall, this research investigated the underlying factors associated with the development of specific defects in commercial cheese, but also characterised the dynamic changes in key microbial and physicochemical parameters during cheese ripening and storage. This will enable the development of processing technologies that allow seasonal manipulation of manufacturing protocols to minimise compositional and biochemical variability and to reduce and inhibit the occurrence of specific quality defects.
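The gradient models described above relate composition to the distance from the nearest external block surface. A minimal sketch of that kind of fit, using hypothetical salt-in-moisture measurements and a simple linear trend (the functional form used in the study may differ), is shown below.

```python
import numpy as np

# Hypothetical sample locations (cm from the nearest external block surface)
# and salt-in-moisture values (%); illustration only, not the study's data.
distance = np.array([1.0, 3.0, 5.0, 8.0, 12.0, 15.0])
s_m      = np.array([5.1, 4.6, 4.2, 3.9, 3.6, 3.5])

# Least-squares linear trend: S/M = slope * distance + intercept
slope, intercept = np.polyfit(distance, s_m, deg=1)
pred = slope * distance + intercept
r2 = 1 - np.sum((s_m - pred) ** 2) / np.sum((s_m - s_m.mean()) ** 2)
print(f"slope = {slope:.3f} % per cm, intercept = {intercept:.2f} %, R^2 = {r2:.2f}")
```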
Abstract:
The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal.
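A minimal sketch of the general envelope idea is given below: unforced GMT trends are simulated from an AR(1) process and their percentile range forms an envelope against which an observed-minus-forced trend can be compared. The AR(1) parameters here are assumed for illustration only; the paper's empirical EUN is instead derived from instrumental and reconstructed temperature records.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed AR(1) parameters for unforced annual GMT variability (illustrative only).
phi, sigma = 0.6, 0.08        # lag-1 autocorrelation and innovation std (K)
years, n_sim = 15, 10000      # trend-window length and number of unforced realisations

trends = np.empty(n_sim)
t = np.arange(years)
for i in range(n_sim):
    x = np.zeros(years)
    for k in range(1, years):
        x[k] = phi * x[k - 1] + rng.normal(0.0, sigma)
    trends[i] = np.polyfit(t, x, 1)[0]        # unforced 15-year trend, K/yr

lo, hi = np.percentile(trends, [2.5, 97.5])
print(f"95% envelope of unforced {years}-yr trends: [{lo:.3f}, {hi:.3f}] K/yr")
# An observed-minus-forced trend falling inside this envelope is consistent
# with unforced variability alone.
```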
Abstract:
Ionic liquids (ILs) have attracted a large amount of interest due to their unique properties. Although a large effort has been focused on the investigation of their potential applications, the characterization of IL properties and the structure–property relationships of ILs remain poorly understood. Computer-aided molecular design (CAMD) of ionic liquids can only be carried out if predictive computational methods for IL properties are available, and the limited availability and quality of experimental data have been preventing the development of such tools. Based on experimental surface tension data collected from the literature and measured at our laboratory, it is here shown how a quantitative structure–property relationship (QSPR) correlation for parachors can be used along with an estimation method for the densities to predict the surface tensions of ILs. It is shown that a good agreement with literature data is obtained. For the circa 40 ionic liquids studied, a mean percent deviation (MPD) of 5.75% with a maximum deviation below 16% was observed. A correlation of the surface tensions with the molecular volumes of the ILs was developed for the estimation of surface tensions at room temperature. It is shown that it can describe the experimental data available within a 4.5% deviation. The correlations developed here can thus be used to evaluate the surface tension of ILs for use in process design or in the CAMD of new ionic liquids.
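The parachor route used here rests on the standard definition P = M·σ^(1/4)/ρ for a liquid whose vapour density is negligible, so that σ = (P·ρ/M)^4 once the parachor and density have been estimated. The sketch below applies that relation directly; the numerical inputs are hypothetical and the QSPR correlation for the parachor itself is not reproduced.

```python
def surface_tension_from_parachor(parachor, density, molar_mass):
    """Surface tension (mN/m) from the parachor definition P = M * sigma**0.25 / rho.

    parachor   : (mN/m)**0.25 * cm**3 / mol
    density    : g / cm**3
    molar_mass : g / mol
    """
    return (parachor * density / molar_mass) ** 4

# Hypothetical values for a generic ionic liquid (illustration only).
sigma = surface_tension_from_parachor(parachor=590.0, density=1.2, molar_mass=280.0)
print(f"estimated surface tension: {sigma:.1f} mN/m")
```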
Abstract:
The effects of the process variables, pH of aqueous phase, rate of addition of organic, polymeric, drug-containing phase to aqueous phase, organic:aqueous phase volume ratio and aqueous phase temperature on the entrapment of propranolol hydrochloride in ethylcellulose (N4) microspheres prepared by the solvent evaporation method were examined using a factorial design. The observed range of drug entrapment was 1.43 +/- 0.02%w/w (pH 6, 25 degrees C, phase volume ratio 1:10, fast rate of addition) to 16.63 +/- 0.92%w/w (pH 9, 33 degrees C, phase volume ratio 1:10, slow rate of addition) which corresponded to mean entrapment efficiencies of 2.86 and 33.26, respectively. Increased pH, increased temperature and decreased rate of addition significantly enhanced entrapment efficiency. However, organic:aqueous phase volume ratio did not significantly affect drug entrapment. Statistical interactions were observed between pH and rate of addition, pH and temperature, and temperature and rate of addition. The observed interactions involving pH are suggested to be due to the abilities of increased temperature and slow rate of addition to sufficiently enhance the solubility of dichloromethane in the aqueous phase, which at pH 9, but not pH 6, allows partial polymer precipitation prior to drug partitioning into the aqueous phase. The interaction between temperature and rate of addition is due to the relative lack of effect of increased temperature on drug entrapment following slow rate of addition of the organic phase. In comparison to the effects of pH on drug entrapment, the contributions of the other physical factors examined were limited.
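A minimal sketch of how main effects are extracted from a two-level factorial design of this kind is given below; the design is reduced to three of the factors and the entrapment responses are hypothetical placeholders, not the study's data.

```python
import numpy as np
from itertools import product

# Two-level factorial design in three of the factors studied (coded -1 / +1).
# Response values are hypothetical placeholders, not the study's data.
factors = ["pH", "temperature", "addition_rate"]
design = np.array(list(product([-1, 1], repeat=3)))                # 2^3 runs
entrapment = np.array([1.4, 3.0, 4.5, 9.0, 2.0, 5.5, 8.0, 16.6])   # %w/w (hypothetical)

# Main effect of a factor = mean response at its high level minus mean at its low level.
for j, name in enumerate(factors):
    effect = entrapment[design[:, j] == 1].mean() - entrapment[design[:, j] == -1].mean()
    print(f"main effect of {name}: {effect:+.2f} %w/w")
```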
Abstract:
Modified lipoproteins induce autoimmune responses including the synthesis of autoantibodies with pro-inflammatory characteristics. Circulating modified lipoprotein autoantibodies combine with circulating antigens and form immune complexes (IC). We now report the results of a study investigating the role of circulating IC containing modified lipoproteins in the progression of carotid intima-media thickness (IMT) in patients enrolled in the Epidemiology of Diabetes Interventions and Complications (EDIC) Trial, a follow-up study of the Diabetes Control and Complications Trial (DCCT). This cohort includes 1229 patients with type 1 diabetes in whom B-mode ultrasonography of internal and common carotid arteries was performed in 1994-1996 and in 1998-2000. Conventional CHD risk factors, antibodies against modified forms of LDL and modified lipoprotein IC were determined in 1050 of these patients from blood collected in 1996-1998. Cholesterol and apolipoprotein B content of IC (surrogate markers of modified ApoB-rich lipoproteins) were significantly higher in patients who showed progression of the internal carotid IMT than in those showing no progression, regression or mild progression. Multivariate linear and logistic regression modeling using conventional and non-conventional risk factors showed that the cholesterol content of IC was a significant positive predictor of internal carotid IMT progression. In conclusion, these data demonstrate that increased levels of modified ApoB-rich IC are associated with increased progression of internal carotid IMT in the DCCT/EDIC cohort of type 1 diabetes.
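A minimal sketch of the kind of logistic model described above (IMT progression regressed on the cholesterol content of IC together with a conventional risk factor) is given below, using simulated placeholder data rather than DCCT/EDIC data; the variable names are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated placeholder data (not DCCT/EDIC data): cholesterol content of immune
# complexes (IC), one conventional risk factor, and a binary progression outcome.
n = 500
ic_chol = rng.normal(0.0, 1.0, n)          # standardised IC cholesterol content
ldl     = rng.normal(0.0, 1.0, n)          # standardised conventional risk factor
logit_p = -1.0 + 0.8 * ic_chol + 0.3 * ldl
progression = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([ic_chol, ldl]))
fit = sm.Logit(progression, X).fit(disp=False)
for name, coef, p in zip(["const", "ic_cholesterol", "ldl"], fit.params, fit.pvalues):
    print(f"{name:15s} coef = {coef:+.3f}  p = {p:.3g}")
```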
Abstract:
Inconsistencies surrounding the prevalence levels of depression in later life suggest that the measurement of depression in older people may be problematic. The current study aimed to map responses to a depressive symptom scale, the Mental Health Index-5 (MHI-5), which is part of the Short Form 36 (SF-36; Ware et al., 1993), against the diagnostic screening items of the Composite International Diagnostic Instrument-Short Form (CIDI-SF; Kessler et al., 1998) to examine disagreement rates across age groups. The study examined data from a national random sample of 10,641 participants living in Ireland, of whom 58.8% were female and 19% were over 65 (SLÁN, 2007). CIDI-SF depression screening endorsement was lower in older groups, whereas mean MHI-5 depressive symptoms showed less change across age groups. Results showed that the odds of MHI-5 endorsers aged 18–44 endorsing CIDI-SF screening questions were 5 times and 4.5 times (dysphoria and anhedonia, respectively) greater than the odds of people aged 75 or more endorsing these items. Findings suggest that although the risk of depressive disorder may decrease with age, complex diagnostic screening questions may exaggerate lower rates of depression among older people.
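A minimal sketch of the odds-ratio comparison underlying the reported figures is given below; the 2×2 counts of CIDI-SF endorsement among MHI-5 endorsers in the two age groups are hypothetical placeholders chosen only to give an odds ratio of roughly the reported magnitude.

```python
import numpy as np

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table with cells
    a: younger endorsers, b: younger non-endorsers,
    c: older endorsers,   d: older non-endorsers.
    Also returns a 95% CI based on the log-odds-ratio standard error."""
    or_ = (a * d) / (b * c)
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se)
    return or_, (lo, hi)

# Hypothetical counts of CIDI-SF dysphoria endorsement among MHI-5 endorsers.
or_, ci = odds_ratio(a=150, b=300, c=20, d=180)
print(f"odds ratio: {or_:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```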
Abstract:
The motor points of the skeletal muscles, mainly of interest to anatomists and physiologists, have recently attracted much attention from researchers in the field of functional electrical stimulation. The muscle motor point has been defined as the entry point of the motor nerve branch into the epimysium of the muscle belly. Anatomists have pointed out that many muscles in the limbs have multiple motor points. Knowledge of the location of nerve branches and terminal nerve entry points facilitates the exact insertion and the suitable selection of the number of electrodes required for each muscle for functional electrical stimulation. The present work therefore aimed to describe the number, location and distribution of motor points in the human forearm muscles, with a view to obtaining optimal hand function in many clinical situations. Twenty-three adult human cadaveric forearms were dissected. The numbers of primary nerves and motor points for each muscle were tabulated, and the means and standard deviations were calculated and grouped in tables. Data analyses were performed with the use of a statistical analysis package (SPSS 13.0). The proximal third of the muscle was the usual part of the muscle that received the motor points. Most of the forearm muscles were innervated from the lateral side and deep surface of the muscle. The information in this study may also be usefully applied in selective denervation procedures to balance muscles in spastic upper limbs.
Abstract:
Purpose
The Strengths and Difficulties Questionnaire (SDQ) is a behavioural screening tool for children. The SDQ is increasingly used as the primary outcome measure in population health interventions involving children, but it is not preference based; therefore, its role in allocative economic evaluation is limited. The Child Health Utility 9D (CHU9D) is a generic preference-based health-related quality-of-life measure. This study investigates the applicability of the SDQ outcome measure for use in economic evaluations and examines its relationship with the CHU9D by testing previously published mapping algorithms. The aim of the paper is to explore the feasibility of using the SDQ within economic evaluations of school-based population health interventions.
Methods
Data were available from children participating in a cluster randomised controlled trial of the school-based Roots of Empathy programme in Northern Ireland. Utility was calculated using the original and alternative CHU9D tariffs along with two SDQ mapping algorithms. t tests were performed for pairwise differences in utility values from the preference-based tariffs and mapping algorithms.
Results
Mean (standard deviation) SDQ total difficulties and prosocial scores were 12 (3.2) and 8.3 (2.1). Utility values obtained from the original tariff, alternative tariff, and mapping algorithms using five and three SDQ subscales were 0.84 (0.11), 0.80 (0.13), 0.84 (0.05), and 0.83 (0.04), respectively. Each method for calculating utility produced statistically significantly different values, except for the original tariff and the five-subscale mapping algorithm.
Conclusion
Initial evidence suggests the SDQ and CHU9D are related in some of their measurement properties. The mapping algorithm using five SDQ subscales was found to be optimal in predicting mean child health utility. Future research valuing changes in SDQ scores would build on this work.
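A minimal sketch of the pairwise comparison described in the Methods above is given below, using a paired t test on placeholder utility vectors from two scoring approaches; the values are simulated around the reported means and are not trial data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Placeholder child-level utility values from two scoring approaches
# (e.g. the original CHU9D tariff and a mapping algorithm); not trial data.
utility_tariff  = np.clip(rng.normal(0.84, 0.11, 300), 0, 1)
utility_mapping = np.clip(utility_tariff + rng.normal(-0.01, 0.05, 300), 0, 1)

t_stat, p_value = stats.ttest_rel(utility_tariff, utility_mapping)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```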
Abstract:
The genus Bursaphelenchus includes B. xylophilus (Steiner et Buhrer, 1934) Nickle, 1981, which is of worldwide economic and quarantine importance. Distinction among several species of the pinewood nematode species complex (PWNSC) is often difficult. Besides standard morphology, morphometrics and molecular biology, new tools are welcome to better understand this group. The computerized (or e-) key of this genus, presented in this communication, includes 74 species (a complete list of the valid species of the world fauna) and 35 characters that were used by the taxonomic experts of this group in the original descriptions. The morphology of the sex organs (male spicules and female vulval region) was digitized and classified to distinguish alternative types. Several qualitative characters with overlapping character states (expressions) were transformed into morphometric indices with discontinuous ranges (ratios of spicule dimensions). Characters and their states (expressions) were illustrated in detail and supplied with brief user-friendly comments. The e-key was created in the BIKEY identification system (Dianov & Lobanov, 1996-2004). The system has a built-in algorithm that ranks characters according to their diagnostic value at each step of identification. The matrix of species and character states (the structural part of the e-key database) may easily be transformed, using statistical packages, into dendrograms of general phenetic similarity (UPGMA; standard distance: mean character difference). This may be useful in a detailed analysis of the taxonomy and evolution of the genus and in splitting it into species groups based on morphology. Verification of the dendrogram using information on the species' links with insect vectors and their associated plants provided an opportunity to recognize five clusters (xylophilus, hunti, eremus sensu stricto, tusciae and piniperdae sensu stricto), which seem to be natural species groups. A hypothesis about the origin and the first stages of the evolution of the genus is proposed. A general review of the genus Bursaphelenchus is presented.
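A minimal sketch of how such a species-by-character-state matrix can be converted into a UPGMA dendrogram with the mean character difference as distance (implemented here as the city-block distance divided by the number of characters) is given below; the species and coded states are placeholders, not the e-key database.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

# Placeholder species-by-character-state matrix (rows: species, columns: coded characters).
species = ["sp_A", "sp_B", "sp_C", "sp_D"]
states = np.array([
    [1, 0, 2, 1, 0],
    [1, 1, 2, 1, 0],
    [0, 0, 1, 2, 1],
    [0, 1, 1, 2, 1],
])

# Mean character difference = city-block distance divided by the number of characters.
dist = pdist(states, metric="cityblock") / states.shape[1]
tree = linkage(dist, method="average")           # UPGMA (average linkage)
dendrogram(tree, labels=species, no_plot=True)   # set no_plot=False to draw the tree
print(tree)                                      # linkage matrix: merges and heights
```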
Abstract:
Statistical techniques are fundamental in science, and linear regression analysis is perhaps one of the most widely used methodologies. It is well known from the literature that, under certain conditions, linear regression is an extremely powerful statistical tool. Unfortunately, in practice, some of these conditions are rarely satisfied and the regression models become ill-posed, thus making the application of traditional estimation methods unfeasible. This work presents some contributions to maximum entropy theory in the estimation of ill-posed models, in particular in the estimation of linear regression models with small samples affected by collinearity and outliers. The research is developed along three lines, namely the estimation of technical efficiency with state-contingent production frontiers, the estimation of the ridge parameter in ridge regression and, finally, new developments in maximum entropy estimation. In the estimation of technical efficiency with state-contingent production frontiers, the work shows a better performance of the maximum entropy estimators compared with the maximum likelihood estimator. This good performance is evident in models with few observations per state and in models with a large number of states, which are commonly affected by collinearity. It is hoped that the use of maximum entropy estimators will contribute to the much-desired increase in empirical work with these production frontiers. In ridge regression, the greatest challenge is the estimation of the ridge parameter. Although countless procedures are available in the literature, none outperforms all the others. In this work, a new estimator of the ridge parameter is proposed, combining ridge trace analysis with maximum entropy estimation. The results obtained in the simulation studies suggest that this new estimator is one of the best procedures available for estimating the ridge parameter. The Leuven maximum entropy estimator is based on the least squares method, Shannon entropy and concepts from quantum electrodynamics. This estimator overcomes the main criticism levelled at the generalized maximum entropy estimator, since it does not require supports for the parameters and errors of the regression model. In this work, new contributions to maximum entropy theory in the estimation of ill-posed models are presented, based on the Leuven maximum entropy estimator, information theory and robust regression. The estimators developed show good performance in linear regression models with small samples affected by collinearity and outliers. Finally, some computational codes for maximum entropy estimation are presented, thereby contributing to an increase in the scarce computational resources currently available.
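As a minimal sketch of the ridge-trace component mentioned above (the maximum entropy part of the proposed estimator is not reproduced here), the code below traces ridge coefficients over a grid of ridge parameters for a small, strongly collinear simulated sample.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)

# Simulated small sample with strong collinearity (illustration only).
n = 30
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)      # nearly collinear with x1
X = np.column_stack([x1, x2])
y = 2.0 * x1 - 1.0 * x2 + rng.normal(scale=0.5, size=n)

# Ridge trace: coefficient paths over a grid of ridge parameters.
for alpha in [0.001, 0.01, 0.1, 1.0, 10.0]:
    coefs = Ridge(alpha=alpha).fit(X, y).coef_
    print(f"alpha = {alpha:7.3f}  coefficients = {np.round(coefs, 3)}")
```

Reading the trace, one looks for the smallest ridge parameter at which the coefficient estimates stabilise; the thesis combines this graphical criterion with a maximum entropy formulation to choose the parameter.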
Abstract:
The Portuguese National Statistical Institute intends to produce estimates of the mean price of housing transactions.
Abstract:
Purpose: To determine how often MRI exams and sequences are repeated in radiology departments. Methods and Materials: A self-administered questionnaire was used as the instrument and given to 57 radiographers who performed MRI exams, to determine the causes that lead to repetition. The questionnaires were interpreted and statistically analyzed through descriptive statistics and Spearman's rho correlations. Results: At a 95% confidence level, the main results suggest that patient movement during MRI exams is the principal cause of repeating these exams (mean of 3.88 on a 5-point Likert scale). However, there are also causes related to the radiographers themselves, and the results showed that the entry of incorrect imaging parameters by the operator is another major cause (N=26). Spearman's rho correlations between radiographers' length of experience and the frequency of MRI exam repetitions were weak and not significant (r=0.141; p=0.297). The correlations between radiographers' tiredness and the frequency of MRI exam repetitions were negative, weak and not significant (r= -0.151; p=0.263). Conclusion: Patient movement may disrupt the examination or degrade the images with artifacts. Level of experience does not influence the repetition of MRI exams; it seems that senior radiographers do not show the improvement in performance that might be expected. Regular training courses are recommended to improve performance, together with systematic evaluation. Several factors that would decrease MRI exam repetitions still need to be identified.
Abstract:
Pollen data have been recorded at Novi Sad in Serbia since 2000. The adopted method of producing pollen counts has been the use of five longitudinal transects that examine 19.64% of the total sample surface. However, counting five transects is time consuming, and so the main objective of this study is to investigate whether reducing the number to three or even two transects would have a significant effect on daily average and bi-hourly pollen concentrations, as well as on the main characteristics of the pollen season and long-term trends. This study has shown that there is a loss of accuracy in daily average and bi-hourly pollen concentrations (an increase in % ERROR) as the sub-sampling area is reduced from five to three or two longitudinal transects. However, this loss of accuracy does not impact on the main characteristics of the season or long-term trends. As a result, this study can be used to justify changing the sub-sampling method used at Novi Sad from five to three longitudinal transects. The use of two longitudinal transects has been ruled out because, although quicker, the counts produced: (a) had the greatest amount of % ERROR, (b) altered the amount of influence of the independent variable on the dependent variable (the slope in regression analysis) and (c) were based on a total sampled surface (7.86%) that was less than the minimum requirement recommended by the European Aerobiology Society working group on Quality Control (at least 10% of total slide area).
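A minimal sketch of the % ERROR comparison between reduced-transect and five-transect counts is given below; the daily concentrations are placeholders and the exact error definition used in the study may differ.

```python
import numpy as np

# Placeholder daily average pollen concentrations (grains/m^3) from counting
# five transects (reference) and the same slides counted on three transects.
five_transects  = np.array([120.0, 45.0, 300.0, 18.0, 75.0])
three_transects = np.array([112.0, 48.0, 310.0, 16.0, 70.0])

# Percentage error of the reduced count relative to the five-transect reference.
pct_error = 100.0 * np.abs(three_transects - five_transects) / five_transects
print(f"mean % ERROR for three transects: {pct_error.mean():.1f}%")
```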