849 results for Propagation prediction models
Abstract:
Early diagnosis of melanoma leads to the best prognosis for patients and is more likely to be achieved when those at high risk for melanoma undergo regular and systematic monitoring. However, many people rarely or never see a dermatologist. Risk prediction models (recently reviewed by Usher-Smith et al.) could assist in triaging people into preventive care appropriate for their risk profile. Most risk prediction models contain measures of phenotype, including skin, eye and hair colour, as well as genetic mutations. Almost all also contain the number and size of naevi, as well as the presence of naevi with atypical features, which are independently associated with melanoma risk. In the absence of formal population-based screening programs for melanoma in most countries worldwide, people with high-risk phenotypes may need to consider regular monitoring or self-monitoring of their naevi, especially since the vast majority of melanomas are found by people themselves or their friends and relatives. Another group of patients who require regular monitoring are those who have been successfully treated for their first melanoma, whose risk of developing a second melanoma is greatly increased. In a US study of 89,515 melanoma survivors, those with a previous diagnosis of melanoma had a 9-fold increased risk of developing a subsequent melanoma compared with the general population, equating to a rate of 3.76 per 1000 person-years, while in an Australian study, the risk of subsequent melanoma was 6 per 1000 person-years. Regular follow-up is therefore essential for melanoma survivors, especially during the first few years after the initial melanoma diagnosis.
Abstract:
Objective: The aim of this study was to develop a model capable of predicting variability in the mental workload experienced by frontline operators under routine and nonroutine conditions. Background: Excess workload is a risk that needs to be managed in safety-critical industries. Predictive models are needed to manage this risk effectively yet are difficult to develop. Much of the difficulty stems from the fact that workload prediction is a multilevel problem. Method: A multilevel workload model was developed in Study 1 with data collected from an en route air traffic management center. Dynamic density metrics were used to predict variability in workload within and between work units while controlling for variability among raters. The model was cross-validated in Studies 2 and 3 with the use of a high-fidelity simulator. Results: Reported workload generally remained within the bounds of the 90% prediction interval in Studies 2 and 3. Workload crossed the upper bound of the prediction interval only under nonroutine conditions. Qualitative analyses suggest that nonroutine events caused workload to cross the upper bound of the prediction interval because the controllers could not manage their workload strategically. Conclusion: The model performed well under both routine and nonroutine conditions and over different patterns of workload variation. Application: Workload prediction models can be used to support both strategic and tactical workload management. Strategic uses include the analysis of historical and projected workflows and the assessment of staffing needs. Tactical uses include the dynamic reallocation of resources to meet changes in demand.
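The study's central device, a prediction interval against which reported workload is checked, can be illustrated with a minimal sketch. The snippet below fits a simple ordinary-least-squares line and flags whether a new workload report falls outside an approximate 90% prediction interval; the study itself used a multilevel model with dynamic density metrics, so this is only a structural analogy, and all data below are invented.

```python
import math

def linreg_prediction_interval(x, y, x_new, z=1.645):
    """Fit y = a + b*x by ordinary least squares and return an approximate
    90% prediction interval (normal quantile) for a new observation at x_new.
    Structural sketch only; the study used a multilevel model."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s = math.sqrt(sum(r * r for r in resid) / (n - 2))
    se = s * math.sqrt(1.0 + 1.0 / n + (x_new - mx) ** 2 / sxx)
    pred = a + b * x_new
    return pred - z * se, pred + z * se

# Hypothetical traffic-density metric vs. reported workload ratings
density = [1, 2, 3, 4, 5, 6, 7, 8]
workload = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9, 8.2, 8.8]
lo, hi = linreg_prediction_interval(density, workload, 5.0)
exceeds = not (lo <= 6.0 <= hi)  # flag a report that crosses the interval bounds
```

A routine observation stays inside the interval; a nonroutine spike would set `exceeds` to true, mirroring how the study detected workload-management breakdowns.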
Abstract:
- Introduction There is limited understanding of how young adults' driving behaviour varies according to long-term substance involvement. It is possible that regular users of amphetamine-type stimulants (i.e. ecstasy (MDMA) and methamphetamine) have a greater predisposition to engage in drink/drug driving than non-users. We compare offence rates, and self-reported drink/drug driving rates, for stimulant users and non-users in Queensland, and examine contributing factors. - Methods The Natural History Study of Drug Use is a prospective longitudinal study using population screening to recruit a probabilistic sample of amphetamine-type stimulant users and non-users aged 19-23 years. At the 4½-year follow-up, consent was obtained to extract data from participants' Queensland driver records (ATS users: n=217; non-users: n=135). Prediction models of offence rates in stimulant users were developed, controlling for factors such as aggression and delinquency. - Results Stimulant users were more likely than non-users to have had a drink-driving offence (8.7% vs. 0.8%, p < 0.001). Further, about 26% of ATS users and 14% of non-users self-reported driving under the influence of alcohol during the last 12 months. Among stimulant users, drink-driving was independently associated with last-month high-volume alcohol consumption (incidence rate ratio (IRR): 5.70, 95% CI: 2.24-14.52), depression (IRR: 1.28, 95% CI: 1.07-1.52), low income (IRR: 3.57, 95% CI: 1.12-11.38), and male gender (IRR: 5.40, 95% CI: 2.05-14.21). - Conclusions Amphetamine-type stimulant use is associated with an increased long-term risk of drink-driving, owing to a number of behavioural and social factors. Inter-sectoral approaches that target long-term behaviours may reduce offending rates.
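The incidence rate ratios quoted above come from adjusted regression models, but the underlying quantity is simple to state. The sketch below computes a crude IRR with a log-normal 95% CI from offence counts and person-time; the counts are invented for illustration and are not taken from the study.

```python
import math

def incidence_rate_ratio(events_a, time_a, events_b, time_b):
    """Crude incidence rate ratio of group A vs. group B with a 95% CI
    (log-normal approximation). Illustrative only; the study's IRRs
    came from multivariable models adjusting for confounders."""
    irr = (events_a / time_a) / (events_b / time_b)
    se_log = math.sqrt(1.0 / events_a + 1.0 / events_b)
    lo = math.exp(math.log(irr) - 1.96 * se_log)
    hi = math.exp(math.log(irr) + 1.96 * se_log)
    return irr, lo, hi

# Hypothetical counts: 19 offences over 950 person-years (stimulant users)
# vs. 2 offences over 600 person-years (non-users)
irr, lo, hi = incidence_rate_ratio(19, 950, 2, 600)
```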
Abstract:
Reliability of supply of feed grain has become a high-priority issue for industry in the northern region. Expansion by major intensive livestock and industrial users of grain, combined with high inter-annual variability in seasonal conditions, has generated concern in the industry about reliability of supply. This paper reports on a modelling study undertaken to analyse the reliability of supply of feed grain in the northern region. Feed grain demand was calculated for major industries (cattle feedlots, pigs, poultry, dairy) based on their current size and rate of grain usage. Current demand was estimated to be 2.8 Mt. With the development of new industrial users (ethanol) and by projecting the current growth rate of the various intensive livestock industries, it was estimated that demand would grow to 3.6 Mt in three years' time. Feed grain supply was estimated using shire-scale yield prediction models for wheat and sorghum that had been calibrated against recent ABS production data. Other crops that contribute to a lesser extent to the total feed grain pool (barley, maize) were included by considering their production relative to the major winter and summer grains, with estimates based on available production records. This modelling approach allowed simulation of a 101-year time series of yield that showed the extent of the impact of inter-annual climate variability on yield levels. Production estimates were developed from this yield time series by including planted crop area. Area planted data were obtained from ABS and ABARE records. Total production amounts were adjusted to allow for exports and end uses that were not feed grain (flour, malt, etc.). The median feed grain supply for an average area planted was about 3.1 Mt, but this varied greatly from year to year depending on seasonal conditions and area planted. These estimates indicated that supply would not meet current demand in about 30% of years if a median-area crop were planted.
Two-thirds of the years with a supply shortfall were El Niño years. This proportion of years was halved (i.e. to 15%) if the area planted increased to that associated with the best 10% of years. Should demand grow as projected in this study, there would be few years in which it could be met if a median crop area were planted. With an area planted similar to the best 10% of years, there would still be a shortfall in nearly 50% of all years (and 80% of El Niño years). The implications of these results for supply/demand and risk management and for investment in research and development are briefly discussed.
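The shortfall statistics above amount to counting years in a simulated supply series that fall below a demand threshold. The sketch below shows the computation on a short, invented supply series; the study used a 101-year simulated series with a median near 3.1 Mt.

```python
def shortfall_fraction(supplies_mt, demand_mt):
    """Fraction of simulated years in which feed grain supply (Mt)
    falls short of a demand threshold (Mt)."""
    return sum(s < demand_mt for s in supplies_mt) / len(supplies_mt)

# Hypothetical 10-year supply series (Mt), invented for illustration
supplies = [2.1, 3.4, 2.6, 3.9, 3.1, 2.4, 4.2, 3.0, 2.7, 3.5]
current_shortfall = shortfall_fraction(supplies, 2.8)    # vs. current demand
projected_shortfall = shortfall_fraction(supplies, 3.6)  # vs. projected demand
```

With these made-up numbers, supply misses current demand in 4 of 10 years and projected demand in 8 of 10, the same kind of comparison the paper reports for its 101-year series.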
Abstract:
Site index prediction models are an important aid for forest management and planning activities. This paper introduces a multiple regression model for spatially mapping and comparing site indices for two Pinus species (Pinus elliottii Engelm. and Queensland hybrid, a P. elliottii x Pinus caribaea Morelet hybrid) based on independent variables derived from two major sources: gamma-ray spectrometry (potassium (K), thorium (Th), and uranium (U)) and a digital elevation model (elevation, slope, curvature, hillshade, flow accumulation, and distance to streams). In addition, interpolated rainfall was tested. Species were coded as a dichotomous dummy variable; interaction effects between species and the gamma-ray spectrometric and geomorphologic variables were considered. The model explained up to 60% of the variance of site index and the standard error of estimate was 1.9 m. Uranium, elevation, distance to streams, thorium, and flow accumulation correlated significantly with the spatial variation of the site index of both species, and hillshade, curvature, elevation and slope accounted for the extra variability of one species over the other. The predicted site indices varied between 20.0 and 27.3 m for P. elliottii, and between 23.1 and 33.1 m for Queensland hybrid; the advantage of Queensland hybrid over P. elliottii ranged from 1.8 to 6.8 m, with a mean of 4.0 m. This compartment-based prediction and comparison study provides not only an overview of forest productivity of the whole plantation area studied but also a management tool at compartment scale.
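The dummy-variable-with-interaction structure of such a regression can be sketched in a few lines. Every coefficient and predictor value below is an invented placeholder; the paper fit its own coefficients from the gamma-ray spectrometry and DEM variables.

```python
def site_index(elevation_m, uranium, is_hybrid,
               b0=15.0, b_elev=0.01, b_u=1.2,
               b_species=3.0, b_interact=0.005):
    """Toy site-index regression with a species dummy
    (0 = P. elliottii, 1 = Queensland hybrid) and a
    species-by-elevation interaction. All coefficients are
    hypothetical placeholders, not the paper's fitted values."""
    return (b0 + b_elev * elevation_m + b_u * uranium
            + is_hybrid * (b_species + b_interact * elevation_m))

# The species advantage at a given site is the dummy term plus the
# interaction term; here: 3.0 + 0.005 * 200 = 4.0 m
advantage = site_index(200.0, 2.0, 1) - site_index(200.0, 2.0, 0)
```

Because the interaction term depends on elevation, the advantage of one species over the other varies across the mapped area, which is how a single fitted equation can yield the site-dependent advantage range the paper reports.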
Abstract:
Objective To identify measures that most closely relate to hydration in healthy Brahman-cross neonatal calves that experience milk deprivation. Methods In a dry tropical environment, eight neonatal Brahman-cross calves were prevented from suckling for 2–3 days during which measurements were performed twice daily. Results Mean body water, as estimated by the mean urea space, was 74 ± 3% of body weight at full hydration. The mean decrease in hydration was 7.3 ± 1.1% per day. The rate of decrease was more than three-fold higher during the day than at night. At an ambient temperature of 39°C, the decrease in hydration averaged 1.1% hourly. Measures that were most useful in predicting the degree of hydration in both simple and multiple-regression prediction models were body weight, hindleg length, girth, ambient and oral temperatures, eyelid tenting, alertness score and plasma sodium. These parameters are different to those recommended for assessing calves with diarrhoea. Single-measure predictions had a standard error of at least 5%, which reduced to 3–4% if multiple measures were used. Conclusion We conclude that simple assessment of non-suckling Brahman-cross neonatal calves can estimate the severity of dehydration, but the estimates are imprecise. Dehydration in healthy neonatal calves that do not have access to milk can exceed 20% (>15% weight loss) in 1–3 days under tropical conditions and at this point some are unable to recover without clinical intervention.
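The reported rates imply a simple back-of-envelope split. If the total loss is 7.3% of hydration per 24 h and the daytime rate is roughly three times the nighttime rate (assuming, purely for illustration, a 12 h/12 h day-night split, which the abstract does not state), the hourly rates follow directly:

```python
# Assume 12 daytime hours at rate d and 12 nighttime hours at rate d/3:
#   12*d + 12*(d/3) = 7.3  =>  16*d = 7.3
day_rate = 7.3 / 16          # ~0.46 % of hydration lost per daytime hour
night_rate = day_rate / 3.0  # ~0.15 % per nighttime hour

daily_total = 12 * day_rate + 12 * night_rate  # recovers 7.3 % per day
```

Note the abstract's separate figure of 1.1% per hour at 39°C is a peak-temperature rate, higher than this averaged daytime rate, which is consistent with the day-night asymmetry.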
Abstract:
Assessment of the outcome of critical illness is complex. Severity scoring systems and organ dysfunction scores are traditional tools in mortality and morbidity prediction in intensive care. Their ability to explain risk of death is impressive for large cohorts of patients, but insufficient for an individual patient. Although events before intensive care unit (ICU) admission are prognostically important, the prediction models utilize data collected at and just after ICU admission. In addition, several biomarkers have been evaluated to predict mortality, but none has proven entirely useful in clinical practice. Therefore, new prognostic markers of critical illness are vital when evaluating the intensive care outcome. The aim of this dissertation was to investigate new measures and biological markers of critical illness and to evaluate their predictive value and association with mortality and disease severity. The impact of delay in emergency department (ED) on intensive care outcome, measured as hospital mortality and health-related quality of life (HRQoL) at 6 months, was assessed in 1537 consecutive patients admitted to medical ICU. Two new biological markers were investigated in two separate patient populations: in 231 ICU patients and 255 patients with severe sepsis or septic shock. Cell-free plasma DNA is a surrogate marker of apoptosis. Its association with disease severity and mortality rate was evaluated in ICU patients. Next, the predictive value of plasma DNA regarding mortality and its association with the degree of organ dysfunction and disease severity was evaluated in severe sepsis or septic shock. Heme oxygenase-1 (HO-1) is a potential regulator of apoptosis. Finally, HO-1 plasma concentrations and HO-1 gene polymorphisms and their association with outcome were evaluated in ICU patients. The length of ED stay was not associated with outcome of intensive care. 
The hospital mortality rate was significantly lower in patients admitted to the medical ICU from the ED than from the non-ED, and the HRQoL in the critically ill at 6 months was significantly lower than in the age- and sex-matched general population. In the ICU patient population, the maximum plasma DNA concentration measured during the first 96 hours in intensive care correlated significantly with disease severity and degree of organ failure and was independently associated with hospital mortality. In patients with severe sepsis or septic shock, the cell-free plasma DNA concentrations were significantly higher in ICU and hospital nonsurvivors than in survivors and showed a moderate discriminative power regarding ICU mortality. Plasma DNA was an independent predictor for ICU mortality, but not for hospital mortality. The degree of organ dysfunction correlated independently with plasma DNA concentration in severe sepsis and plasma HO-1 concentration in ICU patients. The HO-1 -413T/GT(L)/+99C haplotype was associated with HO-1 plasma levels and frequency of multiple organ dysfunction. Plasma DNA and HO-1 concentrations may support the assessment of outcome or organ failure development in critically ill patients, although their value is limited and requires further evaluation.
Abstract:
The planet Mars is the Earth's neighbour in the Solar System. Planetary research stems from a fundamental need, typical of mankind, to explore our surroundings. Manned missions to Mars are already being planned, and understanding the environment to which the astronauts would be exposed is of utmost importance for a successful mission. Information about the Martian environment provided by models is already used in designing the landers and orbiters sent to the red planet. In particular, studies of the Martian atmosphere are crucial for instrument design, entry, descent and landing system design, landing site selection, and aerobraking calculations. Research on planetary atmospheres can also contribute to atmospheric studies of the Earth via model testing and the development of parameterizations: even after decades of modelling the Earth's atmosphere, we are still far from perfect weather predictions. On a global level, Mars has also been experiencing climate change. The aerosol effect is one of the largest unknowns in present terrestrial climate change studies, and the role of aerosol particles in any climate is fundamental: studies of climate variations on another planet can help us better understand our own global change. In this thesis I have used an atmospheric column model for Mars to study the behaviour of the lowest layer of the atmosphere, the planetary boundary layer (PBL), and I have developed nucleation (particle formation) models for Martian conditions. The models were also coupled to study, for example, fog formation in the PBL. The PBL is perhaps the most significant part of the atmosphere for landers and humans, since we live in it and experience its state, for example, as gusty winds, night frost, and fogs. However, PBL modelling in weather prediction models is still a difficult task. Mars hosts a variety of cloud types, mainly composed of water ice particles, but CO2 ice clouds also form in the very cold polar night and at high altitudes elsewhere.
Nucleation is the first step in particle formation, and always includes a phase transition. Cloud crystals on Mars form from vapour to ice on ubiquitous, suspended dust particles. Clouds on Mars have a small radiative effect in the present climate, but it may have been more important in the past. This thesis represents an attempt to model the Martian atmosphere at the smallest scales with high resolution. The models used and developed during the course of the research are useful tools for developing and testing parameterizations for larger-scale models all the way up to global climate models, since the small-scale models can describe processes that in the large-scale models are reduced to subgrid (not explicitly resolved) scale.
Abstract:
In meteorology, observations and forecasts of a wide range of phenomena (for example, snow, clouds, hail, fog, and tornadoes) can be categorical, that is, they can take only discrete values (e.g., "snow" and "no snow"). Concentrating on satellite-based snow and cloud analyses, this thesis explores methods that have been developed for the evaluation of categorical products and analyses. Different algorithms for satellite products generate different results; sometimes the differences are subtle, sometimes all too visible. In addition to differences between algorithms, the satellite products are influenced by physical processes and conditions, such as diurnal and seasonal variation in solar radiation, topography, and land use. The analysis of satellite-based snow cover analyses from NOAA, NASA, and EUMETSAT, and of snow analyses for numerical weather prediction models from FMI and ECMWF, was complicated by the fact that we did not have true knowledge of the snow extent, and we were forced simply to measure the agreement between different products. Sammon mapping, a multidimensional scaling method, was then used to visualize the differences between the products. The trustworthiness of the results for cloud analyses [the EUMETSAT Meteorological Products Extraction Facility cloud mask (MPEF), together with the Nowcasting Satellite Application Facility (SAFNWC) cloud masks provided by Météo-France (SAFNWC/MSG) and the Swedish Meteorological and Hydrological Institute (SAFNWC/PPS)], compared with ceilometers of the Helsinki Testbed, was estimated by constructing confidence intervals (CIs). Bootstrapping, a statistical resampling method, was used to construct the CIs, especially in the presence of spatial and temporal correlation. Reference data for validation are constantly in short supply. In general, the needs of a particular project drive the requirements for evaluation, for example, for the accuracy and timeliness of the particular data and methods.
In this vein, we discuss tentatively how data provided by the general public, e.g., photos shared on the Internet photo-sharing service Flickr, can be used as a new source for validation. Results show that such data are of reasonable quality, and their use for case studies can be warmly recommended. Last, the use of cluster analysis on meteorological in-situ measurements was explored. The Autoclass algorithm was used to construct compact representations of the synoptic conditions of fog at Finnish airports.
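Percentile bootstrapping of the kind used for the cloud-mask comparison can be sketched in a few lines. The sketch below resamples i.i.d., whereas the thesis had to account for spatial and temporal correlation (which plain resampling ignores), and the hit/miss data are invented.

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.10, seed=42):
    """Percentile bootstrap confidence interval for a statistic,
    using plain i.i.d. resampling. A block-style scheme would be
    needed for spatially or temporally correlated data."""
    rng = random.Random(seed)
    n = len(data)
    boots = sorted(stat([rng.choice(data) for _ in range(n)])
                   for _ in range(n_boot))
    return (boots[int(n_boot * alpha / 2)],
            boots[int(n_boot * (1 - alpha / 2)) - 1])

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical agreement record: 1 = cloud mask matched the ceilometer
hits = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1]
lo, hi = bootstrap_ci(hits, mean)  # 90% CI for the agreement rate
```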
Abstract:
In this article, a single-phase, one-domain macroscopic model is developed for studying binary alloy solidification with moving equiaxed solid phase, along with the associated transport phenomena. In this model, issues such as thermosolutal convection, motion of solid phase relative to liquid and viscosity variations of the solid-liquid mixture with solid fraction in the mobile zone are taken into account. Using the model, the associated transport phenomena during solidification of Al-Cu alloys in a rectangular cavity are predicted. The results for temperature variation, segregation patterns, and eutectic fraction distribution are compared with data from in-house experiments. The model predictions compare well with the experimental results. To highlight the influence of solid phase movement on convection and final macrosegregation, the results of the current model are also compared with those obtained from the conventional solidification model with stationary solid phase. By including the independent movement of the solid phase into the fluid transport model, better predictions of macrosegregation, microstructure, and even shrinkage locations were obtained. Mechanical property prediction models based on microstructure will benefit from the improved accuracy of this model.
Abstract:
Estimation of creep and shrinkage is critical for computing the loss of prestress with time, assessing leak tightness, and assessing the safety margins available in containment structures of nuclear power plants. Short-term creep and shrinkage experiments have been conducted using in-house test facilities developed specifically for the present research program on 35 and 45 MPa normal concrete and 25 MPa heavy-density concrete. The extensive experimental program for creep subjects cylinders to sustained levels of load, typically for several days (until a negligible strain increase with time is observed in the creep specimen), to provide total creep strain versus time curves for the two normal-density concrete grades and the one heavy-density concrete grade at different load levels, different ages at loading, and different relative humidities. Shrinkage is also being studied on prism specimens of concrete of the same mix grades. In the first instance, creep and shrinkage prediction models reported in the literature have been used to predict the creep and shrinkage levels in the subsequent experimental data with acceptable accuracy. One part of the study comprises short macro-scale experiments and the development of an analytical model to estimate time-dependent deformation under sustained long-term loads, accounting for the composite rheology through parameters such as the characteristic strength, age of concrete at loading, relative humidity, temperature, mix proportion (cement : fine aggregate : coarse aggregate : water) and volume-to-surface ratio, together with the associated uncertainties in these variables. At the same time, it is widely believed that strength, early-age rheology, creep and shrinkage are affected by material properties at the nano-scale that are not well established.
In order to understand and improve cement and concrete properties, an investigation of the nanostructure of the composite, and of how it relates to the local mechanical properties, is being undertaken. While the results of creep and shrinkage obtained at the macro-scale and their predictions through rheological modelling are satisfactory, the nano- and micro-indentation experimental and analytical studies are presently underway. Computational mechanics based models for creep and shrinkage in concrete must necessarily account for the numerous parameters that affect their short- and long-term response. A Kelvin-type model with several elements representing the influence of the various factors that affect the behaviour is under development. The immediate short-term deformation (elastic response), the effects of relative humidity and temperature, volume-to-surface ratio, water-cement ratio and aggregate-cement ratio, load levels, and the age of concrete at loading are the parameters accounted for in this model. Inputs to this model, such as the pore structure and mechanical properties at the micro/nano scale, have been taken from scanning electron microscopy and micro/nano-indentation of the sample specimens.
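A Kelvin-type (Kelvin chain) creep model of the general kind described can be sketched generically: each spring-dashpot element with modulus E and retardation time tau contributes (stress/E)*(1 - exp(-t/tau)) to the creep strain. The element values below are placeholders, not the calibrated parameters under development in this work.

```python
import math

def kelvin_chain_strain(stress, elements, t):
    """Creep strain of a Kelvin chain at time t under constant stress.
    `elements` is a list of (modulus E, retardation time tau) pairs;
    each contributes (stress / E) * (1 - exp(-t / tau)).
    Generic textbook form, not this study's calibrated model."""
    return sum(stress / E * (1.0 - math.exp(-t / tau)) for E, tau in elements)

# Hypothetical two-element chain (placeholder units)
chain = [(30.0, 5.0), (60.0, 50.0)]
eps_1d = kelvin_chain_strain(10.0, chain, 1.0)      # early-age creep
eps_500d = kelvin_chain_strain(10.0, chain, 500.0)  # approaches 10/30 + 10/60
```

Factors such as relative humidity, volume-to-surface ratio, or age at loading enter such a model by making the element parameters functions of those variables, which is what calibration against the creep curves provides.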
Abstract:
Studies of the effects of ageing and strain rate on soil strength and deformation properties, together with field case observations, have confirmed that the formation mechanism of most landslides can be explained by deep creep theory. Before a landslide occurs, the creep rate of the soil mass increases as the factors driving the landslide intensify. On the basis of this theory, phenomenological creep equations for the early sliding stage can be established to predict landslides. Representative early models of this kind are the Saito model and the Voight model. The theoretical re-analysis in this paper shows that these models have rich theoretical content and a solid theoretical foundation, satisfy all the requirements of a landslide prediction model, and contain parameters with clear physical meaning; they therefore retain fresh vitality.
Abstract:
The thermal characterization of a green façade is a difficult task that requires a realistic level of certainty and predictive capability from models under dynamic outdoor conditions. A theoretical study of complex construction elements does not reflect reality, so a correct characterization requires testing these elements and analysing the data obtained. For this purpose, PASLINK test cells and the LORD software environment are used. Through them, the dynamic thermal transmittance of the green façade tested under real outdoor conditions is obtained.
Abstract:
This thesis proposes a methodology for retrieving vertical temperature profiles in a cloudy atmosphere from satellite radiance measurements, using artificial neural networks. Vertical temperature profiles are important initial conditions for weather prediction models and are usually retrieved from satellite radiance measurements in the infrared band. However, when these measurements are made in the presence of clouds, current techniques cannot retrieve the profile. This is a significant loss of information since, on average, 20% of image pixels indicate the presence of clouds. In this thesis, the problem is solved as a two-step inverse problem: the first step determines the radiance reaching the cloud base from the radiance measured by the satellites; the second step determines the vertical temperature profile from the radiance information provided by the first step. Temperature profile reconstructions are presented for four test cases. The results show that the adopted methodology produces satisfactory results and has great potential for use, allowing information to be incorporated over a wider region of the globe and, consequently, improving weather prediction models.
Abstract:
Horseshoe crabs (Limulus polyphemus) are valued by many stakeholders, including the commercial fishing industry, biomedical companies, and environmental interest groups. We designed a study to test the accuracy of the conversion factors that were used by NOAA Fisheries and state agencies to estimate horseshoe crab landings before mandatory reporting that began in 1998. Our results indicate that the NOAA Fisheries conversion factor consistently overestimates the weight of male horseshoe crabs, particularly those from New England populations. Because of the inaccuracy of this and other conversion factors, states are now mandated to report the number (not biomass) and sex of landed horseshoe crabs. However, accurate estimates of biomass are still necessary for use in prediction models that are being developed to better manage the horseshoe crab fishery. We recommend that managers use the conversion factors presented in this study to convert current landing data from numbers to biomass of harvested horseshoe crabs for future assessments.
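Converting count-based landings back to biomass is a weighted sum over sex-specific mean weights. The sketch below uses made-up weights and counts, not the conversion factors derived in the study.

```python
def landings_biomass_kg(counts_by_sex, mean_weight_kg):
    """Convert landings reported as counts by sex into total biomass (kg),
    using sex-specific mean weights (placeholder values below, not the
    study's conversion factors)."""
    return sum(n * mean_weight_kg[sex] for sex, n in counts_by_sex.items())

counts = {"male": 1200, "female": 800}          # hypothetical reported landings
weights = {"male": 1.5, "female": 2.8}          # hypothetical mean weights (kg)
biomass = landings_biomass_kg(counts, weights)  # ~4040 kg with these numbers
```

Using separate male and female weights is the point of the study's recommendation: a single pooled conversion factor systematically misestimates biomass when the sex ratio of landings varies, as it does between regions.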