965 results for predictions
Abstract:
The use of immobilised TiO2 for the purification of polluted water streams introduces the need to evaluate the effect of mechanisms such as the transport of pollutants from the bulk of the liquid to the catalyst surface and the transport phenomena inside the porous film. Experimental results on the effect of film thickness on the observed reaction rate, for both liquid-side and support-side illumination, are compared here with the predictions of a one-dimensional mathematical model of the porous photocatalytic slab. Good agreement was observed between the experimentally obtained photodegradation of phenol and its by-products and the corresponding model predictions. The results confirm that an optimal catalyst thickness exists and, for the films employed here, is 5 μm. Furthermore, the modelling results highlight that porosity and the intrinsic reaction kinetics are the parameters controlling the photocatalytic activity of the film: the former influences transport phenomena and light absorption characteristics, while the latter dictates the rate of reaction.
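The existence of an optimal thickness follows from two competing effects: a thicker film absorbs more light, but it is also increasingly diffusion-limited. A minimal sketch of that trade-off (first-order kinetics, Beer-Lambert light absorption, and all parameter values assumed purely for illustration, not the paper's fitted model):

```python
import numpy as np

# Toy 1D film model: observed rate = fraction of light absorbed,
# 1 - exp(-alpha*L), times a Thiele-modulus effectiveness factor
# tanh(phi)/phi that penalises internal diffusion limitation.
alpha = 3.0e5   # light absorption coefficient, 1/m (assumed)
k = 10.0        # intrinsic first-order rate constant, 1/s (assumed)
D = 1.0e-10     # effective pollutant diffusivity in the film, m^2/s (assumed)

def observed_rate(L):
    """Relative observed degradation rate for film thickness L (m)."""
    phi = L * np.sqrt(k / D)                 # Thiele modulus
    eta = np.tanh(phi) / phi                 # internal effectiveness factor
    return (1.0 - np.exp(-alpha * L)) * eta  # light absorption x diffusion

thicknesses = np.linspace(0.1e-6, 20e-6, 2000)   # 0.1 to 20 micrometres
rates = observed_rate(thicknesses)
L_opt = thicknesses[np.argmax(rates)]
print(f"optimal film thickness ~ {L_opt * 1e6:.1f} um")
```

With these made-up constants the maximum falls at a few micrometres: below it the film wastes incident light, above it the added catalyst is starved of pollutant.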
Abstract:
The University of Queensland UltraCommuter concept is an ultra-light, low-drag, hybrid-electric sports coupe designed to minimize energy consumption and environmental impact while enhancing the performance, styling, features and convenience that motorists enjoy. This paper presents a detailed simulation study of the vehicle's performance and fuel economy using ADVISOR, including a detailed description of the component models and parameters assumed. Results from the study include predictions of a 0-100 kph acceleration time of <9 s, a top speed of 170 kph, an electrical energy consumption of <67 Wh/km in ZEV mode and a petrol-equivalent fuel consumption of <2.5 L/100 km in charge-sustaining HEV mode. Overall, the results of the ADVISOR modelling confirm the UltraCommuter's potential to achieve high performance with high efficiency, and the authors look forward to a confirmation of these estimates following completion of the vehicle.
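The ZEV-mode figure can be sanity-checked with a constant-speed road-load estimate. All vehicle parameters below are assumed, plausible stand-ins for an ultra-light, low-drag coupe, not the UltraCommuter's actual specification, and this is of course far cruder than the ADVISOR drive-cycle simulation:

```python
# Back-of-envelope energy use at a steady cruise:
# road load = aerodynamic drag + rolling resistance.
rho = 1.225     # air density, kg/m^3
cd_a = 0.36     # drag coefficient x frontal area, m^2 (assumed)
mass = 600.0    # kerb mass plus driver, kg (assumed)
crr = 0.008     # rolling-resistance coefficient (assumed)
eff = 0.80      # battery-to-wheel drivetrain efficiency (assumed)
g = 9.81        # m/s^2

v = 60.0 / 3.6  # steady 60 km/h, in m/s
drag = 0.5 * rho * cd_a * v ** 2   # N
rolling = crr * mass * g           # N
# force (N) x 1000 m gives J/km; divide by efficiency and by 3600 J/Wh.
wh_per_km = (drag + rolling) * 1000.0 / eff / 3600.0
print(f"~{wh_per_km:.0f} Wh/km at a steady 60 km/h")
```

At these assumed values the estimate lands comfortably under 67 Wh/km, consistent in order of magnitude with the ZEV-mode result, which as a drive-cycle average also includes accelerations and auxiliary loads.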
Abstract:
The ability of the technique of large-amplitude Fourier transformed (FT) ac voltammetry to facilitate the quantitative evaluation of electrode processes involving electron transfer and catalytically coupled chemical reactions has been evaluated. Predictions derived on the basis of detailed simulations imply that the rate of electron transfer is crucial, as confirmed by studies on the ferrocenemethanol (FcMeOH)-mediated electrocatalytic oxidation of ascorbic acid. Thus, at glassy carbon, gold, and boron-doped diamond electrodes, the introduction of the coupled electrocatalytic reaction, while producing significantly enhanced dc currents, does not affect the ac harmonics. This outcome is as expected if the FcMeOH(0/+) process remains fully reversible in the presence of ascorbic acid. In contrast, the ac harmonic components available from FT-ac voltammetry are predicted to be highly sensitive to the homogeneous kinetics when an electrocatalytic reaction is coupled to a quasi-reversible electron-transfer process. The required quasi-reversible scenario is available at an indium tin oxide electrode. Consequently, reversible potential, heterogeneous charge-transfer rate constant, and charge-transfer coefficient values of 0.19 V vs Ag/AgCl, 0.006 cm s⁻¹ and 0.55, respectively, along with a second-order homogeneous chemical rate constant of 2500 M⁻¹ s⁻¹ for the rate-determining step in the catalytic reaction were determined by comparison of simulated responses and experimental voltammograms derived from the dc and first to fourth ac harmonic components generated at an indium tin oxide electrode. The theoretical concepts derived for large-amplitude FT ac voltammetry are believed to be applicable to a wide range of important solution-based mediated electrocatalytic reactions.
Abstract:
This study explores the accuracy and valuation implications of the application of a comprehensive list of equity multiples in the takeover context. Motivating the study are the prevalent use of equity multiples in practice, the observed long-run underperformance of acquirers following takeovers, and the scarcity of multiples-based research in the merger and acquisition setting. In exploring the application of equity multiples in this context, three research questions are addressed: (1) how accurate are equity multiples (RQ1); (2) which equity multiples are more accurate in valuing the firm (RQ2); and (3) which equity multiples are associated with greater misvaluation of the firm (RQ3). Following a comprehensive review of the extant multiples-based literature it is hypothesised that the accuracy of multiples in estimating stock market prices in the takeover context will rank as follows (from best to worst): (1) forecasted earnings multiples, (2) multiples closer to bottom-line earnings, (3) multiples based on Net Cash Flow from Operations (NCFO) and trading revenue. The relative inaccuracies in multiples are expected to flow through to equity misvaluation (as measured by the ratio of estimated market capitalisation to residual income value, or P/V). Accordingly, it is hypothesised that greater overvaluation will be exhibited for multiples based on Trading Revenue, NCFO, Book Value (BV) and earnings before interest, tax, depreciation and amortisation (EBITDA) versus multiples based on bottom-line earnings; and that multiples based on Intrinsic Value will display the least overvaluation. The hypotheses are tested using a sample of 147 acquirers and 129 targets involved in Australian takeover transactions announced between 1990 and 2005. The results show that, first, the majority of computed multiples examined exhibit valuation errors within 30 percent of stock market values.
Second, and consistent with expectations, the results provide support for the superiority of multiples based on forecasted earnings in valuing targets and acquirers engaged in takeover transactions. Although a gradual improvement in estimating stock market values is not entirely evident when moving down the Income Statement, historical earnings multiples perform better than multiples based on Trading Revenue or NCFO. Third, while multiples based on forecasted earnings have the highest valuation accuracy, they, along with Trading Revenue multiples for targets, produce the greatest overvaluation for acquirers and targets. Consistent with predictions, greater overvaluation is exhibited for multiples based on Trading Revenue for targets, and NCFO and EBITDA for both acquirers and targets. Finally, as expected, multiples based on Intrinsic Value (along with BV) are associated with the least overvaluation. Given the widespread usage of valuation multiples in takeover contexts, these findings offer a unique insight into their relative effectiveness. Importantly, the findings add to the growing body of valuation accuracy literature, especially within Australia, and should assist market participants to better understand the relative accuracy and misvaluation consequences of various equity multiples used in takeover documentation, assisting them in subsequent investment decision making.
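Mechanically, the accuracy questions reduce to a simple computation: derive a peer multiple, apply it to the firm's value driver, and measure the error against the market price. A sketch on simulated data (the peer figures and the underlying P/E level of 12 are made up; the harmonic mean is a common choice in the multiples literature because it averages E/P ratios and damps the upward bias of the arithmetic mean):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical peer sample: forecast earnings per share and prices that
# scatter around a "true" P/E of 12.
peer_eps = rng.uniform(1.0, 3.0, 20)
peer_price = 12.0 * peer_eps * rng.lognormal(0.0, 0.15, 20)

# Harmonic mean of the peer P/E ratios.
pe = peer_price / peer_eps
pe_harmonic = 1.0 / np.mean(1.0 / pe)

# Value each firm with the peer multiple and measure the valuation error,
# then report the share of firms valued within 30% of "market" price.
estimate = pe_harmonic * peer_eps
errors = np.abs(estimate - peer_price) / peer_price
within_30pct = np.mean(errors <= 0.30)
print(f"harmonic-mean P/E: {pe_harmonic:.1f}, within 30%: {within_30pct:.0%}")
```

The 30% band mirrors the accuracy threshold reported above; comparing `within_30pct` across different value drivers (forecast earnings, EBITDA, revenue, and so on) is exactly the ranking exercise the study performs.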
Abstract:
Maternally inherited diabetes and deafness (MIDD) is a maternally transmitted syndrome caused by the mitochondrial DNA (mtDNA) point mutation A3243G. It affects various organs, including the eye, with external ophthalmoparesis, ptosis, and bilateral macular pattern dystrophy.1, 2 The prevalence of retinal involvement in MIDD is high, with 50% to 85% of patients exhibiting some macular changes.1 Those changes, however, can vary dramatically between patients and within families depending on the percentage of retinal mtDNA mutations, making it difficult to make predictions about an individual's visual prognosis...
Abstract:
Diagnosis threat is a psychosocial factor that has been proposed to contribute to poor outcomes following mild traumatic brain injury (mTBI). This threat is thought to impair the cognitive test performance of individuals with mTBI because of negative injury stereotypes. University students (N = 45, 62.2% female) with a history of mTBI were randomly allocated to a diagnosis threat (DT, n = 15), reduced threat (DT-reduced, n = 15) or neutral (n = 15) group. The reduced threat condition invoked a positive stereotype (i.e., that people with mTBI can perform well on cognitive tests). All participants were given neutral instructions before they completed baseline tests of: a) objective cognitive function across a number of domains; b) psychological symptoms; and c) post-concussion syndrome (PCS) symptoms, including self-reported cognitive and emotional difficulties. Participants then received either neutral, DT or DT-reduced instructions before repeating the tests. Results were analyzed using separate mixed-model ANOVAs, one for each dependent measure. The only significant result was for the 2 × 3 ANOVA on an objective test of attention/working memory, Digit Span, p < .05, such that the DT-reduced group performed better than the other groups, which did not differ from each other. Although not consistent with predictions or earlier DT studies, the absence of group differences on most tests fits with several recent DT findings. The results of this study suggest that it is timely to reconsider the role of DT as a unique contributor to poor mTBI outcome.
Abstract:
Natural disasters can have adverse effects on human lives. To raise awareness of research and better combat future events, it is important to identify recent research trends in the area of post-disaster reconstruction (PDR). The authors used a three-round literature review strategy to study journal papers published in the last decade that are related to PDR with specific conditions, using the Scopus search engine. A wide range of PDR-related papers was examined from a general perspective in the first two rounds, while the final round established 88 papers as target publications through visual examination of the abstracts, keywords and, as necessary, main texts. These papers were analysed in terms of research origins, active researchers, research organisations, most cited papers, regional concerns, major themes and deliverables, for clues to past trends and future directions. The need for appropriate PDR research is increasingly recognised. The number of publications increased fivefold from 2002 to 2012; for PDR research with a construction perspective, the increase is sixfold. Developing countries, such as those in Asia, attract almost 50% of researchers' attention in terms of regional concerns, while the US is the single most studied country (24%); Africa is hardly represented. Researchers in developed countries lead worldwide PDR research, which contrasts with the need for expertise in developing countries. Past works focused on waste management, stakeholder analysis, resourcing, infrastructure issues, resilience and vulnerability, reconstruction approaches, sustainable reconstruction and governance issues. Future research should respond to resourcing, integrated development, sustainability and resilience building to cover the gaps. By means of a holistic summary and structured analysis of key patterns, the authors hope to provide streamlined access to existing research findings and make predictions of future trends. They also hope to encourage a more holistic approach to PDR research and international collaborations.
Abstract:
Most mathematical models of collective cell spreading make the standard assumption that the cell diffusivity and cell proliferation rate are constants that do not vary across the cell population. Here we present a combined experimental and mathematical modeling study which aims to investigate how differences in the cell diffusivity and cell proliferation rate amongst a population of cells can impact the collective behavior of the population. We present data from a three-dimensional transwell migration assay which suggests that the cell diffusivity of some groups of cells within the population can be as much as three times higher than the cell diffusivity of other groups of cells within the population. Using this information, we explore the consequences of explicitly representing this variability in a mathematical model of a scratch assay where we treat the total population of cells as two, possibly distinct, subpopulations. Our results show that when we make the standard assumption that all cells within the population behave identically we observe the formation of moving fronts of cells where both subpopulations are well-mixed and indistinguishable. In contrast, when we consider the same system where the two subpopulations are distinct, we observe a very different outcome where the spreading population becomes spatially organized with the more motile subpopulation dominating at the leading edge while the less motile subpopulation is practically absent from the leading edge. These modeling predictions are consistent with previous experimental observations and suggest that standard mathematical approaches, where we treat the cell diffusivity and cell proliferation rate as constants, might not be appropriate.
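The spatial-sorting prediction can be reproduced with a minimal sketch of a two-subpopulation reaction-diffusion model (nondimensional parameters assumed for illustration, not the paper's fitted values): subpopulations u and v proliferate logistically toward a shared carrying capacity, and v diffuses three times faster, echoing the transwell observation.

```python
import numpy as np

dx, dt = 0.1, 0.005
x = np.arange(0.0, 20.0 + dx, dx)
Du, Dv, lam = 0.25, 0.75, 1.0        # Dv = 3 * Du (assumed)

u = np.where(x < 5.0, 0.4, 0.0)      # scratch assay: both subpopulations
v = np.where(x < 5.0, 0.4, 0.0)      # start confined to the left region

def laplacian(c):
    """Second difference with zero-flux (no cells leave) boundaries."""
    out = np.empty_like(c)
    out[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    out[0] = 2.0 * (c[1] - c[0]) / dx**2
    out[-1] = 2.0 * (c[-2] - c[-1]) / dx**2
    return out

for _ in range(1000):                 # explicit Euler steps to t = 5
    crowding = lam * (1.0 - u - v)    # shared carrying capacity
    u = u + dt * (Du * laplacian(u) + u * crowding)
    v = v + dt * (Dv * laplacian(v) + v * crowding)

edge = np.flatnonzero(u + v > 0.05).max()   # index of the leading edge
print(f"at the front: less motile u = {u[edge]:.3f}, "
      f"more motile v = {v[edge]:.3f}")
```

With these assumed values the more motile subpopulation dominates the leading edge while the less motile one is practically absent there, matching the spatial organization described above.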
Abstract:
Receptor tyrosine kinases (RTKs) and their downstream signalling pathways have long been hypothesized to play key roles in melanoma development. A decade ago, evidence was derived largely from animal models, RTK expression studies and detection of activated RAS isoforms in a small fraction of melanomas. Predictions that overexpression of specific RTKs implied increased kinase activity and that some RTKs would show activating mutations in melanoma were largely untested. However, technological advances including rapid gene sequencing, siRNA methods and phospho-RTK arrays now give a more complete picture. Mutated forms of RTK genes including KIT, ERBB4, the EPH and FGFR families and others are known in melanoma. Additional over- or underexpressed RTKs and also protein tyrosine phosphatases (PTPs) have been reported, and activities measured. Complex interactions between RTKs and PTPs are implicated in the abnormal signalling driving aberrant growth and survival in malignant melanocytes, and indeed in normal melanocytic signalling including the response to ultraviolet radiation. Kinases are considered druggable targets, so characterization of global RTK activity in melanoma should assist the rational development of tyrosine kinase inhibitors for clinical use. © 2011 John Wiley & Sons A/S.
Abstract:
In this paper, a refined classic noise prediction method based on the VISSIM and FHWA noise prediction models is formulated to analyze the sound level contributed by traffic on the Nanjing Lukou airport connecting freeway before and after widening. The aims of this research are to (i) assess the traffic noise impact on the Nanjing University of Aeronautics and Astronautics (NUAA) campus before and after freeway widening, (ii) compare the prediction results with field data to test the accuracy of the method, and (iii) analyze the relationship between traffic characteristics and sound level. The results indicate that the mean difference between model predictions and field measurements is acceptable. The traffic composition study indicates that buses (including mid-sized trucks) and heavy goods vehicles contribute a significant proportion of total noise power despite their low traffic volume. In addition, speed analysis offers an explanation for the minor differences in noise level across time periods. Future work will aim at reducing model error by focusing on noise barrier analysis using the FEM/BEM method and by modifying the vehicle noise emission equation through field experimentation.
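The disproportionate contribution of heavy vehicles follows from how decibel levels combine: class levels are summed as acoustic energy, not arithmetically. The sketch below is simplified bookkeeping, not the actual FHWA model equations, and every number in it (volumes, reference emission levels, distances, decay exponent) is an assumed placeholder:

```python
import numpy as np

classes = {              # hourly volume, reference level at 15 m (dBA), assumed
    "car":   (3000, 72.0),
    "bus":   (150, 82.0),    # includes mid-sized trucks
    "truck": (100, 86.0),    # heavy goods vehicles
}
distance, ref_dist = 60.0, 15.0   # receiver at 60 m (assumed)

levels = []
for volume, ref_level in classes.values():
    levels.append(ref_level
                  + 10 * np.log10(volume)                # crude traffic-flow term
                  - 15 * np.log10(distance / ref_dist))  # soft-ground decay

# Energy (logarithmic) addition of the per-class levels.
total = 10 * np.log10(np.sum(10 ** (np.array(levels) / 10.0)))
print(f"combined hourly level: {total:.1f} dBA")
```

Even with these made-up numbers, the bus and truck classes together contribute more acoustic energy than the far more numerous cars, echoing the traffic-composition finding above.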
Abstract:
Occupational stress research has consistently demonstrated many negative effects of work stressors on employee adjustment (i.e., job-related attitudes and health). Considerable literature also describes potential moderators of this relationship. While research has revealed that different workplace identifications can have significant positive effects on employee adjustment, it has neglected to investigate their potential stress-buffering effects. Based on identity theories, it was predicted that stress-buffering effects of different types of identifications (distal versus proximal) would be revealed when the identification type and employee adjustment outcome type (distal versus proximal) were congruent. Predictions were tested with an employee sample from five human service nonprofit organizations (N = 337). Hierarchical multiple regression analyses revealed that main and moderated effects relating to identification supported the notion that occupational stress would be reduced when there was congruence of distal and proximal identifications and distal and proximal outcome types. However, stress-buffering effects not in line with the hypotheses were also found for both high and low identifiers, posing questions for the definitions of distal and proximal identifications. Findings are discussed in terms of theoretical and practical implications.
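Statistically, a stress-buffering effect is a moderated regression: adjustment is regressed on the stressor, the identification, and their product, and buffering appears as a positive interaction coefficient. A sketch on simulated data (the sample size matches the study, but the data and the assumed interaction of +0.3 are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 337
stress = rng.normal(0, 1, n)   # work stressor (standardised)
ident = rng.normal(0, 1, n)    # workplace identification (standardised)
# Buffering built in: the negative effect of stress on adjustment
# weakens as identification rises (assumed interaction of +0.3).
adjust = (-0.5 * stress + 0.4 * ident
          + 0.3 * stress * ident + rng.normal(0, 1, n))

# Moderated regression: adjustment ~ stress + identification + product.
X = np.column_stack([np.ones(n), stress, ident, stress * ident])
beta, *_ = np.linalg.lstsq(X, adjust, rcond=None)
print(f"stress: {beta[1]:.2f}, interaction: {beta[3]:.2f}")
```

In the hierarchical version used in the study, the product term is entered in a later step than the main effects, so its incremental contribution can be tested separately.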
Abstract:
This study considered the problem of predicting survival based on three alternative models: a single Weibull, a mixture of Weibulls and a cure model. Instead of the common procedure of choosing a single “best” model, where “best” is defined in terms of goodness of fit to the data, a Bayesian model averaging (BMA) approach was adopted to account for model uncertainty. This was illustrated using a case study in which the aim was the description of lymphoma cancer survival with covariates given by phenotypes and gene expression. The results of this study indicate that if the sample size is sufficiently large, one of the three models emerges as having the highest probability given the data, as indicated by the goodness-of-fit measure, the Bayesian information criterion (BIC). However, when the sample size was reduced, no single model was revealed as “best”, suggesting that a BMA approach would be appropriate. Although a BMA approach can compromise on goodness of fit to the data (when compared to the true model), it can provide robust predictions and facilitate more detailed investigation of the relationships between gene expression and patient survival.
Keywords: Bayesian modelling; Bayesian model averaging; Cure model; Markov chain Monte Carlo; Mixture model; Survival analysis; Weibull distribution
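The BIC-based averaging step can be sketched directly: exp(-BIC/2) approximates each model's marginal likelihood, so the normalised values approximate posterior model probabilities under equal priors, and predictions are averaged with those weights. All numbers below (BIC values, per-model median survival estimates) are assumed for illustration:

```python
import numpy as np

# Assumed BICs for the single Weibull, Weibull mixture and cure model.
bics = np.array([1210.4, 1208.1, 1215.9])
# Each model's predicted median survival, in years (assumed).
median_surv = np.array([4.2, 5.1, 4.7])

# Shift by the minimum BIC for numerical stability before exponentiating.
rel = bics - bics.min()
weights = np.exp(-0.5 * rel)
weights /= weights.sum()          # approximate posterior model probabilities

bma_median = np.sum(weights * median_surv)
print(f"weights: {weights.round(3)}, BMA median survival: {bma_median:.2f} y")
```

When one BIC is far below the others (the large-sample case above), its weight approaches 1 and BMA collapses to model selection; when the BICs are close (the small-sample case), the average genuinely blends the three models.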
Abstract:
Floods are among the most devastating events affecting primarily tropical, archipelagic countries such as the Philippines. With current predictions of climate change including rising sea levels, intensification of typhoon strength and a general increase in the mean annual precipitation throughout the Philippines, it has become paramount to prepare for the future so that the increased risk of floods does not translate into more economic and human loss for the country. Field work and data gathering were done within the framework of an internship at the former German Technical Cooperation (GTZ), in cooperation with the Local Government Unit of Ormoc City, Leyte, The Philippines, in order to develop a dynamic computer-based flood model for the basin of the Pagsangaan River. To this end, different geo-spatial analysis tools such as PCRaster and ArcGIS, hydrological analysis packages and basic engineering techniques were assessed and implemented. The aim was to develop a dynamic flood model and to use the development process to determine the required data, their availability and their impact on the results, as a case study for flood early warning systems in the Philippines. The hope is that such projects can help to reduce flood risk by including the results of worst-case scenario analyses and current climate change predictions in city planning for municipal development, monitoring strategies and early warning systems. The project was developed using a 1D-2D coupled model in SOBEK (Deltares hydrological modelling software) and was also used as a case study to analyze and understand the influence of factors such as land use, schematization, time step size and tidal variation on the flood characteristics. Several sources of relevant satellite data were compared, such as Digital Elevation Models (DEMs) from ASTER and SRTM data, as well as satellite rainfall data from the GIOVANNI server (NASA) and field gauge data.
Different methods were used in the attempt to partially calibrate and validate the model, which was then used to simulate and study two climate change scenarios based on scenario A1B predictions. It was observed that large areas currently considered not prone to floods will become low-risk flood areas (0.1-1 m water depth). Furthermore, larger sections of the floodplains upstream of the Lilo-an's Bridge will become moderate-risk flood areas (1-2 m water depth). The flood hazard maps created during the development of the present project will be presented to the LGU, and the model will be used by GTZ's Local Disaster Risk Management Department to create a larger set of possible flood-prone areas related to rainfall intensity and to study possible improvements to the current early warning system and to the monitoring of the basin section belonging to Ormoc City. Recommendations on further enhancement of the geo-hydro-meteorological data, to improve the model's accuracy mainly in areas of interest, will also be presented to the LGU.
Abstract:
This paper presents a comparative study on the response of a buried tunnel to surface blast using the arbitrary Lagrangian-Eulerian (ALE) and smooth particle hydrodynamics (SPH) techniques. Since explosive tests with real physical models are extremely risky and expensive, the results of a centrifuge test were used to validate the numerical techniques. The numerical study shows that the ALE predictions were faster and closer to the experimental results than those from the SPH simulations, which over-predicted the strains. The findings of this research demonstrate the superiority of the ALE modelling techniques for the present study. They also provide a comprehensive understanding of the preferred ALE modelling techniques which can be used to investigate the surface blast response of underground tunnels.
Abstract:
BACKGROUND: The objective of this study was to determine whether it is possible to predict driving safety in individuals with homonymous hemianopia or quadrantanopia based upon a clinical review of neuro-images that are routinely available in clinical practice. METHODS: Two experienced neuro-ophthalmologists viewed a summary report of the CT/MRI scans of 16 participants with homonymous hemianopic or quadrantanopic field defects, which provided information regarding the site and extent of the lesion, and made predictions regarding whether each participant would be safe or unsafe to drive. Driving safety was independently defined using two measures: (1) the potential for safe driving, rated through a standardized on-road driving assessment conducted by a certified driving rehabilitation specialist just prior to the study; and (2) state-recorded motor vehicle crashes (all crashes and at-fault) over the previous 5 years. RESULTS: The ability to predict driving safety was highly variable regardless of the driving outcome measure, ranging from 31% to 63% (kappa levels ranged from -0.29 to 0.04). The level of agreement between the neuro-ophthalmologists was also only fair (kappa = 0.28). CONCLUSIONS: The findings suggest that clinical evaluation by neuro-ophthalmologists of summary reports of currently available neuro-images is not predictive of driving safety. Future research should be directed at identifying and/or developing alternative tests or strategies to better enable clinicians to make these predictions.
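The agreement statistics reported above are Cohen's kappa, which discounts the agreement two raters would reach by chance given their marginal rates of calling participants "safe". A sketch with made-up safe/unsafe ratings for 16 participants (not the study's actual data):

```python
import numpy as np

# Hypothetical safe (1) / unsafe (0) calls by two raters.
rater_a = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1])
rater_b = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0])

p_obs = np.mean(rater_a == rater_b)       # raw observed agreement
# Chance agreement from each rater's marginal rate of saying "safe".
pa, pb = rater_a.mean(), rater_b.mean()
p_chance = pa * pb + (1 - pa) * (1 - pb)
kappa = (p_obs - p_chance) / (1 - p_chance)
print(f"observed {p_obs:.2f}, chance {p_chance:.2f}, kappa {kappa:.2f}")
```

Note how a seemingly respectable raw agreement shrinks once chance agreement is removed, which is why a kappa of 0.28 counts as only fair.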