827 results for data-based reporting
Abstract:
The increase in the number of financial restatements in recent years has resulted in a significant decrease in the market capitalization of restating companies. Prior literature does not differentiate between single and multiple restatement announcements. This research investigates the inter-relationships among multiple financial restatements, corporate governance, market microstructure, and the firm's rate of return in the form of three essays, differentiating between companies with single and multiple restatement announcements. The first essay examines the stock performance of companies announcing financial restatements multiple times. The postulation is that prior research overestimates the abnormal return by not separating single-restatement companies from multiple-restatement companies. This study investigates how the market penalizes companies that announce restatements more than once. Differentiating the restatement announcement data by the number of restatement announcements, the results support the non-persistence hypothesis: the market has no memory, and the negative abnormal returns obtained after each restatement announcement are completely random. The second essay examines multiple restatement announcements and the resulting information asymmetry around the announcement day. This study examines the pattern of information asymmetry for these announcements in terms of whether the bid-ask spread widens around the announcement day. The empirical analysis supports the hypothesis that the spread widens not only around the first restatement announcement day but around every subsequent announcement day as well. The third essay empirically examines the financial and corporate governance characteristics of single and multiple restatement announcement companies. The analysis shows that corporate governance variables influence the occurrence of multiple restatement announcements and can distinguish multiple restatement announcement companies from single restatement announcement companies.
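To make the event-study logic above concrete, here is a minimal, hedged sketch of a market-model abnormal-return calculation around an announcement day; the return series, estimation window, and event window are simulated and hypothetical, not the thesis's data or its exact methodology.

```python
# Hedged sketch of a market-model event study around a restatement announcement.
# The return series and window lengths below are hypothetical, not the thesis's data.
import numpy as np

def cumulative_abnormal_return(stock_ret, market_ret, est_len=120, event_win=(-1, 1)):
    """Estimate alpha/beta on an estimation window, then sum abnormal returns
    over the event window centred on the announcement day (index est_len)."""
    est_s, est_m = stock_ret[:est_len], market_ret[:est_len]
    beta, alpha = np.polyfit(est_m, est_s, 1)            # market model: r_s = a + b*r_m
    lo, hi = event_win
    idx = np.arange(est_len + lo, est_len + hi + 1)      # event-window indices
    ar = stock_ret[idx] - (alpha + beta * market_ret[idx])
    return ar, ar.sum()

rng = np.random.default_rng(0)
market = rng.normal(0.0005, 0.01, 130)
stock = 0.0002 + 1.1 * market + rng.normal(0, 0.01, 130)
stock[120] -= 0.05                                        # hypothetical announcement-day drop
ar, car = cumulative_abnormal_return(stock, market)
print(f"abnormal returns: {np.round(ar, 4)}, CAR[-1,+1]: {car:.4f}")
```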
Abstract:
The Last Interglacial (LIG, 129-116 thousand years before present, ka) represents a test bed for climate model feedbacks in warmer-than-present high-latitude regions. However, mainly because aligning palaeoclimatic archives of different types and from different parts of the world is not trivial, a spatio-temporal picture of LIG temperature changes is difficult to obtain. Here, we have selected 47 polar ice core and sub-polar marine sediment records and developed a strategy to align them onto the recent AICC2012 ice core chronology. We provide the first compilation of high-latitude temperature changes across the LIG associated with a coherent temporal framework built between ice core and marine sediment records. Our new data synthesis highlights non-synchronous maximum temperature changes between the two hemispheres, with Southern Ocean and Antarctic records showing an early warming compared to North Atlantic records. We also observe that warmer-than-present-day conditions persist for a longer period in southern high latitudes than in northern high latitudes. Finally, the amplitude of temperature change recorded at the onset and the demise of the LIG is larger at high northern latitudes than at high southern latitudes. We have also compiled four data-based time slices of temperature anomalies (relative to present-day conditions) at 115 ka, 120 ka, 125 ka, and 130 ka, and quantitatively estimated temperature uncertainties that include relative dating errors. This provides an improved benchmark for performing more robust model-data comparisons. The surface temperature simulated by two General Circulation Models (CCSM3 and HadCM3) for 130 ka and 125 ka is compared to the corresponding time slice data synthesis. This comparison shows that the models predict warmer-than-present conditions earlier than documented in the North Atlantic, while neither model is able to reproduce the reconstructed early Southern Ocean and Antarctic warming. Our results highlight the importance of producing a sequence of time slices rather than a single time slice averaging the LIG climate conditions.
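As a rough illustration of how a data-based time slice with dating uncertainty might be assembled, the sketch below samples relative age errors and propagates them through a single hypothetical proxy record; the record, target ages, and error magnitudes are invented, and this is not the compilation's actual algorithm.

```python
# Hedged sketch of building a "time slice" anomaly with dating uncertainty:
# sample age offsets, re-read the record at the target age, and summarise the
# spread. The record, target ages, and error magnitudes are hypothetical.
import numpy as np

rng = np.random.default_rng(6)
ages = np.arange(110, 135, 0.5)                        # ka, hypothetical proxy record
temps = 2.0 * np.exp(-((ages - 127) / 4.0) ** 2)       # anomaly vs present (degC)

def time_slice_anomaly(target_ka, dating_sigma_ka=1.5, proxy_sigma=0.5, n=5000):
    age_err = rng.normal(0, dating_sigma_ka, n)        # relative dating error
    sampled = np.interp(target_ka + age_err, ages, temps)
    sampled += rng.normal(0, proxy_sigma, n)           # proxy calibration error
    return sampled.mean(), sampled.std()

for slice_ka in (115, 120, 125, 130):
    mean, sd = time_slice_anomaly(slice_ka)
    print(f"{slice_ka} ka: anomaly = {mean:+.2f} +/- {sd:.2f} degC")
```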
Abstract:
Recent endogenous processes drive dynamic movements in the lithosphere and generate varied relief forms, even in passive continental margin settings such as the study area of this work in northeastern Brazil. The reactivation of Precambrian basement structures after the breakup of South America and Africa in the Cretaceous played an important role in the evolution of the marginal basins and in the generation of relief forms. These morphodynamic characteristics can be readily observed in marginal basins that exhibit strong evidence of fault reactivation. The purpose of this study is to investigate the influence of morphotectonic processes on the landscape structuring of the Paraíba Basin. To this end, we used aeromagnetic data, high-resolution images from the Shuttle Radar Topography Mission (SRTM), structural geological data, deep well data, and geological field data. Based on these data, it was observed that some preexisting structures in the crystalline basement coincide with magnetic and topographic lineaments interpreted as fault reactivations affecting the post-Miocene units of the Paraíba Basin. Faults that offset lithostratigraphic units provide evidence that tectonic activity associated with deposition and erosion in the Paraíba Basin occurred from the Cretaceous to the Quaternary. The neotectonic activity that occurred in the Paraíba Basin was able to influence the deposition of sedimentary units and the resulting landforms, indicating that the deposition of post-Cretaceous units was influenced by reactivation of Precambrian basement structures in this part of the Brazilian continental margin.
Abstract:
In this thesis, research on tsunami remote sensing using Global Navigation Satellite System-Reflectometry (GNSS-R) delay-Doppler maps (DDMs) is presented. Firstly, a process for simulating GNSS-R DDMs of a tsunami-dominated sea surface is described. In this method, the bistatic scattering Zavorotny-Voronovich (Z-V) model, the sea surface mean square slope model of Cox and Munk, and the tsunami-induced wind perturbation model are employed. The feasibility of the Cox and Munk model under a tsunami scenario is examined by comparing the Cox and Munk model-based scattering coefficient with the Jason-1 measurement. A good consistency between these two results is obtained, with a correlation coefficient of 0.93. After confirming the applicability of the Cox and Munk model for a tsunami-dominated sea, this work provides simulations of the scattering coefficient distribution and the corresponding DDMs of a fixed region of interest before and during the tsunami. Furthermore, by subtracting the tsunami-free simulation results from those with a tsunami present, the tsunami-induced variations in scattering coefficients and DDMs can be clearly observed. Secondly, a scheme to detect tsunamis and estimate tsunami parameters from such tsunami-dominated sea surface DDMs is developed. As a first step, a procedure to determine tsunami-induced sea surface height anomalies (SSHAs) from DDMs is demonstrated and a tsunami detection precept is proposed. Subsequently, the tsunami parameters (wave amplitude, direction and speed of propagation, wavelength, and the tsunami source location) are estimated based upon the detected tsunami-induced SSHAs. In application, the sea surface scattering coefficients are unambiguously retrieved by employing the spatial integration approach (SIA) and the dual-antenna technique. Next, the effective wind speed distribution can be restored from the scattering coefficients. Assuming all DDMs are of a tsunami-dominated sea surface, the tsunami-induced SSHAs can be derived with knowledge of the background wind speed distribution. In addition, the SSHA distribution resulting from the tsunami-free DDM (which is supposed to be zero) is treated as an error map introduced during the overall retrieval stage and is used to prevent such errors from influencing subsequent SSHA results. In particular, a tsunami detection procedure is conducted to judge whether the SSHAs are truly tsunami-induced through a fitting process, which makes it possible to reduce false alarms. After this step, tsunami parameter estimation proceeds based upon the fitted results from the tsunami detection procedure. Moreover, an additional method is proposed for estimating tsunami propagation velocity, which is believed to be more suitable for real-world scenarios. The above-mentioned tsunami-dominated sea surface DDM simulation, tsunami detection precept, and parameter estimation have been tested with simulated data based on the 2004 Sumatra-Andaman tsunami event.
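For orientation, the sketch below combines the textbook Cox and Munk (1954) clean-surface slope variance with the standard geometric-optics form of the normalized scattering cross section; it is a simplified stand-in for the Z-V simulation described above, and the wind speeds, incidence angle, and Fresnel term are hypothetical.

```python
# Hedged sketch: Cox-and-Munk slope variance fed into the geometric-optics
# (Kirchhoff stationary-phase) backscatter coefficient. This is the textbook
# form of the models named in the abstract, not the thesis's exact code;
# wind speeds, Fresnel coefficient, and angle below are hypothetical.
import numpy as np

def cox_munk_mss(wind_speed):
    """Total mean square slope of a clean sea surface (Cox & Munk, 1954)."""
    return 0.003 + 5.12e-3 * wind_speed

def sigma0_geometric_optics(theta_rad, wind_speed, fresnel_sq=0.65):
    """Normalized radar cross section for an isotropic Gaussian slope PDF."""
    mss = cox_munk_mss(wind_speed)
    return (fresnel_sq / (mss * np.cos(theta_rad) ** 4)
            * np.exp(-np.tan(theta_rad) ** 2 / mss))

for u in (5.0, 10.0):                      # hypothetical wind speeds, m/s
    theta = np.radians(10.0)               # incidence angle near specular
    print(f"U = {u:4.1f} m/s  ->  sigma0 = {sigma0_geometric_optics(theta, u):.2f}")
```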
Abstract:
This study aims to evaluate the relationship between the export profile and the GDP growth rate of African economies. Chapter 1 presents the literature on the subject and studies that analyze the specific case of Africa. There seems to be a consensus that exports contribute to economic growth; however, there is no consensus on the benefits incorporated from the products exported. The divergence lies between the Natural Resource Curse approach, in which a concentration of exports in commodities does not contribute to economic growth, and another line of work that supports the idea that no such relation exists. Chapter 2 presents, through descriptive analysis, macroeconomic and international trade data for African economies. Based on data from 52 countries for the period 1990-2014, it can be observed that the African continent has improved in macroeconomic terms, with increased exports and economic growth rates, suggesting a positive relationship between the variables. Trade indicators show Africa's integration into the global economy, with the European Union, the USA, China, and some emerging countries as main partners. In addition, the analysis shows that exports are concentrated in oil and agricultural commodities. Most African countries face a negative trade balance, depending on exports of primary products with low added value and imports of manufactured goods. Finally, Chapter 3 presents an empirical investigation using panel data analysis. The results suggest, in general, that exports are important for explaining African economic growth and that the growth rate of African economies can be stimulated by expanding the share of exports in GDP. The estimated coefficients are positive and statistically significant in both the fixed-effects estimation and the System GMM estimation. Estimation of growth models with fixed or random effects indicates a direct and statistically significant relationship between oil and mineral exports and the growth rate of African countries. Thus, the export profile turns out to be important in determining the growth rate. For the period analyzed, the results obtained from the estimates do not corroborate the arguments of the so-called Natural Resource Curse literature, since natural resource exports, especially oil and minerals, were relevant in explaining the growth rate performance of these economies.
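A minimal sketch of the fixed-effects (within) estimator used in such growth regressions is given below; the panel is simulated with an arbitrary true coefficient, so it illustrates the estimation approach rather than reproducing the chapter's data or results.

```python
# Hedged sketch of the within (fixed-effects) estimator for a growth-on-exports
# panel regression. The panel below is simulated; the thesis's actual dataset
# covers 52 African countries over 1990-2014.
import numpy as np

rng = np.random.default_rng(1)
n_countries, n_years = 52, 25
country = np.repeat(np.arange(n_countries), n_years)
export_share = rng.uniform(5, 60, n_countries * n_years)       # exports as % of GDP
alpha_i = rng.normal(0, 1, n_countries)[country]               # country fixed effects
growth = 0.05 * export_share + alpha_i + rng.normal(0, 1, country.size)

def within_estimator(y, x, group):
    """Demean y and x within each group, then run OLS on the demeaned data."""
    y_dm = y - np.bincount(group, y)[group] / np.bincount(group)[group]
    x_dm = x - np.bincount(group, x)[group] / np.bincount(group)[group]
    return float(x_dm @ y_dm / (x_dm @ x_dm))

print(f"fixed-effects slope on export share: {within_estimator(growth, export_share, country):.3f}")
```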
Abstract:
A landfill represents a complex and dynamically evolving structure that can be stochastically perturbed by exogenous factors. Both thermodynamic (equilibrium) and time-varying (non-steady-state) properties of a landfill are affected by spatially heterogeneous and nonlinear subprocesses that combine with constraining initial and boundary conditions arising from the associated surroundings. While multiple attempts have been made to model landfill statistics by incorporating spatially dependent parameters on the one hand (data-based approach) and continuum dynamical mass-balance equations on the other (equation-based modelling), practically no attempt has been made to amalgamate these two approaches while also incorporating the inherent stochastically induced fluctuations affecting the overall process. In this article, we implement a minimalist scheme for modelling the time evolution of a realistic three-dimensional landfill through a reaction-diffusion based approach, focusing on the coupled interactions of four key variables - solid mass density, hydrolysed mass density, acetogenic mass density and methanogenic mass density - that are themselves stochastically affected by fluctuations, coupled with diffusive relaxation of the individual densities in the ambient surroundings. Our results indicate that, close to the linearly stable limit, the large-time steady-state properties, arising out of a series of complex coupled interactions between the stochastically driven variables, are scarcely affected by the biochemical growth-decay statistics. Our results clearly show that an equilibrium landfill structure is primarily determined by the solid and hydrolysed mass densities alone, rendering the other variables statistically "irrelevant" in this (large-time) asymptotic limit. The other major implication of incorporating stochasticity in the landfill evolution dynamics is the greatly reduced production times of the plants, which are now approximately 20-30 years instead of the 50 years and above predicted by previous deterministic models. The predictions from this stochastic model are in conformity with available experimental observations.
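The following is a heavily reduced, hedged sketch of the modelling idea: two of the four densities evolving under reaction, diffusion, and multiplicative noise on a one-dimensional grid. The rate constants, noise amplitude, and boundary conditions are hypothetical, and the actual model is three-dimensional with four coupled variables.

```python
# Hedged, heavily simplified 1-D sketch of the stochastic reaction-diffusion idea:
# solid mass hydrolyses into hydrolysed mass, both diffuse, both receive noise.
# Rate constants, noise amplitude, and grid are hypothetical.
import numpy as np

nx, nt, dx, dt = 50, 20000, 1.0, 0.01
k_h, D, noise = 0.05, 0.1, 0.02            # hydrolysis rate, diffusivity, noise amplitude
rng = np.random.default_rng(2)

solid = np.ones(nx)                        # initial solid mass density
hydro = np.zeros(nx)                       # initial hydrolysed mass density

def laplacian(f):
    return (np.roll(f, 1) + np.roll(f, -1) - 2 * f) / dx**2   # periodic boundaries

for _ in range(nt):                        # Euler-Maruyama time stepping
    dW = rng.normal(0, np.sqrt(dt), (2, nx))
    solid += dt * (-k_h * solid + D * laplacian(solid)) + noise * solid * dW[0]
    hydro += dt * (k_h * solid - 0.5 * k_h * hydro + D * laplacian(hydro)) + noise * hydro * dW[1]
    solid = np.clip(solid, 0, None)        # densities stay non-negative
    hydro = np.clip(hydro, 0, None)

print(f"mean solid density: {solid.mean():.4f}, mean hydrolysed density: {hydro.mean():.4f}")
```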
Abstract:
The primary objective is to investigate the main factors contributing to GMS expenditure on pharmaceutical prescribing and to project this expenditure to 2026. This study is located within the pharmacoeconomic literature on cost containment and projections. The thesis has five main aims: 1. To determine the main factors contributing to GMS expenditure on pharmaceutical prescribing. 2. To develop a model to project GMS prescribing expenditure in five-year intervals to 2026, using 2006 Central Statistics Office (CSO) Census data and 2007 Health Service Executive - Primary Care Reimbursement Service (HSE-PCRS) sample data. 3. To develop a model to project GMS prescribing expenditure in five-year intervals to 2026, using 2012 HSE-PCRS population data, incorporating cost containment measures, and 2011 CSO Census data. 4. To investigate the impact of demographic factors and the pharmacology of drugs (Anatomical Therapeutic Chemical (ATC) classification) on GMS expenditure. 5. To explore the consequences of GMS policy changes on prescribing expenditure and behaviour between 2008 and 2014. The thesis is centered around three published articles and spans the end of a booming Irish economy in 2007, a recession from 2008 to 2013, and the beginning of a recovery in 2014. The literature identified a number of factors influencing pharmaceutical expenditure, including population growth, population ageing, changes in drug utilisation and drug therapies, age, gender and location. The literature also identified the methods previously used in predictive modelling and, consequently, a Monte Carlo Simulation (MCS) model was used to simulate projected expenditures to 2026. In addition, the literature guided the use of Ordinary Least Squares (OLS) regression in determining the demographic and pharmacological factors influencing prescribing expenditure. The study commences against a backdrop of growing GMS prescribing costs, which rose from €250 million in 1998 to over €1 billion by 2007. Using sample 2007 HSE-PCRS prescribing data (n=192,000) and CSO population data from 2008, Conway et al. (2014) estimated that GMS prescribing expenditure could rise to €2 billion by 2026. The cogency of these findings was affected by the global economic crisis of 2008, which resulted in a sharp contraction in the Irish economy and mounting fiscal deficits, leading to Ireland's entry into a bailout programme. The sustainability of funding community drug schemes, such as the GMS, came under the spotlight of the EU, IMF and ECB (the Troika), who set stringent targets for reducing drug costs as conditions of the bailout programme. Cost containment measures included the introduction of income eligibility limits for GP visit cards and medical cards for those aged 70 and over, the introduction of co-payments for prescription items, and reductions in wholesale mark-up and pharmacy dispensing fees. Projections for GMS expenditure were re-evaluated using 2012 HSE-PCRS prescribing population data and CSO population data based on Census 2011. Taking into account both cost containment measures and revised population predictions, GMS expenditure is estimated to increase by 64%, from €1.1 billion in 2016 to €1.8 billion by 2026 (Conway Lenihan and Woods, 2015). In the final paper, a cross-sectional study was carried out on the HSE-PCRS population prescribing database (n=1.63 million claimants) to investigate the impact of demographic factors, and the pharmacology of the drugs, on GMS prescribing expenditure.
Those aged over 75 (β = 1.195) and cardiovascular prescribing (β = 1.193) were the greatest contributors to annual GMS prescribing costs. Respiratory drugs (Montelukast) recorded the highest proportion and expenditure for GMS claimants under the age of 15. Drugs prescribed for the nervous system (Escitalopram, Olanzapine and Pregabalin) were highest for those between 16 and 64 years, while cardiovascular drugs (statins) were highest for those aged over 65. Female claimants are more costly than males and are prescribed more items across the four ATC groups, except among children under 11 (Conway Lenihan et al., 2016). This research indicates that growth in the proportion of elderly claimants and the associated levels of cardiovascular prescribing, particularly for statins, will present difficulties for Ireland in terms of cost containment. Whilst policies aimed at cost containment (co-payment charges, generic substitution, reference pricing, adjustments to GMS eligibility) can be used to curtail expenditure, health promotion programmes and educational interventions should be given equal emphasis. Policies intended to affect physicians' prescribing behaviour, such as guidelines, information (about prices and less expensive alternatives) and feedback, together with the use of budgetary restrictions, could also yield savings.
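As a hedged illustration of the Monte Carlo Simulation approach named above, the sketch below propagates hypothetical cost-growth and cost-containment distributions to a ten-year horizon; the parameters are invented and are not the HSE-PCRS or CSO figures.

```python
# Hedged Monte Carlo sketch of projecting scheme expenditure: draw annual cost
# growth and policy savings from distributions and propagate to a horizon year.
# All rates and cost parameters here are hypothetical, not HSE-PCRS values.
import numpy as np

rng = np.random.default_rng(3)
n_sims, horizon = 10_000, 10                        # e.g. 2016 -> 2026
base_spend = 1.1e9                                  # euro, starting expenditure

growth = rng.normal(0.05, 0.02, (n_sims, horizon))        # annual cost growth per simulation
containment = rng.uniform(0.00, 0.02, (n_sims, horizon))  # annual savings from policy measures
projected = base_spend * np.prod(1 + growth - containment, axis=1)

lo, med, hi = np.percentile(projected, [5, 50, 95]) / 1e9
print(f"projected expenditure (billions): median {med:.2f}, 90% interval [{lo:.2f}, {hi:.2f}]")
```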
Abstract:
Variation of the δ13C of living (Rose Bengal stained) deep-sea benthic foraminifera is documented from two deep-water sites (~2430 and ~3010 m) from a northwest Atlantic Ocean study area 275 km south of Nantucket Island. The carbon isotopic data of Hoeglundina elegans and Uvigerina peregrina from five sets of Multicorer and Soutar Box Core samples taken over a 10-month interval (March, May, July, and October 1996 and January 1997) are compared with an 11.5-month time series of organic carbon flux to assess the effect of organic carbon flux on the carbon isotopic composition of dominant taxa. Carbon isotopic data of Hoeglundina elegans at 3010 m show 0.3 per mil lower mean values following an organic carbon flux maximum resulting from a spring phytoplankton bloom. This δ13C change following the spring bloom is suggested to be due to the presence of a phytodetritus layer on the seafloor and the subsequent depletion of δ13C in the pore waters within the phytodetritus and overlying the sediment surface. Carbon isotopic data of H. elegans from the 2430 m site show an opposite pattern to that found at 3010 m, with a δ13C enrichment following the spring bloom. This different pattern may be due to spatial variation in phytodetritus deposition and resuspension or to a limited number of specimens recovered from the March 1996 cruise. The δ13C of Uvigerina peregrina at 2430 m shows variation over the 10-month interval, but an analysis of variance shows that the variability is more consistent with core and subcore variability than with seasonal changes. The isotopic analyses are grouped into 100 µm size classes on the basis of length measurements of individual specimens to evaluate δ13C ontogenetic changes of each species. The data show no consistent patterns between size classes in the δ13C of either H. elegans or U. peregrina. These results suggest that variation in organic carbon flux does not preferentially affect particular size classes, nor do δ13C ontogenetic changes exist within the >250 to >750 µm size range for these species at this locality. On the basis of the lack of ontogenetic changes, a range of specimen sizes from a sample can be used to reconstruct δ13C in paleoceanographic studies. The prediction standard deviation, which is composed of cruise, core, subcore, and residual (replicate) variability, provides an estimate of the magnitude of variability in fossil δ13C data; it is 0.27 per mil for H. elegans at 3010 m and 0.4 per mil for U. peregrina at the 2430 m site. Since these standard deviations are based on living specimens, they should be regarded as minimum estimates of variability for fossil data based on single-specimen analyses. Most paleoceanographic reconstructions are based on the analysis of multiple specimens, and as a result, the standard error would be expected to be reduced for any particular sample. The reduced standard error resulting from the analysis of multiple specimens would result in the seasonal and spatial variability observed in this study having little impact on carbon isotopic records.
Abstract:
Paleoceanographic studies of Marine Isotope Stage (MIS) 11 have revealed higher-than-present sea surface temperatures (SSTs) in the North Atlantic and in parts of the Arctic, but lower-than-present SSTs in the Nordic Seas, the main throughflow area of warm water into the Arctic Ocean. We resolve this contradiction by complementing SST data based on planktic foraminiferal abundances with surface salinity changes derived from the hydrogen isotopic compositions of alkenones in a core from the central Nordic Seas. The data indicate the prevalence of a relatively cold, low-salinity surface water layer in the Nordic Seas during most of MIS 11. In spite of the low-density surface layer, which was kept buoyant by continuous melting of surrounding glaciers, warmer Atlantic water was still propagating northward at the subsurface, thus maintaining the meridional overturning circulation. This study can help to better constrain the impact of continuous melting of Greenland and Arctic ice on high-latitude ocean circulation and climate.
Abstract:
This thesis estimates the willingness to pay of visitors to the Peace & Love festival in 2011. Using survey data based on revealed and stated preferences, a regression analysis is presented with various independent variables characterising a festival visitor. Total budget is the dependent variable in the regression analysis and is interpreted in the thesis as equivalent to visitors' willingness to pay. The analysis shows that men on average spend 301 kronor more than women, that tourists on average spend 1,124 kronor more than non-tourists, and that the average visitor has a willingness to pay of 4,183 kronor. An estimated consumer surplus has also been valued, amounting to 743 kronor per person and roughly 37 million kronor in total for the 50,000 festival visitors. The thesis does not take into account the economic effects the festival has on Borlänge as a town.
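As a quick check of the aggregation reported above (the per-person surplus and visitor count are taken from the abstract):

\[ 50\,000 \times 743\ \text{kr} = 37\,150\,000\ \text{kr} \approx 37\ \text{million kr}. \]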
Abstract:
The strong scientific interest in the relative importance of genetic factors and variable environmental conditions for the genesis of inter-individual personality differences has led in recent years to an intensification of twin and adoption research. Most of these studies have preferentially used, and continue to use, intelligence tests and personality questionnaires, and they are without exception cross-sectional studies. By comparison, Gottschaldt's longitudinal study of monozygotic and dizygotic twins now spans a period of more than 55 years. In addition, alongside quantitative information, a wide range of qualitative data from observations of and interviews with the participants is available. (DIPF/Orig.)
Abstract:
Background Physical activity in children with intellectual disabilities is a neglected area of study, which is most apparent in relation to physical activity measurement research. Although objective measures, specifically accelerometers, are widely used in research involving children with intellectual disabilities, existing research is based on measurement methods and data interpretation techniques generalised from typically developing children. However, due to physiological and biomechanical differences between these populations, questions have been raised in the existing literature on the validity of generalising data interpretation techniques from typically developing children to children with intellectual disabilities. Therefore, there is a need to conduct population-specific measurement research for children with intellectual disabilities and develop valid methods to interpret accelerometer data, which will increase our understanding of physical activity in this population. Methods Study 1: A systematic review was initially conducted to increase the knowledge base on how accelerometers were used within existing physical activity research involving children with intellectual disabilities and to identify important areas for future research. A systematic search strategy was used to identify relevant articles which used accelerometry-based monitors to quantify activity levels in ambulatory children with intellectual disabilities. Based on best practice guidelines, a novel form was developed to extract data based on 17 research components of accelerometer use. Accelerometer use in relation to best practice guidelines was calculated using percentage scores on a study-by-study and component-by-component basis. Study 2: To investigate the effect of data interpretation methods on the estimation of physical activity intensity in children with intellectual disabilities, a secondary data analysis was conducted. Nine existing sets of child-specific ActiGraph intensity cut points were applied to accelerometer data collected from 10 children with intellectual disabilities during an activity session. Four one-way repeated measures ANOVAs were used to examine differences in estimated time spent in sedentary, moderate, vigorous, and moderate to vigorous intensity activity. Post-hoc pairwise comparisons with Bonferroni adjustments were additionally used to identify where significant differences occurred. Study 3: The feasibility of a laboratory-based calibration protocol developed for typically developing children was investigated in children with intellectual disabilities. Specifically, the feasibility of activities, measurements, and recruitment was investigated. Five children with intellectual disabilities and five typically developing children participated in 14 treadmill-based and free-living activities. In addition, resting energy expenditure was measured and a treadmill-based graded exercise test was used to assess cardiorespiratory fitness. Breath-by-breath respiratory gas exchange and accelerometry were continuously measured during all activities. Feasibility was assessed using observations, activity completion rates, and respiratory data. Study 4: Thirty-six children with intellectual disabilities participated in a semi-structured school-based physical activity session to calibrate accelerometry for the estimation of physical activity intensity. Participants wore a hip-mounted ActiGraph wGT3X+ accelerometer, with direct observation (SOFIT) used as the criterion measure.
Receiver operating characteristic curve analyses were conducted to determine the optimal accelerometer cut points for sedentary, moderate, and vigorous intensity physical activity. Study 5: To cross-validate the calibrated cut points and compare classification accuracy with existing cut points developed in typically developing children, a sub-sample of 14 children with intellectual disabilities who participated in the school-based sessions, as described in Study 4, was included in this study. To examine the validity, classification agreement was investigated between the criterion measure of SOFIT and each set of cut points using sensitivity, specificity, total agreement, and Cohen's kappa scores. Results Study 1: Ten full-text articles were included in this review. The percentage of review criteria met ranged from 12%−47%. Various methods of accelerometer use were reported, with most use decisions not based on population-specific research. A lack of measurement research, specifically the calibration/validation of accelerometers for children with intellectual disabilities, is limiting the ability of researchers to make appropriate and valid accelerometer use decisions. Study 2: The choice of cut points had significant and clinically meaningful effects on the estimation of physical activity intensity and sedentary behaviour. For the 71-minute session, estimations of time spent in each intensity between cut points ranged from: sedentary = 9.50 (± 4.97) to 31.90 (± 6.77) minutes; moderate = 8.10 (± 4.07) to 40.40 (± 5.74) minutes; vigorous = 0.00 (± .00) to 17.40 (± 6.54) minutes; and moderate to vigorous = 8.80 (± 4.64) to 46.50 (± 6.02) minutes. Study 3: All typically developing participants and one participant with intellectual disabilities completed the protocol. No participant met the maximal criteria for the graded exercise test or attained a steady state during the resting measurements. Limitations were identified with the usability of respiratory gas exchange equipment and the validity of measurements. The school-based recruitment strategy was not effective, with a participation rate of 6%. Therefore, a laboratory-based calibration protocol was not feasible for children with intellectual disabilities. Study 4: The optimal vertical axis cut points (cpm) were ≤ 507 (sedentary), 1008−2300 (moderate), and ≥ 2301 (vigorous). Sensitivity scores ranged from 81−88%, specificity 81−85%, and AUC .87−.94. The optimal vector magnitude cut points (cpm) were ≤ 1863 (sedentary), ≥ 2610 (moderate) and ≥ 4215 (vigorous). Sensitivity scores ranged from 80−86%, specificity 77−82%, and AUC .86−.92. Therefore, the vertical axis cut points provide a higher level of accuracy in comparison to the vector magnitude cut points. Study 5: Substantial to excellent classification agreement was found for the calibrated cut points. The calibrated sedentary cut point (κ = .66) provided comparable classification agreement with existing cut points (κ = .55−.67). However, the existing moderate and vigorous cut points demonstrated low sensitivity (0.33−33.33% and 1.33−53.00%, respectively) and disproportionately high specificity (75.44−98.12% and 94.61−100.00%, respectively), indicating that cut points developed in typically developing children are too high to accurately classify physical activity intensity in children with intellectual disabilities.
Conclusions The studies reported in this thesis are the first to calibrate and validate accelerometry for the estimation of physical activity intensity in children with intellectual disabilities. In comparison with typically developing children, children with intellectual disabilities require lower cut points for the classification of moderate and vigorous intensity activity. Therefore, generalising existing cut points to children with intellectual disabilities will underestimate physical activity and introduce systematic measurement error, which could be a contributing factor to the low levels of physical activity reported for children with intellectual disabilities in previous research.
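To illustrate the calibration idea in Studies 4 and 5, here is a hedged sketch that selects an activity-intensity cut point by maximising Youden's J against a binary criterion; the counts-per-minute distributions and labels are simulated, not the SOFIT observations, and the ROC analysis in the thesis is more involved.

```python
# Hedged sketch of ROC-style cut-point calibration: for simulated counts-per-minute
# and a binary criterion label (active vs not), pick the threshold that maximises
# Youden's J = sensitivity + specificity - 1. The data are simulated, not SOFIT.
import numpy as np

rng = np.random.default_rng(4)
cpm_sedentary = rng.gamma(2.0, 150.0, 500)             # hypothetical sedentary counts/min
cpm_active = rng.gamma(6.0, 400.0, 500)                # hypothetical active counts/min
counts = np.concatenate([cpm_sedentary, cpm_active])
label = np.concatenate([np.zeros(500), np.ones(500)])  # 1 = criterion says active

best_j, best_cut = -1.0, None
for cut in np.arange(100, 4000, 10):
    pred = counts >= cut
    sens = np.mean(pred[label == 1])                   # true positive rate
    spec = np.mean(~pred[label == 0])                  # true negative rate
    if sens + spec - 1 > best_j:
        best_j, best_cut = sens + spec - 1, cut

print(f"optimal cut point: {best_cut:.0f} cpm (Youden J = {best_j:.2f})")
```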
Abstract:
Internally-grooved refrigeration tubes maximize tube-side evaporative heat transfer rates and have been identified as a most promising technology for integration into compact cold plates. Unfortunately, the absence of phenomenological insights and physical models hinders the extrapolation of grooved-tube performance to new applications. The success of regime-based heat transfer correlations for smooth tubes has motivated the current effort to explore the relationship between flow regimes and enhanced heat transfer in internally-grooved tubes. In this thesis, a detailed analysis of smooth and internally-grooved tube data reveals that performance improvement in internally-grooved tubes at low-to-intermediate mass flux is a result of early flow regime transition. Based on this analysis, a new flow regime map and corresponding heat transfer coefficient correlation, which account for the increased wetted angle, turbulence, and Gregorig effects unique to internally-grooved tubes, were developed. A two-phase test facility was designed and fabricated to validate the newly-developed flow regime map and regime-based heat transfer coefficient correlation. As part of this setup, a non-intrusive optical technique was developed to study the dynamic nature of two-phase flows. It was found that different flow regimes result in unique temporally varying film thickness profiles. Using these profiles, quantitative flow regime identification measures were developed, including the ability to explain and quantify the more subtle transitions that exist between dominant flow regimes. Flow regime data, based on the newly-developed method, and heat transfer coefficient data, using infrared thermography, were collected for two-phase HFE-7100 flow in horizontal 2.62 mm to 8.84 mm diameter smooth and internally-grooved tubes with mass fluxes from 25 to 300 kg/m²s, heat fluxes from 4 to 56 kW/m², and vapor qualities approaching 1. In total, over 6500 combined data points for the adiabatic and diabatic smooth and internally-grooved tubes were acquired. Based on results from the experiments and a reinterpretation of data from independent researchers, it was established that heat transfer enhancement in internally-grooved tubes at low-to-intermediate mass flux is primarily due to early flow regime transition to annular flow. The regime-based heat transfer coefficient correlation outperformed empirical correlations from the literature, with mean and absolute deviations of 4.0% and 32% for the full range of data collected.
Abstract:
In this contribution, a system identification procedure for a two-input Wiener model suitable for analyzing the disturbance behavior of integrated nonlinear circuits is presented. The identified block model comprises two linear dynamic blocks and one static nonlinear block, which are determined using a parameterized approach. In order to characterize the linear blocks, a correlation analysis using a white-noise input in combination with a model reduction scheme is adopted. After the linear blocks have been characterized, a linear set of equations is set up from the output spectrum under single-tone excitation at each input, whose solution gives the coefficients of the nonlinear block. With this data-based black-box approach, the distortion behavior of a nonlinear circuit under the influence of an interfering signal at an arbitrary input port can be determined. Such an interfering signal can be, for example, an electromagnetic interference signal which conductively couples into the port under consideration. © 2011 Author(s).
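A single-input, hedged sketch of the identification idea follows: the linear block is estimated by input-output cross-correlation under white-noise excitation (Bussgang's theorem), after which the static polynomial nonlinearity is obtained from a linear least-squares problem. The FIR taps and polynomial coefficients are hypothetical, and the two-input circuit case in the paper is more involved.

```python
# Hedged, single-input sketch of Wiener-model identification: cross-correlation
# for the linear block, least squares for the static nonlinearity. All system
# parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(5)
n, taps = 50_000, 8
h_true = np.array([0.5, 0.3, -0.2, 0.1, 0.05, 0.0, 0.0, 0.0])   # hypothetical FIR block

u = rng.normal(0, 1, n)                          # white-noise excitation
v = np.convolve(u, h_true)[:n]                   # hidden linear-block output
y = 1.0 * v + 0.4 * v**2 + 0.1 * v**3            # static polynomial nonlinearity

# Linear block: for a Wiener system driven by Gaussian white noise, the
# input-output cross-correlation is proportional to the impulse response
# (Bussgang's theorem), so the FIR shape is recovered up to a gain factor.
h_est = np.array([np.mean(y[k:] * u[:n - k]) for k in range(taps)])
h_est *= h_true[0] / h_est[0]                    # gain fixed for comparison (possible only in simulation)

# Nonlinear block: regress y on powers of the reconstructed intermediate signal.
v_est = np.convolve(u, h_est)[:n]
A = np.vander(v_est, 4, increasing=True)         # columns: 1, v, v^2, v^3
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

print("estimated FIR taps:", np.round(h_est, 3))
print("estimated polynomial coefficients:", np.round(coeffs, 3))
```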
Abstract:
Mechanistic models used for prediction should be parsimonious, as models which are over-parameterised may have poor predictive performance. Determining whether a model is parsimonious requires comparisons with alternative model formulations with differing levels of complexity. However, creating alternative formulations for large mechanistic models is often problematic and usually time-consuming. Consequently, few are ever investigated. In this paper, we present an approach which rapidly generates reduced model formulations by replacing a model's variables with constants. These reduced alternatives can be compared to the original model, using data-based model selection criteria, to assist in the identification of potentially unnecessary model complexity, and thereby inform reformulation of the model. To illustrate the approach, we present its application to a published radiocaesium plant-uptake model, which predicts uptake on the basis of soil characteristics (e.g. pH, organic matter content, clay content). A total of 1024 reduced model formulations were generated and ranked according to five model selection criteria: Residual Sum of Squares (RSS), AICc, BIC, MDL and ICOMP. The lowest scores for RSS and AICc occurred for the same reduced model, in which pH-dependent model components were replaced. The lowest scores for BIC, MDL and ICOMP occurred for a further reduced model, in which model components related to the distinction between adsorption on clay and organic surfaces were replaced. Both these reduced models had a lower RSS for the parameterisation dataset than the original model. As a test of their predictive performance, the original model and the two reduced models outlined above were used to predict an independent dataset. The reduced models have lower prediction sums of squares than the original model, suggesting that the latter may be overfitted. The approach presented has the potential to inform model development by rapidly creating a class of alternative model formulations which can be compared.
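For reference, the sketch below ranks candidate formulations with the usual Gaussian-error, RSS-based forms of AICc and BIC; the RSS values and parameter counts are illustrative only, not those of the radiocaesium model.

```python
# Hedged sketch of ranking model formulations by RSS-based criteria under the
# standard Gaussian-error forms of AICc and BIC; the RSS values and parameter
# counts below are illustrative, not those of the radiocaesium model.
import numpy as np

def aicc(rss, n, k):
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)      # small-sample correction

def bic(rss, n, k):
    return n * np.log(rss / n) + k * np.log(n)

n_obs = 60
candidates = {
    "full model": (4.2, 8),                          # (RSS, number of parameters)
    "reduced (pH components fixed)": (4.4, 6),
    "reduced (sorption terms merged)": (4.9, 4),
}
for name, (rss, k) in candidates.items():
    print(f"{name:32s} AICc = {aicc(rss, n_obs, k):7.2f}   BIC = {bic(rss, n_obs, k):7.2f}")
```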