967 results for Minimum Variance Model


Relevance: 30.00%

Abstract:

The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must therefore rely on a simplified approach that is not highly parameter dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R, for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org), and has been successfully applied to case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model has also proved relevant for assessing other natural hazards such as rockfall, snow avalanches, and floods. It allows for automatic source area delineation according to user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM, avoids over-channelization, and thus produces more realistic extents. The choice of datasets and algorithms is left open to the user, which makes the model suitable for various applications and levels of dataset availability. Among the possible datasets, the DEM is the only one that is strictly required for both source area delineation and propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution a good compromise between processing time and quality of results. However, valuable results have still been obtained from lower-quality DEMs with 25 m resolution.
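As a rough illustration of the spreading principle, the Python sketch below implements Holmgren's (1994) multiple-flow-direction weighting, on which Flow-R's algorithm builds. Flow-R's actual modification is more elaborate; the optional dh offset shown here, meant to damp sensitivity to small DEM variations, is only our assumption of its general form, and all parameter values are placeholders.

```python
# Minimal sketch of Holmgren-style multiple flow direction weighting.
import numpy as np

def holmgren_weights(dem, row, col, x=4.0, dh=0.0, cellsize=10.0):
    """Fraction of flow passed to each of the 8 neighbours of (row, col).

    x  -- exponent; x = 1 spreads widely (FD8), large x converges to D8.
    dh -- hypothetical height offset above the centre cell (smoothing term).
    """
    z0 = dem[row, col] + dh
    weights = np.zeros(8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for k, (dr, dc) in enumerate(offsets):
        r, c = row + dr, col + dc
        if not (0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]):
            continue
        dist = cellsize * (2 ** 0.5 if dr and dc else 1.0)
        tan_beta = (z0 - dem[r, c]) / dist
        if tan_beta > 0:                      # downslope neighbours only
            weights[k] = tan_beta ** x
    total = weights.sum()
    return weights / total if total > 0 else weights
```

With x = 1 the flow spreads widely; as x grows, the weighting converges to a single steepest-descent direction, so the exponent controls how channelized the simulated propagation becomes.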

Relevance: 30.00%

Abstract:

Previous research suggests that the personality of a relationship partner predicts not only the individual's own satisfaction with the relationship but also the partner's satisfaction. Based on the actor–partner interdependence model, the present research tested whether actor and partner effects of personality are biased when the same method (e.g., self-report) is used to assess personality and relationship satisfaction, so that shared method variance is not controlled for. Data came from 186 couples in which both partners provided self- and partner reports on the Big Five personality traits. Depending on the research design, actor effects were larger than partner effects (when using only self-reports), smaller than partner effects (when using only partner reports), or of about the same size as partner effects (when using self- and partner reports). The findings attest to the importance of controlling for shared method variance in dyadic data analysis.
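The mechanism is easy to reproduce in a toy simulation. The sketch below is a minimal illustration with made-up effect sizes, not the study's data; it shows how a response style shared between a self-reported predictor and a self-reported outcome inflates the estimated actor effect relative to a partner-report design.

```python
# APIM-style regression with and without shared method variance (simulated).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 186  # couples, matching the study's sample size

trait_a = rng.normal(size=n)              # true personality, partner A
trait_b = rng.normal(size=n)              # true personality, partner B
bias_a = rng.normal(scale=0.8, size=n)    # A's idiosyncratic response style
# True model: equal actor and partner effects (hypothetical values)
satisfaction_a = 0.2 * trait_a + 0.2 * trait_b + rng.normal(size=n)

# Self-report of A's personality shares A's response style ...
self_report_a = trait_a + bias_a
# ... and so does A's satisfaction report (shared method variance)
reported_satisfaction_a = satisfaction_a + bias_a
# A partner report of A's personality does not share A's style
partner_report_a = trait_a + rng.normal(scale=0.8, size=n)

for label, x_a in [("self-report", self_report_a),
                   ("partner report", partner_report_a)]:
    X = sm.add_constant(np.column_stack([x_a, trait_b]))
    fit = sm.OLS(reported_satisfaction_a, X).fit()
    print(label, "actor:", round(fit.params[1], 2),
          "partner:", round(fit.params[2], 2))
```

The self-report design recovers an inflated actor effect, while the partner-report design does not, mirroring the pattern the abstract describes.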

Relevance: 30.00%

Abstract:

Radiocarbon production, solar activity, total solar irradiance (TSI) and solar-induced climate change are reconstructed for the Holocene (10 to 0 kyr BP), and TSI is predicted for the next centuries. The IntCal09/SHCal04 radiocarbon and ice core CO2 records, reconstructions of the geomagnetic dipole, and instrumental data of solar activity are applied in the Bern3D-LPJ, a fully featured Earth system model of intermediate complexity including a 3-D dynamic ocean, ocean sediments, and a dynamic vegetation model, and in formulations linking radiocarbon production, the solar modulation potential, and TSI. Uncertainties are assessed using Monte Carlo simulations and bounding scenarios. Transient climate simulations span the past 21 thousand years, thereby accounting for the time lags and uncertainties associated with the last glacial termination. Our carbon-cycle-based modern estimate of radiocarbon production of 1.7 atoms cm⁻² s⁻¹ is lower than previously reported for the cosmogenic nuclide production model by Masarik and Beer (2009) and more in line with Kovaltsov et al. (2012). In contrast to earlier studies, periods of high solar activity were quite common not only in recent millennia but throughout the Holocene. Notable deviations from earlier reconstructions are also found on decadal to centennial timescales. We show that earlier Holocene reconstructions that do not account for the interhemispheric gradients in radiocarbon are biased low. Solar activity is higher than the modern average (650 MeV) during 28% of the time, but the absolute values remain weakly constrained due to uncertainties in the normalisation of the solar modulation to instrumental data. A recently published solar activity–TSI relationship yields small changes in Holocene TSI of the order of 1 W m⁻², with a Maunder Minimum irradiance reduction of 0.85 ± 0.16 W m⁻². Related solar-induced variations in global mean surface air temperature are simulated to be within 0.1 K. Autoregressive modelling suggests a declining trend of solar activity in the 21st century towards average Holocene conditions.

Relevance: 30.00%

Abstract:

The aim of this work is to elucidate the impact of changes in solar irradiance and energetic particles, versus volcanic eruptions, on tropospheric global climate during the Dalton Minimum (DM, AD 1780–1840). Variations in (i) solar irradiance in the UV-C at wavelengths λ < 250 nm, (ii) irradiance at wavelengths λ > 250 nm, (iii) the energetic particle spectrum, and (iv) volcanic aerosol forcing were analyzed separately, and (v) in combination, by means of small ensemble calculations using a coupled atmosphere–ocean chemistry–climate model. Global and hemispheric mean surface temperatures show a significant dependence on solar irradiance at λ > 250 nm. Also, powerful volcanic eruptions in 1809, 1815, 1831 and 1835 significantly decreased global mean temperature, by up to 0.5 K, for 2–3 years after each eruption. However, while the volcanic effect is clearly discernible in the Southern Hemisphere mean temperature, it is less significant in the Northern Hemisphere, partly because the two largest volcanic eruptions occurred in the SH tropics during seasons when the aerosols were mainly transported southward, and partly because of the higher northern internal variability. In the simulation including all forcings, temperatures are in reasonable agreement with the tree-ring-based temperature anomalies of the Northern Hemisphere. Interestingly, the model suggests that solar irradiance changes at λ < 250 nm and in energetic particle spectra have only an insignificant impact on the climate during the Dalton Minimum. This reduces the importance of top-down processes (stemming from changes at λ < 250 nm) relative to bottom-up processes (from λ > 250 nm). Reduction of irradiance at λ > 250 nm leads to a significant (up to 2%) decrease in the ocean heat content (OHC) between 0 and 300 m depth, whereas changes in irradiance at λ < 250 nm or in energetic particles have virtually no effect. Volcanic aerosol also yields a very strong response, reducing the OHC of the upper ocean by up to 1.5%. In the simulation with all forcings, the OHC of the uppermost levels recovers 8–15 years after a volcanic eruption, while the solar signal and the different volcanic eruptions dominate the OHC changes in the deeper ocean and prevent its recovery during the DM. Finally, the simulations suggest that the volcanic eruptions during the DM had a significant impact on precipitation patterns, through a widening of the Hadley cell and a shift in the intertropical convergence zone.

Relevance: 30.00%

Abstract:

The Whole Atmosphere Community Climate Model (WACCM) is utilised to study the daily ozone cycle and the underlying photochemical and dynamical processes. The analysis focuses on the daily ozone cycle in the middle stratosphere at 5 hPa, where satellite-based trend estimates of stratospheric ozone are most biased by diurnal sampling effects and drifting satellite orbits. The simulated ozone cycle shows a minimum after sunrise and a maximum in the late afternoon. Further, a seasonal variation of the daily ozone cycle in the stratosphere was found. Depending on season and latitude, the peak-to-valley difference of the daily ozone cycle varies mostly between 3 and 5% (0.4 ppmv) with respect to the midnight ozone volume mixing ratio. The largest variation, of 15% (0.8 ppmv), is found at the polar circle in summer. The global pattern of the strength of the daily ozone cycle is mainly governed by the solar zenith angle and the sunshine duration. In addition, we find synoptic-scale variations in the strength of the daily ozone cycle. These variations are often anti-correlated with regional temperature anomalies and are due to the temperature dependence of the rate coefficients k2 and k3 of the Chapman cycle reactions. Further, the NOx catalytic cycle counteracts the accumulation of ozone during daytime, which leads to an anti-correlation between anomalies in NOx and the strength of the daily ozone cycle. Similarly, ozone recombines with atomic oxygen, which leads to an anti-correlation between anomalies in ozone abundance and the strength of the daily ozone cycle. At higher latitudes, an increase of the westerly (easterly) wind causes a decrease (increase) in the sunshine duration of an air parcel, leading to a weaker (stronger) daily ozone cycle.
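For reference, the Chapman reactions in question are reproduced below. The abstract's k2/k3 labelling is assumed here to follow the usual numbering (ozone formation, and loss of odd oxygen by O + O3); the rate expressions are standard JPL-style recommendations quoted only to illustrate the temperature dependence.

```latex
% Chapman cycle; k_2/k_3 numbering assumed, rate expressions illustrative.
\begin{align*}
\mathrm{O_2} + h\nu &\rightarrow \mathrm{O} + \mathrm{O} \\
\mathrm{O} + \mathrm{O_2} + \mathrm{M} &\xrightarrow{k_2} \mathrm{O_3} + \mathrm{M},
  \qquad k_2 \approx 6.0\times10^{-34}\,(300/T)^{2.4}\ \mathrm{cm^6\,s^{-1}} \\
\mathrm{O_3} + h\nu &\rightarrow \mathrm{O_2} + \mathrm{O} \\
\mathrm{O} + \mathrm{O_3} &\xrightarrow{k_3} 2\,\mathrm{O_2},
  \qquad k_3 \approx 8.0\times10^{-12}\,e^{-2060/T}\ \mathrm{cm^3\,s^{-1}}
\end{align*}
```

Since k2 grows and k3 shrinks as temperature drops, a cold anomaly strengthens the daytime ozone build-up, consistent with the anti-correlation noted above.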

Relevance: 30.00%

Abstract:

The newly developed atmosphere–ocean–chemistry–climate model SOCOL-MPIOM is presented by demonstrating the influence of the interactive chemistry module on the climate state and its variability. To this end, we compare pre-industrial control simulations with (CHEM) and without (NOCHEM) interactive chemistry. In general, the influence of the chemistry on the mean state and the variability is small and mainly restricted to the stratosphere and mesosphere. The largest differences are found in the atmospheric dynamics of the polar regions, with slightly stronger northern and southern winter polar vortices in CHEM. The strengthening of the vortex is related to larger stratospheric temperature gradients, which are attributed to a parametrization of the absorption of ozone and oxygen in the Lyman-alpha, Schumann–Runge, Hartley, and Huggins bands; this effect is parametrized only in the version with interactive chemistry. A second reason for the temperature differences between CHEM and NOCHEM is diurnal variation in the ozone concentrations of the higher atmosphere, which is missing in NOCHEM. Furthermore, stratospheric water vapour concentrations differ substantially between the two experiments, but their effect on the temperatures is small. In both setups, the simulated intensity and variability of the northern polar vortex are within the range of present-day observations. Sudden stratospheric warming events are well reproduced in terms of their frequency, but their distribution amongst the winter months is too uniform. Additionally, the performance of SOCOL-MPIOM under changing external forcings is assessed for the period 1600–2000 using an ensemble of simulations driven by a spectral solar forcing reconstruction. The amplitude of the reconstruction is large in comparison to other state-of-the-art reconstructions, providing an upper limit for the importance of the solar signal. In the pre-industrial period (1600–1850) the simulated surface temperature trends are in reasonable agreement with temperature reconstructions, although the multi-decadal variability is more pronounced. This enhanced variability can be attributed to the variability in the solar forcing. The simulated temperature reductions during the Maunder Minimum lie in the lowest probability range of the proxy records. During the Dalton Minimum, when volcanic forcing is also an important driver of temperature variations, the agreement is better. In the industrial period from 1850 onward, SOCOL-MPIOM overestimates the temperature increase in comparison with observational data sets. Sensitivity simulations show that this overestimation can be attributed to the increasing trend in the solar forcing reconstruction used in this study and to an additional warming induced by the simulated ozone changes.

Relevance: 30.00%

Abstract:

Strengthening car drivers’ intention to prevent road-traffic noise is a first step toward noise abatement through voluntary change of behavior. We analyzed predictors of this intention based on the norm activation model (i.e., personal norm, problem awareness, awareness of consequences, social norm, and value orientations). Moreover, we studied the effects of noise exposure, noise sensitivity, and noise annoyance on problem awareness. Data came from 1,002 car drivers who participated in a two-wave longitudinal survey over 4 months. Personal norm had a large prospective effect on intention, even when the previous level of intention was controlled for, and mediated the effect of all other variables on intention. Almost 60% of variance in personal norm was explained by problem awareness, social norm, and biospheric value orientation. The effects of noise sensitivity and noise exposure on problem awareness were small and mediated by noise annoyance. We propose four communication strategies for strengthening the intention to prevent road-traffic noise in car drivers.
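The mediation logic of this design (noise sensitivity acting on problem awareness through noise annoyance) can be illustrated with a minimal product-of-coefficients sketch. The data and coefficients below are simulated placeholders, not the survey's results, and the Sobel test shown is only one of several ways to test an indirect effect.

```python
# Simple mediation sketch: sensitivity -> annoyance -> awareness (simulated).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 1002  # respondents, matching the survey's sample size

sensitivity = rng.normal(size=n)
annoyance = 0.4 * sensitivity + rng.normal(size=n)          # mediator
awareness = 0.5 * annoyance + 0.05 * sensitivity + rng.normal(size=n)

# a-path: sensitivity -> annoyance
fa = sm.OLS(annoyance, sm.add_constant(sensitivity)).fit()
# b-path: annoyance -> awareness, controlling for sensitivity
X = sm.add_constant(np.column_stack([annoyance, sensitivity]))
fb = sm.OLS(awareness, X).fit()

a, sa = fa.params[1], fa.bse[1]
b, sb = fb.params[1], fb.bse[1]
sobel_z = a * b / np.sqrt(a**2 * sb**2 + b**2 * sa**2)
print(f"indirect effect {a * b:.3f}, Sobel z = {sobel_z:.2f}")
```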

Relevance: 30.00%

Abstract:

Recent findings demonstrate that trees in deserts are efficient carbon sinks. It remains unknown, however, whether the Clean Development Mechanism will accelerate the planting of trees in non-Annex I dryland countries. We estimated the price of carbon at which a farmer would be indifferent between his customary activity and planting trees to trade carbon credits, along an aridity gradient. Carbon yields were simulated by means of the CO2FIX v3.1 model for Pinus halepensis, with its respective yield classes along the gradient (arid conditions, 100 mm, to dry sub-humid conditions, 900 mm). Wheat and pasture yields were predicted with broadly similar nitrogen-based quadratic models, using 30 years of weather data to simulate moisture stress. Stochastic production, input, and output prices were then simulated in a Monte Carlo matrix. Results show that, despite the high levels of carbon uptake, carbon trading through afforestation is unprofitable anywhere along the gradient. Indeed, the price of carbon would have to rise unrealistically high, and the certification costs would have to drop significantly, to make the Clean Development Mechanism worthwhile for farmers in non-Annex I dryland countries. From a government agency's point of view the Clean Development Mechanism is attractive. However, such agencies will find it difficult to demonstrate "additionality", even if the rule may be applied somewhat flexibly. Based on these findings, we further discuss why the Clean Development Mechanism, a supposedly pro-poor instrument, fails to assist farmers in non-Annex I dryland countries living at minimum subsistence level.
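The indifference-price computation can be sketched as follows. Every number below (margins, uptake, costs, discount rate, horizon) is a made-up placeholder standing in for the CO2FIX- and crop-model-derived inputs of the study; the point is only the mechanics of solving for the carbon price that equates the two expected net present values.

```python
# Break-even carbon price between wheat and afforestation (all inputs fake).
import numpy as np

rng = np.random.default_rng(1)
n_draws = 10_000
r = 0.05                        # discount rate (assumed)
years = np.arange(1, 41)        # 40-year project horizon (assumed)
disc = (1 + r) ** -years

# Customary activity: stochastic annual wheat margin (USD/ha)
wheat_margin = rng.normal(loc=120, scale=60, size=(n_draws, years.size))
npv_wheat = (wheat_margin * disc).sum(axis=1)

# Afforestation: stochastic annual carbon uptake (t CO2/ha/yr) and costs
uptake = rng.normal(loc=4.0, scale=1.0, size=(n_draws, years.size)).clip(0)
establishment = 900.0           # year-0 planting cost (USD/ha)
certification = 15.0            # annual certification cost (USD/ha)

def npv_forest(price):
    cash = price * uptake - certification
    return (cash * disc).sum(axis=1) - establishment

# Break-even price: bisection on the difference of mean NPVs
lo, hi = 0.0, 200.0
for _ in range(60):
    mid = (lo + hi) / 2
    if npv_forest(mid).mean() < npv_wheat.mean():
        lo = mid
    else:
        hi = mid
print(f"break-even carbon price ~ {mid:.1f} USD/t CO2")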

Relevance: 30.00%

Abstract:

The paper considers panel data methods for estimating ordered logit models with individual-specific correlated unobserved heterogeneity. We show that a popular approach is inconsistent, whereas some consistent and efficient estimators are available, including minimum distance and generalized method-of-moment estimators. A Monte Carlo study reveals the good properties of an alternative estimator that has not been considered in econometric applications before, is simple to implement and almost as efficient. An illustrative application based on data from the German Socio-Economic Panel confirms the large negative effect of unemployment on life satisfaction that has been found in the previous literature.
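The abstract does not spell out the estimators, so the sketch below should be read as one plausible member of the consistent family it refers to: each respondent is duplicated once per threshold of the ordered scale, the outcome is dichotomized at that threshold, and a fixed-effects (conditional) logit is estimated on the expanded data. The simulated panel and variable names are entirely hypothetical.

```python
# "Blow-up and cluster"-style fixed-effects ordered logit (illustrative).
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

def buc_expand(df, y="satisfaction", idvar="person"):
    """Expand panel data: one copy per cutoff k, with y dichotomized at k."""
    cutoffs = np.sort(df[y].unique())[1:]          # K-1 thresholds
    parts = []
    for k in cutoffs:
        d = df.copy()
        d["y_bin"] = (d[y] >= k).astype(int)
        d["copy_id"] = d[idvar].astype(str) + f"_k{k}"
        parts.append(d)
    return pd.concat(parts, ignore_index=True)

# Hypothetical long-format panel: person, satisfaction (1..4), unemployed
rng = np.random.default_rng(2)
n, t = 300, 5
df = pd.DataFrame({
    "person": np.repeat(np.arange(n), t),
    "unemployed": rng.integers(0, 2, n * t),
})
latent = (-1.0 * df["unemployed"] + np.repeat(rng.normal(size=n), t)
          + rng.logistic(size=n * t))
df["satisfaction"] = pd.cut(latent, [-np.inf, -1, 0, 1, np.inf],
                            labels=False) + 1

big = buc_expand(df)
# Note: a serious application would cluster standard errors by person.
res = ConditionalLogit(big["y_bin"], big[["unemployed"]],
                       groups=big["copy_id"]).fit()
print(res.summary())
```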

Relevance: 30.00%

Abstract:

Throughout the last millennium, mankind was affected by prolonged deviations from the climate mean state. While periods like the Maunder Minimum in the 17th century have been assessed in great detail, earlier cold periods, such as those of the 15th century, have received much less attention because of the sparse information available. Based on new evidence from sources ranging from proxy archives to model simulations, it is now possible to provide an end-to-end assessment of the climate state during an exceptionally cold period in the 15th century, the role of internal, unforced climate variability and external forcing in shaping these extreme climatic conditions, and the impacts on and responses of medieval society in Central Europe. Climate reconstructions from a multitude of natural and human archives indicate that, during winter, the period of the early Spörer Minimum (1431–1440 CE) was the coldest decade in Central Europe in the 15th century. The particularly cold winters and normal but wet summers resulted in a strong seasonal cycle that challenged food production and led to increasing food prices, a subsistence crisis, and a famine in parts of Europe. As a consequence, authorities implemented adaptation measures, such as the installation of grain storage capacities, in order to be prepared for future events. The 15th century is characterised by a grand solar minimum and enhanced volcanic activity, both of which imply a reduction of seasonality. Climate model simulations show that periods with cold winters and strong seasonality are associated with internal climate variability rather than external forcing. Accordingly, it is hypothesised that the reconstructed extreme climatic conditions during this decade occurred by chance, as part of the partly chaotic internal variability of the climate system.

Relevance: 30.00%

Abstract:

Hierarchically clustered populations are often encountered in public health research, but the traditional methods used in analyzing this type of data are not always adequate. In the case of survival time data, more appropriate methods have only begun to surface in the last couple of decades. Such methods include multilevel statistical techniques which, although more complicated to implement than traditional methods, are more appropriate. One population that is known to exhibit a hierarchical structure is that of patients who utilize the health care system of the Department of Veterans Affairs, where patients are grouped not only by hospital but also by geographic network (VISN). This project analyzes survival time data sets housed at the Houston Veterans Affairs Medical Center Research Department using two different Cox proportional hazards regression models, a traditional model and a multilevel model. VISNs that exhibit significantly higher or lower survival rates than the rest are identified separately for each model. In this particular case, although there are differences in the results of the two models, they are not large enough to warrant using the more complex multilevel technique. This is shown by the small estimates of variance associated with levels two and three in the multilevel Cox analysis. Many of the differences in the identification of VISNs with high or low survival rates are attributable to computer hardware difficulties rather than to any significant improvement in the model.
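To illustrate the contrast, the Python sketch below fits a traditional Cox model and one that at least acknowledges the grouping of patients within networks via a clustered (sandwich) variance, using the lifelines package on simulated data. This is not the dissertation's full three-level frailty specification, which would need a dedicated frailty or mixed-effects Cox implementation; every number here is a placeholder.

```python
# Traditional vs. cluster-aware Cox model on simulated VISN-grouped data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n, n_visn = 2000, 22
visn = rng.integers(0, n_visn, n)
visn_effect = rng.normal(scale=0.2, size=n_visn)[visn]  # network-level variation
age = rng.normal(65, 10, n)

hazard = np.exp(0.03 * (age - 65) + visn_effect)
time = rng.exponential(1 / hazard)
event = (time < 5).astype(int)          # administrative censoring at 5 years
df = pd.DataFrame({"T": np.minimum(time, 5), "E": event,
                   "age": age, "visn": visn})

naive = CoxPHFitter().fit(df.drop(columns="visn"), "T", "E")
clustered = CoxPHFitter().fit(df, "T", "E", cluster_col="visn")
print(naive.summary[["coef", "se(coef)"]])
print(clustered.summary[["coef", "se(coef)"]])
```

When the network-level variance is as small as in the dissertation's data, the two sets of standard errors differ little, which is exactly the argument for staying with the simpler model.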

Relevance: 30.00%

Abstract:

This paper revisits the issue of conditional volatility in real GDP growth rates for Canada, Japan, the United Kingdom, and the United States. Previous studies find high persistence in the volatility. This paper shows that this finding largely reflects a nonstationary variance: output growth in the four countries became noticeably less volatile over the past few decades. We employ the modified ICSS algorithm to detect structural change in the unconditional variance of output growth; one structural break exists in each of the four countries. We then use generalized autoregressive conditional heteroskedasticity (GARCH) specifications to model output growth and its volatility with and without the break in volatility. Once we incorporate the break in the variance equation of output for the four countries, the time-varying variance falls sharply in Canada, Japan, and the U.K. and disappears in the U.S., while excess kurtosis vanishes in Canada, Japan, and the U.S. and drops substantially in the U.K. That is, the integrated GARCH (IGARCH) effect proves spurious, and the GARCH model is misspecified, if researchers neglect a nonstationary unconditional variance.
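The cumulative-sum-of-squares statistic at the heart of the ICSS algorithm is easy to sketch. The version below is the original Inclán–Tiao form; the modified algorithm used in the paper additionally corrects the denominator for excess kurtosis and volatility persistence, which this sketch omits.

```python
# Inclan-Tiao cumulative sum of squares: locate a single variance break.
import numpy as np

def icss_break(a):
    """Return (index, statistic) of the most likely variance break in a."""
    a = np.asarray(a, dtype=float)
    T = a.size
    C = np.cumsum(a ** 2)              # cumulative sum of squares
    k = np.arange(1, T + 1)
    D = C / C[-1] - k / T              # centered CUSUM-of-squares
    stat = np.sqrt(T / 2) * np.abs(D)
    j = int(stat.argmax())
    return j, stat[j]                  # compare against ~1.358 (5% level)

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 2.0, 300),   # high-volatility regime
                    rng.normal(0, 0.8, 300)])  # low-volatility regime
j, s = icss_break(x)
print(f"break at t={j}, statistic={s:.2f}")
```

The full algorithm applies this statistic recursively to sub-samples until no further breaks are found; fitting a GARCH model separately on each resulting regime (or adding regime dummies to the variance equation) is then what removes the spurious IGARCH effect.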

Relevance: 30.00%

Abstract:

With substance abuse treatment expanding in prisons and jails, understanding how behavior change interacts with a restricted setting becomes more essential. The Transtheoretical Model (TTM) has been used to understand intentional behavior change in unrestricted settings; however, evidence indicates that restrictive settings can affect the measurement and structure of the TTM constructs. The present study examined data from problem drinkers at baseline and end-of-treatment from three studies: (1) Project CARE (n = 187) recruited inmates from a large county jail; (2) Project Check-In (n = 116) recruited inmates from a state prison; (3) Project MATCH, a large multi-site alcohol study, had two recruitment arms, aftercare (n = 724 pre-treatment and 650 post-treatment) and outpatient (n = 912 pre-treatment and 844 post-treatment). The analyses used cross-sectional data and structural equation modeling (SEM) to test for non-invariance of measures of the TTM constructs (readiness, confidence, temptation, and processes of change) across restricted and unrestricted settings. Two restricted groups (jail and aftercare) and one unrestricted group (outpatient) entering treatment, and one restricted (prison) and two unrestricted groups (aftercare and outpatient) at end-of-treatment, were contrasted. In addition, TTM end-of-treatment profiles were tested as predictors of 12-month drinking outcomes (profile analysis). Although SEM did not indicate structural differences in the overall TTM construct model across setting types, there were factor structure differences in the confidence and temptation constructs at pre-treatment and in the factor structure of the behavioral processes at end-of-treatment. For pre-treatment temptation and confidence, differences were found in the social-situations factor loadings and in the variance of the confidence and temptation latent factors. For the end-of-treatment behavioral processes, differences across the restricted and unrestricted settings were identified in the counter-conditioning and stimulus-control factor loadings. The TTM end-of-treatment profiles were not predictive of drinking outcomes in the prison sample. Both pre- and post-treatment differences in structure across setting types involved constructs operationalized with behaviors that are limited for those in restricted settings. These studies suggest the TTM is a viable model for explicating addictive behavior change in restricted settings, but they call for modification of subscale items that refer to specific behaviors and for caution in interpreting mean differences across setting types for problem drinkers.

Relevance: 30.00%

Abstract:

Institutional Review Boards (IRBs) are the primary gatekeepers for the protection of ethical standards of federally regulated research on human subjects in this country. This paper focuses on the general, broad measures that may be instituted or enhanced to exemplify a "model IRB". This is done by examining the current regulatory standards of federally regulated IRBs, not private or commercial boards, and how many of those standards have been found either inadequate or not generally understood or followed. The analysis includes suggestions on how to bring about changes in order to make the IRB process more efficient and less subject to litigation, and to create standardized educational protocols for members. The paper also considers how to include better oversight for multi-center research, increased centralization of IRBs, utilization of Data Safety Monitoring Boards when necessary, payment for research protocol review, voluntary accreditation, and the institution of evaluation/quality-assurance programs. This is a policy study utilizing secondary analysis of publicly available data. The research therefore draws on scholarly medical/legal journals, web information from the Department of Health and Human Services, the Food and Drug Administration, and the Office of the Inspector General, accreditation programs, law review articles, and the current regulations applicable to the relevant portions of the paper. Two issues are consistently cited in the literature as major concerns. One is the need for basic, standardized educational requirements across all IRBs and their members; the second is much stricter and better-informed management of continuing research. There is no federally regulated formal education system currently in place for IRB members, except for certain NIH-based trials. Also, IRBs are not keeping up with research once a study has begun, and although they are regulated to do so, it does not appear to be a great priority. This is the area most in danger of increased litigation. Other issues, such as voluntary accreditation and outcomes evaluation, are slowly gaining steam as the processes become more available and more sought after, as with JCAHO accreditation of hospitals. Adopting the principles discussed in this paper should promote better use of a local IRB's time, money, and expertise in protecting the vulnerable population in its care. Without further improvements to the system, there is concern that private and commercial IRBs will attempt to create a monopoly on much of the clinical research in the future, as they are not as heavily regulated and can therefore offer companies quicker and more convenient reviews. IRBs need to consider the advantages of charging for their unique and important services as a cost of doing business. More importantly, there must be a minimum standard of education for all IRB members in the ethical standards of human research, and a greater emphasis placed on the follow-up of ongoing research, as this is the most critical time for study participants and may soon become the largest area for litigation. Additionally, there should be a centralized IRB for multi-site trials, or a study website with important information affecting the trial in real time. Standards and metrics need to be developed to assess the performance of IRBs for quality assurance and outcome evaluation. The boards should not be content to run the business of human subjects' research without determining how well that function is actually being carried out. It is important that federally regulated IRBs provide excellence in human research and promote those values most important to the public at large.

Relevance: 30.00%

Abstract:

Interaction effects are an important scientific interest in many areas of research. A common approach for investigating the interaction effect of two continuous covariates on a response variable is a cross-product term in multiple linear regression. In epidemiological studies, the two-way analysis of variance (ANOVA) type of method has also been utilized to examine the interaction effect by replacing the continuous covariates with their discretized levels. However, the implications of the model assumptions of either approach have not been examined, and statistical validation has focused only on the general method, not specifically on the interaction effect. In this dissertation, we investigated the validity of both approaches based on their mathematical assumptions for non-skewed data. We showed that linear regression may not be an appropriate model when the interaction effect exists, because it implies a highly skewed distribution for the response variable. We also showed that the normality and constant-variance assumptions required by ANOVA are not satisfied in the model where the continuous covariates are replaced with their discretized levels. Therefore, naïve application of the ANOVA method may lead to an incorrect conclusion. Given the problems identified above, we proposed a novel method, modified from the traditional ANOVA approach, to rigorously evaluate the interaction effect. The analytical expression of the interaction effect was derived based on the conditional distribution of the response variable given the discretized continuous covariates. A testing procedure that combines the p-values from each level of the discretized covariates was developed to test the overall significance of the interaction effect. According to the simulation study, the proposed method is more powerful than least squares regression and the ANOVA method in detecting the interaction effect when the data come from a trivariate normal distribution. The proposed method was applied to a dataset from the National Institute of Neurological Disorders and Stroke (NINDS) tissue plasminogen activator (t-PA) stroke trial, and a baseline age-by-weight interaction effect was found to be significant in predicting the change from baseline in NIHSS at month 3 among patients who received t-PA therapy.
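The dissertation derives a tailored combination test; the sketch below only illustrates the generic mechanics on simulated data: discretize one covariate, test for slope heterogeneity of the other covariate level by level, and pool the resulting p-values with Fisher's method. All coefficients are made up, and the level-wise p-values here share a reference group, so they are not strictly independent as Fisher's method assumes.

```python
# Combine level-wise interaction p-values (generic sketch; simulated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm, combine_pvalues

rng = np.random.default_rng(5)
n = 600
age = rng.normal(65, 10, n)
weight = rng.normal(80, 15, n)
# Outcome with a hypothetical age-by-weight interaction
y = (0.1 * age + 0.05 * weight
     + 0.004 * (age - 65) * (weight - 80) + rng.normal(size=n))
df = pd.DataFrame({"age": age, "weight": weight, "y": y})
df["age_lvl"] = pd.qcut(df["age"], 3, labels=False)   # discretize age

# Weight slope within each age tertile
fits = {lvl: sm.OLS(g["y"], sm.add_constant(g[["weight"]])).fit()
        for lvl, g in df.groupby("age_lvl")}

# p-value for each tertile's slope differing from the reference tertile
b0, se0 = fits[0].params["weight"], fits[0].bse["weight"]
pvals = []
for lvl in (1, 2):
    b, se = fits[lvl].params["weight"], fits[lvl].bse["weight"]
    z = (b - b0) / np.hypot(se, se0)
    pvals.append(2 * norm.sf(abs(z)))

stat, p = combine_pvalues(pvals, method="fisher")
print(f"combined interaction p-value: {p:.4g}")
```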