34 results for Flynn and Wall kinetic model
Abstract:
Neuroenhancement (NE), the use of substances as a means to enhance performance, has garnered considerable scientific attention of late. While ethical and epidemiological publications on the topic accumulate, there is a lack of theory-driven psychological research that aims at understanding psychological drivers of NE. In this perspective article we argue that self-control strength offers a promising theory-based approach to further understand and investigate NE behavior. Using the strength model of self-control, we derive two theory-driven perspectives on NE-self-control research. First, we propose that individual differences in state/trait self-control strength differentially affect NE behavior based on one’s individual experience of NE use. Building upon this, we outline promising research questions that (will) further elucidate our understanding of NE based on the strength model’s propositions. Second, we discuss evidence indicating that popular NE substances (like Methylphenidate) may counteract imminent losses of self-control strength. We outline how further research on NE’s effects on the ego-depletion effect may further broaden our understanding of the strength model of self-control.
Abstract:
BACKGROUND The aim of this study was to evaluate the accuracy of linear measurements on three imaging modalities: lateral cephalograms from a cephalometric machine with a 3 m source-to-mid-sagittal-plane distance (SMD), from a machine with 1.5 m SMD and 3D models from cone-beam computed tomography (CBCT) data. METHODS Twenty-one dry human skulls were used. Lateral cephalograms were taken using two cephalometric devices: one with a 3 m SMD and one with a 1.5 m SMD. CBCT scans were taken with a 3D Accuitomo® 170, and 3D surface models were created in Maxilim® software. Thirteen linear measurements were completed twice by two observers with a 4-week interval. Direct physical measurements with a digital calliper were defined as the gold standard. Statistical analysis was performed. RESULTS Nasion-Point A was significantly different from the gold standard in all methods. More statistically significant differences were found for the measurements on the 3 m SMD cephalograms than for the other methods. Intra- and inter-observer agreement based on 3D measurements was slightly better than that of the other methods. LIMITATIONS Dry human skulls without soft tissues were used. Therefore, the results have to be interpreted with caution, as they do not fully represent clinical conditions. CONCLUSIONS 3D measurements resulted in better observer agreement. The accuracy of measurements based on CBCT and the 1.5 m SMD cephalograms was better than that of the 3 m SMD cephalograms. These findings demonstrate the accuracy and reliability of linear measurements on CBCT-based 3D models compared with 2D techniques. Future studies should focus on the implementation of 3D cephalometry in clinical practice.
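The abstract does not spell out the statistics beyond "statistical analysis was performed"; as a minimal, hedged illustration of how accuracy against a calliper gold standard is often expressed, the sketch below computes the bias and Bland-Altman limits of agreement for one measurement from made-up values.

```python
import numpy as np

# Illustrative sketch of an accuracy/agreement check against a gold standard
# (not the authors' statistical protocol): for one linear measurement, compare a
# modality's values to digital-calliper values via the mean difference (bias)
# and Bland-Altman 95% limits of agreement. The arrays below are made-up numbers.

calliper = np.array([51.2, 48.7, 53.1, 50.4, 49.9])   # gold standard [mm]
modality = np.array([51.6, 48.3, 53.5, 50.9, 50.1])   # e.g. a 3D CBCT model [mm]

diff = modality - calliper
bias = diff.mean()                       # systematic difference from the gold standard
half_width = 1.96 * diff.std(ddof=1)     # half-width of the 95% limits of agreement

print(f"bias = {bias:+.2f} mm, "
      f"limits of agreement = {bias - half_width:.2f} to {bias + half_width:.2f} mm")
```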
Abstract:
Social desirability and the fear of sanctions can deter survey respondents from responding truthfully to sensitive questions. Self-reports on norm-breaking behavior such as shoplifting, non-voting, or tax evasion may therefore be subject to considerable misreporting. To mitigate such misreporting, various indirect techniques for asking sensitive questions, such as the randomized response technique (RRT), have been proposed in the literature. In our study, we evaluate the viability of several variants of the RRT, including the recently proposed crosswise-model RRT, by comparing respondents’ self-reports on cheating in dice games to actual cheating behavior, thereby distinguishing between false negatives (underreporting) and false positives (overreporting). The study was implemented as an online survey on Amazon Mechanical Turk (N = 6,505). Our results indicate that the forced-response RRT and the unrelated-question RRT, as implemented in our survey, fail to reduce the level of misreporting compared to conventional direct questioning. For the crosswise-model RRT, we do observe a reduction of false negatives (that is, an increase in the proportion of cheaters who admit having cheated). At the same time, however, there is an increase in false positives (that is, an increase in non-cheaters who falsely admit having cheated). Overall, our findings suggest that none of the implemented sensitive question techniques substantially outperforms direct questioning. Furthermore, our study demonstrates the importance of distinguishing false negatives and false positives when evaluating the validity of sensitive question techniques.
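The crosswise-model estimator itself is standard and can be illustrated in a few lines. The sketch below is not the authors' analysis code; it assumes the textbook moment estimator in which each respondent states whether their answers to the sensitive question and to an unrelated question with known prevalence p are the same or different.

```python
# Illustrative sketch (not the authors' code): estimating the prevalence of a
# sensitive attribute from crosswise-model responses. Respondents report only
# whether their answers to the sensitive question and to an unrelated question
# with known prevalence p (e.g. "Is your mother's birthday in January-March?")
# are the same or different.

def crosswise_prevalence(n_same: int, n_total: int, p: float) -> float:
    """Moment estimator for the sensitive-attribute prevalence pi.

    P("same") = pi*p + (1 - pi)*(1 - p)  =>  pi = (lambda_hat + p - 1) / (2*p - 1)
    """
    if abs(2 * p - 1) < 1e-9:
        raise ValueError("p must differ from 0.5 for the model to be identified")
    lam = n_same / n_total            # observed share of "same" answers
    pi = (lam + p - 1) / (2 * p - 1)
    return min(max(pi, 0.0), 1.0)     # clip to the admissible range

# Example: 620 of 1,000 respondents answer "same" with p = 0.25
print(crosswise_prevalence(620, 1000, 0.25))   # ~0.26
```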
Abstract:
The vitamin D3 and nicotine (VDN) model is a model of isolated systolic hypertension (ISH) due to arterial calcification raising arterial stiffness and vascular impedance similar to an aged and stiffened arterial tree. We therefore analyzed the impact of this aging model on normal and diseased hearts with myocardial infarction (MI). Wistar rats were treated with VDN (n = 9), subjected to MI by coronary ligation (n = 10), or subjected to a combination of both MI and VDN treatment (VDN/MI, n = 14). A sham-treated group served as control (Ctrl, n = 10). Transthoracic echocardiography was performed every 2 wk, whereas invasive indexes were obtained at week 8 before death. Calcium, collagen, and protein contents were measured in the heart and the aorta. Systolic blood pressure, pulse pressure, thoracic aortic calcium, and end-systolic elastance as an index of myocardial contractility were highest in the aging model group compared with MI and Ctrl groups (P(VDN) < 0.05, 2-way ANOVA). Left ventricular wall stress and brain natriuretic peptide (P(VDNxMI) = not significant) were highest, while ejection fraction, stroke volume, and cardiac output were lowest in the combined group versus all other groups (P(VDNxMI) < 0.05). The combination of ISH due to this aging model and MI demonstrates significant alterations in cardiac function. This model mimics several clinical phenomena of cardiovascular aging and may thus serve to further study novel therapies.
Abstract:
The study conducted in a bacterial-based in vitro caries model aimed to determine whether typical inner secondary caries lesions can be detected at cavity walls of restorations with selected gap widths when the development of outer lesions is inhibited. Sixty bovine tooth specimens were randomly assigned to the following groups: test group 50 (TG50; gap, 50 µm), test group 100 (TG100; gap, 100 µm), test group 250 (TG250; gap, 250 µm) and a control group (CG; gap, 250 µm). The outer tooth surface of the test group specimens was covered with an acid-resistant varnish to inhibit the development of an outer caries lesion. After incubation in the caries model, the area of demineralization at the cavity wall was determined by confocal laser scanning microscopy. All test group specimens demonstrated only wall lesions. The CG specimens developed outer and wall lesions. The TG250 specimens showed significantly less wall lesion area compared to the CG (p < 0.05). In the test groups, a statistically significant increase (p < 0.05) in lesion area could be detected in enamel between TG50 and TG250 and in dentine between TG50 and TG100. In conclusion, the inner wall lesions of secondary caries can develop without the presence of outer lesions and therefore can be regarded as an entity on their own. The extent of independently developed wall lesions increased with gap width in the present setting.
Abstract:
Laser tissue soldering (LTS) is a promising technique for tissue fusion based on heat denaturation of proteins. Thermal damage of the fused tissue during the laser procedure has always been an important and challenging problem. Particularly in LTS of arterial blood vessels, strong heating of the endothelium should be avoided to minimize the risk of thrombosis. Precise knowledge of the temperature distribution within the vessel wall during laser irradiation is therefore essential. The authors developed a finite element model (FEM) to simulate the temperature distribution within blood vessels during LTS. Temperature measurements were used to verify and calibrate the model. Different parameters such as laser power, solder absorption coefficient, thickness of the solder layer, cooling of the vessel and continuous vs. pulsed energy deposition were tested to elucidate their impact on the temperature distribution within the soldering joint, in order to reduce the number of further animal experiments. Pulsed irradiation with high laser power and a highly absorbing solder yields the best results.
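As a rough, hedged companion to the description above, the following sketch solves 1D heat conduction with a Beer-Lambert laser source using explicit finite differences. It is not the authors' FEM, and all material and laser parameters (absorption coefficient, irradiance, layer thickness, simulated duration) are illustrative placeholders.

```python
import numpy as np

# Minimal 1D heat-conduction sketch (explicit finite differences), not the
# authors' FEM: a laser deposits energy in a solder layer according to
# Beer-Lambert absorption, and heat diffuses into the vessel wall below.
# All material and laser parameters are illustrative placeholders.

L, nz = 1.0e-3, 100                 # 1 mm deep domain, 100 cells
dz = L / nz
z = (np.arange(nz) + 0.5) * dz
alpha = 1.4e-7                      # thermal diffusivity of soft tissue [m^2/s] (approx.)
rho_c = 3.6e6                       # volumetric heat capacity [J/(m^3 K)] (approx.)
mu_a = 8e3                          # solder absorption coefficient [1/m] (placeholder)
solder = z < 0.2e-3                 # top 0.2 mm is the solder layer
irradiance = 5e4                    # laser irradiance at the surface [W/m^2] (placeholder)

dt = 0.4 * dz**2 / alpha            # below the stability limit of the explicit scheme
T = np.full(nz, 37.0)               # start at body temperature [deg C]

# volumetric heat source, non-zero only inside the solder layer [W/m^3]
source = np.where(solder, mu_a * irradiance * np.exp(-mu_a * z), 0.0)

for _ in range(int(1.0 / dt)):      # simulate 1 s of continuous irradiation
    lap = np.empty_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
    lap[0] = 2 * (T[1] - T[0]) / dz**2        # zero-flux (insulated) surface
    lap[-1] = 2 * (T[-2] - T[-1]) / dz**2     # zero-flux deep boundary
    T += dt * (alpha * lap + source / rho_c)

print(f"peak temperature: {T.max():.1f} C at depth {z[T.argmax()] * 1e3:.2f} mm")
```

Varying the placeholder irradiance, absorption coefficient or layer thickness in this toy setup mimics, in spirit, the parameter study described in the abstract.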
Abstract:
The vitamin D3 and nicotine (VDN) model is a model of isolated systolic hypertension (ISH) in which arterial calcification raises arterial stiffness and vascular impedance. The effects of VDN treatment on arterial and cardiac hemodynamics have been investigated; however, a complete analysis of ventricular-arterial interaction is lacking. Wistar rats were treated with VDN (VDN group, n = 9), and an untreated control group (n = 10) was included. At week 8, invasive indexes of cardiac function were obtained using a conductance catheter. Simultaneously, aortic pressure and flow were measured to derive vascular impedance and characterize ventricular-vascular interaction. VDN caused significant increases in systolic (138 ± 6 vs. 116 ± 13 mmHg, P < 0.01) and pulse (42 ± 10 vs. 26 ± 4 mmHg, P < 0.01) pressures with respect to control. Total arterial compliance decreased (0.12 ± 0.08 vs. 0.21 ± 0.04 ml/mmHg in control, P < 0.05), and pulse wave velocity increased significantly (8.8 ± 2.5 vs. 5.1 ± 2.0 m/s in control, P < 0.05). Arterial elastance and end-systolic elastance rose significantly in the VDN group (P < 0.05). Wave reflection was augmented in the VDN group, as reflected by the increase in the wave reflection coefficient (0.63 ± 0.06 vs. 0.52 ± 0.05 in control, P < 0.05) and the amplitude of the reflected pressure wave (13.3 ± 3.1 vs. 8.4 ± 1.0 mmHg in control, P < 0.05). We studied ventricular-arterial coupling in a VDN-induced rat model of reduced arterial compliance. The VDN treatment led to the development of ISH and provoked alterations in cardiac function, arterial impedance, arterial function, and ventricular-arterial interaction, which in many aspects are similar to the effects of an aged and stiffened arterial tree.
Abstract:
This paper describes the Model for Outcome Classification in Health Promotion and Prevention adopted by Health Promotion Switzerland (SMOC, Swiss Model for Outcome Classification) and the process of its development. The context and method of model development, and the aim and objectives of the model are outlined. Preliminary experience with application of the model in evaluation planning and situation analysis is reported. On the basis of an extensive literature search, the model is situated within the wider international context of similar efforts to meet the challenge of developing tools to assess systematically the activities of health promotion and prevention.
Abstract:
The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must therefore rely on a simplified approach that is not highly parameter dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R, for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org), and has been successfully applied to case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found suitable for assessing other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM, avoids over-channelization, and so produces more realistic extents. The choice of datasets and algorithms is left to the user, which makes the model adaptable to various applications and levels of data availability. Amongst the possible datasets, the DEM is the only one that is strictly needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution a good compromise between processing time and quality of results. However, valuable results have still been obtained from lower-quality DEMs with 25 m resolution.
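The spreading step can be illustrated with the classic Holmgren weighting on a single 3x3 DEM window. The sketch below is not the Flow-R code; the optional dh term, which raises the centre cell before slopes are computed, only gestures at the modification described above, and the exponent and elevations are invented.

```python
import numpy as np

# Minimal sketch of Holmgren's multiple-flow-direction weighting on a 3x3 DEM
# window (not the Flow-R implementation). Flow from the centre cell is shared
# among lower neighbours in proportion to (tan beta)^x; the exponent x controls
# how divergent the spreading is. The optional dh raises the centre cell before
# slopes are computed, in the spirit of the modified algorithm described above.

def holmgren_weights(window: np.ndarray, cellsize: float, x: float = 4.0,
                     dh: float = 0.0) -> np.ndarray:
    """window: 3x3 array of elevations; returns a 3x3 array of flow proportions."""
    centre = window[1, 1] + dh
    dist = cellsize * np.array([[2**0.5, 1, 2**0.5],
                                [1,      1, 1],
                                [2**0.5, 1, 2**0.5]])
    tan_beta = (centre - window) / dist          # slope towards each neighbour
    tan_beta[1, 1] = 0.0                         # the centre itself receives nothing
    tan_beta[tan_beta < 0] = 0.0                 # only downslope neighbours receive flow
    weights = tan_beta ** x
    total = weights.sum()
    return weights / total if total > 0 else weights   # flat/pit cell: no outflow

window = np.array([[102.0, 101.5, 102.2],
                   [101.0, 100.8,  99.9],
                   [100.4,  99.5,  99.7]])
print(holmgren_weights(window, cellsize=10.0, x=4.0))
```

A higher exponent x concentrates the flow in the steepest direction, while x close to 1 spreads it more widely; this is the knob that trades channelization against divergence.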
Abstract:
Stemmatology, or the reconstruction of the transmission history of texts, is a field that stands particularly to gain from digital methods. Many scholars already take stemmatic approaches that rely heavily on computational analysis of the collated text (e.g. Robinson and O’Hara 1996; Salemans 2000; Heikkilä 2005; Windram et al. 2008 among many others). Although there is great value in computationally assisted stemmatology, providing as it does a reproducible result and allowing access to the relevant methodological process in related fields such as evolutionary biology, computational stemmatics is not without its critics. The current state of the art effectively forces scholars to choose between applying a preconceived judgment of the significance of textual differences (the Lachmannian or neo-Lachmannian approach, and the weighted phylogenetic approach) and making no judgment at all (the unweighted phylogenetic approach). Some basis for judgment of the significance of variation is sorely needed for medieval text criticism in particular. By this, we mean that there is a need for a statistical empirical profile of the text-genealogical significance of the different sorts of variation in different sorts of medieval texts. The rules that apply to copies of Greek and Latin classics may not apply to copies of medieval Dutch story collections; the practices of copying authoritative texts such as the Bible will most likely have been different from the practices of copying the Lives of local saints and other commonly adapted texts. It is nevertheless imperative that we have a consistent, flexible, and analytically tractable model for capturing these phenomena of transmission. In this article, we present a computational model that captures most of the phenomena of text variation, and a method for analysis of one or more stemma hypotheses against the variation model. We apply this method to three ‘artificial traditions’ (i.e. texts copied under laboratory conditions by scholars to study the properties of text variation) and four genuine medieval traditions whose transmission history is known or deduced in varying degrees. Although our findings are necessarily limited by the small number of texts at our disposal, we demonstrate here some of the wide variety of calculations that can be made using our model. Certain of our results call sharply into question the utility of excluding ‘trivial’ variation such as orthographic and spelling changes from stemmatic analysis.
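One simple way to test a variant location against a stemma hypothesis, sketched below, is to ask whether the witnesses sharing a reading span a subtree that contains no witness of a rival reading. This is a hedged illustration of the general idea, not necessarily the statistic used in the article; the stemma, witness sigla and readings are invented.

```python
import networkx as nx

# Hedged sketch of one way to score a stemma hypothesis against observed
# variation (not necessarily the statistic used in the article): a reading is
# counted as genealogically consistent if the minimal subtree of the stemma
# connecting its witnesses contains no witness of a rival reading, i.e. it
# could stem from a single copying event. Witness names and readings are invented.

stemma = nx.Graph()                              # stemma hypothesis as a tree
stemma.add_edges_from([("archetype", "A"), ("archetype", "B"),
                       ("A", "C"), ("A", "D"), ("B", "E")])

def spanning_nodes(tree: nx.Graph, witnesses: set) -> set:
    """Nodes of the minimal subtree connecting the given witnesses."""
    span, wits = set(witnesses), sorted(witnesses)
    for a, b in zip(wits, wits[1:]):
        span.update(nx.shortest_path(tree, a, b))
    return span

def consistent(tree: nx.Graph, reading: set, rivals: set) -> bool:
    return spanning_nodes(tree, reading).isdisjoint(rivals)

# two hypothetical variant locations: reading -> witnesses carrying it
for location in ({"soothly": {"C", "D"}, "truly": {"B", "E"}},
                 {"soothly": {"C", "E"}, "truly": {"B", "D"}}):
    for reading, wits in location.items():
        rivals = set().union(*(w for r, w in location.items() if r != reading))
        verdict = "consistent" if consistent(stemma, wits, rivals) else "conflicting"
        print(reading, sorted(wits), verdict)
```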
Abstract:
Our knowledge about the lunar environment is based on a large volume of ground-based, remote, and in situ observations. These observations have been conducted at different times and sampled different pieces of such a complex system as the surface-bound exosphere of the Moon. Numerical modeling is the tool that can link the results of these separate observations into a single picture. Once validated against previous measurements, models can be used for prediction and for the interpretation of future observations. In this paper we present a kinetic model of the sodium exosphere of the Moon as well as results of its validation against a set of ground-based and remote observations. The unique characteristic of the model is that it takes the orbital motion of the Moon and the Earth into consideration and simulates both the exosphere and the sodium tail self-consistently. The extended computational domain covers the part of the Earth’s orbit at new Moon, which allows us to study the effect of Earth’s gravity on the lunar sodium tail. The model is fitted to a set of ground-based and remote observations by tuning the sodium source rate as well as the sticking and accommodation coefficients. The best agreement of the model results with the observations is reached when all sodium atoms returning from the exosphere stick to the surface and the net sodium escape rate is about 5.3 × 10²² s⁻¹.
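A full kinetic exosphere model is far beyond a few lines, but the role of the sticking and accommodation coefficients can be caricatured with a toy test-particle loop: each launched atom either escapes, sticks when it returns, or rebounds with partially thermalised energy. Everything below (launch-speed distribution, surface-temperature speed, coefficient values) is an illustrative placeholder, not the model described above.

```python
import math, random

# Toy test-particle sketch of surface-bounded exosphere transport, far simpler
# than the kinetic model described above: a sodium atom launched from the lunar
# surface either escapes, sticks when it returns, or rebounds with its kinetic
# energy partially thermalised to the surface temperature. All numbers are
# illustrative placeholders.

V_ESC = 2380.0        # lunar escape speed [m/s]
V_SURF = 330.0        # speed of a Na atom thermalised to the surface (placeholder)

def fate(v0, sticking, accommodation, rng):
    v, hops = v0, 0
    while True:
        if v >= V_ESC:
            return "escaped", hops
        if rng.random() < sticking:
            return "stuck", hops
        hops += 1
        # rebound: energy accommodation pulls the speed towards the surface value
        v = math.sqrt((1 - accommodation) * v * v + accommodation * V_SURF * V_SURF)

rng = random.Random(7)
results = [fate(abs(rng.gauss(900.0, 400.0)), sticking=0.5, accommodation=0.3, rng=rng)
           for _ in range(10_000)]
stuck = sum(1 for f, _ in results if f == "stuck")
mean_hops = sum(h for f, h in results if f == "stuck") / stuck
print(f"stuck: {stuck}, escaped: {len(results) - stuck}, "
      f"mean hops before sticking: {mean_hops:.2f}")
```

Setting the sticking probability to 1 in this toy loop removes all rebounds, which is the limiting behaviour the abstract reports as the best fit to the observations.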
Abstract:
The influence of sea surface temperature (SST) anomalies on hurricane characteristics is investigated in a set of sensitivity experiments employing the Weather Research and Forecasting (WRF) model. The idealised experiments are performed for the case of Hurricane Katrina in 2005. The first set of sensitivity experiments, with basin-wide changes of the SST magnitude, shows that the intensity follows the SST, i.e., an increase in SST leads to an intensification of Katrina. Additionally, the trajectory is shifted to the west (east) with increasing (decreasing) SSTs. The main reason is a strengthening of the background flow. The second set of experiments investigates the influence of Loop Current eddies, idealised by localised SST anomalies. The intensity of Hurricane Katrina is enhanced by increased SSTs close to the core of the tropical cyclone. Negative nearby SST anomalies reduce the intensity. The trajectory only changes if positive SST anomalies are located west or north of the hurricane centre. In this case the hurricane is attracted by the SST anomaly, which provides an additional moisture source and increased vertical winds.
Abstract:
The responses of carbon dioxide (CO2) and other climate variables to an emission pulse of CO2 into the atmosphere are often used to compute the Global Warming Potential (GWP) and Global Temperature change Potential (GTP), to characterize the response timescales of Earth System models, and to build reduced-form models. In this carbon cycle-climate model intercomparison project, which spans the full model hierarchy, we quantify responses to emission pulses of different magnitudes injected under different conditions. The CO2 response shows the known rapid decline in the first few decades followed by a millennium-scale tail. For a 100 Gt-C emission pulse added to a constant CO2 concentration of 389 ppm, 25 ± 9% is still found in the atmosphere after 1000 yr; the ocean has absorbed 59 ± 12% and the land the remainder (16 ± 14%). The response in global mean surface air temperature is an increase by 0.20 ± 0.12 °C within the first twenty years; thereafter and until year 1000, temperature decreases only slightly, whereas ocean heat content and sea level continue to rise. Our best estimate for the Absolute Global Warming Potential (AGWP), given by the time-integrated response in CO2 at year 100 multiplied by its radiative efficiency, is 92.5 × 10⁻¹⁵ yr W m⁻² per kg-CO2. This value very likely (5 to 95% confidence) lies within the range of (68 to 117) × 10⁻¹⁵ yr W m⁻² per kg-CO2. Estimates for the time-integrated response in CO2 published in the IPCC First, Second, and Fourth Assessment and our multi-model best estimate all agree within 15% during the first 100 yr. The integrated CO2 response, normalized by the pulse size, is lower for pre-industrial conditions than for present day, and lower for smaller pulses than for larger pulses. In contrast, the response in temperature, sea level and ocean heat content is less sensitive to these choices. Although choices in pulse size, background concentration, and model lead to uncertainties, the most important and most subjective choice in determining the AGWP of CO2 and the GWP is the time horizon.
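The AGWP definition quoted above lends itself to a small worked example: integrate a sum-of-exponentials CO2 impulse response function (IRF) over the time horizon and multiply by the radiative efficiency per kilogram of CO2. The IRF coefficients and the radiative efficiency below are illustrative values for present-day background conditions, not necessarily the paper's exact multi-model fit.

```python
import numpy as np

# Hedged numerical sketch of the AGWP definition given above, with illustrative
# parameter values (not necessarily the paper's fit): AGWP(H) is the time
# integral of the CO2 impulse response function up to the horizon H, multiplied
# by the radiative efficiency per kg of CO2.

a   = np.array([0.2173, 0.2240, 0.2824, 0.2763])   # IRF weights (sum to ~1)
tau = np.array([394.4, 36.54, 4.304])               # decay timescales [yr] for a[1:]

def irf(t):
    """Fraction of a CO2 pulse still airborne after t years (sum-of-exponentials fit)."""
    return a[0] + np.sum(a[1:] * np.exp(-t / tau))

def integrated_irf(H):
    """Analytic time integral of the IRF from 0 to H years."""
    return a[0] * H + np.sum(a[1:] * tau * (1.0 - np.exp(-H / tau)))

H = 100                                              # time horizon [yr]
radiative_efficiency = 1.77e-15                      # [W m-2 per kg CO2] (approximate)
agwp = radiative_efficiency * integrated_irf(H)      # [yr W m-2 per kg CO2]
print(f"airborne fraction at {H} yr: {irf(H):.2f}; "
      f"AGWP_{H} ≈ {agwp:.1e} yr W m-2 per kg CO2")
```

With these illustrative numbers the result lands near 9e-14 yr W m⁻² per kg CO2, i.e. the same order as the best estimate quoted in the abstract, and it also makes the abstract's closing point concrete: the answer scales directly with the chosen horizon H.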
Abstract:
Mountain vegetation is strongly affected by temperature and is expected to shift upwards with climate change. Dynamic vegetation models are often used to assess the impact of climate on vegetation, and model output can be compared with paleobotanical data as a reality check. Recent paleoecological studies have revealed regional variation in the upward shift of timberlines in the Northern and Central European Alps in response to rapid warming at the Younger Dryas/Preboreal transition ca. 11,700 years ago, probably caused by a climatic gradient across the Alps. This contrasts with previous studies that successfully simulated the early Holocene afforestation in the (warmer) Central Alps with a chironomid-inferred temperature reconstruction from the (colder) Northern Alps. We use LandClim, a dynamic landscape vegetation model, to simulate mountain forests under different temperature, soil and precipitation scenarios around Iffigsee (2065 m a.s.l.), a lake in the Northwestern Swiss Alps, and compare the model output with the paleobotanical records. The model clearly overestimates the upward shift of timberline in a climate scenario that applies chironomid-inferred July-temperature anomalies to all months. However, forest establishment at 9800 cal. BP at Iffigsee is successfully simulated with lower moisture availability and monthly temperatures corrected for stronger seasonality during the early Holocene. The model-data comparison reveals a contraction in the realized niche of Abies alba due to the prominent role of anthropogenic disturbance after ca. 5000 cal. BP, which has important implications for species distribution models (SDMs) that rely on equilibrium with climate and niche stability. Under future climate projections, LandClim indicates a rapid upward shift of mountain vegetation belts by ca. 500 m and treeline positions of ca. 2500 m a.s.l. by the end of this century. Resulting biodiversity losses in the alpine vegetation belt might be mitigated with low-impact pastoralism to preserve species-rich alpine meadows.
Abstract:
This study examines how different microphysical parameterization schemes influence orographically induced precipitation and the distributions of hydrometeors and water vapour for midlatitude summer conditions in the Weather Research and Forecasting (WRF) model. A high-resolution two-dimensional idealized simulation is used to assess the differences between the schemes when a moist air flow interacts with a bell-shaped 2 km high mountain. Periodic lateral boundary conditions are chosen to recirculate atmospheric water in the domain. It is found that the 13 selected microphysical schemes conserve the water in the model domain; the gain or loss of water is less than 0.81% over a simulation time interval of 61 days. The differences between the microphysical schemes in terms of the distributions of water vapour, hydrometeors and accumulated precipitation are presented and discussed. The Kessler scheme, the only scheme without ice-phase processes, shows final values of cloud liquid water 14 times greater than those of the other schemes. The differences among the other schemes are not as extreme, but they still differ by up to 79% in water vapour, by up to a factor of 10 in hydrometeors and by up to 64% in accumulated precipitation at the end of the simulation. The microphysical schemes also differ in the surface evaporation rate: the WRF single-moment 3-class scheme has the highest surface evaporation rate, compensated by the highest precipitation rate. The different distributions of hydrometeors and water vapour among the microphysical schemes induce differences of up to 49 W m⁻² in the downwelling shortwave radiation and up to 33 W m⁻² in the downwelling longwave radiation.
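The conservation statement above amounts to a simple budget: airborne vapour and hydrometeor mass plus accumulated surface precipitation should stay constant under periodic boundaries. The sketch below illustrates such a check on invented fields; the variable names (qv, qc, qr, ...) and magnitudes are placeholders, not WRF output.

```python
import numpy as np

# Generic water-budget check of the kind described above (not WRF's own
# diagnostics): with periodic lateral boundaries, the mass of vapour and
# hydrometeors in the air plus the water already precipitated to the surface
# should stay (nearly) constant. Field names, shapes and values are illustrative.

def total_water(q, air_mass, precip, cell_area):
    """q: dict of mixing-ratio fields [kg/kg]; air_mass: dry-air mass per cell [kg];
    precip: accumulated surface precipitation [kg/m^2]; cell_area [m^2]."""
    airborne = sum((f * air_mass).sum() for f in q.values())
    return airborne + (precip * cell_area).sum()        # [kg]

rng = np.random.default_rng(0)
nz, nx, area, m_air = 40, 200, 4.0e6, 2.0e8
q0 = {name: rng.uniform(0, 1e-3, (nz, nx)) for name in ("qv", "qc", "qr", "qi", "qs")}
precip0 = np.zeros(nx)

# "later" state: 30% of the rain water has reached the ground as precipitation
q1 = dict(q0, qr=0.7 * q0["qr"])
precip1 = precip0 + (0.3 * q0["qr"] * m_air).sum(axis=0) / area

w0 = total_water(q0, m_air, precip0, area)
w1 = total_water(q1, m_air, precip1, area)
print(f"relative water drift: {abs(w1 - w0) / w0:.3%}")  # ~0%, cf. the 0.81% bound above
```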