945 results for "three time scales"
Abstract:
Lake sturgeon (Acipenser fulvescens) were historically abundant in the Huron-Erie Corridor (HEC), a 160 km river/channel network composed of the St. Clair River, Lake St. Clair, and the Detroit River that connects Lake Huron to Lake Erie. In the HEC, most natural lake sturgeon spawning substrates have been eliminated or degraded as a result of channelization and dredging. To address significant habitat loss in the HEC, multi-agency restoration efforts are underway to restore spawning substrate by constructing artificial spawning reefs. The main objective of this study was to conduct post-construction monitoring of lake sturgeon egg deposition and larval emergence near two of these artificial reef projects: Fighting Island Reef in the Detroit River, and Middle Channel Spawning Reef in the lower St. Clair River. We also investigated seasonal and nightly timing of larval emergence, growth, and vertical distribution in the water column at these sites, and at an additional site in the St. Clair River where lake sturgeon are known to spawn on a bed of ~100-year-old coal clinkers. From 2010 to 2012, we collected viable eggs and larvae at all three sites, indicating that these artificial reefs are creating conditions suitable for egg deposition, fertilization, incubation, and larval emergence. The construction methods and materials, and the physical site conditions present in HEC artificial reef projects, can be used to inform future spawning habitat restoration or enhancement efforts. The results from this study have also identified the likelihood of additional uncharacterized natural spawning sites in the St. Clair River. In addition to the field study, we conducted a laboratory experiment involving the actual substrate materials that have been used in artificial reef construction in this system. Although coal clinkers are chemically inert, some trace elements can be reincorporated into the clinker material during the combustion process.
Since lake sturgeon eggs and larvae develop in close proximity to this material, it is important to measure the concentration of potentially toxic trace elements. This study focused on arsenic, which occurs naturally in coal and can be toxic to fishes. Total arsenic concentration was measured in samples taken from four substrate treatments submerged in distilled water: limestone cobble, rinsed limestone cobble, coal clinker, and rinsed coal clinker. Samples were taken at three time intervals: 24 hours, 11 days, and 21 days. ICP-MS analysis showed that concentrations of total arsenic were below the EPA drinking water standard (10 ppb) for all samples. However, at the 24-hour sampling interval, a two-way repeated measures ANOVA with a Holm-Sidak post hoc analysis (α = 0.05) showed that the mean arsenic concentration was significantly higher in the coal clinker substrate treatment than in the rinsed coal clinker treatment (p = 0.006), the limestone cobble treatment (p
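The Holm-Sidak step-down correction named in the analysis above can be sketched generically. This is a minimal illustration, not the authors' analysis code, and the raw p-values below are made up for the example:

```python
import numpy as np

def holm_sidak(pvals):
    """Step-down Holm-Sidak adjustment of a set of raw p-values."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        # Sidak correction using the step-down count of remaining hypotheses
        a = 1.0 - (1.0 - p[idx]) ** (m - rank)
        running_max = max(running_max, a)  # keep adjusted p-values monotone
        adjusted[idx] = running_max
    return adjusted

# Hypothetical raw p-values from three pairwise substrate comparisons
raw = [0.006, 0.04, 0.03]
adj = holm_sidak(raw)
```

A comparison is then declared significant when its adjusted p-value falls below the family-wise α (0.05 in the study).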
Abstract:
Soils are the largest sinks of carbon in terrestrial ecosystems. Soil organic carbon is important for ecosystem balance as it supplies plants with nutrients, maintains soil structure, and helps control the exchange of CO2 with the atmosphere. The processes by which wood carbon is stabilized and destabilized in forest soils are still not completely understood. This study attempts to measure early wood decomposition by different fungal communities (inoculation with pure colonies of brown or white rot, or the original microbial community) under various interacting treatments: wood quality (wood from +CO2, +CO2+O3, or ambient atmosphere Aspen-FACE treatments from Rhinelander, WI), temperature (ambient or warmed), soil texture (loamy or sandy textured soil), and wood location (plot surface or buried 15 cm below the surface). Control plots with no wood chips added were also monitored throughout the study. By using isotopically-labelled wood chips from the Aspen-FACE experiment, we are able to track wood-derived carbon losses as soil CO2 efflux and as leached dissolved organic carbon (DOC). We analyzed soil water for chemical characteristics such as total phenolics, SUVA254, humification, and molecular size. Wood chip samples were also analyzed for their proportion of lignin:carbohydrates using FTIR analysis at three time intervals throughout 12 months of decomposition. After two years of measurements, the average total soil CO2 efflux rates were significantly different depending on wood location, temperature, and wood quality. The wood-derived portion of soil CO2 efflux also varied significantly by wood location, temperature, and wood quality. The average total DOC and the wood-derived portion of DOC differed between inoculation treatments, wood location, and temperature. Soil water chemical characteristics varied significantly by inoculation treatment, temperature, and wood quality.
After 12 months of decomposition, the proportion of lignin:carbohydrates varied significantly by inoculation treatment, with white rot being the only treatment to show an average decrease in the lignin:carbohydrate proportion. Both soil CO2 efflux and DOC losses indicate that wood location is important. Carbon losses were greater from surface wood chips compared with buried wood chips, implying the importance of buried wood for total ecosystem carbon stabilization. Treatments associated with climate change also had an effect on the level of decomposition. DOC losses, soil water characteristics, and FTIR data demonstrate the importance of the fungal community to the degree of decomposition and the resulting byproducts found throughout the soil.
Abstract:
Fluvial sediment transport is controlled by hydraulics, sediment properties and arrangement, and flow history across a range of time scales. This physical complexity has led to an ambiguous definition of the reference frame (Lagrangian or Eulerian) in which sediment transport is analysed. A general Eulerian-Lagrangian approach accounts for the inertial characteristics of particles in a Lagrangian (particle-fixed) frame, and for the hydrodynamics in an independent Eulerian frame. The necessary Eulerian-Lagrangian transformations are simplified under the assumption of an ideal Inertial Measurement Unit (IMU), rigidly attached at the centre of mass of a sediment particle. Real, commercially available IMU sensors can provide high frequency data on accelerations and angular velocities (hence forces and energy) experienced by grains during entrainment and motion, if adequately customized. IMUs are subject to significant error accumulation, but they can be used for statistical parametrisation of an Eulerian-Lagrangian model, for coarse sediment particles and over the temporal scale of individual entrainment events. In this thesis an Eulerian-Lagrangian model is introduced and evaluated experimentally. Absolute inertial accelerations were recorded at a 4 Hz frequency from a spherical instrumented particle (111 mm diameter and 2383 kg/m3 density) in a series of entrainment threshold experiments on a fixed idealised bed. The grain-top inertial acceleration entrainment threshold was approximated at 44 and 51 mg for slopes of 0.026 and 0.037, respectively. The saddle inertial acceleration entrainment threshold was at 32 and 25 mg for slopes of 0.044 and 0.057, respectively. For the evaluation of the complete Eulerian-Lagrangian model, two prototype sensors are presented: an idealised spherical sensor with a diameter of 90 mm, and an ellipsoidal sensor with axes of 100, 70 and 30 mm.
Both are instrumented with a complete IMU, capable of sampling 3D inertial accelerations and 3D angular velocities at 50 Hz. After signal analysis, the results can be used to parametrize sediment movement, but they do not contain positional information. The two sensors (spherical and ellipsoidal) were tested in a series of entrainment experiments, similar to the evaluation of the 111 mm prototype, for a slope of 0.02. The spherical sensor entrained at discharges of 24.8 ± 1.8 l/s, while the same threshold for the ellipsoidal sensor was 45.2 ± 2.2 l/s. Kinetic energy calculations were used to quantify the particle-bed energy exchange under fluvial (discharge at 30 l/s) and non-fluvial conditions. All the experiments suggest that the effect of the inertial characteristics of coarse sediments on their motion is comparable to the effect of hydrodynamic forces. The coupling of IMU sensors with advanced telemetric systems can lead to the tracking of Lagrangian particle trajectories, at a frequency and accuracy that will permit the testing of diffusion/dispersion models across the range of particle diameters.
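The mg-threshold entrainment detection described above can be illustrated with a short sketch. The threshold logic and the synthetic record below are illustrative assumptions, not the thesis' processing pipeline:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def first_entrainment_sample(acc_xyz, threshold_mg):
    """Index of the first sample whose acceleration magnitude deviates
    from the 1 g rest value by more than threshold_mg (in milli-g)."""
    mag = np.linalg.norm(np.asarray(acc_xyz, dtype=float), axis=1)
    deviation_mg = np.abs(mag - G) / G * 1000.0
    hits = np.flatnonzero(deviation_mg > threshold_mg)
    return int(hits[0]) if hits.size else None

# Synthetic 50 Hz record: 10 samples at rest, then a ~100 mg disturbance
rest = [(0.0, 0.0, G)] * 10
moving = [(0.0, 0.0, 1.1 * G)] * 5
onset = first_entrainment_sample(rest + moving, threshold_mg=44.0)
```

In practice the recorded signal would also carry sensor noise and bias, so a real detector would filter the signal and require the threshold to be exceeded for several consecutive samples.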
Abstract:
We present a multiscale model bridging length and time scales from molecular to continuum levels with the objective of predicting the yield behavior of amorphous glassy polyethylene (PE). Constitutive parameters are obtained from molecular dynamics (MD) simulations, decreasing the requirement for ad hoc experiments. Consequently, we achieve: (1) the identification of multisurface yield functions; (2) the upscaling to the continuum, via quasi-static simulations, of the high strain rates involved in MD simulations, with validation demonstrating that the entire set of multisurface yield functions can be scaled to quasi-static rates, where the yield stresses can be predicted by a proposed scaling law; and (3) the construction of a hierarchical multiscale model to predict the temperature- and strain rate-dependent yield strength of PE.
Abstract:
The amount of data collected from an individual player during a football match has increased significantly in recent years, following technological evolution in positional tracking. However, given the short time that separates competitions, common analysis of these data focuses on the magnitude of each player's actions, considering either technical or physical performance. This focus leads to a considerable amount of information not being taken into account in performance optimization, particularly when considering a sequence of different matches of the same team. In this presentation, we introduce a tactical performance indicator that considers players' overall positioning and their level of coordination during the match. This performance indicator will be applied at different time scales, with a particular focus on possible practical applications.
Abstract:
The present thesis focuses on the on-fault slip distribution of large earthquakes in the framework of tsunami hazard assessment and tsunami warning improvement. It is widely known that ruptures on seismic faults are strongly heterogeneous. In the case of tsunamigenic earthquakes, the slip heterogeneity strongly influences the spatial distribution of the largest tsunami effects along the nearest coastlines. Unfortunately, after an earthquake occurs, the so-called finite-fault models (FFM) describing the coseismic on-fault slip pattern become available over time scales that are incompatible with early tsunami warning purposes, especially in the near field. Our work aims to characterize the slip heterogeneity in a fast, but still suitable, way. Using finite-fault models to build a starting dataset of seismic events, the characteristics of the fault planes are studied with respect to the magnitude. The patterns of the slip distribution on the rupture plane, analysed with a cluster identification algorithm, reveal a preferential single-asperity representation that can be approximated by a two-dimensional Gaussian slip distribution (2D GD). The goodness of the 2D GD model is compared to that of other distributions used in the literature, and its ability to represent the slip heterogeneity in the form of the main asperity is proven. The magnitude dependence of the 2D GD parameters is investigated and turns out to be of primary importance from an early warning perspective. The Gaussian model is applied to the 16 September 2015 Illapel, Chile, earthquake and used to compute early tsunami predictions that compare satisfactorily with the available observations. The fast computation of the 2D GD and its suitability in representing the slip complexity of the seismic source make it a useful tool for tsunami early warning assessments, especially in the near field.
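The single-asperity approximation described above can be sketched in a few lines. The grid size, asperity position, widths, and peak slip below are arbitrary example values, not parameters from the thesis:

```python
import numpy as np

def gaussian_slip(nx, ny, x0, y0, sx, sy, peak):
    """Single-asperity slip pattern: a 2D Gaussian on an nx-by-ny fault grid.
    (x0, y0) is the asperity centre, (sx, sy) its along-strike/down-dip
    widths (in subfault units), and peak the maximum slip."""
    x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    return peak * np.exp(-0.5 * (((x - x0) / sx) ** 2 + ((y - y0) / sy) ** 2))

# Example: a 50 x 30 subfault grid with a 4 m peak-slip asperity
slip = gaussian_slip(50, 30, x0=25, y0=15, sx=8.0, sy=5.0, peak=4.0)
```

The appeal of this parametrisation for early warning is that only a handful of parameters (centre, widths, peak slip) must be estimated before a tsunami simulation can be launched, rather than a full finite-fault inversion.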
Abstract:
This thesis aims to investigate the fundamental processes governing the performance of different types of photoelectrodes used in photoelectrochemical (PEC) applications, such as unbiased water splitting for hydrogen production. Unraveling the transport and recombination phenomena in nanostructured and surface-modified heterojunctions at a semiconductor/electrolyte interface is not trivial. To approach this task, the work presented here first focuses on a hydrogen-terminated p-silicon photocathode in acetonitrile, considered as a standard reference for PEC studies. Steady-state and time-resolved excitation at long wavelength provided clear evidence of the formation of an inversion layer and revealed that the optimal photovoltage and the longest electron-hole pair lifetime occur when the reduction potential for the species in solution lies within the unfilled conduction band states. Understanding more complex systems is not as straightforward, and a complete characterization that combines time- and frequency-resolved techniques is needed. Intensity modulated photocurrent spectroscopy and transient absorption spectroscopy are used here on WO3/BiVO4 heterojunctions. By selectively probing the two layers of the heterojunction, the occurrence of interfacial recombination was identified. Then, the addition of Co-Fe based overlayers resulted in passivation of surface states and charge storage at the overlayer active sites, providing higher charge separation efficiency and suppression of recombination on time scales that go from picoseconds to seconds. Finally, the charge carrier kinetics of several different Cu(In,Ga)Se2 (CIGS)-based architectures used for water reduction were investigated. The efficiency of a CIGS photocathode is severely limited by charge transfer at the electrode/electrolyte interface compared to the same absorber layer used as a photovoltaic cell.
A NiMo binary alloy deposited on the photocathode surface showed a remarkable enhancement in the rate of electron transfer to species in solution. An external CIGS photovoltaic module assisting a NiMo dark cathode displayed optimal absorption and charge separation properties and a highly performing interface with the solution.
Abstract:
Ultrafast pump-probe spectroscopy is a conceptually simple and versatile tool for resolving photoinduced dynamics in molecular systems. Due to the fast development of new experimental setups, such as synchrotron light sources and X-ray free electron lasers (XFEL), new spectral windows are becoming accessible. On the one hand, these sources have enabled scientists to access ever faster time scales and to reach unprecedented insights into the dynamical properties of matter. On the other hand, the complementarity of well-developed and novel techniques makes it possible to study the same physical process from different points of view, integrating the advantages and overcoming the limitations of each approach. In this context, it is highly desirable to reach a clear understanding of which type of spectroscopy is best suited to capture a certain facet of a given photo-induced process, that is, to establish a correlation between the process to be unraveled and the technique to be used. In this thesis, I will show how computational spectroscopy can be a tool to establish such a correlation. I will study a specific process, the ultrafast energy transfer in the nicotinamide adenine dinucleotide dimer (NADH). This process will be observed in different spectral windows (from UV-VIS to X-rays), assessing the ability of different spectroscopic techniques to unravel the system evolution by means of state-of-the-art theoretical models and methodologies. The comparison of different spectroscopic simulations will demonstrate their complementarity, ultimately allowing identification of the type of spectroscopy that is best suited to resolve the ultrafast energy transfer.
Abstract:
Intermediate-complexity general circulation models are a fundamental tool to investigate the role of internal and external variability within the general circulation of the atmosphere and ocean. The model used in this thesis is an intermediate-complexity atmospheric general circulation model (SPEEDY) coupled to a state-of-the-art modelling framework for the ocean (NEMO). We assess to what extent the model allows a realistic simulation of the most prominent natural mode of variability at interannual time scales: the El Niño Southern Oscillation (ENSO). To a good approximation, the model represents the ENSO-induced Sea Surface Temperature (SST) pattern in the equatorial Pacific, despite a cold tongue-like bias. The model underestimates (overestimates) the typical ENSO spatial variability during the winter (summer) seasons. The mid-latitude response to ENSO reveals that the typical poleward stationary Rossby wave train is reasonably well represented. The spectral decomposition of ENSO features a spectrum that lacks periodicity at high frequencies and is overly periodic at interannual time scales. We then implemented an idealised transient mean state change in the SPEEDY model. A warmer climate is simulated by an alteration of the parametrized radiative fluxes that corresponds to doubled carbon dioxide absorptivity. Results indicate that the globally averaged surface air temperature increases by 0.76 K. Regionally, the induced signal on the SST field features a significant warming over the central-western Pacific and an El Niño-like warming in the subtropics. In general, the model features a weakening of the tropical Walker circulation and a poleward expansion of the local Hadley cell. This response is also detected in a poleward rearrangement of the tropical convective rainfall pattern. The model setup implemented here provides valid theoretical support for future studies on climate sensitivity and forced modes of variability under mean state changes.
Abstract:
We show that time-dependent couplings may lead to nontrivial scaling properties of the surface fluctuations of the asymptotic regime in nonequilibrium kinetic roughening models. Three typical situations are studied. In the case of a crossover between two different rough regimes, the time-dependent coupling may result in anomalous scaling for scales above the crossover length. In a different setting, for a crossover from a rough to either a flat or damping regime, the time-dependent crossover length may conspire to produce a rough surface, although the most relevant term tends to flatten the surface. In addition, our analysis sheds light on an existing debate in the problem of spontaneous imbibition, where time-dependent couplings naturally arise in theoretical models and experiments.
Abstract:
BACKGROUND AND AIMS: Evidence-based and reliable measures of addictive disorders are needed in general population-based assessments. One study suggested that heavy use over time (UOT) should be used instead of self-reported addiction scales (AS). This study compared UOT and AS regarding video gaming and internet use empirically, using associations with comorbid factors. DESIGN: Cross-sectional data from the 2011 French Survey on Health and Consumption on Call-up and Preparation for Defence-Day (ESCAPAD), cross-sectional data from the 2012 Swiss ado@internet.ch study and two waves of longitudinal data (2010-13) of the Swiss Longitudinal Cohort Study on Substance Use Risk Factors (C-SURF). SETTING: Three representative samples from the general population of French and Swiss adolescents and young Swiss men, aged approximately 17, 14 and 20 years, respectively. PARTICIPANTS: ESCAPAD: n =22 945 (47.4% men); ado@internet.ch: n =3049 (50% men); C-SURF: n =4813 (baseline + follow-up, 100% men). MEASUREMENTS: We assessed video gaming/internet UOT [ESCAPAD and ado@internet.ch: number of hours spent online per week, C-SURF: latent score of time spent gaming/using internet] and AS (ESCAPAD: Problematic Internet Use Questionnaire, ado@internet.ch: Internet Addiction Test, C-SURF: Gaming AS). Comorbidities were assessed with health outcomes (ESCAPAD: physical health evaluation with a single item, suicidal thoughts, and appointment with a psychiatrist; ado@internet.ch: WHO-5 and somatic health problems; C-SURF: Short Form 12 (SF-12 Health Survey) and Major Depression Inventory (MDI)). FINDINGS: UOT and AS were correlated moderately (ESCAPAD: r = 0.40, ado@internet.ch: r = 0.53 and C-SURF: r = 0.51). Associations of AS with comorbidity factors were higher than those of UOT in cross-sectional (AS: 0.005 ≤ |b| ≤ 2.500, UOT: 0.001 ≤ |b| ≤ 1.000) and longitudinal analyses (AS: 0.093 ≤ |b| ≤ 1.079, UOT: 0.020 ≤ |b| ≤ 0.329).
The results were similar across gender in ESCAPAD and ado@internet.ch (men: AS: 0.006 ≤ |b| ≤ 0.211, UOT: 0.001 ≤ |b| ≤ 0.061; women: AS: 0.004 ≤ |b| ≤ 0.155, UOT: 0.001 ≤ |b| ≤ 0.094). CONCLUSIONS: The measurement of heavy use over time captures part of addictive video gaming/internet use without overlapping to a large extent with the results of measuring by self-reported addiction scales (AS). Measuring addictive video gaming/internet use via self-reported addiction scales relates more strongly to comorbidity factors than heavy use over time.
Abstract:
Mixing layers are present in very different types of physical situations such as atmospheric flows, aerodynamics and combustion. It is, therefore, a well-researched subject, but there are aspects that require further study. Here the instability of two- and three-dimensional perturbations in the compressible mixing layer was investigated by numerical simulations. In the numerical code, the derivatives were discretized using high-order compact finite-difference schemes. A stretching in the normal direction was implemented with the objectives of both reducing the sound waves generated by the shear region and improving the resolution near the center. The compact schemes were modified to work with non-uniform grids. Numerical tests started with an analysis of the growth rate in the linear regime to verify the code implementation. Tests were also performed in the non-linear regime, and it was possible to reproduce the vortex roll-up and pairing, in both two- and three-dimensional situations. Amplification rate analysis was also performed for the secondary instability of this flow. It was found that, for essentially incompressible flow, maximum growth rates occurred for a spanwise wavelength of approximately 2/3 of the streamwise spacing of the vortices. The result demonstrated the applicability of the theory developed by Pierrehumbert and Widnall. Compressibility effects were then considered, and the maximum growth rates obtained for relatively high Mach numbers (typically under 0.8) were also presented.
Long-term clozapine treatment identifies significant improvements in clinical and functioning scales
Abstract:
The majority of clinical drug trials only cover a small number of variables over a short period of time in a small group of people. The objective of this study was to track a large group of people over a long period of time, using a diverse range of variables with a naturalistic design to assess the 'real world' use of clozapine. Fifty-three people with treatment-resistant schizophrenia were recruited into a 2-year study which assessed the subjects using the following scales: Positive and Negative Syndrome Scale (PANSS), Clinical Global Impression Scale (CGI), Life Skills Profile (LSP), and Role Functioning Scale (RFS). Discharge, leave, and ward movement rates were also monitored. All subjects were inpatients at a tertiary psychiatric facility. Thirty-three percent of the group was discharged. Seventy-three percent moved to less cost-intensive wards, and the leave rate increased by 105%. Sixty-seven percent of the study group were identified as responders by the 24-month time point. Twenty-four percent of the group had their CGI scores reduced to 2 or better (p = 0.0001). Significant improvements were identified in the RFS (p = 0.02) and LSP (p = 0.0001). Long-term clozapine treatment has identified a significant group of responders on a variety of measures.
Abstract:
Background: There is a paucity of information describing the real-time 3-dimensional echocardiography (RT3DE) and dyssynchrony indexes (DIs) of a normal population. We evaluate the RT3DE DIs in a population with normal electrocardiograms and 2- and 3-dimensional echocardiographic analyses. This information is relevant for cardiac resynchronization therapy. Methods: We evaluated 131 healthy volunteers (73 were male, aged 46 +/- 14 years) who were referred for routine echocardiography; who presented normal cardiac structure on electrocardiography, 2-dimensional echocardiography, and RT3DE; and who had no history of cardiac diseases. We analyzed 3-dimensional left ventricular ejection fraction, left ventricle end-diastolic volume, left ventricle end-systolic volume, and left ventricular systolic DI% (6-, 12-, and 16-segment models). RT3DE data were analyzed by quantifying the statistical distribution (mean, median, standard deviation [SD], relative SD, coefficient of skewness, coefficient of kurtosis, Kolmogorov-Smirnov test, D'Agostino-Pearson test, percentiles, and 95% confidence interval).
Results: Left ventricular ejection fraction ranged from 50% to 80% (66.1% +/- 7.1%); left ventricle end-diastolic volume ranged from 39.8 to 145 mL (79.1 +/- 24.9 mL); left ventricle end-systolic volume ranged from 12.9 to 66 mL (27 +/- 12.1 mL); 6-segment DI% ranged from 0.20% to 3.80% (1.21% +/- 0.66%), median: 1.06, relative SD: 0.5482, coefficient of skewness: 1.2620 (P < .0001), coefficient of kurtosis: 1.9956 (P = .0039), percentile 2.5%: 0.2900, percentile 97.5%: 2.8300; 12-segment DI% ranged from 0.22% to 4.01% (1.29% +/- 0.71%), median: 1.14, relative SD: 0.95, coefficient of skewness: 1.1089 (P < .0001), coefficient of kurtosis: 1.6372 (P = .0100), percentile 2.5%: 0.2850, percentile 97.5%: 3.0700; and 16-segment DI% ranged from 0.29% to 4.88% (1.59% +/- 0.99%), median: 1.39, relative SD: 0.56, coefficient of skewness: 1.0792 (P < .0001), coefficient of kurtosis: 0.9248 (P = .07), percentile 2.5%: 0.3750, percentile 97.5%: 3.750. Conclusion: This study allows for the quantification of RT3DE DIs in normal subjects, providing a comparison for patients with heart failure who may be candidates for cardiac resynchronization therapy. (J Am Soc Echocardiogr 2008; 21: 1229-1235)
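The distribution summary reported above (mean, median, SD, relative SD, skewness, kurtosis, and 2.5/97.5 percentiles) can be reproduced generically. This is a sketch of the statistics on a synthetic sample, not the study's analysis code:

```python
import numpy as np
from scipy import stats

def summarize_di(sample):
    """Distribution summary of a dyssynchrony-index sample."""
    di = np.asarray(sample, dtype=float)
    sd = di.std(ddof=1)  # sample standard deviation
    return {
        "mean": float(di.mean()),
        "median": float(np.median(di)),
        "sd": float(sd),
        "relative_sd": float(sd / di.mean()),
        "skewness": float(stats.skew(di)),
        "kurtosis": float(stats.kurtosis(di)),  # excess kurtosis
        "p2.5": float(np.percentile(di, 2.5)),
        "p97.5": float(np.percentile(di, 97.5)),
    }

# Hypothetical 6-segment DI% values from a handful of subjects
summary = summarize_di([0.9, 1.1, 1.0, 1.2, 0.8, 1.0])
```

The 97.5th percentile is the quantity of clinical interest here, since it gives an upper reference limit against which a patient's dyssynchrony index can be compared.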
Abstract:
Real-time three-dimensional echocardiography (RT3DE) has been demonstrated to be an accurate technique to quantify left ventricular (LV) volumes and function in different patient populations. We sought to determine the value of RT3DE for evaluating patients with hypertrophic cardiomyopathy (HCM), in comparison with cardiac magnetic resonance imaging (MRI). Methods: We studied 20 consecutive patients with HCM who underwent two-dimensional echocardiography (2DE), RT3DE, and MRI. Parameters analyzed by echocardiography and MRI included: wall thickness, LV volumes, ejection fraction (LVEF), mass, geometric index, and dyssynchrony index. Statistical analysis was performed using Lin's agreement coefficient, Pearson's linear correlation, and the Bland-Altman model. Results: There was excellent agreement between 2DE and RT3DE (Rc = 0.92), 2DE and MRI (Rc = 0.85), and RT3DE and MRI (Rc = 0.90) for linear measurements. Agreement indexes for LV end-diastolic and end-systolic volumes were Rc = 0.91 and Rc = 0.91 between 2DE and RT3DE, Rc = 0.94 and Rc = 0.95 between RT3DE and MRI, and Rc = 0.89 and Rc = 0.88 between 2DE and MRI, respectively. Satisfactory agreement was observed between 2DE and RT3DE (Rc = 0.75), RT3DE and MRI (Rc = 0.83), and 2DE and MRI (Rc = 0.73) for determining LVEF, with a mild underestimation of LVEF by 2DE, and smaller variability between RT3DE and MRI. Regarding LV mass, excellent agreement was observed between RT3DE and MRI (Rc = 0.96), with a bias of -6.3 g (limits of concordance = 42.22 to -54.73 g). Conclusion: In patients with HCM, RT3DE performed better than 2DE for the evaluation of myocardial hypertrophy, LV volumes, LVEF, and LV mass.
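Lin's agreement (concordance correlation) coefficient used throughout the comparison above can be sketched directly from its definition; the two measurement series below are hypothetical, not data from the study:

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement series:
    rho_c = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    s_xy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * s_xy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical LV mass readings (g) from two modalities
rt3de = [150.0, 180.0, 120.0, 200.0]
mri = [155.0, 176.0, 125.0, 198.0]
agreement = lin_ccc(rt3de, mri)
```

Unlike Pearson's correlation, which only measures linear association, this coefficient also penalizes systematic offsets and scale differences between the two methods, which is why it is the appropriate agreement statistic for comparing imaging modalities.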