857 results for Environments with time-varying ocean currents


Relevance:

100.00%

Publisher:

Abstract:

This paper presents a technique to estimate and model patient-specific pulsatility of cerebral aneurysms over one cardiac cycle, using 3D rotational X-ray angiography (3DRA) acquisitions. Aneurysm pulsation is modeled as a time-varying spline tensor field representing the deformation applied to a reference volume image, thus producing the instantaneous morphology at each time point in the cardiac cycle. The estimated deformation is obtained by matching multiple simulated projections of the deforming volume to their corresponding original projections. A weighting scheme is introduced to account for the relevance of each original projection for the selected time point. The wide coverage of the projections, together with the weighting scheme, ensures motion consistency in all directions. The technique has been tested on digital and physical phantoms that are realistic and clinically relevant in terms of geometry, pulsation and imaging conditions. Results from digital phantom experiments demonstrate that the proposed technique recovers subvoxel pulsation with an error lower than 10% of the maximum pulsation in most cases. The experiments with the physical phantom demonstrated the feasibility of pulsation estimation and allowed different pulsation regions to be identified under clinical conditions.
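The projection weighting idea can be illustrated with a minimal sketch (the Gaussian phase kernel, its width, and the normalization below are illustrative assumptions, not the paper's actual scheme):

```python
import numpy as np

def projection_weights(proj_phases, target_phase, sigma=0.05):
    """Weight each acquired projection by its cyclic distance (in cardiac
    phase, 0..1) to the selected reconstruction time point, so that
    projections acquired near that phase dominate the deformation estimate."""
    d = np.abs(proj_phases - target_phase)
    d = np.minimum(d, 1.0 - d)           # wrap around the cardiac cycle
    w = np.exp(-0.5 * (d / sigma) ** 2)  # Gaussian falloff with phase distance
    return w / w.sum()                   # normalize weights to sum to 1

phases = np.linspace(0.0, 1.0, 100, endpoint=False)  # 100 projections over one cycle
w = projection_weights(phases, target_phase=0.25)
```

Projections acquired exactly at the target phase receive the largest weight, while those half a cycle away contribute almost nothing.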

Relevance:

100.00%

Publisher:

Abstract:

For a wide range of environmental, hydrological, and engineering applications there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based rather than ray-based approaches is roughly one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it remains comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments.
To this end, the general motivation of my thesis is to evaluate the robustness and limitations of waveform inversion algorithms for crosshole georadar data in order to apply such schemes to a wide range of real-world problems. One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in practice. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem; accurate knowledge of the source wavelet is therefore critically important for their successful application. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, providing remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity, as well as significant ambient noise in the recorded data. Furthermore, our tests indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when the wavelet estimation is incorporated directly into the inverse problem. Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters.
This is crucial because, in reality, these parameters are known to be frequency-dependent and complex, and recorded georadar data may therefore show significant dispersive behaviour. In particular, in the presence of water there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to evaluating the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and provides adequate tomographic reconstructions.
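One common form of deconvolution-based source wavelet estimation can be sketched as a stabilized frequency-domain least-squares update (a hedged illustration of a single iteration; the damping term and trace averaging are generic choices, not necessarily the thesis's exact formulation):

```python
import numpy as np

def update_wavelet(d_obs, d_syn, w_current, eps=1e-3):
    """One iteration of deconvolution-based source wavelet estimation:
    compute the stabilized least-squares filter that maps the synthetic
    traces onto the observed ones, averaged over all traces, and apply
    it to the current wavelet estimate in the frequency domain."""
    D_obs = np.fft.rfft(d_obs, axis=1)
    D_syn = np.fft.rfft(d_syn, axis=1)
    num = (D_obs * np.conj(D_syn)).sum(axis=0)
    den = (np.abs(D_syn) ** 2).sum(axis=0) + eps * np.abs(D_syn).max() ** 2
    F = num / den                          # trace-averaged correction filter
    W = np.fft.rfft(w_current) * F         # correct the wavelet spectrum
    return np.fft.irfft(W, n=len(w_current))

# Sanity check: if the observed data are a scaled copy of the synthetics,
# the update should scale the wavelet by the same factor.
rng = np.random.default_rng(0)
traces = rng.standard_normal((8, 128))     # 8 traces, 128 samples each
w0 = rng.standard_normal(128)              # current wavelet estimate
w1 = update_wavelet(2.0 * traces, traces, w0, eps=1e-12)
```

In a full inversion this update would alternate with forward simulations, so that the wavelet estimate and the tomogram improve together.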

Relevance:

100.00%

Publisher:

Abstract:

The goal of this paper is to estimate time-varying covariance matrices. Since the covariance matrix of financial returns is known to change through time and is an essential ingredient in risk measurement, portfolio selection, and tests of asset pricing models, this is a very important problem in practice. Our model of choice is the Diagonal-Vech version of the Multivariate GARCH(1,1) model. The problem is that the estimation of the general Diagonal-Vech model is numerically infeasible in dimensions higher than 5. The common approach is to estimate more restrictive models which are tractable but may not conform to the data. Our contribution is to propose an alternative estimation method that is numerically feasible, produces positive semi-definite conditional covariance matrices, and does not impose unrealistic a priori restrictions. We provide an empirical application in the context of international stock markets, comparing the new estimator to a number of existing ones.
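The Diagonal-Vech GARCH(1,1) recursion itself is simple to write down; below is a minimal filtering sketch (unconstrained, so positive semi-definiteness of H_t is not guaranteed for arbitrary parameters — which is precisely the issue the paper's estimator addresses; all parameter values here are illustrative):

```python
import numpy as np

def diagonal_vech_filter(returns, C, A, B):
    """Filter a return series through the Diagonal-Vech GARCH(1,1)
    recursion H_t = C + A * (e_{t-1} e_{t-1}') + B * H_{t-1},
    where * is the element-wise (Hadamard) product and C, A, B are
    symmetric parameter matrices."""
    T, n = returns.shape
    H = np.empty((T, n, n))
    H[0] = np.cov(returns.T)               # initialize at the sample covariance
    for t in range(1, T):
        e = returns[t - 1][:, None]
        H[t] = C + A * (e @ e.T) + B * H[t - 1]
    return H

rng = np.random.default_rng(1)
returns = 0.01 * rng.standard_normal((50, 2))    # toy bivariate return series
C = np.array([[1e-5, 2e-6], [2e-6, 1e-5]])
A = np.full((2, 2), 0.05)
B = np.full((2, 2), 0.90)
H = diagonal_vech_filter(returns, C, A, B)
```

Since each element of H_t depends only on the corresponding elements of C, A, B, the number of free parameters grows quadratically with the dimension, which illustrates why unrestricted estimation becomes infeasible beyond a handful of assets.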

Relevance:

100.00%

Publisher:

Abstract:

Multiexponential decays may contain time constants differing by several orders of magnitude. In such cases, uniform sampling results in very long records featuring a high degree of oversampling in the final part of the transient. Here, we analyze a nonlinear time-scale transformation that reduces the total number of samples with minimum signal distortion, achieving an important reduction in the computational cost of subsequent analyses. We propose a time-varying filter whose length is optimized for minimum mean square error.
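The core resampling step can be sketched with a logarithmic time grid; plain linear interpolation here stands in for the optimized time-varying filter proposed in the text, and the signal and grid sizes are illustrative:

```python
import numpy as np

def log_resample(t_uniform, y_uniform, n_out):
    """Resample a uniformly sampled transient onto a logarithmic time grid:
    dense where the fast time constants live, sparse in the heavily
    oversampled tail, cutting the sample count with little distortion."""
    t_log = np.geomspace(t_uniform[1], t_uniform[-1], n_out)
    return t_log, np.interp(t_log, t_uniform, y_uniform)

# A decay with time constants almost three orders of magnitude apart
t = np.linspace(0.0, 10.0, 100_000)
y = np.exp(-t / 0.01) + np.exp(-t / 5.0)
t_log, y_log = log_resample(t, y, n_out=200)   # 500x fewer samples
```

Reconstructing the uniform record from the 200 log-spaced samples reproduces the transient to well under 0.1% of its peak, which is the sense in which the nonuniform grid removes redundancy rather than information.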

Relevance:

100.00%

Publisher:

Abstract:

Most studies of invasive species have been in highly modified, lowland environments, with comparatively little attention directed to less disturbed, high-elevation environments. However, increasing evidence indicates that plant invasions do occur in these environments, which often have high conservation value and provide important ecosystem services. Over a thousand non-native species have become established in natural areas at high elevations worldwide, and although many of these are not invasive, some may pose a considerable threat to native mountain ecosystems. Here, we discuss four main drivers that shape plant invasions into high-elevation habitats: (1) the (pre-)adaptation of non-native species to abiotic conditions, (2) natural and anthropogenic disturbances, (3) biotic resistance of the established communities, and (4) propagule pressure. We propose a comprehensive research agenda for tackling the problem of plant invasions into mountain ecosystems, including documentation of mountain invasion patterns at multiple scales, experimental studies, and an assessment of the impacts of non-native species in these systems. The threat posed to high-elevation biodiversity by invasive plant species is likely to increase because of globalization and climate change. However, the higher mountains harbor ecosystems where invasion by non-native species has scarcely begun, and where science and management have the opportunity to respond in time.

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: To assess baseline predictors and consequences of medication non-adherence in the treatment of pediatric patients with attention-deficit/hyperactivity disorder (ADHD) from Central Europe and East Asia. PATIENTS AND METHODS: Data for this post-hoc analysis were taken from a 1-year prospective, observational study that included a total of 1,068 newly-diagnosed pediatric patients with ADHD symptoms from Central Europe and East Asia. Medication adherence during the week prior to each visit was assessed by treating physicians using a 5-point Likert scale, and then dichotomized into either adherent or non-adherent. Clinical severity was measured by the Clinical Global Impressions-ADHD-Severity (CGI-ADHD) scale and the Child Symptom Inventory-4 (CSI-4) Checklist. Health-Related Quality of Life (HRQoL) was measured using the Child Health and Illness Profile-Child Edition (CHIP-CE). Regression analyses were used to assess baseline predictors of overall adherence during follow-up, and the impact of time-varying adherence on subsequent outcomes: response (defined as a decrease of at least 1 point in CGI), changes in CGI-ADHD, CSI-4, and the five dimensions of CHIP-CE. RESULTS: Of the 860 patients analyzed, 64.5% (71.6% in Central Europe and 55.5% in East Asia) were rated as adherent and 35.5% as non-adherent during follow-up. Being from East Asia was found to be a strong predictor of non-adherence. In East Asia, a family history of ADHD and parental emotional distress were associated with non-adherence, while having no other children living at home was associated with non-adherence in Central Europe as well as in the overall sample. Non-adherence was associated with poorer response and less improvement on CGI-ADHD and CSI-4, but not on CHIP-CE. CONCLUSION: Non-adherence to medication is common in the treatment of ADHD, particularly in East Asia. Non-adherence was associated with poorer response and less improvement in clinical severity. 
A limitation of this study is that medication adherence was assessed by the treating clinician using a single item question.

Relevance:

100.00%

Publisher:

Abstract:

This thesis investigates the effectiveness of time-varying hedging during the financial crisis of 2007 and the European debt crisis of 2010. The seven test economies are members of the European Monetary Union and are in different economic states. The time-varying hedge ratio was constructed from conditional variances and correlations estimated with multivariate GARCH models. Three different underlying portfolios are used: national equity markets, government bond markets, and the combination of the two. These underlying portfolios were hedged using credit default swaps. The empirical part includes in-sample and out-of-sample analyses based on both constant and dynamic models. In almost every case, the dynamic models outperform the constant ones in determining the hedge ratio. We could not find any statistically significant evidence to support the use of the asymmetric dynamic conditional correlation model. Our findings are in line with prior literature and support the use of a time-varying hedge ratio. Finally, we found that in some cases credit default swaps are not suitable hedging instruments and act more as speculative instruments.
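A time-varying hedge ratio of this kind can be sketched with an EWMA filter standing in for the multivariate GARCH models used in the thesis (the smoothing constant, initialization, and simulated data are assumptions for illustration):

```python
import numpy as np

def ewma_hedge_ratio(r_asset, r_hedge, lam=0.94):
    """Time-varying hedge ratio h_t = Cov_t(asset, hedge) / Var_t(hedge),
    with conditional moments from an EWMA (RiskMetrics-style) recursion."""
    T = len(r_asset)
    cov = np.empty(T)
    var = np.empty(T)
    cov[0] = np.cov(r_asset, r_hedge)[0, 1]   # initialize at sample moments
    var[0] = np.var(r_hedge)
    for t in range(1, T):
        cov[t] = lam * cov[t - 1] + (1 - lam) * r_asset[t - 1] * r_hedge[t - 1]
        var[t] = lam * var[t - 1] + (1 - lam) * r_hedge[t - 1] ** 2
    return cov / var

# Simulated returns where the true hedge ratio is 0.5
rng = np.random.default_rng(2)
r_hedge = 0.01 * rng.standard_normal(2000)
r_asset = 0.5 * r_hedge + 0.001 * rng.standard_normal(2000)
h = ewma_hedge_ratio(r_asset, r_hedge)
```

The filtered ratio hovers around the true value of 0.5 but moves with recent comovement, which is what makes a dynamic hedge responsive during crisis periods.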

Relevance:

100.00%

Publisher:

Abstract:

As dense cataracts are the most common ocular disorder, they prevent fundoscopic examination and thus the diagnosis of retinal disorders to which dogs may be predisposed. The aim of this study was to compare electroretinographic responses, recorded according to the International Society for Clinical Electrophysiology of Vision human protocol, to evaluate retinal function in diabetic and non-diabetic dogs presenting mature or hypermature cataracts. Full-field electroretinograms were recorded from 66 dogs, aged 6 to 15 years, allocated into two groups: (1) CG, non-diabetic cataractous dogs, and (2) DG, diabetic cataractous dogs. Mean peak-to-peak amplitude (microvolts) and b-wave implicit time (milliseconds) were determined for each of the five standard full-field ERG responses (rod response, maximal response, oscillatory potentials, single-flash cone response, and 30 Hz flicker). Compared with CG, ERGs recorded from diabetic dogs presented lower amplitudes and prolonged b-wave implicit times in all ERG responses. The prolonged b-wave implicit time was statistically significant (p < 0.05) at 30 Hz flicker (24.0 ms versus 22.4 ms). These data suggest that full-field ERG can record subtle alterations, such as flicker implicit time, and is useful for investigating retinal dysfunction in diabetic dogs.

Relevance:

100.00%

Publisher:

Abstract:

When machines are modeled in their natural working environment, collisions become a very important feature in terms of simulation accuracy. As the simulation expands to include the operating environment, the need for a general collision model able to handle a wide variety of cases has become central to the development of simulation environments. With the addition of the operating environment, the challenges for the collision modeling method also change: more simultaneous contacts with more objects occur in more complicated situations, making the real-time requirement more difficult to meet. Common problems in current collision modeling methods include dependency on geometry shape or mesh density, computational cost growing exponentially with the number of contacts, the lack of a proper friction model, and failures in certain configurations such as closed kinematic loops. All these problems mean that current modeling methods will fail in certain situations. A method that never fails in any situation is not realistic, but improvements can be made over the current methods.

Relevance:

100.00%

Publisher:

Abstract:

Pumping processes requiring a wide range of flow are often equipped with parallel-connected centrifugal pumps. In parallel pumping systems, variable-speed control allows the required process output to be delivered with a varying number of operated pump units and selected rotational speed references. However, optimizing parallel-connected, rotational-speed-controlled pump units often requires adaptive modelling of both the parallel pump characteristics and the surrounding system in varying operating conditions. The information required for system modelling in typical parallel pumping applications, such as waste water treatment and various cooling and water delivery tasks, can be limited, and the lack of real-time operation point monitoring often limits accurate energy efficiency optimization. Hence, easily implementable control strategies that can be adopted with minimum system data are necessary. This doctoral thesis concentrates on methods that allow the energy-efficient use of variable speed controlled parallel pumps in systems in which each parallel pump unit consists of a centrifugal pump, an electric motor, and a frequency converter. Firstly, the suitable operating conditions for variable speed controlled parallel pumps are studied. Secondly, methods for determining the output of each parallel pump unit using characteristic-curve-based operation point estimation with a frequency converter are discussed. Thirdly, the implementation of a control strategy based on real-time pump operation point estimation and sub-optimization of each parallel pump unit is studied. The findings of the thesis support the idea that the energy efficiency of pumping can be increased without installing new, more efficient components, simply by adopting suitable control strategies.
An easily implementable and adaptive control strategy for variable speed controlled parallel pumping systems can be created by utilizing the pump operation point estimation available in modern frequency converters. Hence, additional real-time flow metering, start-up measurements, and a detailed system model are unnecessary, and the pumping task can be fulfilled by determining a speed reference for each parallel pump unit that yields energy-efficient operation of the pumping system.
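Characteristic-curve-based operation point estimation can be sketched by scaling a nominal QH curve with the pump affinity laws and intersecting it with a system curve. The pump polynomial, system curve, and flow bracket below are hypothetical values for illustration, not data from the thesis:

```python
import numpy as np

def pump_head(Q, n, n0, coeffs):
    """Head of a variable-speed pump from its nominal-speed QH polynomial
    H0(Q) and the affinity laws: H(Q, n) = (n/n0)^2 * H0(Q * n0 / n)."""
    return (n / n0) ** 2 * np.polyval(coeffs, Q * n0 / n)

def operation_point(n, n0, coeffs, H_static, k):
    """Find the flow where the speed-scaled pump curve meets the system
    curve H_sys(Q) = H_static + k * Q^2, by bisection on the head difference."""
    lo, hi = 0.0, 2000.0                  # assumed flow search bracket (m^3/h)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if pump_head(mid, n, n0, coeffs) > H_static + k * mid ** 2:
            lo = mid                      # pump still delivers excess head
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical pump: H0(Q) = 40 - 1e-5 * Q^2 (m) at nominal speed n0 = 1450 rpm
coeffs = [-1e-5, 0.0, 40.0]
Q = operation_point(n=1450, n0=1450, coeffs=coeffs, H_static=10.0, k=2e-5)
```

Lowering the speed reference shifts the pump curve down, so the estimated operating flow falls accordingly — this is the relationship a converter-based controller exploits when it sub-optimizes each parallel unit.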

Relevance:

100.00%

Publisher:

Abstract:

The first minutes of the time course of cardiopulmonary reflex control evoked by lower body negative pressure (LBNP) in patients with hypertensive cardiomyopathy have not been investigated in detail. We studied 15 hypertensive patients with left ventricular dysfunction (LVD) and 15 matched normal controls to observe the time course of the forearm vascular resistance (FVR) response during 3 min of LBNP at -10, -15, and -40 mmHg to unload the cardiopulmonary receptors. Analysis of the average of 3-min intervals of FVR showed a blunted response in the LVD patients at -10 mmHg (P = 0.03), but a similar response in both groups at -15 and -40 mmHg. However, using a minute-to-minute analysis of the FVR at -15 and -40 mmHg, we observed a similar response in both groups at the 1st min, but a marked decrease of FVR in the LVD group at the 3rd min of LBNP at -15 mmHg (P = 0.017) and -40 mmHg (P = 0.004). Plasma norepinephrine levels, analyzed as another neurohumoral measure of cardiopulmonary receptor response to LBNP, showed a blunted response in the LVD group at -10 (P = 0.013), -15 (P = 0.032) and -40 mmHg (P = 0.004). We conclude that the cardiopulmonary reflex response in patients with hypertensive cardiomyopathy is blunted at lower levels of LBNP, whereas at higher levels it shows a normal initial response that decreases progressively with time. As a consequence of this time-dependent response, the cardiopulmonary reflex should be measured over small intervals of time in clinical studies.

Relevance:

100.00%

Publisher:

Abstract:

Several methods have been described to measure intraocular pressure (IOP) in clinical and research settings. However, the measurement of time-varying IOP with high accuracy, particularly in situations that alter corneal properties, has not been reported until now. The present report describes a computerized system capable of recording the transitory variability of IOP, which is sufficiently sensitive to reliably measure ocular pulse peak-to-peak values. We also describe its characteristics and discuss its applicability to research and clinical studies. The device consists of a pressure transducer, a signal conditioning unit and an analog-to-digital converter coupled to a video acquisition board. A modified Cairns trabeculectomy was performed in 9 Oryctolagus cuniculus rabbits to obtain changes in IOP decay parameters and to evaluate the utility and sensitivity of the recording system. The device was effective for the study of kinetic parameters of IOP, such as the decay pattern and ocular pulse waves due to cardiac and respiratory rhythms. In addition, there was a significant increase in the derivative of the IOP-versus-time curve when pre- and post-trabeculectomy recordings were compared. The present procedure excludes corneal thickness and error related to individual operator ability. Clinical complications due to saline infusion and pressure overload were not observed during biomicroscopic evaluation. Among the disadvantages of the procedure are the requirement for anesthesia and its restriction to acute rather than chronic recording protocols. Finally, the method described may provide a reliable alternative for the study of ocular pressure dynamics in man and may facilitate the investigation of the pathogenesis of glaucoma.

Relevance:

100.00%

Publisher:

Abstract:

Funding support for this doctoral thesis has been provided by the Canadian Institutes of Health Research-Public Health Agency of Canada, QICSS matching grant, and la Faculté des études supérieures et postdoctorales-Université de Montréal.

Relevance:

100.00%

Publisher:

Abstract:

With advances in information technology, economic and financial time-series data are increasingly available. However, when standard time-series techniques are applied, this wealth of information comes with a dimensionality problem. Since most series of interest are highly correlated, their dimension can be reduced through factor analysis, a technique that has grown increasingly popular in economics since the 1990s. Given data availability and computational advances, several new questions arise. What are the effects and transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help better identify monetary policy shocks, given the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor method be improved by incorporating another dimension-reduction technique such as VARMA analysis? Does this produce better forecasts of major macroeconomic aggregates and sharpen impulse-response analysis? Finally, can factor analysis be applied to random parameters — for instance, is there only a small number of sources of the time instability of coefficients in empirical macroeconomic models? Using structural factor analysis and VARMA modeling, my thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA.
This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the final chapter is to impose a factor structure on time-varying parameters and to show that a small number of sources drives this instability. The first article analyzes the transmission of monetary policy in Canada using a factor-augmented vector autoregression (FAVAR) model. Earlier VAR-based studies found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model on a large number of monthly and quarterly macroeconomic series and find that the information contained in the factors is important for properly identifying monetary transmission and helps correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse-response functions for every indicator in the dataset, producing the most comprehensive analysis to date of the effects of monetary policy in Canada. Motivated by the recent economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and propagation of credit shocks on the real economy using a large set of economic and financial indicators within a structural factor model. We find that a credit shock immediately raises credit spreads, lowers the value of Treasury bills, and causes a recession. These shocks have sizable effects on measures of real activity, price indices, and leading and financial indicators. Unlike other studies, our identification procedure for the structural shock requires no timing restrictions between financial and macroeconomic factors.
Moreover, it provides an interpretation of the factors without restricting their estimation. In the third article we study the relationship between VARMA and factor representations of vector stochastic processes and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and their associated factors cannot simultaneously follow a finite-order VAR. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is generally a VARMA rather than a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two parameter-reduction methods. The model is applied in two forecasting exercises using the US and Canadian data of Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA component improves forecasts of major macroeconomic aggregates relative to standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors delivers coherent and precise estimates of the effects and transmission of monetary policy in the United States. Whereas the FAVAR employed in the earlier study required 510 VAR coefficients to be estimated, we obtain similar results with only 84 parameters for the factor dynamics. The goal of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment using a structural FAVARMA model.
Within the financial-accelerator framework of Bernanke, Gertler and Gilchrist (1999), we approximate the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of cyclical fluctuations in the Canadian economy. Variance-decomposition analysis reveals that this credit shock has a sizable effect on various sectors of real activity, price indices, leading indicators, and credit spreads. On the other hand, an unexpected rise in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are driven essentially by global conditions, proxied here by the US market. Finally, given the identification procedure for the structural shocks, we obtain economically interpretable factors. The behavior of economic agents and of the economic environment can change over time (e.g., shifts in monetary policy strategy, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying-parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of time variation in the coefficients is probably very small, and we provide the first known empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach proposed in Stevanovic (2010) is applied within a standard VAR with random coefficients (TVP-VAR).
We find that a single factor explains most of the variability of the VAR coefficients, while the shock-volatility parameters vary independently of one another. The common factor is positively correlated with the unemployment rate. The same analysis is repeated on data including the recent financial crisis. The procedure now suggests two factors, and the behavior of the coefficients shows a marked change from 2007 onward. Finally, the method is applied to a TVP-FAVAR model. We find that only five dynamic factors govern the time instability of nearly 700 coefficients.
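The first step of a FAVAR — extracting latent factors from a large macro panel by principal components — can be sketched as follows (a simulated panel is used for illustration; the factor count, dimensions, and noise level are assumptions, not the thesis's data):

```python
import numpy as np

def extract_factors(X, n_factors):
    """Principal-components step of a FAVAR: standardize the panel and
    take the leading left singular vectors, scaled by sqrt(T), as the
    estimated factors."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    T = X.shape[0]
    return U[:, :n_factors] * np.sqrt(T)

# Simulated panel: 120 series driven by 2 latent factors plus noise
rng = np.random.default_rng(0)
T, N = 200, 120
f = rng.standard_normal((T, 2))                 # true latent factors
L = rng.standard_normal((2, N))                 # factor loadings
X = f @ L + 0.5 * rng.standard_normal((T, N))   # observed panel
F = extract_factors(X, n_factors=2)
```

The estimated factors are only identified up to rotation, so in a second step they would be stacked with the observed policy instrument and a VAR (or, in the FAVARMA extension, a VARMA) would be fitted to the joint vector.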

Relevance:

100.00%

Publisher:

Abstract:

We investigated diurnal nitrate (NO3-) concentration variability in the San Joaquin River using an in situ optical NO3- sensor and discrete sampling during a 5-day summer period characterized by high algal productivity. Dual NO3- isotopes (delta N-15(NO3) and delta O-18(NO3)) and dissolved oxygen isotopes (delta O-18(DO)) were measured over 2 days to assess NO3- sources and biogeochemical controls over diurnal time-scales. Concerted temporal patterns of dissolved oxygen (DO) concentrations and delta O-18(DO) were consistent with photosynthesis, respiration and atmospheric O-2 exchange, providing evidence of diurnal biological processes independent of river discharge. Surface water NO3- concentrations varied by up to 22% over a single diurnal cycle and up to 31% over the 5-day study, but did not reveal concerted diurnal patterns at a frequency comparable to DO concentrations. The decoupling of delta N-15(NO3) and delta O-18(NO3) isotopes suggests that algal assimilation and denitrification are not major processes controlling diurnal NO3- variability in the San Joaquin River during the study. The lack of a clear explanation for NO3- variability likely reflects a combination of riverine biological processes and time-varying physical transport of NO3- from upstream agricultural drains to the mainstem San Joaquin River. The application of an in situ optical NO3- sensor along with discrete samples provides a view into the fine temporal structure of hydrochemical data and may allow for greater accuracy in pollution assessment.