924 results for time domain analysis
Abstract:
Finding rare events in multidimensional data is an important detection problem that has applications in many fields, such as risk estimation in the insurance industry, finance, flood prediction, medical diagnosis, quality assurance, security, or safety in transportation. The occurrence of such anomalies is so infrequent that there is usually not enough training data to learn an accurate statistical model of the anomaly class. In some cases, such events may have never been observed, so the only information that is available is a set of normal samples and an assumed pairwise similarity function. Such a metric may only be known up to a certain number of unspecified parameters, which would either need to be learned from training data or fixed by a domain expert. Sometimes the anomalous condition may be formulated algebraically, such as a measure exceeding a predefined threshold, but nuisance variables may complicate the estimation of such a measure. Change detection methods used in time series analysis are not easily extendable to the multidimensional case, where discontinuities are not localized to a single point. On the other hand, in higher dimensions data exhibits more complex interdependencies, and there is redundancy that could be exploited to adaptively model the normal data. In the first part of this dissertation, we review the theoretical framework for anomaly detection in images and previous anomaly detection work done in the context of crack detection and detection of anomalous components in railway tracks. In the second part, we propose new anomaly detection algorithms. The fact that curvilinear discontinuities in images are sparse with respect to the frame of shearlets allows us to pose this anomaly detection problem as basis pursuit optimization. We therefore pose the problem of detecting curvilinear anomalies in noisy textured images as a blind source separation problem under sparsity constraints, and propose an iterative shrinkage algorithm to solve it. Taking advantage of the parallel nature of this algorithm, we describe how this method can be accelerated using graphics processing units (GPUs). We then propose a new method for finding defective components on railway tracks using cameras mounted on a train, describing how to extract features and use a combination of classifiers to solve this problem. We then scale anomaly detection to larger datasets with complex interdependencies and show that the anomaly detection problem fits naturally in the multitask learning framework. The first task consists of learning a compact representation of the good samples, while the second task consists of learning the anomaly detector. Using deep convolutional neural networks, we show that it is possible to train a deep model with a limited number of anomalous examples. In sequential detection problems, the presence of time-varying nuisance parameters affects the detection performance. In the last part of this dissertation, we present a method for adaptively estimating the threshold of sequential detectors using Extreme Value Theory in a Bayesian framework. Finally, conclusions on the results obtained are provided, followed by a discussion of possible future work.
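The iterative shrinkage idea mentioned in this abstract can be illustrated with a minimal sketch of ISTA (iterative soft-thresholding) for a basis-pursuit-denoising style problem, minimizing 0.5·||y − Ax||² + λ||x||₁. The dense matrix A, the step size and the threshold below are placeholders for illustration, not the shearlet-based operators of the dissertation.

```python
import numpy as np

def soft_threshold(x, t):
    """Element-wise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(y, A, lam=0.1, n_iter=200):
    """Minimize 0.5*||y - A x||^2 + lam*||x||_1 by iterative shrinkage.

    A is a hypothetical dense synthesis matrix standing in for a
    shearlet/texture dictionary; each iteration is a gradient step on the
    data term followed by soft-thresholding, and is trivially parallel
    (one thread per coefficient on a GPU).
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the quadratic data term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# toy usage: recover a sparse vector from noisy measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
x_true = np.zeros(256); x_true[rng.choice(256, 5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(64)
x_hat = ista(y, A, lam=0.05)
```

Each iteration only needs a matrix-vector product and an element-wise shrinkage, which is why the abstract's GPU acceleration is natural for this class of algorithm.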
Abstract:
The aim of this thesis was threefold: firstly, to compare current player tracking technology in a single game of soccer; secondly, to investigate the running requirements of elite women's soccer, in particular the use and application of athlete tracking devices; and finally, to examine how game style can be quantified and defined. Study One compared four different match analysis systems commonly used in both research and applied settings: video-based time-motion analysis, a semi-automated multiple-camera-based system, and two commercially available Global Positioning System (GPS) based player tracking systems at 1 Hertz (Hz) and 5 Hz respectively. A comparison was made between each of the systems when recording the same game. Total distance covered during the match for the four systems ranged from 10 830 ± 770 m (semi-automated multiple-camera-based system) to 9 510 ± 740 m (video-based time-motion analysis). At running speeds categorised as high-intensity running (>15 km·h⁻¹), the semi-automated multiple-camera-based system reported the highest distance of 2 650 ± 530 m, with video-based time-motion analysis reporting the least distance covered with 1 610 ± 370 m. At speeds considered to be sprinting (>20 km·h⁻¹), the video-based time-motion analysis reported the highest value (420 ± 170 m) and the 1 Hz GPS units the lowest value (230 ± 160 m). These results demonstrate that there are differences in the determination of absolute distances, and that comparison of results between match analysis systems should be made with caution. Currently, there is no criterion measure for these match analysis methods, and as such it was not possible to determine whether one system was more accurate than another. Study Two provided an opportunity to apply player-tracking technology (GPS) to measure activity profiles and determine the physical demands of Australian international-level women soccer players. In four international women's soccer games, data were collected on a total of 15 Australian women soccer players using a 5 Hz GPS-based athlete tracking device. Results indicated that Australian women soccer players covered 9 140 ± 1 030 m during 90 min of play. The total distance covered by Australian women was less than the 10 300 m reportedly covered by female soccer players in the Danish First Division. However, there was no apparent difference in estimated maximal aerobic capacity (V̇O2max), as measured by multi-stage shuttle tests, between these studies. This study suggests that contextual information, including the “game style” of both the team and the opposition, may influence physical performance in games. Study Three examined the effect the level of the opposition had on the physical output of Australian women soccer players. In total, 58 game files from 5 Hz athlete-tracking devices from 13 international matches were collected. These files were analysed to examine relationships between physical demands, represented by total distance covered, high-intensity running (HIR) and distances covered sprinting, and the level of the opposition, as represented by the Fédération Internationale de Football Association (FIFA) ranking at the time of the match. Higher-ranking opponents elicited less high-speed running and greater low-speed activity compared to playing teams of similar or lower ranking. The results are important to coaches and practitioners in the preparation of players for international competition, and showed that the differing physical demands required were dependent on the level of the opponents.
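As an illustration of how the distances in the speed bands above are typically derived from tracking data, the sketch below integrates per-sample distance and accumulates it into total, high-intensity (>15 km·h⁻¹) and sprinting (>20 km·h⁻¹) bands. The 5 Hz sampling rate and the column handling are assumptions for illustration, not the processing pipeline used in the thesis.

```python
import numpy as np

def distance_by_speed_band(speeds_kmh, hz=5.0,
                           hir_threshold=15.0, sprint_threshold=20.0):
    """Accumulate distance (m) overall and in speed bands from a speed trace.

    speeds_kmh: 1-D array of instantaneous speeds in km/h sampled at `hz`.
    Thresholds follow the bands used in the study (>15 and >20 km/h).
    """
    speeds_kmh = np.asarray(speeds_kmh, dtype=float)
    step = (speeds_kmh / 3.6) / hz                # metres covered per sample
    total = step.sum()
    hir = step[speeds_kmh > hir_threshold].sum()
    sprint = step[speeds_kmh > sprint_threshold].sum()
    return {"total_m": total, "hir_m": hir, "sprint_m": sprint}

# toy usage with a fabricated 90-min speed trace at 5 Hz
rng = np.random.default_rng(1)
trace = np.clip(rng.normal(7, 5, size=90 * 60 * 5), 0, 30)
print(distance_by_speed_band(trace))
```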
The results also highlighted the need for continued research into integrating contextual information in team sports and demonstrated that soccer can be described as a dynamic and interactive system. The influence of playing strategy, tactics and, subsequently, the overall game style was highlighted as playing a significant part in the physical demands of the players. Study Four explored the concept of game style in field sports such as soccer. The aim of this study was to provide an applied framework with suggested metrics for use by coaches, media, practitioners and sports scientists. Based on the findings of Studies 1–3 and a systematic review of the relevant literature, a theoretical framework was developed to better understand how a team's game style could be quantified. Soccer games can be broken into key moments of play, and for each of these moments we categorised metrics that provide insight into success or otherwise, to help quantify and measure different playing styles. This study highlights that, to date, there had been no clear definition of game style in team sports, and as such a novel definition of game style is proposed that can be used by coaches, sport scientists, performance analysts, the media and the general public. Studies 1–3 outline four common methods of measuring the physical demands of soccer: video-based time-motion analysis, GPS at 1 Hz and at 5 Hz, and semi-automated multiple-camera-based systems. As there are no semi-automated multiple-camera-based systems available in Australia, primarily for cost and logistical reasons, GPS is widely accepted for use in team sports to track player movements in training and competition environments. This research identified that, although there are some limitations, GPS player-tracking technology may be a valuable tool in assessing running demands in soccer players and may subsequently contribute to our understanding of game style. The results of the research undertaken also reinforce the differences between methods used to analyse player movement patterns in field sports such as soccer and demonstrate that the results from different systems, such as GPS-based athlete tracking devices and semi-automated multiple-camera-based systems, cannot be used interchangeably. Indeed, the magnitude of measurement differences between methods suggests that significant measurement error is evident. This was apparent even when the same technology was used at different sampling rates, such as GPS systems measuring at either 1 Hz or 5 Hz. It was also recognised that other factors influence how team sport athletes behave within an interactive system. These factors included the strength of the opposition and their style of play. In turn, these can affect the physical demands on players, which change from game to game, and even within games, depending on these contextual features. Finally, the concept of game style and how it might be measured was examined. Game style was defined as "the characteristic playing pattern demonstrated by a team during games. It will be regularly repeated in specific situational contexts such that measurement of variables reflecting game style will be relatively stable. Variables of importance are player and ball movements, interaction of players, and will generally involve elements of speed, time and space (location)".
Abstract:
Master's dissertation — Universidade de Brasília, Faculdade UnB Gama, Faculdade de Tecnologia, Graduate Program in Engineering Materials Integrity (Programa de Pós-graduação em Integridade de Materiais da Engenharia), 2016.
Abstract:
Temporally growing frontal meandering and occasional eddy shedding are observed in the Brazil Current (BC) as it flows adjacent to the Brazilian coast. No study of the dynamics of this phenomenon has been conducted to date in the region between 22°S and 25°S. Within this latitude range, the flow over the intermediate continental slope is marked by a current inversion at depth that is associated with the Intermediate Western Boundary Current (IWBC). A time series analysis of data from a 10-current-meter mooring was used to describe a mean vertical profile for the BC-IWBC jet and a typical meander vertical structure. The latter was obtained by an empirical orthogonal function (EOF) analysis that showed a single mode explaining 82% of the total variance. This mode structure decayed sharply with depth, revealing that the meandering is much more vigorous within the BC domain than in the IWBC region. As the spectral analysis of the mode amplitude time series revealed no significant periods, we searched for dominant wavelengths. This search was done via a spatial EOF analysis of 51 thermal front patterns derived from digitized AVHRR images. Four modes were statistically significant at the 95% confidence level. Modes 3 and 4, which together explained 18% of the total variance, are associated with 266- and 338-km vorticity waves, respectively. With this new information derived from the data, the one-dimensional quasi-geostrophic model of [Johns, W.E., 1988. One-dimensional baroclinically unstable waves on the Gulf Stream potential vorticity gradient near Cape Hatteras. Dyn. Atmos. Oceans 11, 323-350] was applied to the interpolated mean BC-IWBC jet. The results indicated that the BC system is indeed baroclinically unstable and that the wavelengths depicted in the thermal front analysis are associated with the most unstable waves produced by the model. Growth rates were about 0.06 (0.05) day⁻¹ for the 266-km (338-km) wave. Moreover, phase speeds for these waves were low compared to the surface BC velocity and may account for remarks in the literature about growing standing or stationary meanders off southeast Brazil. The theoretical vertical structure modes associated with these waves very closely resembled the one obtained from the current-meter mooring EOF analysis. We interpret this agreement as confirmation that baroclinic instability is an important mechanism in meander growth in the BC system.
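The EOF decomposition used above, for both the mooring records and the thermal-front patterns, can be sketched as a principal-component analysis of the anomaly matrix via SVD, reporting the fraction of variance explained by each mode. The (time × space) data layout below is an assumption for illustration, not the paper's actual data handling.

```python
import numpy as np

def eof_analysis(data):
    """EOF analysis of a (time x space) data matrix.

    Returns spatial modes (EOFs), their amplitude time series (principal
    components) and the fraction of total variance explained by each mode.
    """
    anomalies = data - data.mean(axis=0)          # remove the time mean
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    eofs = vt                                     # rows: spatial patterns
    pcs = u * s                                   # mode amplitude time series
    explained = s**2 / np.sum(s**2)               # variance fraction per mode
    return eofs, pcs, explained

# toy usage: 500 time steps at 10 current-meter locations
rng = np.random.default_rng(2)
field = np.outer(np.sin(np.linspace(0, 20, 500)), rng.standard_normal(10))
field += 0.3 * rng.standard_normal((500, 10))
eofs, pcs, explained = eof_analysis(field)
print(f"mode 1 explains {explained[0]:.0%} of the variance")
```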
Abstract:
With the ongoing development of railguns in the United States Navy and their possible installation on its ships in the very near future, other navies will follow. It will be in the interest of the Portuguese Navy to keep pace with this technological evolution, considering the advantages that come from adopting this type of weaponry. This document addresses the basic principles underlying the operation of the railgun, with a main focus on the electrodynamic questions. The aim is to gain familiarity with this new type of weapon through a critical study of its operating principles. At first sight, the basic operating principle of a railgun seems quite simple, following from the direct application of the Lorentz force on a conductor carrying an electric current. However, everything becomes more complicated when the parameters involved vary rapidly (transient regime), which requires a deeper analysis of the behaviour of the current, of the electric and magnetic fields, and of all the materials involved in the system. This work also involved the construction of two railguns: a first, smaller one to gain familiarity with the system, and a final, laboratory-scale one on which several shots were fired to test different projectile materials and dimensions. In summary, this document presents a time-domain analysis of the spatial distribution of the electromagnetic field, the electric current and the resulting energy flow, complemented by an experimental component.
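For reference, the basic relations alluded to above (standard textbook expressions, not reproduced from the thesis itself) are the Lorentz force on a current-carrying armature and the usual lumped-parameter railgun accelerating force written in terms of the inductance gradient of the rail pair:

```latex
% Lorentz force on an armature of length l carrying current I in field B
\mathbf{F} = I\,\mathbf{l} \times \mathbf{B}

% Lumped-parameter railgun accelerating force, with L' the inductance
% gradient (H/m) of the rail pair and I the drive current
F = \tfrac{1}{2}\, L'\, I^{2}
```

The second expression only holds in the quasi-static limit; in the transient regime discussed in the abstract, current diffusion into the rails and armature makes the effective L' and the field distribution time-dependent.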
Abstract:
In this work we focus on pattern recognition methods for EMG-based upper-limb prosthetic control. After giving a detailed review of the most widely used classification methods, we propose a new classification approach. It results from a comparison of the Fourier analyses of able-bodied and trans-radial amputee subjects. We thus suggest a different classification method which considers each surface electrode's contribution separately, together with five time-domain features, obtaining an average classification accuracy equal to 75% on a sample of trans-radial amputees. To improve the method and its robustness, we propose an automatic feature selection procedure formulated as a minimization problem.
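The abstract does not name the five time-domain features, so the sketch below computes a commonly used set from the surface-EMG literature (mean absolute value, waveform length, zero crossings, slope-sign changes, RMS) for each electrode channel separately, consistent with treating each electrode's contribution on its own. These particular features are an assumption for illustration, not necessarily the thesis's choice.

```python
import numpy as np

def td_features(channel, eps=1e-3):
    """Five common time-domain features for one surface-EMG channel."""
    x = np.asarray(channel, dtype=float)
    dx = np.diff(x)
    mav = np.mean(np.abs(x))                                  # mean absolute value
    wl = np.sum(np.abs(dx))                                   # waveform length
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(dx) > eps))    # zero crossings
    ssc = np.sum((dx[:-1] * dx[1:] < 0) &
                 ((np.abs(dx[:-1]) > eps) | (np.abs(dx[1:]) > eps)))  # slope-sign changes
    rms = np.sqrt(np.mean(x**2))                              # root mean square
    return np.array([mav, wl, zc, ssc, rms])

def feature_matrix(window):
    """Stack per-channel features for a (channels x samples) EMG window."""
    return np.concatenate([td_features(ch) for ch in window])

# toy usage: 8 electrodes, 200-sample analysis window
rng = np.random.default_rng(3)
print(feature_matrix(rng.standard_normal((8, 200))).shape)   # (40,)
```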
Abstract:
During the past decade, there has been a dramatic increase by postsecondary institutions in providing academic programs and course offerings in a multitude of formats and venues (Biemiller, 2009; Kucsera & Zimmaro, 2010; Lang, 2009; Mangan, 2008). Strategies pertaining to the reapportionment of course-delivery seat time have been a major facet of these institutional initiatives, most notably within many open-door 2-year colleges. Often, these enrollment-management decisions are driven by the desire to increase market share, optimize the usage of finite facility capacity, and contain costs, especially during these economically turbulent times. So, while enrollments have surged to the point where nearly one in three 18-to-24-year-old U.S. undergraduates are community college students (Pew Research Center, 2009), graduation rates, on average, still remain distressingly low (Complete College America, 2011). Among the learning-theory constructs related to seat-time reapportionment efforts is the cognitive phenomenon commonly referred to as the spacing effect, the degree to which learning is enhanced by a series of shorter, separated sessions as opposed to fewer, more massed episodes. This ex post facto study explored whether seat time in a postsecondary developmental-level algebra course is significantly related to: course success; course-enrollment persistence; and, longitudinally, the time to successfully complete a general-education-level mathematics course. Hierarchical logistic regression and discrete-time survival analysis were used to perform a multi-level, multivariable analysis of a student cohort (N = 3,284) enrolled at a large, multi-campus, urban community college. The subjects were retrospectively tracked over a 2-year longitudinal period. The study found that students in long seat-time classes tended to withdraw earlier and more often than did their peers in short seat-time classes (p < .05). Additionally, a model composed of nine statistically significant covariates (all with p-values less than .01) was constructed. However, no longitudinal seat-time group differences were detected, nor was there sufficient statistical evidence to conclude that seat time was predictive of developmental-level course success. A principal aim of this study was to demonstrate—to educational leaders, researchers, and institutional-research/business-intelligence professionals—the advantages and computational practicability of survival analysis, an underused but more powerful way to investigate changes in students over time.
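Discrete-time survival analysis of the kind used above can be sketched as a logistic regression on a person-period data set, where each student contributes one row per term until withdrawal or censoring. The column names, the toy cohort and the use of scikit-learn are illustrative assumptions, not the study's actual implementation.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def to_person_period(df):
    """Expand one row per student into one row per (student, term).

    df uses hypothetical columns: id, terms_observed, withdrew (1 if the
    last observed term ended in withdrawal), plus covariates such as
    'long_seat_time'.
    """
    rows = []
    for _, r in df.iterrows():
        for t in range(1, int(r["terms_observed"]) + 1):
            event = int(r["withdrew"] and t == r["terms_observed"])
            rows.append({"id": r["id"], "term": t, "event": event,
                         "long_seat_time": r["long_seat_time"]})
    return pd.DataFrame(rows)

# toy cohort: 4 students tracked for up to 4 terms
cohort = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "terms_observed": [4, 2, 3, 1],
    "withdrew": [0, 1, 1, 1],
    "long_seat_time": [0, 1, 0, 1],
})
pp = to_person_period(cohort)
# term dummies give the baseline hazard; the covariate shifts it
X = pd.get_dummies(pp["term"], prefix="term").assign(long=pp["long_seat_time"])
model = LogisticRegression().fit(X, pp["event"])   # discrete-time hazard model
print(model.coef_)
```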
Abstract:
Non-Destructive Testing (NDT) of deep foundations has become an integral part of the industry's standard manufacturing processes. It is not unusual for the evaluation of the integrity of the concrete to include the measurement of ultrasonic wave speeds. Numerous methods have been proposed that use the propagation speed of ultrasonic waves to check the integrity of concrete for drilled shaft foundations. All such methods evaluate the integrity of the concrete inside the cage and between the access tubes. The integrity of the concrete outside the cage still needs to be considered in order to locate the border between the concrete and the soil and thereby obtain the diameter of the drilled shaft. It is also economical to devise a methodology that obtains the diameter of the drilled shaft using the Cross-Hole Sonic Logging (CSL) system, performed alongside the CSL tests already used to check the integrity of the inside concrete, thus allowing the determination of the drilled shaft diameter without having to set up another NDT device. The proposed new method is based on the installation of galvanized tubes outside the shaft, across from each inside tube, and performing the CSL test between the inside and outside tubes. From the experimental work performed, a model is developed to evaluate the relationship between the thickness of concrete and the ultrasonic wave properties using signal processing. The experimental results show that there is a direct correlation between the concrete thickness outside the cage and the maximum amplitude of the received signal obtained from frequency-domain data. This study demonstrates how this new method for measuring the diameter of drilled shafts during construction using an NDT method overcomes the limitations of currently used methods. In another part of the study, a new method is proposed to visualize and quantify the extent and location of defects. It is based on a color change in the frequency amplitude of the signal recorded by the receiver probe at the location of defects and is called Frequency Tomography Analysis (FTA). Time-domain data from the signals propagated between tubes are transferred to the frequency domain using the Fast Fourier Transform (FFT), and the distribution of the FTA is then evaluated. This method is employed after CSL has determined the high probability of an anomaly in a given area and is applied to improve location accuracy and to further characterize the feature. The technique has very good resolution and clarifies the exact depth of any void or defect along the length of the drilled shaft for voids inside the cage. The last part of the study also evaluates the effect of voids inside and outside the reinforcement cage, and of corrosion in the longitudinal bars, on the strength and axial load capacity of drilled shafts. The objective is to quantify the extent of loss in axial strength and stiffness of drilled shafts due to the presence of different types of symmetric voids and corrosion throughout their lengths.
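The transfer of a recorded time-domain CSL trace into the frequency domain, and the extraction of a maximum spectral amplitude like the one correlated with concrete thickness above, can be sketched with a fast Fourier transform as below. The sampling rate, windowing and synthetic tone burst are placeholders, not the actual probe data or the study's processing chain.

```python
import numpy as np

def max_spectral_amplitude(signal, fs):
    """Return (peak amplitude, peak frequency) of a time-domain signal.

    signal: 1-D array of the received trace; fs: sampling rate in Hz.
    Uses a one-sided FFT amplitude spectrum with a Hann window.
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmax(spectrum[1:]) + 1        # skip the DC bin
    return spectrum[k], freqs[k]

# toy usage: a 50 kHz tone burst sampled at 1 MHz with added noise
fs = 1_000_000
t = np.arange(0, 2e-3, 1 / fs)
trace = np.exp(-((t - 5e-4) / 1e-4) ** 2) * np.sin(2 * np.pi * 50e3 * t)
trace += 0.05 * np.random.default_rng(4).standard_normal(t.size)
print(max_spectral_amplitude(trace, fs))
```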
Abstract:
Tall buildings are wind-sensitive structures and can experience large wind-induced effects. Aerodynamic boundary layer wind tunnel testing has been the most commonly used method for estimating wind effects on tall buildings. Design wind effects on tall buildings are estimated through analytical processing of the data obtained from aerodynamic wind tunnel tests. Even though it is widely agreed that the data obtained from wind tunnel testing are fairly reliable, the post-test analytical procedures are still argued to carry considerable uncertainties. This research work attempted to assess in detail the uncertainties occurring at different stages of the post-test analytical procedures and to suggest improved techniques for reducing them. Results of the study showed that traditionally used simplifying approximations, particularly in the frequency-domain approach, can cause significant uncertainties in estimating aerodynamic wind-induced responses. Based on the identified shortcomings, a more accurate dual aerodynamic data analysis framework that works in both the frequency and time domains was developed. The comprehensive analysis framework allows modal, resultant and peak values of various wind-induced responses of a tall building to be estimated more accurately. Estimating design wind effects on tall buildings also requires synthesizing the wind tunnel data with local climatological data for the study site. A novel copula-based approach was developed for accurately synthesizing aerodynamic and climatological data after investigating the causes of significant uncertainties in currently used synthesizing techniques. The improvement of the new approach over existing techniques was also illustrated with a case study on a 50-story building. Finally, a practical dynamic optimization approach was suggested for tuning the structural properties of tall buildings towards optimum performance against wind loads with fewer design iterations.
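The frequency-domain step referred to above conventionally estimates the resonant part of a modal response by passing the generalized-force spectrum through the mechanical admittance of the mode. A minimal sketch under assumed modal properties (natural frequency, damping ratio, generalized mass, flat force spectrum) is shown below; it illustrates the conventional single-mode estimate, not the dual time/frequency framework developed in the thesis.

```python
import numpy as np

def rms_modal_response(freqs, force_psd, fn, zeta, m_gen):
    """RMS modal displacement from a one-sided generalized-force PSD.

    freqs: uniformly spaced frequency axis (Hz); force_psd: PSD of the
    generalized force; fn, zeta, m_gen: modal frequency (Hz), damping
    ratio and generalized mass of the mode.
    """
    r = freqs / fn
    # magnitude-squared mechanical admittance |H(f)|^2 of a SDOF oscillator
    h2 = 1.0 / ((m_gen * (2 * np.pi * fn) ** 2) ** 2 *
                ((1 - r**2) ** 2 + (2 * zeta * r) ** 2))
    df = freqs[1] - freqs[0]
    var = np.sum(h2 * force_psd) * df          # response variance (rectangle rule)
    return np.sqrt(var)

# toy usage: flat force spectrum, 0.2 Hz mode with 1% damping
f = np.linspace(0.01, 2.0, 2000)
print(rms_modal_response(f, np.full_like(f, 1e6), fn=0.2, zeta=0.01, m_gen=1e7))
```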
Abstract:
It is well known that self-generated stimuli are processed differently from externally generated stimuli. For example, many people have noticed since childhood that it is very difficult to tickle oneself. In the auditory domain, self-generated sounds elicit smaller brain responses than externally generated sounds, a phenomenon known as the sensory attenuation (SA) effect. SA is manifested in reduced amplitudes of evoked responses as measured with M/EEG, decreased firing rates of neurons and a lower level of perceived loudness for self-generated sounds. The predominant explanation for SA is based on the idea that self-generated stimuli are predicted (e.g., the forward model account); it is the nature of their predictability that is crucial for SA. On the contrary, the sensory gating account emphasizes a general suppressive effect of actions on sensory processing, regardless of the predictability of the stimuli. Both accounts have received empirical support, which suggests that both mechanisms may exist. In Chapter 2, three behavioural studies concerning the influence of motor activation on auditory perception are presented. Study 1 compared the effects of SA and attention in an auditory detection task and showed that SA was present even when substantial attention was paid to unpredictable stimuli. Study 2 compared the loudness perception of tones generated by others between Chinese and British participants. Compared to externally generated tones, a decrease in perceived loudness for other-generated tones was found among Chinese but not among British participants. In Study 3, partial evidence was found that auditory detection performance was impaired even when participants merely read action-related words. In Chapter 3, the classic SA effect of M100 suppression was replicated with MEG in Study 4. With time-frequency analysis, a potential neural information processing sequence was found in the auditory cortex: prior to the onset of self-generated tones, there was an increase of oscillatory power in the alpha band, and after stimulus onset, reduced gamma power and alpha/beta phase locking were found. These three temporally segregated oscillatory events correlated with each other and with the SA effect, and may be the underlying neural implementation of SA. In Chapter 4, a TMS-MEG study is presented investigating the role of the cerebellum in adapting to delayed presentation of self-generated tones (Study 5). It demonstrated that in the sham stimulation condition, the brain can adapt to the delay (about 100 ms) within 300 trials of learning, showing a significant increase of the SA effect in the suppression of the M100, but not the M200 component. After stimulating the cerebellum with a suppressive TMS protocol, however, the adaptation in M100 suppression disappeared and the pattern of M200 suppression reversed to M200 enhancement. These data support the idea that the suppressive effect of actions on auditory processing is a consequence of both motor-driven sensory predictions and general sensory gating. The results also demonstrate the importance of neural oscillations in implementing the SA effect and the critical role of the cerebellum in learning sensory predictions under sensory perturbation.
Abstract:
Understanding what characterizes patients who experience long delays in the diagnosis of pulmonary tuberculosis is of great importance when establishing screening strategies to better control TB. Longer delays in diagnosis imply a higher chance for susceptible individuals to become infected by a bacilliferous patient. A Structured Additive Regression model is fitted in this study in order to contribute to a better characterization of bacilliferous prevalence in Portugal. The main findings suggest the existence of significant regional differences in Portugal, with being female and/or alcohol-dependent contributing to an increased delay in diagnosis, while being dependent on intravenous drugs and/or being diagnosed with HIV are factors that increase the chance of an earlier diagnosis of pulmonary TB. The decrease in treatment success in Portugal to 77% in 2010 underlines the importance of conducting more research aimed at better TB control strategies.
Abstract:
Time series analysis of multispectral satellite data offers an innovative way to extract valuable information about our changing planet. This is now a real option for scientists thanks to data availability as well as innovative cloud-computing platforms such as Google Earth Engine. The integration of different missions would mitigate known issues in multispectral time series construction, such as gaps due to clouds or other atmospheric effects. With this purpose, harmonization among Landsat-like missions is possible through statistical analysis. This research offers an overview of the different instruments from the Landsat and Sentinel missions (TM, ETM, OLI, OLI-2 and MSI sensors) and product levels (Collection-2 Level-1 and Surface Reflectance for Landsat, and Level-1C and Level-2A for Sentinel-2). Moreover, a cross-sensor comparison was performed to assess the interoperability of the sensors on board the Landsat and Sentinel-2 constellations, with a possible combined use for time series analysis in mind. Firstly, more than 20,000 pairs of images acquired almost simultaneously all over Europe were selected over a period of several years. The study performed a cross-comparison analysis on these data and provided an assessment of the calibration coefficients that can be used to minimize differences in their combined use. Four of the most popular vegetation indices were selected for the study: NDVI, EVI, SAVI and NDMI. As a result, it is possible to reconstruct a longer and denser harmonized time series since 1984, useful for vegetation monitoring purposes. Secondly, the spectral characteristics of the recent Landsat-9 mission were assessed for combined use with Landsat-8 and Sentinel-2. A cross-sensor analysis of the common bands of more than 3,000 almost simultaneous acquisitions verified a high consistency between the datasets. The most relevant discrepancy was observed in the blue and SWIR bands, often used in vegetation- and water-related studies. This analysis was supported by spectroradiometer ground measurements.
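The per-index harmonization described above can be sketched as computing an index (NDVI here) on near-simultaneous acquisitions from two sensors and fitting a linear calibration between them. The synthetic reflectance values and the ordinary-least-squares form are assumptions for illustration, not the study's exact regression or coefficients.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from reflectance bands."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-9)

def cross_sensor_calibration(index_a, index_b):
    """Fit index_b ~ gain * index_a + offset from paired acquisitions."""
    gain, offset = np.polyfit(np.ravel(index_a), np.ravel(index_b), deg=1)
    return gain, offset

# toy usage: pretend Sentinel-2 NDVI is a slightly biased version of Landsat NDVI
rng = np.random.default_rng(5)
ndvi_landsat = rng.uniform(0.1, 0.9, 5000)
ndvi_sentinel = 0.97 * ndvi_landsat + 0.02 + 0.01 * rng.standard_normal(5000)
gain, offset = cross_sensor_calibration(ndvi_landsat, ndvi_sentinel)
harmonized = gain * ndvi_landsat + offset   # bring Landsat onto the Sentinel-2 scale
```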
Abstract:
The discovery of the neutrino mass is direct evidence of new physics. Several questions arise from this observation, regarding the mechanism originating the neutrino masses and their hierarchy, the violation of lepton number conservation and the generation of the baryon asymmetry. These questions can be addressed by the experimental search for neutrinoless double beta (0\nu\beta\beta) decay, a nuclear decay consisting of two simultaneous beta emissions without the emission of two antineutrinos. 0\nu\beta\beta decay is possible only if neutrinos are identical to antineutrinos, namely if they are Majorana particles. Several experiments are searching for 0\nu\beta\beta decay. Among these, CUORE employs 130Te embedded in TeO_2 bolometric crystals, and it needs an accurate understanding of the background contribution in the energy region around the Q-value of 130Te. One of the main contributions is given by particles from the decay chains of contaminating nuclei (232Th, 235-238U) present in the active crystals or in the support structure. This thesis uses the 1 tonne·yr CUORE dataset to study these contaminations by looking for events belonging to sub-chains of the Th and U decay chains and reconstructing their energy and time-difference distributions in a delayed coincidence analysis. These results, in combination with studies on simulated data, are then used to evaluate the contaminations. This is the first time this analysis has been applied to the CUORE data, and this thesis demonstrates its feasibility while providing a starting point for further studies. Part of the obtained results agrees with those from previous analyses, demonstrating that delayed coincidence searches might improve the understanding of the CUORE experiment's background. This kind of delayed coincidence analysis can also be reused in the future, once data from CUPID, the CUORE upgrade, are ready to be analyzed, with the aim of improving the sensitivity to the 0\nu\beta\beta decay of 100Mo.
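A delayed coincidence search of the kind described can be sketched as pairing a parent-candidate event with a later daughter-candidate event in the same detector channel, within energy windows and a maximum delay, and then studying the distribution of time differences (which should follow the daughter's exponential decay law). The energy windows, the `channel` field and the fabricated events below are illustrative assumptions, not the CUORE analysis code or data.

```python
import numpy as np

def delayed_coincidences(events, parent_win, daughter_win, max_dt):
    """Find (parent, daughter) pairs in the same channel within max_dt seconds.

    events: structured array with fields 'time' (s), 'energy' (keV),
    'channel'; parent_win/daughter_win: (lo, hi) energy windows in keV.
    Returns the time differences of accepted pairs.
    """
    ev = events[np.argsort(events["time"])]
    dts = []
    for i, p in enumerate(ev):
        if not (parent_win[0] <= p["energy"] <= parent_win[1]):
            continue
        for d in ev[i + 1:]:
            dt = d["time"] - p["time"]
            if dt > max_dt:
                break
            if d["channel"] == p["channel"] and \
               daughter_win[0] <= d["energy"] <= daughter_win[1]:
                dts.append(dt)
    return np.array(dts)

# toy usage with fabricated events (times in s, energies in keV)
dtype = [("time", float), ("energy", float), ("channel", int)]
ev = np.array([(0.0, 5400, 3), (120.0, 6000, 3), (500.0, 5400, 7)], dtype=dtype)
print(delayed_coincidences(ev, parent_win=(5300, 5500),
                           daughter_win=(5900, 6100), max_dt=600.0))
```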
Abstract:
Objective. The aim of this study was to evaluate the alteration of human enamel bleached with high concentrations of hydrogen peroxide associated with different activators. Materials and methods. Fifty enamel/dentin blocks (4 × 4 mm) were obtained from human third molars and randomly divided according to the bleaching procedure (n = 10): G1 = 35% hydrogen peroxide (HP - Whiteness HP Maxx); G2 = HP + halogen lamp (HL); G3 = HP + 7% sodium bicarbonate (SB); G4 = HP + 20% sodium hydroxide (SH); and G5 = 38% hydrogen peroxide (OXB - Opalescence Xtra Boost). The bleaching treatments were performed in three sessions with a 7-day interval between them. The enamel content, before (baseline) and after bleaching, was determined using an FT-Raman spectrometer and was based on the concentrations of phosphate, carbonate, and organic matrix. Statistical analysis was performed using two-way repeated-measures ANOVA and Tukey's test. Results. The results showed no significant differences between times of analysis (p = 0.5175) for most treatments and peak areas analyzed, or among bleaching treatments (p = 0.4184). The comparisons during and after bleaching revealed a significant difference in the HP group for the peak areas of carbonate and organic matrix, and for the organic matrix in the OXB and HP+SH groups. Tukey's analysis determined that the differences in peak areas and the interaction among treatment, time and peak were statistically significant (p < 0.05). Conclusion. The association of activators with hydrogen peroxide was effective in altering the enamel, mainly with regard to the organic matrix.
Abstract:
Time Domain Reflectometry (TDR) is a reliable method for in-situ measurement of soil moisture and solution concentration in the same soil volume. Accurate interpretation of electrical conductivity (and soil moisture) measurements may require a specific calibration curve. The primary goal of this work was to establish a calibration procedure for using TDR to estimate potassium nitrate (KNO3) concentrations in the soil solution. An equation relating the electrical conductivity measured by TDR to the KNO3 concentration was established, enabling the use of the TDR technique to estimate soil water content and nitrate concentration for efficient fertigation management.
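The calibration step described above can be sketched as fitting a simple curve relating TDR-measured bulk electrical conductivity to the KNO3 concentration of the soil solution. The linear form and the sample values below are assumptions for illustration, since the abstract does not give the actual equation or data.

```python
import numpy as np

def fit_kno3_calibration(ec_dS_m, kno3_mg_L, deg=1):
    """Fit KNO3 concentration as a polynomial in TDR bulk electrical conductivity.

    ec_dS_m: electrical conductivity readings (dS/m); kno3_mg_L: known
    concentrations of the calibration solutions (mg/L). Returns a callable
    that converts new EC readings into estimated concentrations.
    """
    coeffs = np.polyfit(ec_dS_m, kno3_mg_L, deg)
    return np.poly1d(coeffs)

# hypothetical calibration data (not the paper's measurements)
ec = np.array([0.4, 0.8, 1.3, 1.9, 2.6])
kno3 = np.array([50, 150, 300, 480, 700])
calib = fit_kno3_calibration(ec, kno3)
print(calib(1.5))   # estimated KNO3 concentration for a new TDR reading
```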