914 results for Application of Data-driven Modelling in Water Sciences
Abstract:
PURPOSE: The aim of this study was to develop models based on kernel regression and probability estimation in order to predict and map indoor radon concentration (IRC) in Switzerland, taking into account all of the following: architectural factors, spatial relationships between the measurements, and geological information. METHODS: We looked at about 240,000 IRC measurements carried out in about 150,000 houses. As predictor variables we included building type, foundation type, year of construction, detector type, geographical coordinates, altitude, temperature and lithology in the kernel estimation models. We developed predictive maps as well as a map of the local probability of exceeding 300 Bq/m³. Additionally, we developed a map of a confidence index in order to estimate the reliability of the probability map. RESULTS: Our models were able to explain 28% of the variation in the IRC data. All variables added information to the model. The model estimation yielded a bandwidth for each variable, making it possible to characterize the influence of each variable on the IRC estimate. Furthermore, we assessed the mapping characteristics of kernel estimation overall as well as by municipality. Overall, our model reproduces spatial IRC patterns obtained in earlier studies. At the municipal level, we could show that our model accounts well for IRC trends within municipal boundaries. Finally, we found that different building characteristics result in different IRC maps: maps corresponding to detached houses with concrete foundations indicate systematically lower IRC than maps corresponding to farms with earth foundations. CONCLUSIONS: IRC mapping based on kernel estimation is a powerful tool to predict and analyze IRC at a large scale as well as at a local level. This approach makes it possible to develop tailor-made maps for different architectural elements and measurement conditions while at the same time accounting for geological information and spatial relations between IRC measurements.
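The abstract does not include the estimator itself, so the following is a minimal, hypothetical sketch of multivariate Nadaraya-Watson kernel regression with one bandwidth per predictor (the general technique named above); the predictor set, bandwidth values and data are synthetic assumptions, not the authors' implementation. The local probability of exceeding 300 Bq/m³ could be estimated in the same way by replacing the response with the indicator 1(IRC > 300).

```python
import numpy as np

def nadaraya_watson(X_train, y_train, X_query, bandwidths):
    """Multivariate Nadaraya-Watson kernel regression with one
    Gaussian bandwidth per predictor (hypothetical variable set)."""
    # Scale each predictor by its bandwidth so the product kernel
    # reduces to a single Euclidean distance in scaled space.
    Xs = X_train / bandwidths
    Qs = X_query / bandwidths
    preds = np.empty(len(Qs))
    for i, q in enumerate(Qs):
        d2 = np.sum((Xs - q) ** 2, axis=1)          # squared scaled distances
        w = np.exp(-0.5 * d2)                        # Gaussian product kernel
        preds[i] = np.sum(w * y_train) / np.sum(w)   # kernel-weighted mean response
    return preds

# Toy usage with hypothetical predictors (easting, northing, altitude, year built)
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))
y = np.log(100 + 50 * X[:, 0] + rng.normal(scale=10, size=500))  # synthetic log-IRC
bw = np.array([0.1, 0.1, 0.2, 0.3])                              # per-variable bandwidths
print(nadaraya_watson(X, y, X[:5], bw))
```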
Abstract:
Electrical Impedance Tomography (EIT) is an imaging method which enables a volume conductivity map of a subject to be produced from multiple impedance measurements. It has the potential to become a portable, non-invasive imaging technique of particular use in imaging brain function. Accurate numerical forward models may be used to improve image reconstruction but, until now, have employed an assumption of isotropic tissue conductivity. This may be expected to introduce inaccuracy, as body tissues, especially those such as white matter and the skull in head imaging, are highly anisotropic. The purpose of this study was, for the first time, to develop a method for incorporating anisotropy in a forward numerical model for EIT of the head and to assess the resulting improvement in image quality in the case of linear reconstruction for one example of the human head. A realistic Finite Element Model (FEM) of an adult human head with segments for the scalp, skull, CSF, and brain was produced from a structural MRI. Anisotropy of the brain was estimated from a diffusion tensor MRI of the same subject, and anisotropy of the skull was approximated from the structural information. A method for incorporating anisotropy in the forward model and its use in image reconstruction was developed. The improvement in reconstructed image quality was assessed in computer simulation by producing forward data and then performing linear reconstruction using a sensitivity matrix approach. The mean boundary data difference between anisotropic and isotropic forward models for a reference conductivity was 50%. Use of the correct anisotropic FEM in image reconstruction, as opposed to an isotropic one, corrected an error of 24 mm in imaging a 10% conductivity decrease located in the hippocampus, improved localisation of conductivity changes deep in the brain and those due to epilepsy by 4-17 mm, and, overall, led to a substantial improvement in image quality. This suggests that incorporating anisotropy in numerical models used for image reconstruction is likely to improve EIT image quality.
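As a rough illustration of the reconstruction step described above (not the authors' code), the sketch below performs one-step linearized, Tikhonov-regularized reconstruction from a sensitivity (Jacobian) matrix; in practice the Jacobian would come from the anisotropic FEM forward solver, whereas here it is replaced by random stand-in data.

```python
import numpy as np

def linear_reconstruction(J, d_meas, d_ref, alpha=1e-3):
    """One-step linearized EIT reconstruction.

    J      : sensitivity (Jacobian) matrix from the forward model,
             shape (n_measurements, n_elements)
    d_meas : measured boundary voltages
    d_ref  : boundary voltages simulated for the reference conductivity
    alpha  : Tikhonov regularization weight
    Returns the estimated conductivity change per mesh element.
    """
    dd = d_meas - d_ref                          # boundary data difference
    A = J.T @ J + alpha * np.eye(J.shape[1])     # regularized normal equations
    return np.linalg.solve(A, J.T @ dd)

# Toy example with random stand-ins for a real FEM-derived Jacobian
rng = np.random.default_rng(1)
J = rng.normal(size=(208, 2000))                 # e.g. 208 measurements, 2000 elements
true_dsigma = np.zeros(2000)
true_dsigma[1200] = -0.1                         # a localized 10% conductivity decrease
d_ref = np.zeros(208)
d_meas = J @ true_dsigma + rng.normal(scale=1e-4, size=208)
dsigma_hat = linear_reconstruction(J, d_meas, d_ref)
print(dsigma_hat[1195:1205])
```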
Abstract:
The hydrological and biogeochemical processes that operate in catchments influence the ecological quality of freshwater systems through the delivery of fine sediment, nutrients and organic matter. Most models that seek to characterise the delivery of diffuse pollutants from land to water are reductionist. The multitude of processes that are parameterised in such models to ensure generic applicability makes them complex and difficult to test on available data. Here, we outline an alternative, data-driven, inverse approach. We apply SCIMAP, a parsimonious risk-based model with an explicit treatment of hydrological connectivity, and take a Bayesian approach to the inverse problem of determining the risk that must be assigned to different land uses in a catchment in order to explain the spatial patterns of measured in-stream nutrient concentrations. We apply the model to identify the key sources of nitrogen (N) and phosphorus (P) diffuse pollution risk in eleven UK catchments covering a range of landscapes. The model results show that: 1) some land uses generate a consistently high or low risk of diffuse nutrient pollution; 2) the risks associated with different land uses vary both between catchments and between nutrients; and 3) the dominant sources of P and N risk in a catchment are often a function of the spatial configuration of land uses. Taken on a case-by-case basis, this type of inverse approach may be used to help prioritise interventions to reduce diffuse pollution risk for freshwater ecosystems.
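The following is a minimal, hypothetical sketch of the kind of Bayesian inverse problem described, not SCIMAP itself: per-land-use risk weights are inferred from in-stream concentrations under an assumed linear, connectivity-weighted mixing model, using a simple random-walk Metropolis sampler on synthetic data.

```python
import numpy as np

# Hypothetical setup: each catchment outlet concentration is modelled as a
# connectivity-weighted sum of land-use proportions times unknown risk weights.
rng = np.random.default_rng(2)
n_catchments, n_landuses = 11, 4
A = rng.uniform(size=(n_catchments, n_landuses))     # weighted land-use proportions
true_risk = np.array([0.8, 0.1, 0.4, 0.05])
c_obs = A @ true_risk + rng.normal(scale=0.02, size=n_catchments)

def log_posterior(risk, sigma=0.02):
    if np.any(risk < 0) or np.any(risk > 1):
        return -np.inf                               # uniform prior on [0, 1]
    resid = c_obs - A @ risk
    return -0.5 * np.sum((resid / sigma) ** 2)       # Gaussian likelihood

# Random-walk Metropolis sampling of the posterior over risk weights
risk = np.full(n_landuses, 0.5)
lp = log_posterior(risk)
samples = []
for _ in range(20000):
    prop = risk + rng.normal(scale=0.05, size=n_landuses)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        risk, lp = prop, lp_prop
    samples.append(risk.copy())
print(np.mean(samples[5000:], axis=0))               # posterior mean risk per land use
```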
Abstract:
Flood simulation studies use spatial-temporal rainfall data as input to distributed hydrological models. A correct description of rainfall in space and in time contributes to improvements in hydrological modelling and design. This work focuses on the analysis of 2-D convective structures (rain cells), whose contribution is especially significant in most flood events. The objective of this paper is to provide statistical descriptors and distribution functions for the characteristics of convective structures of precipitation systems producing floods in Catalonia (NE Spain). To achieve this purpose, heavy rainfall events recorded between 1996 and 2000 have been analysed. By means of weather radar, and applying 2-D radar algorithms, a distinction between convective and stratiform precipitation is made. These data are introduced into and analysed with a GIS. In a first step, different groups of connected pixels with convective precipitation are identified. Only convective structures with an area greater than 32 km² are selected. Then, geometric characteristics (area, perimeter, orientation and dimensions of the ellipse) and rainfall statistics (maximum, mean, minimum, range, standard deviation, and sum) of these structures are obtained and stored in a database. Finally, descriptive statistics for selected characteristics are calculated and statistical distributions are fitted to the observed frequency distributions. The statistical analyses reveal that the Generalized Pareto distribution for the area, and the Generalized Extreme Value distribution for the perimeter, dimensions, orientation and mean areal precipitation, are the distributions that best fit the observed ones. The statistical descriptors and probability distribution functions obtained are of direct use as input to spatial rainfall generators.
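A brief sketch of the distribution-fitting step named above, using SciPy and synthetic stand-in data for the cell characteristics; the real analysis used radar-derived structures and may have applied different fitting and goodness-of-fit procedures.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic stand-ins for the convective-structure characteristics
areas = stats.genpareto.rvs(c=0.3, loc=32, scale=40, size=500, random_state=rng)
perimeters = stats.genextreme.rvs(c=-0.1, loc=40, scale=12, size=500, random_state=rng)

# Fit a Generalized Pareto distribution to cell areas (threshold at 32 km^2)
c_a, loc_a, scale_a = stats.genpareto.fit(areas, floc=32)
# Fit a Generalized Extreme Value distribution to cell perimeters
c_p, loc_p, scale_p = stats.genextreme.fit(perimeters)

# Goodness of fit via the Kolmogorov-Smirnov test
print(stats.kstest(areas, 'genpareto', args=(c_a, loc_a, scale_a)))
print(stats.kstest(perimeters, 'genextreme', args=(c_p, loc_p, scale_p)))
```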
Abstract:
BACKGROUND: For free-breathing cardiovascular magnetic resonance (CMR), the self-navigation technique has recently emerged and is expected to deliver high-quality data with a high success rate. The purpose of this study was to test the hypothesis that self-navigated 3D CMR enables the reliable assessment of cardiovascular anatomy in patients with congenital heart disease (CHD) and to define factors that affect image quality. METHODS: CHD patients ≥2 years old referred for CMR for initial assessment or for a follow-up study were included and underwent free-breathing self-navigated 3D CMR at 1.5 T. Performance criteria were: correct description of cardiac segmental anatomy, overall image quality, coronary artery visibility, and reproducibility of great-vessel diameter measurements. Factors associated with insufficient image quality were identified using multivariate logistic regression. RESULTS: Self-navigated CMR was performed in 105 patients (55% male, 23 ± 12 years). Correct segmental description was achieved in 93% and 96% of cases for observers 1 and 2, respectively. Diagnostic quality was obtained in 90% of examinations, increasing to 94% if contrast-enhanced. The left anterior descending, circumflex, and right coronary arteries were visualized in 93%, 87% and 98% of cases, respectively. Younger age, higher heart rate, lower ejection fraction, and lack of contrast medium were independently associated with reduced image quality. However, a similar rate of diagnostic image quality was obtained in children and adults. CONCLUSION: In patients with CHD, self-navigated free-breathing CMR provides high-resolution 3D visualization of the heart and great vessels with excellent robustness.
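As an illustrative, hypothetical sketch of the multivariate logistic regression step, the code below regresses a binary "insufficient image quality" outcome on the factors listed above; the variable names, coefficients and data are invented, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 105
# Hypothetical per-patient predictors mirroring those named in the abstract
df = pd.DataFrame({
    "age": rng.uniform(2, 60, n),
    "heart_rate": rng.uniform(50, 130, n),
    "ejection_fraction": rng.uniform(35, 70, n),
    "contrast": rng.integers(0, 2, n),
})
# Synthetic binary outcome: 1 = insufficient image quality
logit_true = (-0.5 - 0.03 * df.age + 0.03 * df.heart_rate
              - 0.05 * df.ejection_fraction - 1.0 * df.contrast)
df["poor_quality"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit_true))).astype(int)

X = sm.add_constant(df[["age", "heart_rate", "ejection_fraction", "contrast"]])
model = sm.Logit(df["poor_quality"], X).fit(disp=0)
print(np.exp(model.params))   # odds ratios for each factor
```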
Abstract:
The objective of this study is to examine whether Communicative Language Teaching is used by the teachers when delivering English as a foreign language classes in a Polish primary school with a special educational context. This qualitative research analyses five main sets of information related to foreign language teaching that characterise the Communicative Language Teaching approach, focused on: the teachers' knowledge and beliefs about the Communicative Language Teaching approach; language use; the aspects of the English language that are taught; the characteristics of the activities; and the teaching process. To this end, a questionnaire was given to each English teacher and several classroom observations were carried out in order to collect data, analyse them and draw conclusions. The results of the investigation show that the English teachers are quite supportive of the Communicative Language Teaching approach and apply its characteristic features fairly often when teaching their classes.
Abstract:
We investigated convection caused by surface cooling and mixing attributable to wind shear stress, and their roles as agents for the transport of phytoplankton cells in the water column, by carrying out two daily surveys during the stratified period of the Sau reservoir. Green algae, diatoms, and cryptophyceae were the dominant phytoplankton communities during the surveys carried out in the middle (July) and at the end (September) of the stratified period. We show that a system with linear stratification that is subject to weak surface forcing, with weak winds (< 4 m s⁻¹) and low energy dissipation rates of the order of 10⁻⁸ m² s⁻³ or lower, enables the formation of thin phytoplankton layers. These layers quickly disappear when water parcels mix because of the moderate external forcing (convection) induced by night-time surface cooling, which is characterized by energy dissipation rates of the order of ~5×10⁻⁸ m² s⁻³. During both surveys the wind generated internal waves during the entire diurnal cycle. During the day, and because of the weak winds, phytoplankton layers rise in the water column up to a depth determined by both solar heating and internal waves. In contrast, during the night phytoplankton mixes down to a depth determined by both convection and internal waves. These internal waves, together with the wind-driven current generated at the surface, appear to be the agents responsible for the horizontal transport of phytoplankton across the reservoir.
Abstract:
Synchronous motors are used mainly in large drives, for example in ship propulsion systems and in the rolling mills of steel factories, because of their high efficiency, high overload capacity and good performance in the field-weakening range. This, however, requires an extremely good torque control system. A fast torque response and high torque accuracy are basic requirements for such a drive. For large-power, high-dynamic-performance drives, the well-known principle of field-oriented vector control has hitherto been used exclusively, but nowadays it is not the only way to implement such a drive. A new control method, Direct Torque Control (DTC), has also emerged. The performance of a high-quality torque control such as DTC in dynamically demanding industrial applications is mainly based on accurate estimates of the various flux linkage space vectors. Industrial motor control systems are nowadays real-time applications with restricted calculation capacity. At the same time the control system requires a simple, quickly computable and reasonably accurate motor model. In this work a method to handle these problems in a Direct Torque Controlled (DTC) salient-pole synchronous motor drive is proposed. A motor model which combines the induction-law-based "voltage model" and the inductance-parameter-based "current model" is presented. The voltage model operates as the main model and is calculated at a very fast sampling rate (for example 40 kHz). The stator flux linkage calculated by integrating the stator voltages is corrected using the stator flux linkage computed from the current model. The current model acts as a supervisor that merely prevents the motor stator flux linkage from drifting erroneously over longer time intervals. At very low speeds the role of the current model is emphasised but, nevertheless, the voltage model always remains the main model. At higher speeds the function of the current-model correction is to act as a stabiliser of the control system. The current model contains a set of inductance parameters which must be known. The validity of the current model in steady state is not self-evident; it depends on the accuracy of the saturated values of the inductances. A parameter-measurement procedure for the motor model, in which the supply inverter is used as a measurement signal generator, is presented. This so-called identification run can be performed prior to delivery or during drive commissioning. A derivation method for the inductance models used to represent the saturation effects is proposed. The performance of the electrically excited synchronous motor supplied by the DTC inverter is demonstrated with experimental results. It is shown that good static accuracy of the DTC torque controller can be obtained for an electrically excited synchronous motor. The dynamic response is fast and a new operating point is reached without oscillation. The operation is stable throughout the speed range. Modelling of the magnetising inductance saturation is essential, and cross-saturation has to be considered as well; its effect is very significant. The DTC inverter can be used as measuring equipment, and the parameters needed for the motor model can be determined by the inverter itself. The main advantage is that the parameters are measured under similar magnetic operating conditions, so no disagreement between the parameters will exist. The inductance models generated are adequate to meet the requirements of dynamically demanding drives.
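A simplified, hypothetical sketch of the combined estimator described above (fast voltage-model integration with a slower current-model correction), with illustrative parameter values and a stationary alpha-beta frame rather than the actual drive implementation.

```python
import numpy as np

class FluxEstimator:
    """Stator flux-linkage estimator combining a fast 'voltage model'
    (integration of u_s - R_s * i_s) with a slower 'current model'
    correction; a simplified, hypothetical illustration."""

    def __init__(self, r_s, dt_fast, correction_gain=0.05):
        self.r_s = r_s                  # stator resistance (ohm)
        self.dt = dt_fast               # fast sampling period, e.g. 1/40 kHz
        self.k = correction_gain        # weight of the current-model correction
        self.psi = np.zeros(2)          # stator flux linkage, alpha-beta frame (Vs)

    def voltage_model_step(self, u_s, i_s):
        # Main model: integrate the induction law at the fast sampling rate
        self.psi += (u_s - self.r_s * i_s) * self.dt
        return self.psi

    def current_model_correction(self, psi_current_model):
        # Supervisor: pull the integrated flux toward the inductance-based
        # estimate to prevent slow drift (dominant role at very low speed)
        self.psi += self.k * (psi_current_model - self.psi)
        return self.psi

# Example: one fast voltage-model step followed by one correction step
est = FluxEstimator(r_s=0.02, dt_fast=1 / 40e3)
psi_v = est.voltage_model_step(u_s=np.array([310.0, 0.0]), i_s=np.array([80.0, -10.0]))
psi = est.current_model_correction(psi_current_model=np.array([0.9, 0.05]))
print(psi_v, psi)
```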
Abstract:
Mixed methods research involves the combined use of quantitative and qualitative methods in the same research study, and it is becoming increasingly important in several scientific areas. The aim of this paper is to review and compare, through a mixed methods multiple-case study, the application of this methodology in three reputable behavioural science journals: the Journal of Organizational Behavior, Addictive Behaviors and Psicothema. A quantitative analysis was carried out to review all the papers published in these journals during the period 2003-2008 and classify them into two blocks, theoretical and empirical, with the latter being further subdivided into three subtypes (quantitative, qualitative and mixed). A qualitative analysis determined the main characteristics of the mixed methods studies identified, in order to describe in more detail the ways in which the two methods are combined in terms of their purpose, priority, implementation and research design. From the journals selected, a total of 1,958 articles were analysed, the majority of which corresponded to empirical studies, with only a small number referring to research that used mixed methods. Nonetheless, mixed methods research does appear in all the behavioural science journals studied within the period selected, showing a range of designs, among which the sequential equal-weight mixed methods research design stands out.
Abstract:
This project addresses methodological and technological challenges in the development of multi-modal data acquisition and analysis methods for the representation of instrumental playing technique in music performance through auditory-motor patterning models. The case study is violin playing: a multi-modal database of violin performances has been constructed by recording different musicians while playing short exercises on different violins. The exercise set and recording protocol have been designed to sample the space defined by dynamics (from piano to forte) and tone (from sul tasto to sul ponticello), for each bow stroke type being played on each of the four strings (three different pitches per string) at two different tempi. The data, containing audio, video, and motion capture streams, has been processed and segmented to facilitate upcoming analyses. From the acquired motion data, the positions of the instrument string ends and the bow hair ribbon ends are tracked and processed to obtain a number of bowing descriptors suited for a detailed description and analysis of the bow motion patterns taking place during performance. Likewise, a number of sound perceptual attributes are computed from the audio streams. Besides the methodology and the implementation of a number of data acquisition tools, this project introduces preliminary results from analyzing bowing technique on a multi-modal violin performance database that is unique in its class. A further contribution of this project is the data itself, which will be made available to the scientific community through the repovizz platform.
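A hypothetical sketch of how bowing descriptors of the kind mentioned above might be derived from tracked 3D points; the marker names, geometry and sampling rate are assumptions, not the project's actual processing chain.

```python
import numpy as np

def bowing_descriptors(string_a, string_b, bow_frog, bow_tip, fps):
    """Compute bow position along the hair ribbon at the bowing point and
    bow velocity from per-frame 3D coordinates (arrays of shape (n, 3))."""
    bow_dir = bow_tip - bow_frog
    bow_len = np.linalg.norm(bow_dir, axis=1, keepdims=True)
    bow_unit = bow_dir / bow_len
    # Project the string midpoint onto the bow axis -> relative bow position (0..1)
    string_mid = 0.5 * (string_a + string_b)
    rel = string_mid - bow_frog
    bow_pos = np.sum(rel * bow_unit, axis=1) / bow_len[:, 0]
    # Bow velocity: time derivative of the distance travelled along the bow
    bow_vel = np.gradient(bow_pos * bow_len[:, 0], 1.0 / fps)
    return bow_pos, bow_vel

# Toy usage with noisy, near-static marker trajectories at 240 fps (hypothetical)
rng = np.random.default_rng(5)
n = 480
traj = lambda base: np.asarray(base) + 0.001 * rng.normal(size=(n, 3))
pos, vel = bowing_descriptors(traj([0, 0, 0]), traj([0, 0.33, 0]),
                              traj([-0.3, 0.1, 0.1]), traj([0.45, 0.1, 0.1]), fps=240)
print(pos[:3], vel[:3])
```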
Abstract:
Currently, numerous high-throughput technologies are available for the study of human carcinomas. In the literature, many variations of these techniques have been described. The common denominator of these methodologies is the large amount of data obtained in a single experiment, in a short time period, and at a fairly low cost. However, several problems and limitations of these methods have also been described. The purpose of this study was to test the applicability of two selected high-throughput methods, cDNA and tissue microarrays (TMA), in cancer research. Two common human malignancies, breast and colorectal cancer, were used as examples. This thesis aims to present some practical considerations that need to be addressed when applying these techniques. cDNA microarrays were applied to screen for aberrant gene expression in breast and colon cancers. Immunohistochemistry was used to validate the results and to evaluate the association of selected novel tumour markers with patient outcome. The type of histological material used in immunohistochemistry was evaluated, especially considering the applicability of whole tissue sections and different types of TMAs. Special attention was paid to the methodological details of the cDNA microarray and TMA experiments. In conclusion, many potential tumour markers were identified in the cDNA microarray analyses. Immunohistochemistry could be applied to validate the observed gene expression changes of selected markers and to associate their expression changes with patient outcome. In the current experiments, both TMAs and whole tissue sections could be used for this purpose. This study showed for the first time that securin and p120 catenin protein expression predict breast cancer outcome and that the immunopositivity of carbonic anhydrase IX is associated with the outcome of rectal cancer. The predictive value of these proteins was statistically evident also in multivariate analyses, with up to a 13.1-fold risk of cancer-specific death in a specific subgroup of patients.
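As a hedged illustration of the kind of multivariate survival analysis alluded to above (not the thesis' actual analysis), the sketch below fits a Cox proportional hazards model relating marker immunopositivity to cancer-specific survival on synthetic data, using the lifelines package; all column names and values are invented.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 300
# Synthetic cohort: marker status plus common clinical covariates (hypothetical)
df = pd.DataFrame({
    "marker_positive": rng.integers(0, 2, n),
    "tumour_grade": rng.integers(1, 4, n),
    "age": rng.uniform(35, 85, n),
})
# Simulated event times with a higher hazard for marker-positive cases
hazard = 0.02 * np.exp(0.8 * df.marker_positive + 0.3 * (df.tumour_grade - 1))
time = rng.exponential(1 / hazard)
censor = rng.uniform(0, 120, n)
df["time"] = np.minimum(time, censor)
df["event"] = (time <= censor).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "p"]])   # hazard ratios per covariate
```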
Abstract:
The objective of this work was to develop and validate a mathematical model to estimate the duration of the cotton (Gossypium hirsutum L. r. latifolium Hutch.) cycle in the state of Goiás, Brazil, by applying the method of growing degree-days (GD) and considering, simultaneously, its variation in time and space. The model was developed as a linear combination of elevation, latitude, longitude, and a Fourier series describing the time variation. The model parameters were adjusted by multiple linear regression to the GD accumulated from observed air temperatures in the range of 15°C to 40°C. The minimum and maximum temperature records used to calculate the GD were obtained from 21 meteorological stations, with data series ranging from 8 to 20 years of observation. The coefficient of determination resulting from the comparison between estimated and observed GD over the year was 0.84. Model validation was done by comparing estimated and measured crop cycle durations in the period from cotton germination to the stage when 90 percent of the bolls were open in commercial crop fields. Comparative results showed that the model performed very well, as indicated by a Pearson correlation coefficient of 0.90 and a Willmott agreement index of 0.94, resulting in a performance index of 0.85.
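A minimal sketch, on synthetic station data, of the two ingredients described above: daily growing degree-days bounded to the 15-40°C range, and a multiple-linear-regression fit of GD on elevation, latitude, longitude and a truncated Fourier series in the day of year. The exact GD formulation and number of harmonics used by the authors may differ.

```python
import numpy as np

def daily_gd(tmin, tmax, t_base=15.0, t_upper=40.0):
    """Growing degree-days from daily min/max temperature, with the mean
    temperature capped to the 15-40 degC range (simple averaging method)."""
    t_mean = np.clip((tmin + tmax) / 2.0, t_base, t_upper)
    return t_mean - t_base

def design_matrix(elev, lat, lon, doy, n_harmonics=2):
    """Linear terms for location plus a truncated Fourier series in time."""
    cols = [np.ones_like(doy, dtype=float), elev, lat, lon]
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * doy / 365.0))
        cols.append(np.cos(2 * np.pi * k * doy / 365.0))
    return np.column_stack(cols)

# Synthetic records standing in for the 21 meteorological stations
rng = np.random.default_rng(7)
n = 1000
elev = rng.uniform(300, 1100, n)
lat, lon = rng.uniform(-19, -13, n), rng.uniform(-53, -46, n)
doy = rng.integers(1, 366, n)
tmin = 18 + 4 * np.sin(2 * np.pi * doy / 365) + rng.normal(0, 2, n)
tmax = tmin + rng.uniform(8, 14, n)
gd_obs = daily_gd(tmin, tmax)

X = design_matrix(elev, lat, lon, doy)
beta, *_ = np.linalg.lstsq(X, gd_obs, rcond=None)   # multiple-linear regression fit
gd_hat = X @ beta
r2 = 1 - np.sum((gd_obs - gd_hat) ** 2) / np.sum((gd_obs - gd_obs.mean()) ** 2)
print(beta, r2)                                      # coefficients and R^2
```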
Abstract:
The objective of this study was to evaluate the effects of the application of different water depths and nitrogen and potassium doses on the quality of Tanzania grass in the southern region of the state of Tocantins, Brazil. The experiment was conducted in strips under conventional sprinklers, using as treatments combinations of N and K2O fertilizer applied always in the ratio of 1 N : 0.8 K2O. Plant height (PH), crude protein (CP) content and neutral detergent fiber (NDF) content were determined throughout the experiment. The greatest plant height obtained was 132.4 cm, with a fertilizer dose of 691.71 kg ha-1 in the proportion of 1 N : 0.8 K2O, in other words 384.28 kg ha-1 of N and 307.43 kg ha-1 of K2O, and a water depth of 80% of the ETc. The highest crude protein content was 12.2%, obtained with a fertilizer dose of 700 kg ha-1 yr-1 in the proportion of 1 N : 0.8 K2O, in other words 388.89 kg ha-1 of N and 311.11 kg ha-1 of K2O, in the absence of irrigation. The lowest neutral detergent fiber content was 60.7%, obtained with the smallest fertilizer dose and the highest water depth. It was concluded that plant height increased with increasing fertilizer dose and water depth. The crude protein content increased by 5.4% in the dry season with increasing fertilizer dose and water depth. In the dry season, the NDF content increased by 4.5% with increasing fertilizer application and water depth.
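As a hypothetical illustration of how optima such as those reported above can be obtained, the sketch below fits a second-order response surface of plant height against fertilizer dose and water depth on synthetic data and locates its stationary point; the coefficients are invented and only loosely mimic the reported results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 120
dose = rng.uniform(0, 875, n)          # fertilizer dose, kg ha-1 (1 N : 0.8 K2O)
water = rng.uniform(0, 120, n)         # water depth, % of ETc
height = (90 + 0.12 * dose - 8.5e-5 * dose**2
          + 0.25 * water - 1.5e-3 * water**2 + rng.normal(0, 3, n))

df = pd.DataFrame({"dose": dose, "dose2": dose**2,
                   "water": water, "water2": water**2, "height": height})
X = sm.add_constant(df[["dose", "dose2", "water", "water2"]])
fit = sm.OLS(df["height"], X).fit()

# Stationary point of the fitted quadratic in each factor
b = fit.params
print("dose at maximum height:", -b["dose"] / (2 * b["dose2"]))
print("water depth at maximum height:", -b["water"] / (2 * b["water2"]))
```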
Abstract:
The objective of this study was to map land use and occupation and to evaluate the quality of the irrigation water used in Salto do Lontra, in the state of Paraná, Brazil. Images from the SPOT-5 satellite were used to perform supervised classification with the Maximum Likelihood algorithm (MAXVER), and the water quality parameters analyzed were pH, EC, HCO3-, Cl-, PO4(3-), NO3-, turbidity, temperature and thermotolerant coliforms, in two distinct rainfall periods. The water quality data were subjected to statistical analysis using the PCA and FA techniques in order to identify the most relevant variables for assessing irrigation water quality. The characterization of land use and occupation by the MAXVER classifier allowed the identification of the following classes: crops, bare soil/stubble, forests and urban area. The PCA technique applied to the irrigation water quality data explained 53.27% of the variation in water quality among the sampled points. Nitrate, thermotolerant coliforms, temperature, electrical conductivity and bicarbonate were the parameters that best explained the spatial variation in water quality.
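A short sketch of the PCA step on standardized water-quality parameters, using scikit-learn and synthetic measurements in place of the field data; the sample size and scaling choices are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
# Synthetic stand-ins for the measured irrigation water-quality parameters
params = ["pH", "EC", "HCO3", "Cl", "PO4", "NO3", "turbidity",
          "temperature", "thermotolerant_coliforms"]
df = pd.DataFrame(rng.normal(size=(20, len(params))), columns=params)

Z = StandardScaler().fit_transform(df)       # standardize before PCA
pca = PCA(n_components=3).fit(Z)
print("explained variance ratio:", pca.explained_variance_ratio_)

# Loadings indicate which parameters dominate each principal component
loadings = pd.DataFrame(pca.components_.T, index=params,
                        columns=["PC1", "PC2", "PC3"])
print(loadings.round(2))
```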
Abstract:
Agroindustries are major consumers of water. However, to adapt to environmental trends and remain competitive in the market, they have sought the rational use of water through water management in their activities. Cleaner Production can result in economic, environmental and social benefits, and in actions that promote a reduction in water consumption. This case study was conducted in a slaughterhouse and poultry cold-storage processing plant and aimed to identify points of excessive water consumption and to propose alternatives for managing water resources by reducing consumption. Consumption data are presented for each processing stage, together with the alternatives proposed for the rational use of water, such as shutting off the mains water supply during shift changes. Following the implementation of the recommendations, a reduction in water consumption of approximately 11,137 m³ per month was obtained, which equates to savings of US$ 99,672 per year. From this study, it was concluded that the company under review could develop various improvement actions and make an important contribution to the preservation of water resources in the region where it operates.