971 results for Mathematical modeling


Relevance:

60.00%

Publisher:

Abstract:

Mathematical modeling of the self-purification capacity of lotic environments is an important tool for the planning and management of water resources at the hydrographic basin scale. It represents the self-purification process satisfactorily when the coefficients of the physical and biochemical processes are calibrated against monitored water quality data, which was the main focus of this study. The study simulated the behavior of the parameters DO, BOD5, total phosphorus, E. coli, ammonia, nitrite, and nitrate, and of the total metals cadmium, chromium, copper, lead, and zinc in the lower course of the Uberabinha River (with flows varying approximately between 4 and 35 m³/s over the year), along a 19 km stretch downstream of the treated effluent discharge from the city's WWTP. The simulations in this study show the importance of continuously monitoring water quality parameters along the watercourse, based on a comparison between simulations using calibrated coefficients and simulations using coefficients taken from the literature for the period from June to November 2015. After coefficient calibration, good agreement between simulated and measured data was obtained for DO, BOD, total phosphorus, ammonia, and nitrate, while the fit was unsatisfactory for nitrite and E. coli. For the total metals, the fits were not satisfactory in the vicinity of the reservoir of the Martins Small Hydropower Plant, owing to the considerable accumulation of bottom sediment in that lentic region. The main scientific contribution of this study was the calibration of the decay coefficient K and the quantification of the bottom-sediment release rate S of total metals in a mid-sized watercourse receiving a WWTP pollutant load, justified by the scarcity of studies on the subject in the literature. For the metals cadmium, chromium, copper, lead, and zinc, the calibrated ranges of K and S were, respectively: 0.0 to 13.0 day⁻¹ and 0.0 to 1.7 g/(m³·day); 0.0 to 0.9 day⁻¹ and 0.0 to 7.3 g/(m³·day); 0.0 to 25.0 day⁻¹ and 0.0 to 1.8 g/(m³·day); 0.0 to 7.0 day⁻¹ and 0.0 to 40.3 g/(m³·day); and 0.0 to 30.0 day⁻¹ and 0.0 to 70.1 g/(m³·day).
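
As an illustration of the kind of calibration target described above, the following Python sketch evaluates a steady-state plug-flow mass balance for a total-metal concentration with a first-order decay coefficient K and a volumetric bottom-release term S, dC/dt = -K·C + S, along the travel time of the reach. The model form, initial concentration, and coefficient values are illustrative assumptions chosen within the reported ranges, not the dissertation's calibrated model.

```python
import numpy as np

def metal_profile(c0, K, S, travel_time_days, n=100):
    """Steady-state plug-flow mass balance for a total-metal concentration,
        dC/dt = -K*C + S,
    with K the decay coefficient (1/day) and S the bottom-sediment release
    rate (g/m^3/day), integrated analytically along the travel time."""
    t = np.linspace(0.0, travel_time_days, n)
    if K > 0:
        c_eq = S / K                               # equilibrium concentration
        c = c_eq + (c0 - c_eq) * np.exp(-K * t)
    else:                                          # no decay: linear build-up
        c = c0 + S * t
    return t, c

# Hypothetical illustration: coefficients chosen within the calibrated zinc ranges
t, c = metal_profile(c0=0.02, K=10.0, S=0.5, travel_time_days=1.0)
print(f"Concentration at end of reach: {c[-1]:.4f} g/m^3")
```

In a calibration workflow of this kind, K and S would be adjusted until the simulated profile matched the monitored concentrations along the 19 km stretch.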

Relevance:

60.00%

Publisher:

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.

Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection, FBP, vs. Advanced Modeled Iterative Reconstruction, ADMIRE). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (≤6 mm) low-contrast (≤20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
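
To make the distinction between a simple metric such as CNR and a template-based model observer concrete, the sketch below computes both from stacks of signal-present and signal-absent ROIs. It is a minimal, idealized non-prewhitening matched filter on simulated data, assuming a known signal template and repeated noise realizations; it is not the dissertation's multi-slice channelized Hotelling implementation.

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: mean signal difference over background noise."""
    contrast = signal_roi.mean() - background_roi.mean()
    return contrast / background_roi.std()

def npw_dprime(signal_present, signal_absent):
    """Non-prewhitening matched filter detectability index.

    signal_present, signal_absent: arrays of shape (n_images, ny, nx)
    containing repeated ROIs with and without the lesion."""
    template = signal_present.mean(axis=0) - signal_absent.mean(axis=0)
    t = template.ravel()
    scores_sp = signal_present.reshape(len(signal_present), -1) @ t
    scores_sa = signal_absent.reshape(len(signal_absent), -1) @ t
    pooled_var = 0.5 * (scores_sp.var(ddof=1) + scores_sa.var(ddof=1))
    return (scores_sp.mean() - scores_sa.mean()) / np.sqrt(pooled_var)

# Hypothetical example with simulated noisy ROIs
rng = np.random.default_rng(0)
lesion = np.zeros((32, 32)); lesion[12:20, 12:20] = 5.0   # 5 HU square "lesion"
sp = lesion + rng.normal(0, 10, size=(50, 32, 32))        # signal-present ROIs
sa = rng.normal(0, 10, size=(50, 32, 32))                 # signal-absent ROIs
print(f"CNR ~ {cnr(sp[0, 12:20, 12:20], sa[0]):.2f},  NPW d' ~ {npw_dprime(sp, sa):.2f}")
```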

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that with FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
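
The image subtraction idea referenced above can be sketched as follows: subtracting two repeated scans of the same phantom cancels the deterministic structure, and the residual standard deviation (divided by √2) estimates the quantum noise. The simulated images and noise level below are hypothetical.

```python
import numpy as np

def quantum_noise_from_pair(img_a, img_b, roi=None):
    """Estimate quantum noise (standard deviation in HU) from two repeated
    scans of the same phantom. Subtraction cancels the (identical) phantom
    structure; dividing by sqrt(2) accounts for the variance doubling."""
    diff = img_a - img_b
    if roi is not None:
        diff = diff[roi]
    return diff.std(ddof=1) / np.sqrt(2)

# Hypothetical illustration with simulated repeats of a textured background
rng = np.random.default_rng(1)
texture = rng.normal(0, 30, size=(256, 256))               # fixed background texture
scan1 = texture + rng.normal(0, 12, size=texture.shape)    # quantum noise, sigma = 12 HU
scan2 = texture + rng.normal(0, 12, size=texture.shape)
print(f"Estimated noise: {quantum_noise_from_pair(scan1, scan2):.1f} HU")  # ~12 HU
```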

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and that texture should be considered when assessing image quality of iterative algorithms.
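
For reference, the conventional ensemble NPS estimate from repeated scans of rectangular ROIs can be sketched as below; the dissertation's contribution of handling irregularly shaped ROIs is not reproduced here, and the pixel size and noise realizations are assumptions.

```python
import numpy as np

def nps_2d(repeated_rois, pixel_size_mm):
    """2D noise power spectrum from repeated, co-registered ROIs.

    repeated_rois: array (n_scans, ny, nx) of the same ROI across repeats.
    Returns the NPS in HU^2 * mm^2 and the frequency axes in 1/mm."""
    n, ny, nx = repeated_rois.shape
    noise = repeated_rois - repeated_rois.mean(axis=0)     # remove deterministic structure
    dft = np.fft.fftshift(np.fft.fft2(noise, axes=(1, 2)), axes=(1, 2))
    nps = (np.abs(dft) ** 2).mean(axis=0) * pixel_size_mm**2 / (ny * nx)
    fy = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_size_mm))
    fx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_size_mm))
    return nps, fy, fx

# Hypothetical example: 50 repeated 64x64 ROIs of white noise, 0.5 mm pixels
rng = np.random.default_rng(2)
rois = rng.normal(0, 10, size=(50, 64, 64))
nps, fy, fx = nps_2d(rois, pixel_size_mm=0.5)
print(f"Integral of NPS ~ variance: {nps.sum() * (fy[1]-fy[0]) * (fx[1]-fx[0]):.1f} HU^2")
```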

To move beyond assessing only noise properties in textured phantoms toward assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in the textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
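
To illustrate what an analytical, voxelizable lesion model of this kind might look like, the sketch below builds a radially symmetric lesion with adjustable size, contrast, and a sigmoid edge profile, and adds it to an image patch to form a “hybrid” ROI. The functional form and all parameter values are assumptions for illustration, not the dissertation's fitted lesion models.

```python
import numpy as np

def lesion_model(shape, center, radius_mm, contrast_hu, edge_width_mm, voxel_mm):
    """Voxelize a radially symmetric lesion with a sigmoid edge profile.

    Returns an array of the given shape containing the lesion's HU offset,
    which can be added to a patient image to create a 'hybrid' image."""
    grids = np.meshgrid(*[np.arange(n) * voxel_mm for n in shape], indexing="ij")
    r = np.sqrt(sum((g - c) ** 2 for g, c in zip(grids, center)))
    # Sigmoid edge: ~contrast inside the radius, smoothly falling to 0 outside.
    return contrast_hu / (1.0 + np.exp((r - radius_mm) / edge_width_mm))

# Hypothetical insertion into a simulated liver ROI
rng = np.random.default_rng(3)
patient_roi = 60 + rng.normal(0, 15, size=(64, 64))        # ~liver HU with noise
lesion = lesion_model((64, 64), center=(16.0, 16.0), radius_mm=5.0,
                      contrast_hu=-15.0, edge_width_mm=1.0, voxel_mm=0.5)
hybrid = patient_roi + lesion
print(f"Lesion core offset ~ {lesion[32, 32]:.1f} HU")
```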

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased the detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Relevance:

60.00%

Publisher:

Abstract:

Dengue is an important vector-borne virus that infects on the order of 400 million individuals per year. Infection with one of the virus's four serotypes (denoted DENV-1 to 4) may be silent, result in symptomatic dengue 'breakbone' fever, or develop into the more severe dengue hemorrhagic fever/dengue shock syndrome (DHF/DSS). Extensive research has therefore focused on identifying factors that influence dengue infection outcomes. It has been well-documented through epidemiological studies that DHF is most likely to result from a secondary heterologous infection, and that individuals experiencing a DENV-2 or DENV-3 infection are typically more likely to present with severe dengue disease than those experiencing a DENV-1 or DENV-4 infection. However, a mechanistic understanding of how these risk factors affect disease outcomes, and further, how the virus's ability to evolve these mechanisms will affect disease severity patterns over time, is lacking. In the second chapter of my dissertation, I formulate mechanistic mathematical models of primary and secondary dengue infections that describe how the dengue virus interacts with the immune response and the effect of this interaction on the risk of developing severe dengue disease. I show that only the innate immune response is needed to reproduce characteristic features of a primary infection, whereas the adaptive immune response is needed to reproduce characteristic features of a secondary dengue infection. I then add to these models a quantitative measure of disease severity that assumes immunopathology, and analyze the effectiveness of virological indicators of disease severity. In the third chapter of my dissertation, I then statistically fit these mathematical models to viral load data from dengue patients to understand the mechanisms that drive variation in viral load. I specifically consider the roles that immune status, clinical disease manifestation, and serotype may play in explaining the viral load variation observed across the patients. With this analysis, I show that there is statistical support for the theory of antibody-dependent enhancement in the development of severe disease in secondary dengue infections and that there is statistical support for serotype-specific differences in viral infectivity rates, with infectivity rates of DENV-2 and DENV-3 exceeding those of DENV-1. In the fourth chapter of my dissertation, I integrate these within-host models with a vector-borne epidemiological model to understand the potential for virulence evolution in dengue. Critically, I show that dengue is expected to evolve towards intermediate virulence, and that the optimal virulence of the virus depends strongly on the number of serotypes that co-circulate. Together, these dissertation chapters show that dengue viral load dynamics provide insight into the within-host mechanisms driving differences in dengue disease patterns and that these mechanisms have important implications for dengue virulence evolution.
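
As a hedged illustration of the within-host modeling approach described, the sketch below integrates a minimal target-cell-limited model with a simple innate-response term using SciPy. The equations, parameter values, and initial conditions are generic assumptions from the within-host modeling literature, not the dissertation's fitted primary/secondary infection models.

```python
import numpy as np
from scipy.integrate import solve_ivp

def dengue_within_host(t, y, beta, k, delta, p, c, q):
    """Minimal target-cell-limited model with an innate (interferon-like) response.

    T: uninfected target cells, I: infected cells, V: free virus, F: innate response."""
    T, I, V, F = y
    dT = -beta * T * V
    dI = beta * T * V - delta * I - k * F * I       # innate response clears infected cells
    dV = p * I - c * V
    dF = q * I - 0.5 * F                            # response driven by infected cells
    return [dT, dI, dV, dF]

# Illustrative parameters (per day) and initial conditions
params = dict(beta=3e-9, k=1e-3, delta=0.5, p=1e4, c=5.0, q=1.0)
y0 = [1e7, 0.0, 10.0, 0.0]                          # cells/mL, cells/mL, copies/mL, a.u.
sol = solve_ivp(dengue_within_host, (0, 14), y0, args=tuple(params.values()),
                dense_output=True, rtol=1e-8)
t = np.linspace(0, 14, 141)
V = sol.sol(t)[2]
print(f"Peak viral load ~ 10^{np.log10(V.max()):.1f} copies/mL on day {t[V.argmax()]:.1f}")
```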

Relevance:

60.00%

Publisher:

Abstract:

The work presented in this dissertation is focused on applying engineering methods to develop and explore probabilistic survival models for the prediction of decompression sickness in U.S. Navy divers. Mathematical modeling, computational model development, and numerical optimization techniques were employed to formulate and evaluate the predictive quality of models fitted to empirical data. In Chapters 1 and 2 we present general background information relevant to the development of probabilistic models applied to predicting the incidence of decompression sickness. The remainder of the dissertation introduces techniques developed in an effort to improve the predictive quality of probabilistic decompression models and to reduce the difficulty of model parameter optimization.

The first project explored seventeen variations of the hazard function using a well-perfused parallel compartment model. Models were parametrically optimized using the maximum likelihood technique. Model performance was evaluated using both classical statistical methods and model selection techniques based on information theory. Optimized model parameters were overall similar to those previously published. The results favored a novel hazard function definition that included both ambient pressure scaling and individually fitted compartment exponent scaling terms.
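
A minimal sketch of the maximum likelihood machinery behind such probabilistic models is shown below: a single "gain" parameter scales a dose-like exposure covariate into an integrated hazard, and the Bernoulli likelihood of the observed DCS outcomes is maximized. The hazard form, the covariate, and the simulated dive data are hypothetical stand-ins, not the dissertation's seventeen hazard-function variants.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(gain, dose, outcome):
    """Negative log-likelihood of a simple survival model in which the
    integrated hazard is proportional to a dose-like exposure covariate:
        P(DCS) = 1 - exp(-gain * dose)."""
    p_dcs = 1.0 - np.exp(-gain * dose)
    p_dcs = np.clip(p_dcs, 1e-12, 1 - 1e-12)            # numerical safety
    return -np.sum(outcome * np.log(p_dcs) + (1 - outcome) * np.log(1 - p_dcs))

# Hypothetical dive trial data: exposure "dose" and observed DCS outcome (0/1)
rng = np.random.default_rng(4)
dose = rng.uniform(0.1, 3.0, size=500)
true_gain = 0.05
outcome = rng.random(500) < (1.0 - np.exp(-true_gain * dose))

fit = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 5.0), args=(dose, outcome),
                      method="bounded")
print(f"Fitted gain: {fit.x:.3f} (true value {true_gain})")
```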

We developed ten pharmacokinetic compartmental models that included explicit delay mechanics to determine whether predictive quality could be improved through the inclusion of material transfer lags. A fitted discrete delay parameter augmented the inflow to the compartment systems from the environment. Based on the observation that, for many of our models, symptoms are often reported after risk accumulation begins, we hypothesized that the inclusion of delays might improve the correlation between model predictions and observed data. Model selection techniques identified two models as having the best overall performance, but comparison against the best-performing no-delay model, and model selection using our best identified no-delay pharmacokinetic model, both indicated that the delay mechanism was not statistically justified and did not substantially improve model predictions.

Our final investigation explored parameter bounding techniques to identify parameter regions in which statistical model failure cannot occur. Statistical model failure occurs when a model predicts zero probability of a diver experiencing decompression sickness for an exposure that is known to produce symptoms. Using a metric related to the instantaneous risk, we successfully identified regions where model failure will not occur and located the boundaries of those regions with a root-bounding technique. Several models are used to demonstrate the techniques, which may be employed to reduce the difficulty of model optimization in future investigations.
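
A hedged sketch of the root-bounding idea: define a scalar metric that is negative where the model assigns essentially zero risk to an exposure known to produce symptoms (statistical model failure) and positive otherwise, then bracket the region boundary along one parameter axis with a root finder. The metric, threshold, and exposure value below are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq

def failure_metric(gain, dose_known_dcs=0.8, threshold=1e-3):
    """Hypothetical failure metric: positive when the model assigns a
    non-negligible DCS probability to an exposure known to produce symptoms,
    negative (statistical model failure) otherwise."""
    p_dcs = 1.0 - np.exp(-gain * dose_known_dcs)
    return p_dcs - threshold

# Locate the boundary of the admissible region along the 'gain' axis
boundary = brentq(failure_metric, 1e-9, 10.0)
print(f"Statistical model failure occurs for gain below ~{boundary:.2e}")
```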

Relevance:

60.00%

Publisher:

Abstract:

MACHADO, Antônio V. et al. Estudio del Secado de Anacardo (Anacardium occidentale L.) mediante Secador Solar de Radiación Directa. Información Tecnológica, v. 21, n. 1, p. 31-37, 2010.

Relevance:

60.00%

Publisher:

Abstract:

Carried out under joint supervision (cotutelle) with the École normale supérieure de Cachan – Université Paris-Saclay.

Relevance:

60.00%

Publisher:

Abstract:

In this work, we perform a first approach to emotion recognition from single-channel EEG signals recorded from four (4) mother-child dyads in a developmental psychology experiment. The single-channel EEG signals are analyzed and processed using several window sizes, and a statistical analysis is performed over features in the time and frequency domains. Finally, a neural network achieved an average classification accuracy of 99% for two emotional states, happiness and sadness.
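
A minimal sketch of the kind of pipeline described, assuming a 128 Hz single-channel signal: fixed-length windows, simple time-domain statistics plus Welch band powers as features, and a small neural network classifier. The sampling rate, window length, feature set, and synthetic "happy"/"sad" recordings are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

FS = 128  # assumed sampling rate (Hz)

def window_features(signal, fs=FS, win_s=2.0):
    """Split a 1-D EEG signal into windows and extract simple features:
    mean, std, and theta/alpha/beta band powers per window."""
    step = int(win_s * fs)
    feats = []
    for start in range(0, len(signal) - step + 1, step):
        w = signal[start:start + step]
        f, pxx = welch(w, fs=fs, nperseg=min(256, len(w)))
        bands = [(4, 8), (8, 13), (13, 30)]
        powers = [pxx[(f >= lo) & (f < hi)].sum() for lo, hi in bands]
        feats.append([w.mean(), w.std()] + powers)
    return np.array(feats)

# Hypothetical data: two recordings standing in for "happy" / "sad" segments
rng = np.random.default_rng(5)
happy = rng.normal(0, 1.0, 60 * FS)
sad = rng.normal(0, 1.5, 60 * FS)          # different variance, just for illustration
X = np.vstack([window_features(happy), window_features(sad)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print(f"Training accuracy: {clf.score(X, y):.2f}")
```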

Relevance:

60.00%

Publisher:

Abstract:

A simple but efficient voice activity detector based on the Hilbert transform and a dynamic threshold is presented for use in the pre-processing of audio signals. The algorithm that defines the dynamic threshold is a modification of a convex combination found in the literature. This scheme allows the detection of prosodic and silence segments in speech under non-ideal conditions such as spectrally overlapped noise. The present work shows preliminary results over a database built from political speeches. The tests were performed by adding artificial noise, in addition to natural noises, to the audio signals, and several algorithms are compared. In future work, the results will be extrapolated to the field of adaptive filtering of monophonic signals and the analysis of speech pathologies.
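
A minimal sketch of a Hilbert-envelope voice activity detector whose threshold is a convex combination of slowly tracked noise-floor and speech-level estimates; the mixing weight, smoothing constants, and test signal are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np
from scipy.signal import hilbert

def vad_hilbert(x, fs, frame_ms=20, alpha=0.7):
    """Frame-wise voice activity decisions from the Hilbert envelope.

    The threshold is a convex combination of a slow noise-floor estimate and
    a slow speech-level estimate: thr = alpha*noise + (1-alpha)*speech."""
    envelope = np.abs(hilbert(x))
    frame = int(fs * frame_ms / 1000)
    n_frames = len(x) // frame
    energy = envelope[:n_frames * frame].reshape(n_frames, frame).mean(axis=1)

    noise, speech = energy.min(), energy.max()
    decisions = np.zeros(n_frames, dtype=bool)
    for i, e in enumerate(energy):
        thr = alpha * noise + (1 - alpha) * speech
        decisions[i] = e > thr
        if decisions[i]:
            speech = 0.95 * speech + 0.05 * e    # slowly track the speech level
        else:
            noise = 0.95 * noise + 0.05 * e      # slowly track the noise floor
    return decisions

# Hypothetical test: 1 s of noise followed by 1 s of a louder "voiced" tone
fs = 8000
t = np.arange(fs) / fs
x = np.concatenate([0.05 * np.random.randn(fs),
                    np.sin(2 * np.pi * 150 * t) + 0.05 * np.random.randn(fs)])
print(f"Frames flagged as speech: {vad_hilbert(x, fs).sum()} of {2 * fs // 160}")
```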

Relevance:

60.00%

Publisher:

Abstract:

We propose a study of the mathematical properties of voice as an audio signal. This work includes signals in which the channel conditions are not ideal for emotion recognition. Multiresolution analysis (the discrete wavelet transform) was performed using the Daubechies wavelet family (Db1-Haar, Db6, Db8, Db10), allowing the decomposition of the initial audio signal into sets of coefficients from which a set of features was extracted and analyzed statistically in order to differentiate emotional states. ANNs proved to be a system that allows an appropriate classification of such states. This study shows that the features extracted using wavelet decomposition are sufficient to analyze and extract the emotional content of audio signals, achieving a high accuracy rate in the classification of emotional states without the need for other classical time-frequency features. Accordingly, this paper seeks to characterize mathematically the six basic emotions in humans (boredom, disgust, happiness, anxiety, anger, and sadness), plus neutrality, for a total of seven states to identify.
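
A short sketch of the wavelet feature extraction step using PyWavelets: a Daubechies decomposition of the audio signal followed by simple statistics of each coefficient set. The decomposition level, the chosen statistics, and the synthetic test signal are assumptions.

```python
import numpy as np
import pywt

def wavelet_features(signal, wavelet="db6", level=4):
    """Decompose a 1-D audio signal with a Daubechies wavelet and return
    simple statistics (mean, std, energy) of each coefficient set."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:                      # [approximation, detail_L, ..., detail_1]
        feats.extend([c.mean(), c.std(), np.sum(c ** 2)])
    return np.array(feats)

# Hypothetical usage on a short synthetic "utterance"
fs = 16000
t = np.arange(fs) / fs
utterance = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)    # decaying tone
print(f"Feature vector length: {wavelet_features(utterance).size}")  # 3 stats x 5 coefficient sets
```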

Relevance:

60.00%

Publisher:

Abstract:

We propose a novel analysis alternative based on two Fourier transforms for emotion recognition from speech. Fourier analysis allows different signals to be displayed and synthesized in terms of power spectral density distributions. A spectrogram of the voice signal is obtained by performing a short-time Fourier transform with Gaussian windows; this spectrogram portrays frequency-related features such as vocal tract resonances and quasi-periodic excitations during voiced sounds. Emotions induce such characteristics in speech, which become apparent in the spectrogram's time-frequency distribution. The signal's time-frequency representation from the spectrogram is then treated as an image and processed through a 2-dimensional Fourier transform in order to perform a spatial Fourier analysis of it. Finally, features related to emotions in voiced speech are extracted and presented.
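
A minimal sketch of the two-transform idea, assuming a Gaussian-windowed STFT for the spectrogram and a 2D FFT of its log-magnitude treated as an image; the window length, overlap, and synthetic voiced-like signal are illustrative choices.

```python
import numpy as np
from scipy.signal import stft, windows

def two_fourier_features(x, fs, win_len=512, overlap=384):
    """Spectrogram via Gaussian-windowed STFT, followed by a 2D FFT of the
    log-magnitude spectrogram treated as an image."""
    win = windows.gaussian(win_len, std=win_len / 6)
    f, t, Zxx = stft(x, fs=fs, window=win, nperseg=win_len, noverlap=overlap)
    spectrogram = np.log1p(np.abs(Zxx))                 # "image" of the voice signal
    spatial_spectrum = np.fft.fftshift(np.fft.fft2(spectrogram))
    return spectrogram, np.abs(spatial_spectrum)

# Hypothetical usage on a synthetic voiced-like signal (150 Hz fundamental + harmonics)
fs = 16000
t = np.arange(fs) / fs
voice = sum(np.sin(2 * np.pi * 150 * k * t) / k for k in range(1, 6))
spec, spatial = two_fourier_features(voice, fs)
print(f"Spectrogram shape: {spec.shape}, 2D-FFT shape: {spatial.shape}")
```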

Relevance:

60.00%

Publisher:

Abstract:

MACHADO, Antônio V. et al. Estudio del Secado de Anacardo (Anacardium occidentale L.) mediante Secador Solar de Radiación Directa. Información Tecnológica, v. 21, n. 1, p. 31-37, 2010.

Relevance:

60.00%

Publisher:

Abstract:

The production of proteins by microorganisms has become a very important technique for obtaining compounds of interest to the pharmaceutical and food industries. The crude extracts from which the proteins are obtained are generally complex, containing suspended solids and cells. For industrial use of these compounds, it is usually necessary to obtain them in pure form in order to guarantee their activity without interference. A method that has received particular attention over the past 10 years is expanded bed ion exchange chromatography, which combines the clarification, concentration, and purification of the target molecule in a single step, thereby reducing operating time as well as the equipment costs of carrying out each step separately. In parallel, the last decade has also been marked by studies addressing the mathematical modeling of protein adsorption onto resins. Besides providing important information about the adsorption process, this technique is also of great value for optimizing the adsorption step, since it allows simulations to be performed without spending time and material on bench experiments, especially when scale-up is intended. Accordingly, the objective of this thesis was to model and simulate the adsorption of bioproducts from a crude broth in the presence of cells, using inulinase and C-phycocyanin as case studies, and to purify C-phycocyanin using an expanded bed ion exchange resin. The thesis was divided into four papers. The first paper focused on the enzyme inulinase; the optimization of its adsorption onto the Streamline SP ion exchange resin in an expanded bed was carried out through mathematical modeling and simulation of breakthrough curves at three different expansion degrees (ED). The highest efficiencies were observed when higher inulinase concentrations (120 to 170 U/mL) and bed heights between 20 and 30 cm were used. An expansion degree of 3.0 was considered the best, since productivity was considerably higher. The second paper presents a study of the adsorption conditions of C-phycocyanin on an ion exchange resin, in which the effects of pH and temperature on adsorption were evaluated and the adsorption isotherm was then constructed. The adsorption isotherm of C-phycocyanin on Streamline Q XL resin, obtained at pH 7.5 and 25 °C (ambient), showed a good fit to the Langmuir model (R = 0.98), and the values of qm (maximum adsorption capacity) and Kd (equilibrium constant) estimated from the linearized isotherm equation were 26.7 mg/mL and 0.067 mg/mL, respectively. The third paper addresses the modeling of the adsorption of unclarified C-phycocyanin extract on Streamline Q XL ion exchange resin in an expanded bed column. Three breakthrough curves were obtained at different expansion degrees (2.0, 2.5, and 3.0). The most advantageous adsorption condition for the unclarified crude C-phycocyanin extract, because it gave the highest adsorption capacity, was feeding the extract until 10% resin saturation, at an expansion degree of 2.0, with an initial bed height of 30 cm. The final paper of this thesis dealt with the purification of C-phycocyanin by expanded bed ion exchange chromatography. Since adsorption had already been studied in the second paper, the fourth paper focused on optimizing the elution conditions in order to obtain a product with maximum purity and recovery.
Purity is given by the ratio of the absorbance at 620 nm to the absorbance at 280 nm, and C-phycocyanin with a purity above 0.7 can be used as a food colorant. Evaluation of the contour plots indicated that the working range should be a pH around 6.5 with elution volumes close to 150 mL. These conditions, combined with a pre-elution step with 0.1 M NaCl, yielded C-phycocyanin with a purity of 2.9, a concentration of 3 mg/mL, and a recovery of around 70%.
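
For reference, the Langmuir isotherm reported in the second paper can be written and evaluated directly with the stated parameters (qm = 26.7 mg/mL, Kd = 0.067 mg/mL); the equilibrium concentrations used below are arbitrary illustration points.

```python
import numpy as np

def langmuir(c_eq, qm=26.7, kd=0.067):
    """Langmuir isotherm: adsorbed amount q* (mg of protein per mL of resin)
    as a function of the equilibrium liquid-phase concentration c_eq (mg/mL).
    qm and kd are the values reported for C-phycocyanin on Streamline Q XL."""
    return qm * c_eq / (kd + c_eq)

# Illustrative equilibrium concentrations (mg/mL)
for c in [0.01, 0.067, 0.5, 2.0]:
    print(f"c_eq = {c:5.3f} mg/mL  ->  q* = {langmuir(c):5.1f} mg/mL")
```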

Relevance:

60.00%

Publisher:

Abstract:

The purpose of this research is to study the sedimentation mechanism, by means of mathematical modeling, in access channels affected by tidal currents. The most important factor in understanding the sedimentation process in any water environment is the flow pattern of that environment. The flow pattern is affected by the geometry and shape of the environment as well as by the types of forcing acting on the area. The area under study in this thesis is Bushehr Gulf and its access channels (inner and outer). The study uses hydrodynamic modeling on unstructured, non-overlapping triangular grids with a finite volume method, at two spatial scales, large (200 m to 7.5 km) and small (50 m to 7.5 km), and over two time spans, 15 days and 3.5 days, to obtain the flow patterns. The 2D governing equations used in the model are the depth-averaged shallow water equations. Turbulence modeling is required to calculate the eddy viscosity coefficient, using the Smagorinsky model with a coefficient of 0.3. In addition to the flow modeling at the two scales, the data from the 3.5-day tidal current modeling were used to study sediment equilibrium in the area and in the channels. The model is able to cover the areas undergoing deposition and erosion and to identify the effects of the tidal current on these processes. The required input data for the models, such as current and sediment data, were obtained from measurements in Bushehr Gulf and the access channels carried out under the PSO (Port and Shipping Organization) project titled "The Sedimentation Modeling in Bushehr Port" in 1379. Hydrographic data were obtained from Admiralty charts (2003) and the Cartography Organization (1378, 1379). The results of the modeling include cross-shore currents along the northern and northwestern coasts of Bushehr Gulf during neap tide, and similar currents along the northern and northeastern coasts of the Gulf during spring tide. These currents wash fine particles (silt, clay, and mud) from the coastal bed, which generally consists of mud and clay with some silt, and carry them away. In this regard, the role of sediments from the islands of this area, including the islands formed from deposits of dredged material, should not be ignored. The 3.5-day modeling shows that cross-channel currents create settlement areas in the inner and outer channels over the tidal period. During neap tide, the current enters from the upstream bend of the two channels and the outer channel, and then crosses the channel obliquely in some places along the outer channel. Oblique, and in places nearly perpendicular, currents descending the upper slope of the inner channel between buoys No. 15 and No. 18 interact with the currents flowing parallel to the channel and create secondary oblique currents; these exit the channel as down-slope currents, causing deposition of sediments as well as the settling of the suspended sediments they carry. In addition, in the outer channel the speed of the channel-parallel currents increases in the bend of the channel, which is naturally deeper, leading to erosion and suspension of sediments in this area. The speed of the current carrying these suspended sediments, which flows parallel to the channel axis, decreases as it passes through the shallower part of the channel, between buoys No. 7 and 8 and buoys No. 5 and 6.
There the suspended sediment settles, and this process makes these places even shallower. Furthermore, the passage of the oblique upstream current leads to settlement of sediments on the upper slope and further contributes to the decrease in depth at these locations. In contrast, on the down-slope side of the channel, the current and sediment modeling results indicate that the current speed increases, and the currents suspend the particles of the down-slope channel and carry them away. Thus, sediments have settled over a vast area downstream of both channels. At the end of neap tide, this process, together with circulations in the area, produces eddies that cause sedimentation there. During spring tide, parts of this active sedimentation zone re-enter both channels in a reverse process. The processes described above, and the locations of sedimentation and erosion in the inner and outer channels, are validated by the sediment equilibrium modeling. The model is able to estimate the suspended load, bed load, and boundary layer thickness at each point of both channels and throughout the modeled area.
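
A short sketch of the Smagorinsky closure mentioned above, computing a horizontal eddy viscosity from depth-averaged velocity gradients with a coefficient of 0.3 on a regular grid; the grid spacing and the synthetic tidal-like velocity field are illustrative assumptions (the thesis model uses an unstructured triangular mesh).

```python
import numpy as np

def smagorinsky_viscosity(u, v, dx, dy, cs=0.3):
    """Horizontal eddy viscosity from the Smagorinsky closure for a
    depth-averaged 2D velocity field (u, v) on a regular grid:
        nu_t = (cs*delta)^2 * sqrt((du/dx)^2 + (dv/dy)^2 + 0.5*(du/dy + dv/dx)^2)"""
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    strain = np.sqrt(du_dx**2 + dv_dy**2 + 0.5 * (du_dy + dv_dx)**2)
    delta = np.sqrt(dx * dy)                      # characteristic grid length
    return (cs * delta)**2 * strain

# Illustrative tidal-like velocity field on a 200 m grid
x = np.linspace(0, 10_000, 51)
y = np.linspace(0, 10_000, 51)
X, Y = np.meshgrid(x, y)
u = 0.8 * np.sin(2 * np.pi * Y / 10_000)          # m/s, sheared along-channel flow
v = 0.1 * np.cos(2 * np.pi * X / 10_000)
nu_t = smagorinsky_viscosity(u, v, dx=200.0, dy=200.0)
print(f"Eddy viscosity range: {nu_t.min():.2f} to {nu_t.max():.2f} m^2/s")
```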

Relevance:

60.00%

Publisher:

Abstract:

Master's dissertation, Universidade de Brasília, Instituto de Ciências Humanas, Departamento de Geografia, 2015.