948 results for: cosmological parameters from CMBR
Abstract:
In this thesis, research on tsunami remote sensing using Global Navigation Satellite System-Reflectometry (GNSS-R) delay-Doppler maps (DDMs) is presented. Firstly, a process for simulating GNSS-R DDMs of a tsunami-dominated sea surface is described. In this method, the bistatic scattering Zavorotny-Voronovich (Z-V) model, the sea surface mean square slope model of Cox and Munk, and the tsunami-induced wind perturbation model are employed. The feasibility of the Cox and Munk model under a tsunami scenario is examined by comparing the Cox and Munk model-based scattering coefficient with the Jason-1 measurement. A good consistency between these two results is obtained, with a correlation coefficient of 0.93. After confirming the applicability of the Cox and Munk model for a tsunami-dominated sea, this work provides simulations of the scattering coefficient distribution and the corresponding DDMs of a fixed region of interest before and during the tsunami. Furthermore, by subtracting the simulation results that are free of tsunami from those with the presence of a tsunami, the tsunami-induced variations in scattering coefficients and DDMs can be clearly observed.

Secondly, a scheme to detect tsunamis and estimate tsunami parameters from such tsunami-dominant sea surface DDMs is developed. As a first step, a procedure to determine tsunami-induced sea surface height anomalies (SSHAs) from DDMs is demonstrated and a tsunami detection precept is proposed. Subsequently, the tsunami parameters (wave amplitude, direction and speed of propagation, wavelength, and the tsunami source location) are estimated based upon the detected tsunami-induced SSHAs. In application, the sea surface scattering coefficients are unambiguously retrieved by employing the spatial integration approach (SIA) and the dual-antenna technique. Next, the effective wind speed distribution can be restored from the scattering coefficients. Assuming all DDMs are of a tsunami-dominated sea surface, the tsunami-induced SSHAs can be derived with knowledge of the background wind speed distribution. In addition, the SSHA distribution resulting from the tsunami-free DDM (which is supposed to be zero) is considered as an error map introduced during the overall retrieval stage and is utilized to prevent such errors from influencing subsequent SSHA results. In particular, a tsunami detection procedure is conducted to judge whether the SSHAs are truly tsunami-induced through a fitting process, which makes it possible to reduce false alarms. After this step, tsunami parameter estimation proceeds based upon the fitted results of the preceding tsunami detection procedure. Moreover, an additional method is proposed for estimating tsunami propagation velocity, which is believed to be more desirable in real-world scenarios. The above-mentioned tsunami-dominated sea surface DDM simulation, tsunami detection precept, and parameter estimation have been tested with simulated data based on the 2004 Sumatra-Andaman tsunami event.
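For context (this relation is not quoted in the abstract itself), the Cox and Munk (1954) clean-surface model referenced above is commonly written as a linear dependence of the total sea surface mean square slope on the 10 m wind speed,

    \sigma_{mss}^{2} \approx 0.003 + 5.12 \times 10^{-3}\, U_{10}, \quad U_{10}\ \text{in m/s},

so a tsunami-induced perturbation of the local wind field changes the surface slope statistics and, through the Z-V model, the simulated scattering coefficients and DDMs.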
Abstract:
Due to the huge popularity of portable terminals based on Wireless LANs and the increasing demand for multimedia services from these terminals, earlier structures and protocols are insufficient to meet the requirements of emerging networks and communications. Most research in this field aims to find more efficient ways to optimize the quality of wireless LANs with respect to the requirements of multimedia services. Our work investigates the effects of modulation modes at the physical layer, retry limits at the MAC layer, and packet sizes at the application layer on the quality of media packet transmission. The interrelation among these parameters, from which a cross-layer idea can be extracted, is discussed as well. We show how these parameters from different layers jointly contribute to the performance of service delivery by the network. The results obtained could form a basis for suggesting independent optimization in each layer (an adaptive approach) or optimization of a set of parameters from different layers (a cross-layer approach). Our simulation model is implemented in the NS-2 simulator. Throughput and delay (latency) of packet transmission are the quantities of our assessment. © 2010 IEEE.
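As a minimal sketch of the two assessment quantities named above, the following Python snippet computes throughput and mean end-to-end delay from a simplified packet log. The column layout (event, time, packet id, size in bytes) is an assumption for illustration, not the actual NS-2 trace format.

```python
# Hedged sketch: throughput and mean end-to-end delay from a simplified packet log.
# The (event, time, pkt_id, size) layout is hypothetical, not the NS-2 trace format.
def throughput_and_delay(records):
    """records: iterable of (event, time_s, pkt_id, size_bytes); event is 's' or 'r'."""
    sent = {}                      # pkt_id -> send time
    delays = []                    # per-packet end-to-end delay
    bytes_rx = 0
    t_min, t_max = float("inf"), float("-inf")
    for event, t, pid, size in records:
        t_min, t_max = min(t_min, t), max(t_max, t)
        if event == "s" and pid not in sent:
            sent[pid] = t
        elif event == "r":
            bytes_rx += size
            if pid in sent:
                delays.append(t - sent[pid])
    duration = max(t_max - t_min, 1e-9)
    throughput_bps = 8 * bytes_rx / duration
    mean_delay = sum(delays) / len(delays) if delays else float("nan")
    return throughput_bps, mean_delay

# Toy usage with an invented log:
log = [("s", 0.00, 1, 1000), ("r", 0.02, 1, 1000),
       ("s", 0.01, 2, 1000), ("r", 0.05, 2, 1000)]
print(throughput_and_delay(log))
```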
Abstract:
The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant amount of tumor cells while sparing the surrounding healthy tissues/organs. Precise delineation of treatment and avoidance volumes is the key to precision radiation therapy. In recent years, considerable clinical and research efforts have been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft tissue contrast and functional imaging capability. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI's clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI imaging implementation and the need for novel DCE-MRI data analysis methods to extract richer functional heterogeneity information.
This study aims at improving current DCE-MRI techniques and developing new DCE-MRI analysis methods specifically for radiotherapy assessment. Thus, the study is naturally divided into two parts. The first part focuses on DCE-MRI temporal resolution as one of the key DCE-MRI technical factors, and improvements to DCE-MRI temporal resolution are proposed; the second part explores the potential value of image heterogeneity analysis and multiple PK model combination for therapeutic response assessment, and several novel DCE-MRI data analysis methods are developed.
I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel MR image iterative reconstruction algorithm was studied for DCE-MRI reconstruction. This algorithm was built on the recently developed compressed sensing (CS) theory. By utilizing a limited k-space acquisition with shorter imaging time, images can be reconstructed in an iterative fashion under the regularization of a newly proposed total generalized variation (TGV) penalty term. In a retrospective, IRB-approved study of brain radiosurgery patient DCE-MRI scans, the clinically obtained image data were selected as reference data, and simulated accelerated k-space acquisitions were generated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a special angular distribution was adopted to sample each slice of the full k-space; 2) a Cartesian random sampling grid series with spatiotemporal constraints from adjacent frames was adopted to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated from the undersampled data and from the fully sampled data, respectively. Multiple quantitative measurements and statistical studies were performed to evaluate the accuracy of PK maps generated from the undersampled data in reference to the PK maps generated from the fully sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from the DCE images reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully sampled data sets. DCE-MRI acceleration using the investigated image reconstruction method therefore appears feasible and promising.
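To make the retrospective undersampling step concrete, the sketch below generates a Cartesian random phase-encode mask and a zero-filled baseline reconstruction. It illustrates only the undersampling idea; the TGV-regularized iterative reconstruction described above is not reproduced here, and the phantom, acceleration factor, and center fraction are illustrative assumptions.

```python
# Hedged sketch: retrospective Cartesian random undersampling of k-space with a
# zero-filled inverse FFT baseline (not the thesis's TGV iterative reconstruction).
import numpy as np

def undersample_kspace(image, acceleration=4, center_fraction=0.08, seed=0):
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = kspace.shape
    mask = rng.random(nx) < (1.0 / acceleration)      # random phase-encode lines
    n_center = int(center_fraction * nx)              # always keep the central lines
    c0 = nx // 2 - n_center // 2
    mask[c0:c0 + n_center] = True
    kspace_us = kspace * mask[np.newaxis, :]
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace_us)))
    return kspace_us, zero_filled, mask

phantom = np.zeros((128, 128)); phantom[32:96, 48:80] = 1.0   # toy image
_, recon, mask = undersample_kspace(phantom, acceleration=4)
print("sampled fraction of phase-encode lines:", mask.mean())
```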
Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for PK parameters with better calculation accuracy and efficiency. This method is based on a derivative-based reformulation of the commonly used Tofts PK model, which is usually presented as an integral expression. The method also includes an advanced Kolmogorov-Zurbenko (KZ) filter to remove the potential noise effect in the data and solves for the PK parameters as a linear problem in matrix form. In a computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and data noise levels. Results showed that at both high temporal resolutions (<1 s) and a clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current calculation methods at clinically relevant noise levels; at high temporal resolutions, the calculation efficiency of the new method was superior to current methods by roughly two orders of magnitude (10²). In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that this new method can be used for accurate and efficient PK model fitting for high temporal resolution DCE-MRI.
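For background on linearized Tofts fitting in general, the following sketch shows a standard linear least-squares formulation (Murase-style) that solves for Ktrans and kep in matrix form; it is not claimed to be the KZ-filtered method developed in the thesis, and the arterial input function and parameter values are synthetic.

```python
# Hedged sketch: a standard linear least-squares Tofts fit, shown for context only.
# Uses C_t(t) = Ktrans * int_0^t Cp du - kep * int_0^t Ct du (Murase-style).
import numpy as np

def fit_tofts_linear(t, ct, cp):
    """Estimate Ktrans and kep [1/min] from a tissue curve ct and an AIF cp."""
    int_cp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    int_ct = np.concatenate(([0.0], np.cumsum(0.5 * (ct[1:] + ct[:-1]) * np.diff(t))))
    A = np.column_stack([int_cp, -int_ct])
    ktrans, kep = np.linalg.lstsq(A, ct, rcond=None)[0]
    return ktrans, kep

# Synthetic check against known parameters:
t = np.linspace(0, 5, 300)                       # minutes, ~1 s sampling
cp = 5.0 * t * np.exp(-2.0 * t)                  # toy arterial input function
ktrans_true, kep_true = 0.25, 0.6
ct = ktrans_true * np.array([np.trapz(cp[:i + 1] * np.exp(-kep_true * (t[i] - t[:i + 1])), t[:i + 1])
                             for i in range(len(t))])
print(fit_tofts_linear(t, ct, cp))               # should be close to (0.25, 0.6)
```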
II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part aims at methodology developments along two approaches. The first is to develop model-free analysis methods for DCE-MRI functional heterogeneity evaluation. This approach is inspired by the rationale that radiotherapy-induced functional change can be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, randomly assigned treatment/control groups received multiple-fraction treatments with one pre-treatment and multiple post-treatment high spatiotemporal resolution DCE-MRI scans. In the post-treatment scan two weeks after the start of treatment, the investigated Rényi dimensions of the classic PK rate constant map demonstrated significant differences between the treatment and control groups; when Rényi dimensions were adopted for treatment/control group classification, the achieved accuracy was higher than that obtained using conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed. It is intended to address the lack of temporal information and the poor calculation efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM had an overall better performance than the corresponding curves derived from current GLCOM techniques in treatment/control separation and classification. The second developed method is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity during contrast agent uptake on DCE images. In the small-animal experiment mentioned before, the selected parameters from dynamic FSD analysis showed significant differences between treatment and control groups as early as after one treatment fraction; in contrast, metrics from conventional PK analysis showed significant differences only after three treatment fractions. When using dynamic FSD parameters, the treatment/control group classification after the first treatment fraction was improved compared with using conventional PK statistics. These results suggest the promising application of this novel method for capturing early therapeutic response.
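As background for the fractal-dimension-based heterogeneity analyses mentioned above, the snippet below estimates the classic box-counting dimension of a 2-D binary map (Rényi dimensions generalize this idea). It is not the GLLPM or dynamic FSD method itself, and the test mask is invented.

```python
# Hedged sketch: box-counting estimate of fractal dimension for a 2-D binary map.
import numpy as np

def box_counting_dimension(binary_map, box_sizes=(2, 4, 8, 16, 32)):
    counts = []
    n = binary_map.shape[0]
    for s in box_sizes:
        m = n // s
        blocks = binary_map[:m * s, :m * s].reshape(m, s, m, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # boxes containing structure
    # Slope of log N(s) versus log(1/s) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
mask = rng.random((128, 128)) < 0.3
print(box_counting_dimension(mask))   # near 2 for a dense random mask
```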
The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or its alternative versions are widely adopted for DCE-MRI analysis as a gold-standard approach for therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is therefore intriguing to combine information from the SS model and from the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using PK parameter regional mean value comparison. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved superior to that from the Tofts model in treatment/control separation and classification. Furthermore, novel biomarkers were designed to integrate the PK rate constants from these two models. When evaluated in the biological subvolume, this biomarker was able to reflect significant treatment/control differences in both post-treatment evaluations. These results confirm the potential value of the SS model, as well as its combination with the Tofts model, for therapeutic response assessment.
In summary, this study addressed two problems of DCE-MRI application in radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high temporal resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit future routine clinical application of DCE-MRI in radiotherapy assessment.
Abstract:
Continuous variables are one of the major data types collected by survey organizations. They can be incomplete, in which case the data collectors need to fill in the missing values, or they can contain sensitive information that needs protection from re-identification. One approach to protecting continuous microdata is to sum the values within cells defined by different features. In this thesis, I present novel methods of multiple imputation (MI) that can be applied to impute missing values and to synthesize confidential values for continuous and magnitude data.
The first method is for limiting the disclosure risk of the continuous microdata whose marginal sums are fixed. The motivation for developing such a method comes from the magnitude tables of non-negative integer values in economic surveys. I present approaches based on a mixture of Poisson distributions to describe the multivariate distribution so that the marginals of the synthetic data are guaranteed to sum to the original totals. At the same time, I present methods for assessing disclosure risks in releasing such synthetic magnitude microdata. The illustration on a survey of manufacturing establishments shows that the disclosure risks are low while the information loss is acceptable.
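A well-known property motivating this kind of marginal-preserving synthesis is that independent Poisson counts conditioned on their total follow a multinomial distribution. The sketch below illustrates only that conditioning idea; the thesis's Poisson-mixture model and risk assessment are richer, and the cell means and total are hypothetical.

```python
# Hedged sketch: drawing synthetic cell counts that sum exactly to a fixed total,
# using the Poisson-conditioned-on-total (= multinomial) property. Illustrative only.
import numpy as np

def synthesize_fixed_total(cell_means, total, n_synth=5, seed=0):
    """cell_means: estimated Poisson means per cell; total: fixed marginal sum."""
    rng = np.random.default_rng(seed)
    p = np.asarray(cell_means, dtype=float)
    p /= p.sum()                                  # multinomial cell probabilities
    return rng.multinomial(total, p, size=n_synth)

means = [120.0, 45.0, 310.0, 25.0]                # hypothetical cell means
print(synthesize_fixed_total(means, total=500))   # each synthetic row sums to 500
```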
The second method is for releasing synthetic continuous microdata by a nonstandard MI method. Traditionally, MI fits a model on the confidential values and then generates multiple synthetic datasets from this model. Its disclosure risk tends to be high, especially when the original data contain extreme values. I present a nonstandard MI approach conditioned on protective intervals. Its basic idea is to estimate the model parameters from these intervals rather than from the confidential values. The encouraging results of simple simulation studies suggest the potential of this new approach in limiting the posterior disclosure risk.
The third method is for imputing missing values in continuous and categorical variables. It extends a hierarchically coupled mixture model with local dependence. The new method separates the variables into non-focused (e.g., almost fully observed) and focused (e.g., subject to substantial missingness) ones. The sub-model structure for the focused variables is more complex than that for the non-focused ones; their cluster indicators are linked together by tensor factorization, and the focused continuous variables depend locally on the non-focused values. The model properties suggest that moving strongly associated non-focused variables to the focused side can help improve estimation accuracy, which is examined in several simulation studies. This method is then applied to data from the American Community Survey.
Abstract:
Multi-frequency Eddy Current (EC) inspection with a transmit-receive probe (two horizontally offset coils) is used to monitor the Pressure Tube (PT) to Calandria Tube (CT) gap of CANDU® fuel channels. Accurate gap measurements are crucial to ensure fitness for service; however, variations in probe liftoff, PT electrical resistivity, and PT wall thickness can generate systematic measurement errors. Validated mathematical models of the EC probe are very useful for data interpretation and may improve the gap measurement under inspection conditions where these parameters vary. As a first step, exact solutions for the electromagnetic response of a transmit-receive coil pair situated above two parallel plates separated by an air gap were developed. This model was validated against experimental data with flat-plate samples. Finite element method models revealed that this geometrical approximation could not accurately match experimental data with real tubes, so analytical solutions for the probe in a double-walled pipe (the CANDU® fuel channel geometry) were generated using the Second-Order Vector Potential (SOVP) formalism. All electromagnetic coupling coefficients arising from the probe and the layered conductors were determined and substituted into Kirchhoff's circuit equations for the calculation of the pickup coil signal. The flat-plate model was used as the basis for an Inverse Algorithm (IA) to simultaneously extract the relevant experimental parameters from EC data. The IA was validated over a large range of second-layer plate resistivities (1.7 to 174 µΩ∙cm), plate wall thicknesses (~1 to 4.9 mm), probe liftoffs (~2 mm to 8 mm), and plate-to-plate gaps (~0 mm to 13 mm). The IA achieved a relative error of less than 6% for the extracted FP resistivity and an accuracy of ±0.1 mm for the LO measurement. The IA was able to achieve a plate gap measurement with an accuracy of better than ±0.7 mm over a ~2.4 mm to 7.5 mm probe liftoff range and ±0.3 mm at nominal liftoff (2.42±0.05 mm), providing confidence in the general validity of the algorithm. This demonstrates the potential of using an analytical model to extract variable parameters that may affect the gap measurement accuracy.
Abstract:
Concrete solar collectors offer a type of solar collector with structural, aesthetic and economic advantages over currently popular technologies. This study examines the influential parameters of concrete solar collectors. In addition to the external conditions, the performance of a concrete solar collector is influenced by the thermal properties of the concrete matrix, piping network and fluid. Geometric and fluid flow parameters also influence the performance of the concrete solar collector. A literature review of concrete solar collectors is conducted in order to define the benchmark parameters against which individual parameters are then compared. The numerical model consists of a 1D pipe flow network coupled with the heat transfer in a 3D concrete domain. This paper is concerned with the physical parameters that define the concrete solar collector, thus a constant surface temperature is used as the exposed surface boundary condition with all other surfaces being insulated. Results show that, of the parameters investigated, the pipe spacing (p_s), concrete conductivity (k_c), and pipe embedment depth (d_emb) are among the parameters with the greatest effect on the collector's performance. The optimum balance between these parameters is presented with respect to the thermal performance and discussed with reference to practical development issues.
Abstract:
Semiconductor lasers have the potential to address a number of critical applications in advanced telecommunications and signal processing. These include applications that require pulsed output that can be obtained from self-pulsing and mode-locked states of two-section devices with saturable absorption. Many modern applications place stringent performance requirements on the laser source, and a thorough understanding of the physical mechanisms underlying these pulsed modes of operation is therefore highly desirable. In this thesis, we present experimental measurements and numerical simulations of a variety of self-pulsation phenomena in two-section semiconductor lasers with saturable absorption. Our theoretical and numerical results will be based on rate equations for the field intensities and the carrier densities in the two sections of the device, and we establish typical parameter ranges and assess the level of agreement with experiment that can be expected from our models. For each of the physical examples that we consider, our model parameters are consistent with the physical net gain and absorption of the studied devices. Following our introductory chapter, the first system that we consider is a two-section Fabry-Pérot laser. This example serves to introduce our method for obtaining model parameters from the measured material dispersion, and it also allows us to present a detailed discussion of the bifurcation structure that governs the appearance of self-pulsations in two-section devices. In the following two chapters, we present two distinct examples of experimental measurements from dual-mode two-section devices. In each case we have found that single mode self-pulsations evolve into complex coupled dual-mode states following a characteristic series of bifurcations. We present optical and mode resolved power spectra as well as a series of characteristic intensity time traces illustrating this progression for each example. Using the results from our study of a two-section Fabry-Pérot device as a guide, we find physically appropriate model parameters that provide qualitative agreement with our experimental results. We highlight the role played by material dispersion and the underlying single mode self-pulsing orbits in determining the observed dynamics, and we use numerical continuation methods to provide a global picture of the governing bifurcation structure. In our concluding chapter we summarise our work, and we discuss how the presented results can inform the development of optimised mode-locked lasers for performance applications in integrated optics.
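To illustrate the class of rate-equation models referred to above, the sketch below integrates a generic Yamada-type model for a single-mode laser with a saturable absorber (gain, absorption, and intensity variables). It is not the dual-mode two-section model developed in the thesis, and the parameter values are illustrative only.

```python
# Hedged sketch: generic Yamada-type rate equations for a laser with saturable
# absorber, integrated with SciPy. Parameter values are illustrative, not fitted.
from scipy.integrate import solve_ivp

def yamada(t, y, A=7.5, B=5.8, a=1.8, gamma=0.04):
    G, Q, I = y                       # gain, absorption, normalized intensity
    dG = gamma * (A - G - G * I)      # gain-section carrier equation
    dQ = gamma * (B - Q - a * Q * I)  # absorber-section carrier equation
    dI = (G - Q - 1.0) * I            # photon (intensity) equation
    return [dG, dQ, dI]

sol = solve_ivp(yamada, (0.0, 2000.0), [5.0, 5.0, 1e-6], max_step=0.5)
# Intensity spikes in sol.y[2] indicate pulsing for suitable parameter choices.
print("peak intensity:", sol.y[2].max())
```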
Abstract:
Cranial cruciate ligament (CCL) deficiency is the leading cause of lameness affecting the stifle joints of large breed dogs, especially Labrador Retrievers. Although CCL disease has been studied extensively, its exact pathogenesis and the primary cause leading to CCL rupture remain controversial. However, weakening secondary to repetitive microtrauma is currently believed to cause the majority of CCL instabilities diagnosed in dogs. Techniques of gait analysis have become the most productive tools to investigate normal and pathological gait in human and veterinary subjects. The inverse dynamics analysis approach models the limb as a series of connected linkages and integrates morphometric data to yield information about the net joint moment, patterns of muscle power and joint reaction forces. The results of these studies have greatly advanced our understanding of the pathogenesis of joint diseases in humans. A muscular imbalance between the hamstring and quadriceps muscles has been suggested as a cause of anterior cruciate ligament rupture in female athletes. Based on these findings, neuromuscular training programs leading to a relative risk reduction of up to 80% have been designed. In spite of the cost and morbidity associated with CCL disease and its management, very few studies have focused on the inverse dynamics gait analysis of this condition in dogs. The general goals of this research were (1) to further define gait mechanics in Labrador Retrievers with and without CCL deficiency, (2) to identify individual dogs that are susceptible to CCL disease, and (3) to characterize their gait. The mass, location of the center of mass (COM), and mass moment of inertia of hind limb segments were calculated using a noninvasive method based on computerized tomography of normal and CCL-deficient Labrador Retrievers. Regression models were developed to determine predictive equations to estimate body segment parameters on the basis of simple morphometric measurements, providing a basis for nonterminal studies of inverse dynamics of the hind limbs in Labrador Retrievers. Kinematic, ground reaction force (GRF) and morphometric data were combined in an inverse dynamics approach to compute hock, stifle and hip net moments, powers and joint reaction forces (JRF) during trotting in normal, CCL-deficient or sound contralateral limbs. Reductions in joint moment, power, and loads observed in CCL-deficient limbs were interpreted as modifications adopted to reduce or avoid painful mobilization of the injured stifle joint. Lameness resulting from CCL disease affected predominantly the reaction forces during the braking phase and the extension during push-off. Kinetics also identified a greater joint moment and power of the contralateral limbs compared with normal limbs, particularly of the stifle extensor muscle group, which may correlate with the lameness observed, but also with the predisposition of contralateral limbs to CCL deficiency in dogs. For the first time, surface EMG patterns of major hind limb muscles during the trotting gait of healthy Labrador Retrievers were characterized and compared with kinetic and kinematic data of the stifle joint. The use of surface EMG highlighted the co-contraction patterns of the muscles around the stifle joint, which were documented during transition periods between flexion and extension of the joint, but also during the flexion observed in the weight bearing phase.
Identification of possible differences in EMG activation characteristics between healthy dogs and dogs with, or predisposed to, orthopedic and neurological disease may help in understanding the neuromuscular abnormalities and gait mechanics of such disorders in the future. Conformation parameters, obtained from femoral and tibial radiographs, hind limb CT images, and dual-energy X-ray absorptiometry, of hind limbs predisposed to CCL deficiency were compared with the conformation parameters of hind limbs at low risk. A combination of the tibial plateau angle and femoral anteversion angle measured on radiographs was determined to be optimal for discriminating predisposed and non-predisposed limbs for CCL disease in Labrador Retrievers using a receiver operating characteristic curve analysis. In the future, the tibial plateau angle (TPA) and femoral anteversion angle (FAA) may be used to screen dogs suspected of being susceptible to CCL disease. Last, kinematics and kinetics across the hock, stifle and hip joints in Labrador Retrievers presumed to be at low risk based on their radiographic TPA and FAA were compared to gait data from dogs presumed to be predisposed to CCL disease for overground and treadmill trotting gait. For overground trials, the extensor moment at the hock and the energy generated around the hock and stifle joints were increased in predisposed limbs compared with non-predisposed limbs. For treadmill trials, dogs qualified as predisposed to CCL disease held their stifle at a greater degree of flexion, extended their hock less, and generated more energy around the stifle joints while trotting on a treadmill compared with dogs at low risk. This characterization of the gait mechanics of Labrador Retrievers at low risk of or predisposed to CCL disease may help in developing and monitoring preventive exercise programs to decrease gastrocnemius dominance and strengthen the hamstring muscle group.
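To make the general inverse dynamics approach described above concrete, the sketch below performs a single 2-D Newton-Euler step for the most distal limb segment, returning the proximal joint reaction force and net joint moment from the ground reaction force and segment kinematics. It is an illustration of the textbook method, not the authors' implementation, and all numerical values are hypothetical.

```python
# Hedged sketch: one 2-D Newton-Euler inverse dynamics step for a distal segment.
import numpy as np

def distal_segment_inverse_dynamics(m, I, a_com, alpha, grf, r_cop, r_prox, g=(0.0, -9.81)):
    """m, I     : segment mass [kg] and moment of inertia about the COM [kg m^2]
    a_com      : linear acceleration of the segment COM [m/s^2], shape (2,)
    alpha      : angular acceleration [rad/s^2]
    grf        : ground reaction force [N], shape (2,)
    r_cop      : vector from COM to centre of pressure [m]
    r_prox     : vector from COM to proximal joint centre [m]"""
    a_com, grf, g = map(np.asarray, (a_com, grf, g))
    f_prox = m * a_com - m * g - grf                       # Newton: sum F = m a
    cross = lambda r, f: r[0] * f[1] - r[1] * f[0]         # 2-D cross product (z component)
    m_prox = I * alpha - cross(r_prox, f_prox) - cross(r_cop, grf)  # Euler: sum M = I alpha
    return f_prox, m_prox

f, M = distal_segment_inverse_dynamics(
    m=0.4, I=0.001, a_com=(0.5, 1.0), alpha=2.0,
    grf=(5.0, 120.0), r_cop=(0.03, -0.05), r_prox=(-0.02, 0.06))
print("joint reaction force:", f, "net joint moment:", M)
```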
Abstract:
Hydroxymethylnitrofurazone (NFOH) is a prodrug that is active against Trypanosoma cruzi; however, it presents low solubility and high toxicity. Hydroxypropyl-beta-cyclodextrin (HP-beta-CD) can be used as a drug-delivery system for NFOH, modifying its physico-chemical properties. The aim of this work is to characterize the inclusion complex between NFOH and HP-beta-CD. The rate of NFOH release decreases after complexation, and thermodynamic parameters from the solubility isotherm studies revealed that a stable complex is formed (ΔG° = 1.7 kJ/mol). This study focuses on the physico-chemical characterization of a drug-delivery formulation that emerges as a potentially new therapeutic option for the treatment of Chagas disease.
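For reference (standard phase-solubility relations, not quoted from the abstract), the apparent 1:1 stability constant of a drug/cyclodextrin complex is usually obtained from the slope of an A_L-type solubility isotherm, and the standard free energy of complexation follows from it:

    K_{1:1} = \frac{\text{slope}}{S_{0}\,(1 - \text{slope})}, \qquad \Delta G^{\circ} = -RT \ln K_{1:1},

where S_0 is the intrinsic solubility of the drug (here NFOH) and T is the absolute temperature.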
Abstract:
Nonpoint source (NPS) pollution from agriculture is the leading source of water quality impairment in U.S. rivers and streams, and a major contributor to the impairment of lakes, wetlands, estuaries and coastal waters (U.S. EPA 2016). Using data from a survey of farmers in Maryland, this dissertation examines the effects of a cost-sharing policy designed to encourage adoption of conservation practices that reduce NPS pollution in the Chesapeake Bay watershed. This watershed is the site of the largest Total Maximum Daily Load (TMDL) implemented to date, making it an important setting in the U.S. for water quality policy. I study two main questions related to the reduction of NPS pollution from agriculture. First, I examine the issue of additionality of cost-sharing payments by estimating the direct effect of cover crop cost sharing on the acres of cover crops, and the indirect effect of cover crop cost sharing on the acres of two other practices: conservation tillage and contour/strip cropping. A two-stage simultaneous equation approach is used to correct for voluntary self-selection into cost-sharing programs and to account for substitution effects among conservation practices. Quasi-random Halton sequences are employed to solve the system of equations for conservation practice acreage and to minimize the computational burden involved. By considering patterns of agronomic complementarity or substitution among conservation practices (Blum et al., 1997; USDA SARE, 2012), this analysis estimates the water quality impacts of the crowding-in or crowding-out of private investment in conservation due to public incentive payments. Second, I connect the econometric behavioral results with model parameters from the EPA's Chesapeake Bay Program to conduct a policy simulation of water quality effects. I expand the econometric model to also consider the potential loss of vegetative cover due to cropland incentive payments, or slippage (Lichtenberg and Smith-Ramirez, 2011). Econometric results are linked with the Chesapeake Bay Program watershed model to estimate the change in abatement levels and costs for nitrogen, phosphorus and sediment under various behavioral scenarios. Finally, I use inverse sampling weights to derive statewide abatement quantities and costs for each of these pollutants, comparing these with TMDL targets for agriculture in Maryland.
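As a small illustration of the quasi-random Halton draws mentioned above, the snippet below generates scrambled Halton points with SciPy and maps them to standard normal draws of the kind used in simulation-based estimation. The dimension, sample size, and normal transformation are assumptions for illustration, not taken from the dissertation.

```python
# Hedged sketch: quasi-random Halton draws transformed to standard normals.
from scipy.stats import qmc, norm

halton = qmc.Halton(d=3, scramble=True, seed=42)   # e.g., one dimension per equation
u = halton.random(n=200)                           # uniform (0,1) quasi-random points
z = norm.ppf(u)                                    # standard normal draws
print(z.shape, z.mean(axis=0))                     # means should be near zero
```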
Abstract:
Metaheuristics are widely used in discrete optimization. They make it possible to obtain a good-quality solution in a reasonable time for problems that are large, complex, and difficult to solve. Metaheuristics often have many parameters that the user must tune manually for a given problem. The objective of an adaptive metaheuristic is to allow some of these parameters to be adjusted automatically by the method itself, based on the instance being solved. By drawing on prior knowledge of the problem and on notions from machine learning and related fields, an adaptive metaheuristic provides a more general and automatic method for solving problems. Global optimization of mining complexes aims to establish the material movements in the mines and the processing streams so as to maximize the economic value of the system. Because of the large number of integer variables in the model and the presence of complex and nonlinear constraints, it is often prohibitive to solve these models using the optimizers available in industry. Consequently, metaheuristics are often used for the optimization of mining complexes. This thesis improves a simulated annealing procedure developed by Goodfellow & Dimitrakopoulos (2016) for the stochastic optimization of mining complexes. The method developed by these authors requires many parameters to run; one of them governs how the simulated annealing method searches the local neighbourhood of solutions. This thesis implements an adaptive neighbourhood search method to improve the quality of a solution. Numerical results show an increase of up to 10% in the value of the economic objective function.
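To illustrate what adaptive neighbourhood search can look like in simulated annealing, the sketch below chooses each perturbation type with a probability proportional to the improvement it has recently produced. It shows the general idea only; it is not the specific method of this thesis or of Goodfellow & Dimitrakopoulos (2016), and the toy objective and moves are invented.

```python
# Hedged sketch: simulated annealing with adaptive neighbourhood-selection weights.
import math
import random

def adaptive_sa(x0, objective, perturbations, iters=5000, t0=1.0, cooling=0.999):
    x, fx = x0, objective(x0)
    best_x, best_f = x, fx
    weights = [1.0] * len(perturbations)          # one weight per neighbourhood type
    temp = t0
    for _ in range(iters):
        i = random.choices(range(len(perturbations)), weights=weights)[0]
        y = perturbations[i](x)
        fy = objective(y)
        if fy < fx or random.random() < math.exp((fx - fy) / max(temp, 1e-12)):
            # Reward the chosen neighbourhood in proportion to the improvement it gave.
            weights[i] = 0.9 * weights[i] + 0.1 * max(fx - fy, 0.0) + 1e-3
            x, fx = y, fy
            if fy < best_f:
                best_x, best_f = y, fy
        temp *= cooling
    return best_x, best_f

# Toy usage: minimize a 1-D function with two perturbation sizes.
obj = lambda v: (v - 3.0) ** 2
moves = [lambda v: v + random.uniform(-1, 1), lambda v: v + random.uniform(-0.1, 0.1)]
print(adaptive_sa(0.0, obj, moves))
```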
Abstract:
Introduction: Drivers of passenger land transport are exposed to risk factors inherent to their work, so intervention on these factors is a relevant issue for public transport companies, since this activity affects the drivers' quality of life. Objective: To determine the prevalence of workplace stress and the associated biomechanical risk factors in workers of a passenger land transport company. Materials and methods: Cross-sectional study using secondary data from a population of 219 employees, of whom 13 were administrative staff and 206 worked in the operations of a passenger land transport company. The variables included were sociodemographic and occupational variables, as well as variables related to stress measurement and musculoskeletal symptoms. The statistical analysis included measures of central tendency and dispersion, and chi-square and Fisher's exact tests were used to identify the factors associated with stress. Results: The mean age of the participants was 43 years (SD 10 years), and most workers were male (96.3%). Symptoms and biomechanical risk factors in the neck and back were present in 55.5% of workers. A significant association was found between stress and symptoms in the feet (p = 0.009); among the biomechanical risk factors, significant associations were found with the time spent in forward-leaning (p = 0.000) and backward-leaning (p = 0.001) postures of the back/trunk, with wrist postures (p = 0.000), and with the drivers' exposure to vibrating surfaces (vehicle seats) (p = 0.021). No significant association was found between stress and sitting posture. Conclusions: This study found a prevalence of workplace stress of 78% and biomechanical risk factors associated with job tenure, posture and repetitive movements, with repercussions on the neck and lower back; therefore, follow-up of the health and working conditions of transport sector employees is required.
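For readers unfamiliar with the association tests named above, the snippet below applies a chi-square test and Fisher's exact test to a hypothetical 2x2 table of stress versus exposure to a risk factor; the counts are invented for illustration and are not the study's data.

```python
# Hedged sketch: chi-square and Fisher's exact tests on a hypothetical 2x2 table.
from scipy.stats import chi2_contingency, fisher_exact

table = [[60, 20],    # stressed: exposed / not exposed (hypothetical counts)
         [15, 25]]    # not stressed: exposed / not exposed
chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)
print(f"chi-square p = {p_chi2:.4f}, Fisher exact p = {p_fisher:.4f}")
```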
Abstract:
The metapopulation paradigm is central in ecology and conservation biology for understanding the dynamics of spatially structured populations in fragmented landscapes. Metapopulations are often studied using simulation modelling, and there is an increasing demand for user-friendly software tools to simulate metapopulation responses to environmental change. Here we describe the MetaLandSim R package, which integrates ideas from metapopulation and graph theories to simulate the dynamics of real and virtual metapopulations. The package offers tools to (i) estimate metapopulation parameters from empirical data, (ii) predict variation in patch occupancy over time in static and dynamic landscapes, either real or virtual, and (iii) quantify the patterns and speed of metapopulation expansion into empty landscapes. MetaLandSim thus provides detailed information on metapopulation processes, which can easily be combined with land use and climate change scenarios to predict metapopulation dynamics and range expansion for a variety of taxa and ecological systems.
Abstract:
Experiments were undertaken to study the effect of initial conditions on the expansion ratio of two grains in a laboratory-scale, single-speed, single-screw extruder at Naresuan University, Thailand. Jasmine rice and mung bean were used as the materials. The grains were adjusted to three different initial moisture contents and classified into three groups according to particle size. The mesh sizes used were 12 and 14. The expansion ratio was measured at a constant barrel temperature of 190 °C. Response surface methodology was used to obtain the optimum combination of moisture content and particle size for the materials concerned.
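As a small illustration of the response surface methodology mentioned above, the sketch below fits a second-order (full quadratic) surface for expansion ratio as a function of moisture content and particle size by ordinary least squares. The design points and response values are hypothetical, not the study's data.

```python
# Hedged sketch: second-order response surface fit via ordinary least squares.
import numpy as np

def fit_quadratic_surface(x1, x2, y):
    """y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

moisture = np.array([12.0, 12.0, 15.0, 15.0, 18.0, 18.0, 15.0, 12.0, 18.0])  # % wet basis
mesh     = np.array([12.0, 14.0, 12.0, 14.0, 12.0, 14.0, 13.0, 13.0, 13.0])  # mesh size
expans   = np.array([3.1, 3.4, 3.8, 4.0, 3.2, 3.5, 3.9, 3.3, 3.4])           # expansion ratio
beta = fit_quadratic_surface(moisture, mesh, expans)
print("fitted coefficients:", np.round(beta, 3))
# The stationary point of the fitted surface gives the estimated optimum
# moisture/particle-size combination.
```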