991 results for Plane Fracture Problem
Abstract:
In this paper we address the problem of positioning a camera attached to the end-effector of a robotic manipulator so that it becomes parallel to a planar object. This problem has long been treated in visual servoing. Our approach is based on rigidly attaching several laser pointers to the camera, with their configuration designed to produce a suitable set of visual features. The aim of using structured light is not only to ease the image processing and to allow low-textured objects to be treated, but also to produce a control scheme with desirable properties such as decoupling, stability, good conditioning and a good camera trajectory.
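For orientation only (the abstract does not spell out its control law, and the symbols below are generic rather than the paper's notation), image-based visual servoing schemes of this kind regulate an error between measured and desired visual features s and s*, typically with a law of the form

\[
\mathbf{e} = \mathbf{s} - \mathbf{s}^{*}, \qquad \mathbf{v}_c = -\lambda\, \widehat{\mathbf{L}}_{\mathbf{s}}^{+}\, \mathbf{e},
\]

where v_c is the commanded camera velocity, lambda a gain, and L_s^+ the pseudo-inverse of an estimate of the interaction matrix; the decoupling and conditioning properties mentioned above depend on how the laser-spot features enter this matrix.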
Abstract:
Osteoporotic hip fractures increase dramatically with age and are responsible for considerable morbidity and mortality. Several treatments to prevent the occurrence of hip fracture have been validated in large randomized trials and the current challenge is to improve the identification of individuals at high risk of fracture who would benefit from therapeutic or preventive intervention. We have performed an exhaustive literature review on hip fracture predictors, focusing primarily on clinical risk factors, dual X-ray absorptiometry (DXA), quantitative ultrasound, and bone markers. This review is based on original articles and meta-analyses. We have selected studies that aim both to predict the risk of hip fracture and to discriminate individuals with or without fracture. We have included only postmenopausal women in our review. For studies involving both men and women, only results concerning women have been considered. Regarding clinical factors, only prospective studies have been taken into account. Predictive factors have been used as stand-alone tools to predict hip fracture or sequentially through successive selection processes or by combination into risk scores. There is still much debate as to whether or not the combination of these various parameters, as risk scores or as sequential or concurrent combinations, could help to better predict hip fracture. There are conflicting results on whether or not such combinations provide improvement over each method alone. Sequential combination of bone mineral density and ultrasound parameters might be cost-effective compared with DXA alone, because of fewer bone mineral density measurements. However, use of multiple techniques may increase costs. One problem that precludes comparison of most published studies is that they use either relative risk, or absolute risk, or sensitivity and specificity. The absolute risk of individuals given their risk factors and bone assessment results would be a more appropriate model for decision-making than relative risk. Currently, a group appointed by the World Health Organization and led by Professor John Kanis is working on such a model. It will therefore be possible to further assess the best choice of threshold to optimize the number of women needed to screen for each country and each treatment.
Abstract:
The sparsely spaced, highly permeable fractures of the granitic rock aquifer at Stang-er-Brune (Brittany, France) form a well-connected fracture network of high permeability but unknown geometry. Previous work based on optical and acoustic logging together with single-hole and cross-hole flowmeter data acquired in three neighbouring boreholes (70-100 m deep) has identified the most important permeable fractures crossing the boreholes and their hydraulic connections. To constrain possible flow paths by estimating the geometries of known and previously unknown fractures, we have acquired, processed and interpreted multifold, single- and cross-hole GPR data using 100 and 250 MHz antennas. The GPR data processing scheme, consisting of time-zero corrections, scaling, bandpass filtering and F-X deconvolution, eigenvector filtering, muting, pre-stack Kirchhoff depth migration and stacking, was used to differentiate fluid-filled fracture reflections from source-generated noise. The final stacked and pre-stack depth-migrated GPR sections provide high-resolution images of individual fractures (dipping 30-90°) in the surroundings (2-20 m for the 100 MHz antennas; 2-12 m for the 250 MHz antennas) of each borehole in a 2D plane projection that are of superior quality to those obtained from single-offset sections. Most fractures previously identified from hydraulic testing can be correlated to reflections in the single-hole data. Several previously unknown major near-vertical fractures have also been identified away from the boreholes.
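As a small illustration of one stage of the processing chain listed above, the sketch below applies a zero-phase bandpass filter to a gather of GPR traces (Python with numpy/scipy; the sampling interval, corner frequencies and synthetic data are illustrative assumptions, not values from the study):

    # Minimal sketch: zero-phase bandpass filtering of GPR traces around a
    # 100 MHz centre frequency. All numbers are illustrative assumptions.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def bandpass_traces(traces, dt, f_lo, f_hi, order=4):
        """Zero-phase Butterworth bandpass applied trace by trace.

        traces : 2-D array (n_traces, n_samples)
        dt     : sampling interval in seconds
        f_lo, f_hi : corner frequencies in Hz
        """
        fs = 1.0 / dt
        sos = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
        return sosfiltfilt(sos, traces, axis=-1)

    # Example with synthetic data: 200 traces, 0.5 ns sampling, 50-150 MHz band.
    traces = np.random.randn(200, 1024)
    filtered = bandpass_traces(traces, dt=0.5e-9, f_lo=50e6, f_hi=150e6)

The remaining stages named in the abstract (F-X deconvolution, eigenvector filtering, muting, Kirchhoff depth migration, stacking) would each be further steps in such a pipeline and are not sketched here.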
Abstract:
The Drivers Scheduling Problem (DSP) consists of selecting a set of duties for vehicle drivers, for example drivers or pilots of buses, trains, planes or boats, for the transportation of passengers or goods. This is a complex problem because it involves several constraints related to labour and company rules and can also present different evaluation criteria and objectives. Developing an adequate model that represents the real problem as closely as possible is an important research area. The main objective of this research work is to present new mathematical models for the DSP that represent all the complexity of the drivers scheduling problem, and also to demonstrate that the solutions of these models can be easily implemented in real situations. This issue has been recognized by several authors as an important problem in public transportation. The most well-known and general formulation for the DSP is a Set Partitioning/Set Covering model (SPP/SCP). However, to a large extent these models simplify some of the specific business aspects and issues of real problems. This makes it difficult to use these models in automatic planning systems, because the schedules obtained must be modified manually before they can be implemented in real situations. Based on extensive passenger transportation experience with bus companies in Portugal, we propose new alternative models to formulate the DSP. These models are also based on Set Partitioning/Covering models; however, they take into account the bus operators' issues and the perspective, opinions and environment of the user. We follow the steps of the Operations Research methodology, which consist of: identify the problem; understand the system; formulate a mathematical model; verify the model; select the best alternative; present the results of the analysis; and implement and evaluate. All the processes are carried out with the close participation and involvement of the final users from different transportation companies. The planners' opinions and main criticisms are used to improve the proposed model in a continuous enrichment process. The final objective is to have a model that can be incorporated into an information system and used as an automatic tool to produce driver schedules. Therefore, the criteria for evaluating the models are their capacity to generate real and useful schedules that can be implemented without many manual adjustments or modifications. We have considered the following as measures of the quality of the model: simplicity, solution quality and applicability. We tested the alternative models with a set of real data obtained from several different transportation companies and analyzed the optimal schedules obtained with respect to the applicability of the solution to the real situation. To do this, the schedules were analyzed by the planners to determine their quality and applicability. The main result of this work is the proposal of new mathematical models for the DSP that better represent the realities of passenger transportation operators and lead to better schedules that can be implemented directly in real situations.
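As background for the SPP/SCP formulation mentioned above (generic notation, not the paper's): with binary variables x_j selecting candidate duties of cost c_j, and a_ij = 1 if duty j covers piece of work i, the set partitioning model is

\[
\min \sum_{j} c_j x_j
\quad\text{s.t.}\quad
\sum_{j} a_{ij} x_j = 1 \;\; \forall i,
\qquad x_j \in \{0,1\},
\]

and the set covering variant replaces the equality with \(\sum_j a_{ij} x_j \ge 1\), allowing over-coverage of work pieces.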
Abstract:
Osteoporosis in the elderly is a growing medical, economic and health-care problem, driven by increasing life expectancy and the rising number of osteoporotic fractures. With the new Swiss-specific FRAX tool and the development of an inpatient fracture-care pathway, patients at high risk of fracture can be identified more accurately, and appropriate treatment can be proposed more quickly. Follow-up of bone markers improves treatment efficiency. With better identification, treatment and follow-up of osteoporosis in elderly patients, we can improve patients' quality of life and decrease the number of osteoporotic fractures with a good cost-effectiveness ratio.
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy.
These methods have governing equations that are the same over a large range of frequencies, thus allowing processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to several hundreds of kilometers depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods. In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy where the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized.
The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information that is included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically possible based on present-day understanding. This issue may be related to the base resistivity model used, therefore indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and the posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
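As a schematic reminder of the kind of deterministic time-lapse objective described above (generic notation, not the thesis's exact formulation), the model update between two time steps is typically found by minimizing

\[
\phi(\Delta\mathbf{m}) =
\left\lVert \mathbf{W}_d\!\left(\Delta\mathbf{d} - \mathbf{J}\,\Delta\mathbf{m}\right) \right\rVert_2^2
+ \lambda \left\lVert \mathbf{W}_m\,\Delta\mathbf{m} \right\rVert_p^p ,
\]

where Delta d are differenced data, J a sensitivity matrix, W_d and W_m data and model weighting operators, and p < 2 produces the sharp, non-l2 transitions between changing and unchanged regions mentioned above; bound constraints on the update can be enforced with Lagrange multipliers.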
Abstract:
We use wave packet mode quantization to compute the creation of massless scalar quantum particles in a colliding plane wave spacetime. The background spacetime represents the collision of two gravitational shock waves followed by trailing gravitational radiation which focus into a Killing-Cauchy horizon. The use of wave packet modes simplifies the problem of mode propagation through the different spacetime regions which was previously studied with the use of monochromatic modes. It is found that the number of particles created in a given wave packet mode has a thermal spectrum with a temperature which is inversely proportional to the focusing time of the plane waves and which depends on the mode trajectory.
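For reference, and in generic notation rather than the paper's (natural units, hbar = k_B = 1), a thermal spectrum of created scalar quanta has the Planck form

\[
\langle n_{\omega} \rangle = \frac{1}{e^{\omega/T} - 1}, \qquad T \propto \frac{1}{t_f},
\]

where omega is the characteristic frequency of a wave packet mode and t_f schematically denotes the focusing time of the colliding plane waves referred to above.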
Abstract:
In the n-body problem a central configuration is formed when the position vector of each particle with respect to the center of mass is a common scalar multiple of its acceleration vector. Lindstrom showed for n = 3 and for n > 4 that if n − 1 masses are located at fixed points in the plane, then there are only a finite number of ways to position the remaining nth mass in such a way that they define a central configuration. Lindstrom leaves open the case n = 4. In this paper we prove the case n = 4 using as variables the mutual distances between the particles.
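In the notation most commonly used for this definition (generic symbols, not necessarily the paper's), positions q_i, masses m_i and center of mass c form a central configuration when

\[
\sum_{j \ne i} \frac{m_j\,(\mathbf{q}_j - \mathbf{q}_i)}{\lVert \mathbf{q}_j - \mathbf{q}_i \rVert^{3}}
\;=\; \lambda\,(\mathbf{q}_i - \mathbf{c}),
\qquad i = 1,\dots,n,
\]

with the same scalar lambda for every particle (units with G = 1); the left-hand side is the Newtonian acceleration of particle i, so the condition expresses exactly the proportionality stated in the abstract.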
On the existence of bi-pyramidal central configurations of the n + 2-body problem with an n-gon base
Abstract:
In this paper we prove the existence of central configurations of the (n + 2)-body problem where n equal masses are located at the vertices of a regular n-gon and the remaining 2 masses, which are not necessarily equal, are located on the straight line orthogonal to the plane containing the n-gon and passing through its center. Here this kind of central configuration is called a bi-pyramidal central configuration. In particular, we prove that if the masses m_{n+1} and m_{n+2} and their positions satisfy convenient relations, then the configuration is central. We give those relations explicitly.
Abstract:
A major problem with holographic optical tweezers (HOTs) is their incompatibility with laser-based position detection methods, such as back-focal-plane interferometry (BFPI). The alternatives generally used with HOTs, like high-speed video tracking, do not offer the same spatial and temporal bandwidths. This has limited the use of this technique in precise quantitative experiments. In this paper, we present an optical trap design that combines digital holography and back-focal-plane displacement detection. We show that, with a particularly simple setup, it is possible to generate a set of multiple holographic traps and an additional static non-holographic trap with orthogonal polarizations, and that the two can therefore be easily separated for measuring positions and forces with the high positional and temporal resolutions of laser-based detection. We prove that measurements from both polarizations contain less than 1% crosstalk and that traps in our setup are harmonic within the typical range. We further tested the instrument in a DNA stretching experiment and we discuss an interesting property of this configuration: the small drift of the differential signal between traps.
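As background to the force measurements mentioned above (standard optical-tweezers relations, not results specific to this setup): within the harmonic range the restoring force on a trapped bead is

\[
\mathbf{F} = -\kappa\,(\mathbf{x} - \mathbf{x}_0),
\]

and the stiffness kappa is commonly calibrated from the position signal itself, for example through the equipartition relation \(\tfrac{1}{2}\kappa\langle x^{2}\rangle = \tfrac{1}{2}k_B T\), which is one reason the high spatial and temporal bandwidth of back-focal-plane detection matters for quantitative work.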
Abstract:
Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
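As a generic illustration of the MCMC machinery behind such inversions, the sketch below runs a random-walk Metropolis sampler on a toy two-parameter linear problem (Python with numpy; the forward model, likelihood and all names are assumptions for illustration, not the paper's sampler or EM forward solver):

    import numpy as np

    rng = np.random.default_rng(0)
    G = np.array([[1.0, 2.0], [0.5, -1.0], [2.0, 1.0]])  # toy linear forward model

    def log_posterior(m, d_obs, sigma):
        """Toy log-posterior: Gaussian likelihood, flat prior. A real EM
        inversion would call an expensive forward solver and add
        model-structure priors here."""
        residual = d_obs - G @ m
        return -0.5 * np.sum((residual / sigma) ** 2)

    def metropolis(d_obs, sigma, n_steps=5000, step=0.1):
        m = np.zeros(2)                                   # starting model
        logp = log_posterior(m, d_obs, sigma)
        chain = []
        for _ in range(n_steps):
            m_prop = m + step * rng.standard_normal(2)    # random-walk proposal
            logp_prop = log_posterior(m_prop, d_obs, sigma)
            if np.log(rng.random()) < logp_prop - logp:   # Metropolis acceptance
                m, logp = m_prop, logp_prop
            chain.append(m.copy())
        return np.array(chain)

    # Synthetic data from a "true" model, then sample the posterior.
    m_true = np.array([1.0, -0.5])
    d_obs = G @ m_true + 0.05 * rng.standard_normal(3)
    samples = metropolis(d_obs, sigma=0.05)
    print(samples[2500:].mean(axis=0))                    # posterior mean after burn-in

In a hierarchical formulation such as the one described above, quantities like the data-error standard deviation and the regularization weight would be sampled alongside the model parameters rather than fixed.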
Abstract:
Affiliation: Pierre Dagenais: Hôpital Maisonneuve-Rosemont, Faculté de médecine, Université de Montréal
Abstract:
Investigation of the fracture behaviour of polymer blends is the topic of this thesis. The blends selected are PP/HDPE and PS/HIPS. The PP/HDPE blend is chosen due to its commercial importance and the PS/HIPS blend is selected to study the transition from brittle fracture to ductile fracture.

PP/HDPE blends were prepared at different compositions by melt blending at 180°C, and the fracture failure process was investigated by conducting notch sensitivity and tensile tests at different strain rates. The effects of two types of modifiers (particulate and elastomer) on the fracture behaviour and notch sensitivity of PP/HDPE blends were studied. The modifiers used are calcium carbonate, a hard particulate filler commonly used in plastics, and Ethylene Propylene Diene Monomer (EPDM). They were added at 2%, 4% and 6% by weight of the blends.

The study shows that the mechanical properties of PP/HDPE blends can be optimized by selecting proper blend compositions. The selected modifiers are found to alter and improve the fracture behaviour and notch sensitivity of the blends. Particulate fillers like calcium carbonate can be used to make the mechanical behaviour more stable across the various blend compositions. The resistance to notch sensitivity of the blends is found to be marginally lower in the presence of calcium carbonate. The elastomeric modifier EPDM produces better stability of the mechanical behaviour, and a low concentration of EPDM is sufficient to effect such a change. EPDM significantly improves the resistance to notch sensitivity of the blends. The study shows that judicious selection of modifiers can improve the fracture behaviour and notch sensitivity of PP/HDPE blends and help these materials to be used for critical applications.

For investigating the transition in fracture behaviour and failure modes, PS/HIPS blends were selected. The blends were prepared by melt mixing followed by injection moulding to prepare the specimens for conducting tensile, impact and flexure tests. These tests were used to simulate the various conditions which promote failure.

The tensile behaviour of unnotched and notched PS/HIPS blend samples was evaluated at slow speeds. Tensile strengths and moduli were found to increase at the higher testing speed for all the blend combinations, whereas maximum strain at break was found to decrease. For a particular speed of testing, the tensile strength and modulus show only a very slight decrease as HIPS content is increased up to about 40%. However, there is a drastic decrease on increasing the HIPS content thereafter. The maximum strain at break shows only a very slight change up to about 40% HIPS content and thereafter shows a remarkable increase. The notched specimens also follow a comparable trend, even though the notch sensitivity is high for PS-rich blends containing up to 40% HIPS. The notch sensitivity marginally decreases with increase in HIPS content. At the same time, it is found to increase with the increase in strain rate. It is observed that blends containing more than 40% HIPS fail in ductile mode.

The impact characteristics of PS/HIPS blends studied were impact strength, the energy absorbed by the test specimen and impact toughness. A remarkable increase in impact strength is observed as HIPS content in the blend exceeds 40%. The energy absorbed by the test specimens and the impact toughness also show a comparable trend.

Flexural testing, which helps to characterize the load-bearing capacity, was conducted on PS/HIPS blend samples at two different testing speeds of 5 mm/min and 10 mm/min. The flexural strength increases with increase in testing speed for all the blend compositions. At both speeds, a remarkable reduction in flexural strength is observed as HIPS content in the blend exceeds 40%. The flexural strain and flexural energy absorbed by the specimens are found to increase with increase in HIPS content. At both testing speeds, brittle fracture is observed for PS-rich blends whereas HIPS-rich blends show a ductile mode of failure.

Photoelastic investigations were conducted on PS/HIPS blend samples to analyze their failure modes. A plane polariscope with a broad source of light was utilized for the study. The coloured isochromatic fringes formed indicate the presence of residual stress concentration in the blend samples. The coverage of the fringes on the test specimens varies with the blend composition and shows a reducing trend with increase in HIPS content. This indicates that the presence of residual stress is a contributing factor leading to brittle fracture in PS-rich blends; this tendency gradually falls with increase in HIPS content and leads to their ductile mode of failure.
Abstract:
In this paper we consider the 2D Dirichlet boundary value problem for Laplace's equation in a non-locally perturbed half-plane, with data in the space of bounded and continuous functions. We show uniqueness of the solution using standard Phragmén-Lindelöf arguments. The main result is to propose a boundary integral equation formulation, to prove equivalence with the boundary value problem, and to show that the integral equation is well posed by applying a recent partial generalisation of the Fredholm alternative in Arens et al [J. Int. Equ. Appl. 15 (2003) pp. 1-35]. This then leads to an existence proof for the boundary value problem. Keywords: boundary integral equation method, water waves, Laplace's
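For orientation (a generic statement of the problem class treated above, with D the perturbed half-plane and Gamma its boundary used as generic symbols rather than the paper's notation), the boundary value problem is of the form

\[
\Delta u = 0 \ \text{in } D, \qquad u = f \ \text{on } \Gamma, \qquad u \ \text{bounded and continuous on } \overline{D},
\]

with f bounded and continuous on Gamma; the boundary integral approach seeks u as a layer potential over Gamma, typically reducing the problem to a second-kind integral equation for an unknown density on the boundary.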