980 results for Euclidean plane
Abstract:
R (http://www.r-project.org/) is 'GNU S', a language and environment for statistical computing and graphics in which many classical and modern statistical techniques have been implemented; many more are supplied as packages. There are 8 standard packages and many more are available through the CRAN family of Internet sites (http://cran.r-project.org). We have started to develop a library of functions in R to support the analysis of mixtures, and our goal is a MixeR package for compositional data analysis that provides support for: operations on compositions (perturbation and power multiplication, subcomposition with or without residuals, centering of the data, computation of Aitchison, Euclidean and Bhattacharyya distances, compositional Kullback-Leibler divergence, etc.); graphical presentation of compositions in ternary diagrams and tetrahedrons with additional features (barycenter, geometric mean of the data set, percentile lines, marking and coloring of subsets of the data set and their geometric means, annotation of individual data in the set, . . .); dealing with zeros and missing values in compositional data sets, with R procedures for simple and multiplicative replacement strategies; and time series analysis of compositional data. We will present the current status of MixeR development and illustrate its use on selected data sets.
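The compositional operations listed above have compact closed forms; as a rough illustration (in Python/NumPy rather than R, and independent of the MixeR package itself), perturbation, power transformation and the Aitchison distance can be sketched as follows, where the function names and example compositions are ours:

```python
# A minimal sketch (not the MixeR package itself) of a few compositional
# operations mentioned in the abstract, written in Python/NumPy for illustration.
import numpy as np

def closure(x):
    """Rescale a vector of positive parts so they sum to 1."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def perturb(x, y):
    """Perturbation: component-wise product followed by closure."""
    return closure(np.asarray(x, dtype=float) * np.asarray(y, dtype=float))

def power(x, a):
    """Power transformation: component-wise power followed by closure."""
    return closure(np.asarray(x, dtype=float) ** a)

def clr(x):
    """Centered log-ratio transform."""
    lx = np.log(closure(x))
    return lx - lx.mean()

def aitchison_distance(x, y):
    """Aitchison distance = Euclidean distance between clr transforms."""
    return np.linalg.norm(clr(x) - clr(y))

# Example compositions (made up for the sketch).
x = [0.1, 0.3, 0.6]
y = [0.2, 0.2, 0.6]
print(perturb(x, y), power(x, 2.0), aitchison_distance(x, y))
```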
Abstract:
This paper focuses on the problem of realizing a plane-to-plane virtual link between a camera attached to the end-effector of a robot and a planar object. In order to make the system independent of the object's surface appearance, a structured light emitter is linked to the camera so that 4 laser pointers are projected onto the object. In a previous paper we showed that such a system has good performance and nice characteristics, such as partial decoupling near the desired state and robustness against misalignment of the emitter and the camera (J. Pages et al., 2004). However, no analytical results concerning the global asymptotic stability of the system were obtained due to the high complexity of the visual features used. In this work we present a better set of visual features which improves on the properties of the features in (J. Pages et al., 2004) and for which it is possible to prove global asymptotic stability.
Abstract:
Due to SNR constraints, current "bright-blood" 3D coronary MRA approaches still suffer from limited spatial resolution when compared to conventional x-ray coronary angiography. Recent 2D fast spin-echo black-blood techniques maximize signal for coronary MRA at no loss in image spatial resolution. This suggests that the extension of black-blood coronary MRA with a 3D imaging technique would allow for a further signal increase, which may be traded for an improved spatial resolution. Therefore, a dual-inversion 3D fast spin-echo imaging sequence and real-time navigator technology were combined for high-resolution free-breathing black-blood coronary MRA. In-plane image resolution below 400 μm was obtained. Magn Reson Med 45:206-211, 2001.
Abstract:
Cross-reactivity of plant foods is an important phenomenon in allergy, with geographical variations with respect to the number and prevalence of the allergens involved in this process, whose complexity requires detailed studies. We have addressed the role of thaumatin-like proteins (TLPs) in cross-reactivity between fruit and pollen allergies. A representative panel of 16 purified TLPs was printed onto an allergen microarray. The proteins selected belonged to the sources most frequently associated with peach allergy in representative regions of Spain. Sera from two groups of well characterized patients, one with allergy to Rosaceae fruit (FAG) and another against pollens but tolerant to food-plant allergens (PAG), were obtained from seven geographical areas with different environmental pollen profiles. Cross-reactivity between members of this family was demonstrated by inhibition assays. Only 6 out of 16 purified TLPs showed noticeable allergenic activity in the studied populations. Pru p 2.0201, the peach TLP (41%), chestnut TLP (24%) and plane pollen TLP (22%) proved to be allergens of probable relevance to fruit allergy, being mainly associated with pollen sensitization, and strongly linked to specific geographical areas such as Barcelona, Bilbao, the Canary Islands and Madrid. The patients exhibited >50% positive response to Pru p 2.0201 and to chestnut TLP in these specific areas. Therefore, their recognition patterns were associated with the geographical area, suggesting a role for pollen in the sensitization of these allergens. Finally, the co-sensitizations of patients considering pairs of TLP allergens were analyzed by using the co-sensitization graph associated with an allergen microarray immunoassay. Our data indicate that TLPs are significant allergens in plant food allergy and should be considered when diagnosing and treating pollen-food allergy.
Abstract:
Introduction: Ankle arthropathy is associated with decreased motion of the ankle-hindfoot during ambulation. Ankle arthrodesis has been shown to result in degeneration of the neighbouring joints of the foot. Conversely, total ankle arthroplasty conceptually preserves the adjacent joints because of the residual mobility of the ankle, but this has not yet been demonstrated in vivo. It has also been reported that degenerative ankle diseases, and even arthrodesis, do not result in alteration of the knee and hip joints. We present the preliminary results of a new approach to this problem based on ambulatory gait analysis. Patients and Methods: Motion analysis of the lower limbs was performed using a Physilog® (BioAGM, CH) system consisting of a three-dimensional (3D) accelerometer and gyroscope, coupled to a magnetic system (Liberty©, Polhemus, USA). Both systems have been validated. Three groups of two patients were included in this pilot study and compared to healthy subjects (controls) during level walking: patients with ankle osteoarthritis (group 1), patients treated by ankle arthrodesis (group 2), and patients treated by total ankle prosthesis (group 3). Results: Motion patterns of all analyzed joints over more than 20 gait cycles in each subject were highly repeatable. Motion amplitude of the ankle-hindfoot in controls was similar to recently reported results. Ankle arthrodesis limited the motion of the ankle-hindfoot in the sagittal and horizontal planes. The prosthetic ankle allowed a more physiologic movement in the sagittal plane only. Ankle arthritis and its treatments did not influence the range of motion of the knee and hip joints during the stance phase, except for a slight decrease of hip flexion in groups 1 and 2. Conclusion: The reliability of the system was shown by the repeatability of the consecutive measurements. The results of this preliminary study were similar to those obtained through laboratory gait analysis. However, our system has the advantage of allowing ambulatory analysis of 3D kinematics of the lower limbs outside a gait laboratory and in real-life conditions. To our knowledge this is a new concept in the analysis of ankle arthropathy and its treatments. Therefore, there is potential to address specific questions such as the difficult comparison of the benefits of ankle arthroplasty versus arthrodesis. The encouraging results of this pilot study offer the prospect of analyzing the consequences of ankle arthropathy and its treatments on the biomechanics of the lower limbs in an ambulatory manner, in vivo and in daily-life conditions.
Abstract:
BACKGROUND Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide clinicians in the Alzheimer's Disease (AD) diagnosis. However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) systems. METHODS A novel combination of feature extraction techniques is proposed to improve the diagnosis of AD. Firstly, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to be located within a predefined brain activation mask. In order to address the small sample-size problem, the dimension of the feature space was further reduced by: Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA) or Partial Least Squares (PLS) (the latter two also analysed with an LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis and Energy-based metrics were compared. RESULTS Several experiments were conducted in order to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: i) a linear transformation of the PLS or PCA reduced data, ii) a feature reduction technique, and iii) a classifier (with Euclidean, Mahalanobis or Energy-based methodology). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity and specificity values of 92.78%, 91.07% and 95.12% (for SPECT) and 90.67%, 88% and 93.33% (for PET), respectively, when the NMSE-PLS-LMNN feature extraction method was used in combination with an SVM classifier, thus outperforming recently reported baseline methods. CONCLUSIONS All the proposed methods turned out to be valid solutions to the presented problem. One advance is the robustness of the LMNN algorithm, which not only provides a higher separation rate between the classes but also (in combination with NMSE and PLS) makes the variation of this rate more stable. Another advance is their generalization ability, since the experiments were performed on two image modalities (SPECT and PET).
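As a rough sketch of the kind of pipeline described above, the following Python/scikit-learn code chains PLS-based feature reduction with a linear SVM and evaluates it by k-fold cross-validation on synthetic data; the NMSE/ROI-selection and LMNN steps of the actual system are omitted, and all numbers and array shapes are illustrative only:

```python
# A hedged sketch of a PLS + SVM pipeline with k-fold cross-validation,
# on synthetic data; not the authors' implementation.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 500))          # stand-in for voxel-intensity features
y = np.repeat([0, 1], [45, 45])         # 0 = control, 1 = AD (labels assumed)
X[y == 1, :50] += 0.4                   # inject a weak class difference

pipe = Pipeline([
    ("pls", PLSRegression(n_components=10)),  # supervised dimension reduction
    ("svm", SVC(kernel="linear", C=1.0)),     # linear-kernel SVM classifier
])
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```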
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has increasingly made its way up to 70% of the trading volume of some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, due to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration and basic models for price and spread dynamics, necessary for building quantitative strategies. We also contrast these models with real market data at minute sampling frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast as well-defined scientific predictors if the signal they generate passes the test of being a Markov time; that is, we can tell whether the signal has occurred or not by examining the information up to the current time, or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to stochastic differential equations such as the well-known Ornstein-Uhlenbeck mean-reversion process and its variations. A model for forecasting any economic or financial magnitude can be defined with scientific rigor yet still lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtesting of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is the reason why we emphasize the calibration of the strategies' parameters to adapt to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numeric computations and graphics used and shown in this project were implemented in MATLAB from scratch as a part of this thesis. No other mathematical or statistical software was used.
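The distance-based pairs-trading idea mentioned above can be sketched in a few lines; the following Python/NumPy snippet (the thesis itself used MATLAB and real DJIA data) selects the pair of synthetic price paths with the smallest Euclidean distance between their normalised prices and opens positions when the spread's z-score exceeds an assumed threshold of two:

```python
# A hedged, illustrative sketch of distance-based pairs trading on synthetic prices.
import numpy as np

rng = np.random.default_rng(1)
n_days, n_assets = 250, 5
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=(n_days, n_assets)), axis=0))

norm = prices / prices[0]                       # normalise to a common starting point
dist = np.full((n_assets, n_assets), np.inf)
for i in range(n_assets):
    for j in range(i + 1, n_assets):
        dist[i, j] = np.linalg.norm(norm[:, i] - norm[:, j])
i, j = np.unravel_index(np.argmin(dist), dist.shape)   # closest pair

spread = norm[:, i] - norm[:, j]
z = (spread - spread.mean()) / spread.std()             # in-sample z-score (simplification)
position = np.where(z > 2, -1, np.where(z < -2, 1, 0))  # short/long the spread
print(f"selected pair: ({i}, {j}); days in a position: {int((position != 0).sum())}")
```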
Abstract:
Starting from the usual definitions of Quantum Similarity Measures (QSM), the dependence of these measures on the molecular superposition is considered. For the particular case in which the compared systems are a molecule and an atom, and the measures are computed with the EASA approximation, the QSM become functions of the three spatial coordinates. Keeping one of the three coordinates fixed, the variation of the similarity value in a given plane can easily be represented, yielding the so-called similarity maps. In this article, the similarity maps obtained with different QSM are compared for simple systems.
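As a toy illustration of such a similarity map (not the EASA implementation itself), the following Python snippet models the molecular density as a sum of Gaussians at assumed atomic positions and evaluates an overlap-like molecule-atom similarity on an (x, y) grid with the z coordinate held fixed; all positions, weights and exponents are invented for the sketch:

```python
# Toy "similarity map": a promolecular-style Gaussian density evaluated against
# a probe atom on a plane with one coordinate fixed. All parameters are assumed.
import numpy as np

atoms = np.array([[0.0, 0.0, 0.0],      # assumed atomic positions (a.u.)
                  [1.4, 0.0, 0.0],
                  [0.7, 1.2, 0.0]])
weights = np.array([6.0, 1.0, 1.0])     # assumed per-atom weights
alpha = 1.5                             # assumed Gaussian exponent

def similarity(point):
    """Overlap-like similarity between the molecular model and a probe atom at `point`."""
    d2 = np.sum((atoms - point) ** 2, axis=1)
    return float(np.sum(weights * np.exp(-alpha * d2)))

# Map on the plane z = 0, i.e. with one coordinate held fixed as described above.
xs = np.linspace(-2, 3, 60)
ys = np.linspace(-2, 3, 60)
z_fixed = 0.0
smap = np.array([[similarity(np.array([x, y, z_fixed])) for x in xs] for y in ys])
print(smap.shape, smap.max())           # contour-plot `smap` to obtain the similarity map
```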
Abstract:
The sparsely spaced highly permeable fractures of the granitic rock aquifer at Stang-er-Brune (Brittany, France) form a well-connected fracture network of high permeability but unknown geometry. Previous work based on optical and acoustic logging together with single-hole and cross-hole flowmeter data acquired in 3 neighbouring boreholes (70-100 m deep) has identified the most important permeable fractures crossing the boreholes and their hydraulic connections. To constrain possible flow paths by estimating the geometries of known and previously unknown fractures, we have acquired, processed and interpreted multifold, single- and cross-hole GPR data using 100 and 250 MHz antennas. The GPR data processing scheme consisting of timezero corrections, scaling, bandpass filtering and F-X deconvolution, eigenvector filtering, muting, pre-stack Kirchhoff depth migration and stacking was used to differentiate fluid-filled fracture reflections from source generated noise. The final stacked and pre-stack depth-migrated GPR sections provide high-resolution images of individual fractures (dipping 30-90°) in the surroundings (2-20 m for the 100 MHz antennas; 2-12 m for the 250 MHz antennas) of each borehole in a 2D plane projection that are of superior quality to those obtained from single-offset sections. Most fractures previously identified from hydraulic testing can be correlated to reflections in the single-hole data. Several previously unknown major near vertical fractures have also been identified away from the boreholes.
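Two of the early steps of this processing chain (time-zero correction and band-pass filtering around the nominal antenna frequency) can be sketched as follows in Python/SciPy on a synthetic trace; the sampling rate, first-break threshold and band edges are assumptions, and the deconvolution, migration and stacking steps are omitted:

```python
# A minimal sketch of time-zero correction and band-pass filtering of one
# synthetic GPR trace; not the authors' workflow.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 2.0e9                               # assumed sampling rate, 2 GHz
t = np.arange(0, 400e-9, 1 / fs)         # 400 ns trace
trace = np.random.default_rng(2).normal(0, 0.1, t.size)
trace[200:260] += np.hanning(60)         # synthetic "reflection"

# Time-zero correction: shift the trace so the first-break sample moves to t = 0.
first_break = np.argmax(np.abs(trace) > 0.5)
trace = np.roll(trace, -first_break)

# Band-pass around the nominal antenna frequency (here 100 MHz, 50-200 MHz band).
sos = butter(4, [50e6, 200e6], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, trace)
print(filtered.shape)
```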
Abstract:
We aimed to analyze the changes in isokinetic internal (IR) and external (ER) rotator muscle fatigue (a) in patients with non-operated recurrent anterior instability, and (b) before and after shoulder surgical stabilization with the Bristow-Latarjet procedure. Thirty-seven patients with non-operated unilateral recurrent anterior post-traumatic instability (NG) were compared with 12 healthy subjects [control group (CG)]. Twenty patients of the operated recurrent anterior instability group (OG) underwent isokinetic evaluation before and 3, 6, and 21 months after Bristow-Latarjet surgery. IR and ER muscle strength was evaluated with a Con-Trex® dynamometer, with subjects seated and the shoulder abducted at 45° in the scapular plane. IR and ER muscle fatigue was determined after 10 concentric repetitions at 180°·s⁻¹ through the fatigue index, the percent decrease in performance (DP), and the slope of peak torque decrease. There were no differences in rotator muscle fatigue between NG and CG. In OG, 3 months post-surgery, the IR DP of the operated shoulder was significantly (P < 0.001) higher than before surgery and than at 6 and 21 months post-surgery. Rotator muscle fatigability was not associated with recurrent anterior instability. After surgical stabilization, there was significantly higher IR fatigability in the operated shoulder 3 months post-surgery, followed by recovery at 6 months post-surgery and long-term maintenance over 21 months.
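The three fatigue descriptors mentioned above can be computed from a series of ten peak-torque values as sketched below (Python/NumPy, synthetic torques in Nm); the exact definitions used in the study may differ slightly:

```python
# A hedged sketch of three common fatigue descriptors computed from 10 repetitions.
import numpy as np

peak_torque = np.array([52.0, 51.0, 49.5, 48.0, 47.0, 45.5, 44.0, 43.5, 42.0, 41.0])
reps = np.arange(1, peak_torque.size + 1)

# Fatigue index: torque of the last third relative to the first third (assumed definition).
fatigue_index = peak_torque[-3:].sum() / peak_torque[:3].sum() * 100

# Percent decrease in performance (DP) between the first and last repetitions.
dp = (peak_torque[0] - peak_torque[-1]) / peak_torque[0] * 100

# Slope of peak-torque decrease over the 10 repetitions (least-squares fit).
slope = np.polyfit(reps, peak_torque, 1)[0]

print(f"fatigue index {fatigue_index:.1f}%, DP {dp:.1f}%, slope {slope:.2f} Nm/rep")
```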
Abstract:
The joint angles of multi-segment foot models have been primarily described using two mathematical methods: the joint coordinate system and the attitude vector. This study aimed to determine whether the angles obtained through these two descriptors are comparable, and whether the descriptors have similar sensitivity to experimental errors. Six subjects walked eight times on an instrumented walkway while the joint angles among the shank, hindfoot, medial forefoot, and lateral forefoot were measured. The angles obtained using both descriptors and their sensitivity to experimental errors were compared. There was no overall significant difference between the ranges of motion obtained using the two descriptors. However, median differences of more than 6° were noticed for the medial-lateral forefoot joint. For all joints and rotation planes, both descriptors provided highly similar angle patterns (median correlation coefficient: R>0.90), except for the medial-lateral forefoot angle in the transverse plane (median R=0.77). The joint coordinate system was significantly more sensitive to anatomical landmark misplacement errors. However, the absolute differences in sensitivity were small relative to the joints' ranges of motion. In conclusion, the angles obtained using these two descriptors were not identical, but were similar for at least the shank-hindfoot and hindfoot-medial forefoot joints. Therefore, angle comparison across descriptors is possible for these two joints. Comparison should be done more carefully for the medial-lateral forefoot joint. Moreover, despite different sensitivities to experimental errors, the effects of the experimental errors on the angles were small for both descriptors, suggesting that both descriptors can be considered for multi-segment foot models.
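The two descriptors can be illustrated on a single synthetic joint orientation as follows (Python/SciPy); here the joint coordinate system is represented by a Cardan angle sequence, a common implementation choice, and the attitude vector by the axis-angle rotation vector, with the input angles invented for the sketch:

```python
# A hedged sketch of the two angle descriptors applied to one synthetic relative
# orientation between two foot segments.
import numpy as np
from scipy.spatial.transform import Rotation as R

# Assumed relative orientation of the distal segment with respect to the proximal one.
rel = R.from_euler("zxy", [12.0, 5.0, -8.0], degrees=True)

# Descriptor 1: joint-coordinate-system-style Cardan angles (deg).
cardan_deg = rel.as_euler("zxy", degrees=True)

# Descriptor 2: attitude vector = rotation axis scaled by the rotation angle (deg).
attitude_deg = np.degrees(rel.as_rotvec())

print("Cardan angles (z, x, y):", np.round(cardan_deg, 2))
print("Attitude vector components:", np.round(attitude_deg, 2))
```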
Abstract:
The effect of basis set superposition error (BSSE) on molecular complexes is analyzed. The BSSE causes artificial delocalizations which modify the first-order electron density. The mechanism of this effect is assessed for the hydrogen fluoride dimer with several basis sets. The BSSE-corrected first-order electron density is obtained using the chemical Hamiltonian approach versions of the Roothaan and Kohn-Sham equations. The corrected densities are compared to uncorrected densities based on the charge density critical points. Contour difference maps between BSSE-corrected and uncorrected densities on the molecular plane are also plotted to gain insight into the effects of the BSSE correction on the electron density.
Abstract:
The information gathered with intravascular ultrasound (IVUS) is of great value in endovascular techniques. The aim of this study was to evaluate the reliability of IVUS when measuring vessel dimensions, by comparison with an established reference method. The left carotid artery was exposed in 4 pigs (45-55 kg) and two piezoelectric crystals were sutured onto the adventitia in the same cross-sectional plane. The distance between them was measured both by IVUS and by sonomicrometry. The mean distance between the two crystals calculated by the sonomicrometer was 4.7±0.4 mm (mean systolic distance 4.9±0.2 mm, mean diastolic distance 4.6±0.1 mm). The mean distance between the two targets calculated by IVUS was 4.5±0.2 mm (mean systolic distance 4.6±0.2 mm, mean diastolic distance 4.4±0.2 mm). Regression analysis of the two series of data shows R² = 0.9984. IVUS measurements are on average 5% smaller than sonomicrometer measurements (range 3.6% to 8.3%) and the difference is statistically significant (p < 0.05). The underestimation of IVUS measurements will affect the accuracy, and probably the long-term outcome, of endovascular procedures.
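The agreement analysis reported above amounts to a least-squares regression and a paired comparison of the two methods; a minimal sketch on synthetic paired distances (Python/SciPy, not the study's data) is:

```python
# A small sketch of regression and paired comparison between two measurement methods,
# using synthetic numbers rather than the study's data.
import numpy as np
from scipy import stats

sono = np.array([4.6, 4.7, 4.9, 4.8, 4.5, 4.7, 4.6, 4.9])   # mm, reference method
ivus = sono * 0.95 + np.random.default_rng(3).normal(0, 0.02, sono.size)  # ~5% smaller

slope, intercept, r, p, se = stats.linregress(sono, ivus)
pct_diff = (sono - ivus) / sono * 100

print(f"R^2 = {r**2:.4f}")
print(f"mean underestimation {pct_diff.mean():.1f}% (range {pct_diff.min():.1f}-{pct_diff.max():.1f}%)")

# Paired test of the systematic difference (the study reports p < 0.05).
t_stat, p_paired = stats.ttest_rel(sono, ivus)
print(f"paired t-test p = {p_paired:.3g}")
```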
Abstract:
Introduction: Several studies have reported significant alteration of the scapulohumeral rhythm after total shoulder arthroplasty. However, the biomechanical and clinical effects, particularly on implant lifespan, are still unknown. The goal of this study was to evaluate the biomechanical consequences of an altered scapulohumeral rhythm. Methods: A numerical musculoskeletal model of the shoulder was used. The model included the scapula, the humerus and 6 scapulohumeral muscles: middle, anterior and posterior deltoid, supraspinatus, subscapularis, and infraspinatus combined with teres minor. Arm motion and joint stability were achieved by the muscles. The reverse and anatomic Aequalis prostheses (Tornier Inc) were inserted. Two scapulohumeral rhythms were considered for each prosthesis: a normal 2:1 rhythm and an altered 1:2 rhythm. For the 4 configurations, a movement of abduction in the scapular plane was simulated. The glenohumeral force and contact pattern, as well as the stress in the polyethylene and cement, were evaluated. Results: With the anatomical prosthesis, the glenohumeral force increased by 23% for the altered rhythm, with a more eccentric (posterior and superior) contact. The contact pressure, polyethylene stress and cement stress increased by 20%, 48% and 64%, respectively. With the reverse prosthesis, the glenohumeral force increased by 11% for the altered rhythm. There was almost no effect on the contact pattern on the polyethylene component surface. Conclusion: The present study showed that alteration of the scapulohumeral rhythm induces biomechanical consequences which could compromise the long-term survival of the glenoid implant of anatomic prostheses. However, an altered scapulohumeral rhythm, even a severe one, should not be a contraindication for the use of a reverse prosthesis.
Abstract:
In vivo dosimetry is a way to verify the radiation dose delivered to the patient by measuring the dose, generally during the first fraction of the treatment. It is the only dose delivery control based on a measurement performed during the treatment. In today's radiotherapy practice, the dose delivered to the patient is planned using 3D dose calculation algorithms and volumetric images representing the patient. Due to the high accuracy and precision necessary in radiation treatments, national and international organisations such as the ICRU and AAPM recommend the use of in vivo dosimetry. It is also mandatory in some countries, such as France. Various in vivo dosimetry methods have been developed during the past years. These methods are point-, line-, plane- or 3D dose controls. 3D in vivo dosimetry provides the most information about the dose delivered to the patient, compared with 1D and 2D methods. However, to our knowledge, it is generally not yet routinely applied to patient treatments. The aim of this PhD thesis was to determine whether it is possible to reconstruct the 3D delivered dose using transmitted beam measurements in the context of narrow beams. An iterative dose reconstruction method has been described and implemented. The iterative algorithm includes a simple 3D dose calculation algorithm based on the convolution/superposition principle. The methodology was applied to narrow beams produced by a conventional 6 MV linac. The transmitted dose was measured using an array of ion chambers, so as to simulate the linear nature of a tomotherapy detector. We showed that the iterative algorithm converges quickly and reconstructs the dose with good agreement (at least 3% / 3 mm locally), which is within the 5% recommended by the ICRU. Moreover, it was demonstrated on phantom measurements that the proposed method allows the detection of some set-up errors and interfraction geometry modifications. We also discuss the limitations of 3D dose reconstruction for dose delivery error detection. Afterwards, stability tests of the built-in tomotherapy MVCT onboard detector were performed in order to evaluate whether such a detector is suitable for 3D in vivo dosimetry. The detector showed short- and long-term stability comparable to that of other imaging devices, such as EPIDs, which are also used for in vivo dosimetry. Subsequently, a methodology for dose reconstruction using the tomotherapy MVCT detector is proposed in the context of static irradiations. This manuscript is composed of two articles and a script providing further information related to this work. In the latter, the first chapter introduces the state of the art of in vivo dosimetry and adaptive radiotherapy, and explains why we are interested in performing 3D dose reconstructions. In chapter 2, the dose calculation algorithm implemented for this work is reviewed, with a detailed description of the physical parameters needed for calculating 3D absorbed dose distributions. The tomotherapy MVCT detector used for transit measurements and its characteristics are described in chapter 3. Chapter 4 contains a first article entitled '3D dose reconstruction for narrow beams using ion chamber array measurements', which describes the dose reconstruction method and presents tests of the methodology on phantoms irradiated with 6 MV narrow photon beams. Chapter 5 contains a second article entitled 'Stability of the Helical TomoTherapy HiArt II detector for treatment beam irradiations'.
A dose reconstruction process specific to the use of the tomotherapy MVCT detector is presented in chapter 6. A discussion and perspectives of the PhD thesis are presented in chapter 7, followed by a conclusion in chapter 8. The tomotherapy treatment device is described in appendix 1, and an overview of 3D conformal and intensity-modulated radiotherapy is presented in appendix 2.
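As a heavily simplified illustration of the iterative reconstruction idea described in the abstract above, the following Python sketch corrects a dose estimate by the ratio of measured to computed transmitted signal until the two agree; the real method uses a 3D convolution/superposition dose engine, whereas the forward model, profile and convergence criterion here are toy assumptions:

```python
# A hypothetical, heavily simplified sketch of an iterative transit-dose correction loop.
import numpy as np

def forward_model(dose_profile, attenuation=0.7):
    """Toy transmission model: transmitted signal proportional to attenuated dose."""
    return attenuation * dose_profile

measured = forward_model(np.array([1.00, 1.05, 0.95, 1.10, 1.00]))  # "measured" transit signal
estimate = np.ones(5)                                               # initial (planned) dose profile

for _ in range(20):
    computed = forward_model(estimate)
    ratio = measured / computed
    estimate *= ratio                       # multiplicative correction
    if np.max(np.abs(ratio - 1)) < 1e-3:    # converged within 0.1%
        break

print(np.round(estimate, 3))   # recovers the profile that produced the measurement
```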