963 results for Subpixel precision


Relevance:

10.00%

Publisher:

Abstract:

Within a developing organism, cells require information on where they are in order to differentiate into the correct cell-type. Pattern formation is the process by which cells acquire and process positional cues and thus determine their fate. This can be achieved by the production and release of a diffusible signaling molecule, called a morphogen, which forms a concentration gradient: exposure to different morphogen levels leads to the activation of specific signaling pathways. Thus, in response to the morphogen gradient, cells start to express different sets of genes, forming domains characterized by a unique combination of differentially expressed genes. As a result, a pattern of cell fates and specification emerges. Though morphogens have been known for decades, it is not yet clear how these gradients form and are interpreted in order to yield highly robust patterns of gene expression. During my PhD thesis, I investigated the properties of Bicoid (Bcd) and Decapentaplegic (Dpp), two morphogens involved in the patterning of the anterior-posterior axis of the Drosophila embryo and wing primordium, respectively. In particular, I have been interested in understanding how the pattern proportions are maintained across embryos of different sizes or within a growing tissue. This property is commonly referred to as scaling and is essential for yielding functional organs or organisms. In order to tackle these questions, I analysed fluorescence images showing the pattern of gene expression domains in the early embryo and wing imaginal disc. After characterizing the extent of these domains in a quantitative and systematic manner, I introduced and applied a new scaling measure in order to assess how well proportions are maintained. I found that scaling emerged as a universal property both in early embryos (at least far away from the Bcd source) and in wing imaginal discs (across different developmental stages). Since we were also interested in understanding the mechanisms underlying scaling and how it is transmitted from the morphogen to the target genes down the signaling cascade, I also quantified scaling in mutant flies where this property could be disrupted. While scaling is largely conserved in embryos with altered bcd dosage, my modeling suggests that Bcd trapping by the nuclei as well as pre-steady-state decoding of the morphogen gradient are essential to ensure precise and scaled patterning of the Bcd signaling cascade. In the wing imaginal disc, it appears that as the disc grows, the Dpp response expands and scales with the tissue size. Interestingly, scaling is not perfect at all positions in the field. The scaling of the target gene domains is best where they have a function; Spalt, for example, scales best at the position in the anterior compartment where it helps to form one of the anterior veins of the wing. Analysis of mutants for pentagone, a transcriptional target of Dpp that encodes a secreted feedback regulator of the pathway, indicates that Pentagone plays a key role in scaling the Dpp gradient activity.

Relevance:

10.00%

Publisher:

Abstract:

The use of belts in high-precision applications has become feasible thanks to the rapid development in motor and drive technology as well as the implementation of timing belts in servo systems. Belt drive systems provide high speed and acceleration, accurate and repeatable motion with high efficiency, long stroke lengths and low cost. This work examines the modeling of a linear belt-drive system and the design of its position control. Friction phenomena and the position-dependent elasticity of the belt are analyzed. Computer simulation results show that the developed model is adequate. A PID controller for accurate tracking and position control is designed and applied to the real test setup. Both the simulation and the experimental results demonstrate that the designed controller meets the performance specifications.
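The abstract does not give the controller equations; as a minimal sketch of the kind of discrete-time PID position controller described, the snippet below closes a loop around a crude placeholder plant. The gains, sample time and plant dynamics are hypothetical, not values from the study.

```python
# Minimal discrete-time PID position controller sketch (illustrative only).
# Gains, sample time and the first-order "carriage" model are hypothetical.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.001                       # 1 kHz control loop
pid = PID(kp=50.0, ki=5.0, kd=0.5, dt=dt)
position, velocity = 0.0, 0.0
for _ in range(2000):
    u = pid.update(setpoint=0.1, measurement=position)   # 0.1 m step reference
    velocity += (u - 2.0 * velocity) * dt                 # placeholder dynamics
    position += velocity * dt
print(f"final position: {position:.4f} m")
```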

Relevance:

10.00%

Publisher:

Abstract:

The establishment of legislative rules about explosives in the 1980s reduced the illicit use of military and civilian explosives. However, bomb-makers have rapidly taken advantage of easily accessible substances intended for licit uses to produce their own explosives. This change in strategy has given rise to an increase in improvised explosive charges, further assisted by the ease of implementing the recipes, which are widely available through open sources. While the nature of the explosive charges has evolved, the instrumental methods currently used in routine casework, although more sensitive than before, have a limited power of discrimination and mostly allow only the determination of the chemical nature of the substance. Isotope ratio mass spectrometry (IRMS) has been applied to a wide range of forensic materials. Conclusions drawn from the majority of the studies stress its high power of discrimination. Preliminary studies conducted so far on the isotopic analysis of intact explosives (pre-blast) have shown that samples with the same chemical composition and coming from different sources could be differentiated. The measurement of stable isotope ratios therefore appears to be a new and remarkable analytical tool for the discrimination or the identification of a substance from a definite source. However, much research is still needed to assess the validity of the results before they can be used either in an operational perspective or in court. Through the isotopic study of black powders and ammonium nitrates, this research aims at evaluating the contribution of isotope ratio mass spectrometry to the investigation of explosives, from both a pre-blast and a post-blast approach. More specifically, the goal of the research is to provide the additional elements necessary for a valid interpretation of the results when used in explosives investigation. This work includes a fundamental study of the variability of the isotopic profile of black powder and ammonium nitrate in both space and time. On the one hand, the inter-variability between manufacturers and, particularly, the intra-variability within a manufacturer have been studied. On the other hand, the stability of the isotopic profile over time has been evaluated through the aging of these substances exposed to different environmental conditions. The second part of this project considers the applicability of this high-precision technology to traces and residues of explosives, taking into account the characteristics specific to the field, including their sampling, a probable isotopic fractionation during the explosion, and the interferences with the matrix of the site.
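The abstract does not spell out how isotope ratios are reported; a common convention in IRMS work (assumed here, not taken from this study) is the delta notation relative to an international standard, for example for carbon:

```python
# Delta notation commonly used in isotope ratio mass spectrometry (IRMS).
# R_VPDB is a commonly quoted 13C/12C value for the VPDB standard (slightly
# different values appear in the literature); the sample ratio is invented
# for illustration and is not data from this study.

R_VPDB = 0.0111802          # 13C/12C of the VPDB standard (approximate)

def delta_permil(r_sample, r_standard=R_VPDB):
    """Delta value in per mil: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

print(f"d13C = {delta_permil(0.0110950):+.2f} per mil")   # hypothetical sample
```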

Relevance:

10.00%

Publisher:

Abstract:

The present work describes a fast gas chromatography/negative-ion chemical ionization tandem mass spectrometric assay (Fast GC/NICI-MS/MS) for the analysis of tetrahydrocannabinol (THC), 11-hydroxy-tetrahydrocannabinol (THC-OH) and 11-nor-9-carboxy-tetrahydrocannabinol (THC-COOH) in whole blood. The cannabinoids were extracted from 500 microL of whole blood by a simple liquid-liquid extraction (LLE) and then derivatized using trifluoroacetic anhydride (TFAA) and hexafluoro-2-propanol (HFIP) as fluorinated derivatizing agents. Mass spectrometric detection of the analytes was performed in the selected reaction monitoring mode on a triple quadrupole instrument after negative-ion chemical ionization. The assay was found to be linear in the concentration range of 0.5-20 ng/mL for THC and THC-OH, and of 2.5-100 ng/mL for THC-COOH. Repeatability and intermediate precision were found to be less than 12% for all concentrations tested. Under standard chromatographic conditions, the run cycle time would have been 15 min; by using fast separation conditions, the analysis time was reduced to 5 min without compromising the chromatographic resolution. Finally, a simple approach for estimating the measurement uncertainty is presented.
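The abstract states only that the assay is linear over the quoted ranges and that a simple uncertainty estimate is presented. As a generic sketch (with invented calibrator data, and not the paper's actual procedure), a linear calibration with back-calculation and a textbook-style repeatability uncertainty could look as follows:

```python
# Generic linear calibration sketch for a quantitative GC-MS/MS assay.
# Calibrator concentrations/areas and replicate results are invented for
# illustration and are NOT data from the published THC method.
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])               # ng/mL calibrators
area_ratio = np.array([0.051, 0.098, 0.21, 0.49, 1.02, 1.98])   # analyte/IS

slope, intercept = np.polyfit(conc, area_ratio, 1)

def back_calculate(ratio):
    """Concentration (ng/mL) from a measured analyte/internal-standard ratio."""
    return (ratio - intercept) / slope

# Simple standard uncertainty from replicate measurements of one sample
# (one common approach, not necessarily the one used in the paper).
replicates = np.array([4.8, 5.1, 4.9, 5.2])                      # ng/mL, hypothetical
u_repeat = replicates.std(ddof=1) / np.sqrt(len(replicates))
print(f"mean = {replicates.mean():.2f} +/- {u_repeat:.2f} ng/mL (k=1)")
print(f"back-calculated for ratio 0.75: {back_calculate(0.75):.2f} ng/mL")
```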

Relevance:

10.00%

Publisher:

Abstract:

Diagnosis and treatment of arterial hypertension are essential in order to reduce the mortality and morbidity associated with this condition. The decision to treat hypertension is often based on serial office blood pressure measurements, but new non-invasive measurements, such as pulse wave velocity or central blood pressure measurement using pulse wave analysis, can be useful to assess cardiovascular risk with more precision. Indeed, pulse wave velocity, which is a marker of arterial stiffness, is an independent risk factor for future cardiovascular events. Non-pharmacological and pharmacological therapies can affect both pulse wave velocity and central pressure. However, more studies are needed in order to determine whether these measurements can be used as surrogate markers of cardiovascular disease.
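The abstract does not define how pulse wave velocity is obtained; in practice it is commonly computed as an arterial path length divided by the pulse transit time between two measurement sites (carotid-femoral being the usual reference). A minimal illustration with hypothetical numbers:

```python
# Pulse wave velocity (PWV) as path length over pulse transit time.
# The distance and transit time below are hypothetical example values.

def pulse_wave_velocity(path_length_m, transit_time_s):
    return path_length_m / transit_time_s

pwv = pulse_wave_velocity(path_length_m=0.55, transit_time_s=0.065)
# Values around 10 m/s or more are often considered elevated.
print(f"carotid-femoral PWV ~ {pwv:.1f} m/s")
```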

Relevance:

10.00%

Publisher:

Abstract:

Personal results are presented to illustrate the development of immunoscintigraphy for the detection of cancer over the last 12 years, from the early experimental results in nude mice grafted with human colon carcinoma to the most modern form of immunoscintigraphy applied to patients, using I123 labeled Fab fragments from monoclonal anti-CEA antibodies detected by single photon emission computerized tomography (SPECT). The first generation of immunoscintigraphy used I131 labeled, immunoadsorbent purified, polyclonal anti-CEA antibodies and planar scintigraphy as the detection system. The second generation used I131 labeled monoclonal anti-CEA antibodies and SPECT, while the third generation employed I123 labeled fragments of monoclonal antibodies and SPECT. The improvement in the precision of tumor images with the most recent forms of immunoscintigraphy is obvious. However, we think the usefulness of immunoscintigraphy for routine cancer management has not yet been entirely demonstrated. Further prospective trials are still necessary to determine the precise clinical role of immunoscintigraphy. A case report is presented on a patient with two liver metastases from a sigmoid carcinoma, who received through the hepatic artery a therapeutic dose (100 mCi) of I131 coupled to 40 mg of a mixture of two high-affinity anti-CEA monoclonal antibodies. Excellent localisation of the I131 labeled antibodies in the metastases was demonstrated by SPECT and the treatment was well tolerated. The irradiation dose to the tumor, however, was too low at 4300 rads (with 1075 rads to the normal liver and 88 rads to the bone marrow), and no evidence of tumor regression was obtained. Different approaches for increasing the irradiation dose delivered to the tumor by the antibodies are considered.

Relevance:

10.00%

Publisher:

Abstract:

Location information is becoming increasingly necessary as every new smartphone incorporates a GPS (Global Positioning System) receiver, which allows the development of various location-based applications. However, the GPS signal cannot be properly received in indoor environments. For this reason, new indoor positioning systems are being developed. As indoor environments are very challenging scenarios, it is necessary to study the precision of the obtained location information in order to determine whether these new positioning techniques are suitable for indoor positioning.
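The abstract does not say which precision metrics are used; typical choices when evaluating a positioning system are the mean error, RMSE and an error percentile over a set of test points. A hedged sketch with invented coordinates:

```python
# Common accuracy metrics for evaluating a positioning system.
# The true/estimated positions below are invented test points, not data
# from the study.
import numpy as np

true_xy = np.array([[0.0, 0.0], [5.0, 2.0], [10.0, 4.0], [3.0, 8.0]])
est_xy  = np.array([[0.4, -0.2], [5.6, 2.3], [9.1, 4.8], [3.9, 7.4]])

errors = np.linalg.norm(est_xy - true_xy, axis=1)   # per-point 2-D error (m)
print(f"mean error : {errors.mean():.2f} m")
print(f"RMSE       : {np.sqrt((errors**2).mean()):.2f} m")
print(f"95th pct.  : {np.percentile(errors, 95):.2f} m")
```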

Relevance:

10.00%

Publisher:

Abstract:

Nowadays, typical motor junction boxes do not incorporate cable glands, which would provide good electrical performance in terms of electromagnetic compatibility. In this paper, a manufacturability and assembly analysis for a new construction of a rigid-body feeder cable junction of an electric motor is presented, especially for converter drives (practical tests were carried out at LUT during 2007). Although the cable junction should also clamp the cable to provide enough tensile strength, the phase conductors should not get squashed by the grounding connection. In order to ensure good electrical performance, especially in converter drives, the grounding of the cable should be connected 360 degrees around the cable. In this paper, the following manufacturing technologies are discussed: traditional turning, precision and centrifugal casting, and rotational moulding. DFM(A) aspects are presented in detail.

Relevance:

10.00%

Publisher:

Abstract:

The LHCb experiment is being built at the future LHC accelerator at CERN. It is a forward single-arm spectrometer dedicated to precision measurements of CP violation and rare decays in the b-quark sector. It is presently finishing its R&D and final design stage; construction has already started for the magnet and the calorimeters. In the Standard Model, CP violation arises via the complex phase of the 3x3 CKM (Cabibbo-Kobayashi-Maskawa) quark mixing matrix. The LHCb experiment will test the unitarity of this matrix by measuring, in several theoretically unrelated ways, all angles and sides of the so-called "unitarity triangle". This will make it possible to over-constrain the model and, hopefully, to reveal inconsistencies that would be a signal of physics beyond the Standard Model. Vertex reconstruction is a fundamental requirement for the LHCb experiment: displaced secondary vertices are a distinctive feature of b-hadron decays, and this signature is used in the LHCb topology trigger. The Vertex Locator (VeLo) has to provide precise measurements of track coordinates close to the interaction region. These are used to reconstruct production and decay vertices of beauty hadrons and to provide accurate measurements of their decay lifetimes. The Vertex Locator electronics is an essential part of the data acquisition system and must conform to the overall LHCb electronics specification. The design of the electronics must maximise the signal-to-noise ratio in order to achieve the best tracking reconstruction performance in the detector. The electronics was designed in parallel with the silicon detector development and went through several prototyping phases, which are described in this thesis.

Relevance:

10.00%

Publisher:

Abstract:

The Belle experiment is located at the KEK research centre (Japan) and is primarily devoted to the study of CP violation in the B meson sector. Belle is placed on the KEKB collider, one of the two currently running "B-meson factories", which produce B anti-B pairs. KEKB has created more than 150 million pairs in total, a world record for this kind of collider. This large sample allows very precise measurements in the physics of beauty mesons, and the present analysis falls within the framework of these precision measurements. One of the most remarkable phenomena in high-energy physics is the ability of the weak interaction to couple a neutral meson to its anti-meson. In this work, we study the coupling of the neutral B meson with the neutral anti-B meson, which induces an oscillation whose frequency Δmd can be measured accurately. Besides the interest of this phenomenon in itself, the measurement plays an important role in the quest for the origin of CP violation, which the Standard Model of electroweak interactions does not include in a fully satisfactory way. The search for as yet unexplained physical phenomena is, therefore, the main motivation of the Belle collaboration. Many measurements of Δmd have been performed before; the present work, however, reaches a precision on Δmd never attained before, thanks to the excellent performance of KEKB and to an original approach that considerably reduces the background contamination of the selected events. This approach had already been used successfully by other experiments, in conditions slightly different from those of Belle. The method consists in partially reconstructing one of the B mesons in the channel anti-B0 → D*(D0 π) l ν, using only the information on the lepton l and the pion π. The information on the other B meson of the initial B anti-B pair is extracted from a single high-energy lepton. The available sample therefore does not suffer from the large reductions implied by a full reconstruction, while the contamination from charged B mesons, which KEKB produces in equal quantity to the B0, is strongly suppressed compared with an inclusive analysis. We finally obtain the following result: Δmd = 0.513±0.006±0.008 ps^-1, where the first error is statistical and the second systematic.

What is matter made of? How does it hold together? These are the questions that research in high-energy physics tries to answer. This research is conducted at two constantly interacting levels. On the one hand, theoretical models are built to try to understand and describe the observations; on the other hand, the observations are made by means of high-energy collisions of elementary particles. In this way the existence of four fundamental forces and of 24 elementary constituents, classified into "quarks" and "leptons", has been established. This is one of the finest successes of the model in use today, the "Standard Model". There is, however, one fundamental observation that the Standard Model struggles to explain: the almost complete disappearance of antimatter (the "negative" of matter). At the fundamental level, this must correspond to an asymmetry between particles (the constituents of matter) and antiparticles (the constituents of antimatter), called the CP asymmetry (or CP violation). Although included in the Standard Model, this asymmetry appears to be only partially accounted for, and its origin is unknown. Intense research is therefore being undertaken to shed light on this asymmetry, and the Belle experiment in Japan is one of its pioneers: Belle studies the physics of a family of particles called "B mesons", which are known to be closely linked to the CP asymmetry. This thesis is part of that research. We have studied a remarkable property of the neutral B meson: its oscillation with its anti-meson, in which the particle turns into its associated antiparticle. This oscillation is clearly related to the CP asymmetry. We have determined the frequency of this oscillation with an as yet unequalled precision. The method used consists in characterising a pair of B mesons through their decays, each containing a lepton; a higher precision is obtained by also searching for a particle called the pion, which comes from the decay of one of the mesons. Besides the interest of this oscillatory phenomenon in itself, this measurement allows the Standard Model to be refined, directly or indirectly. It may also, in time, help to elucidate the mystery of the asymmetry between matter and antimatter.
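The oscillation frequency Δmd quoted above enters the textbook time-dependent mixing probability; the sketch below evaluates that standard formula (it is not the thesis' fitting procedure), using the measured Δmd and an assumed world-average B0 lifetime of roughly 1.52 ps.

```python
# Time-dependent B0 mixing probabilities (standard textbook formulae,
# neglecting CP violation and lifetime differences):
#   P(unmixed, t) ~ exp(-t/tau) * (1 + cos(dm*t)) / 2
#   P(mixed,   t) ~ exp(-t/tau) * (1 - cos(dm*t)) / 2
# dm is the measured value from the abstract; tau ~ 1.52 ps is an assumed
# world-average B0 lifetime, not a number from this work.
import numpy as np

dm  = 0.513      # ps^-1
tau = 1.52       # ps

t = np.linspace(0.0, 10.0, 6)                    # proper time in ps
unmixed = np.exp(-t / tau) * (1 + np.cos(dm * t)) / 2
mixed   = np.exp(-t / tau) * (1 - np.cos(dm * t)) / 2
for ti, u, m in zip(t, unmixed, mixed):
    print(f"t = {ti:4.1f} ps   unmixed = {u:.3f}   mixed = {m:.3f}")
```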

Relevance:

10.00%

Publisher:

Abstract:

Freezing point depressions (ΔTf) of dilute solutions of several alkali metal chlorides and bromides were calculated by means of the best activity coefficient equations. In the calculations, the Hückel, Hamer and Pitzer equations were used for the activity coefficients. The experimental ΔTf values available in the literature for dilute LiCl, NaCl and KBr solutions can be predicted within experimental error by the Hückel equations used. The experimental ΔTf values for dilute LiCl and KBr solutions can also be accurately calculated by the corresponding Pitzer equations, and those for dilute NaCl solutions by the Hamer equation for this salt. Neither the Hamer nor the Pitzer equations accurately predict the freezing points reported in the literature for LiBr and NaBr solutions. The ΔTf values available for dilute solutions of RbCl, CsCl or CsBr are not accurately known at the moment because the existing data for these solutions are not precise. The freezing point depressions are tabulated in the present study for LiCl, NaCl and KBr solutions at several rounded molalities. The ΔTf values in this table can be highly recommended. The activity coefficient equations used in the calculation of these values have been tested with almost all high-precision electrochemical data measured at 298.15 K.
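For orientation only: in the ideal-solution limit the freezing point depression of a 1:1 electrolyte is ΔTf ≈ ν·Kf·m with Kf(water) ≈ 1.86 K·kg·mol⁻¹, and a first correction multiplies this by an osmotic coefficient. The activity-coefficient treatments named in the abstract (Hückel, Hamer, Pitzer) are more elaborate; the snippet below evaluates only the simple approximation with an illustrative osmotic coefficient and is not the paper's calculation.

```python
# Freezing point depression of a dilute 1:1 electrolyte in water.
# Ideal limit:                 dTf = nu * Kf * m
# Approx. with osmotic coeff.: dTf = nu * Kf * m * phi
# Kf is the cryoscopic constant of water; phi = 0.97 is merely an illustrative
# value, not one computed from the Huckel/Pitzer/Hamer models.

KF_WATER = 1.86      # K kg mol^-1

def delta_tf(molality, nu=2, phi=1.0, kf=KF_WATER):
    return nu * kf * molality * phi

m = 0.1              # mol kg^-1 NaCl, example molality
print(f"ideal:            {delta_tf(m):.3f} K")
print(f"with phi = 0.97:  {delta_tf(m, phi=0.97):.3f} K")
```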

Relevance:

10.00%

Publisher:

Abstract:

Palinspastic reconstructions offer an ideal framework for geological, geographical, oceanographic and climate studies. As historians of the Earth, "reconstructers" try to decipher its past. Since they have known that continents move, geologists have been trying to retrieve the distribution of the continents through the ages. If Wegener's view of continental motion was revolutionary at the beginning of the 20th century, we have known since the early 1960s that continents do not drift aimlessly in the oceanic realm but are included in a larger set comprising both oceanic and continental crust: the tectonic plates. Unfortunately, mainly for technical and historical reasons, this idea still does not receive a sufficient echo in the reconstruction community. However, we are intimately convinced that, by applying specific methods and principles, we can escape the traditional "Wegenerian" point of view and, at last, reach real plate tectonics. The main aim of this study is to defend this point of view by exposing, with all necessary details, our methods and tools. Starting with the paleomagnetic and paleogeographic data classically used in reconstruction studies, we developed a modern methodology placing the plates and their kinematics at the centre of the issue. Using assemblies of continents (referred to as "key assemblies") as anchors distributed over the whole scope of our study (ranging from Eocene to Cambrian times), we develop geodynamic scenarios leading from one assembly to the next, from the past to the present. In between, lithospheric plates are progressively reconstructed by adding/removing oceanic material (symbolised by synthetic isochrons) to the major continents. Except during collisions, plates are moved as single rigid entities. The only evolving elements are the plate boundaries, which are preserved, follow a consistent geodynamic evolution through time and always form an interconnected network through space. This "dynamic plate boundaries" approach integrates plate buoyancy factors, ocean spreading rates, subsidence patterns, stratigraphic and paleobiogeographic data, as well as major tectonic and magmatic events. It offers good control on plate kinematics and provides severe constraints for the model. This multi-source approach requires efficient data management. Prior to this study, the critical mass of necessary data had become an almost insurmountable obstacle. GIS (Geographic Information Systems) and geodatabases are modern informatics tools specifically devoted to storing, analysing and managing data and associated attributes spatially referenced on the Earth. By developing the PaleoDyn database in the ArcGIS software we converted the mass of scattered data offered by the geological record into valuable geodynamic information easily accessible for the creation of reconstructions. At the same time, by programming specific tools, we both facilitated the reconstruction work (task automation) and enhanced the model (by greatly increasing the kinematic control of plate motions through plate velocity models). Based on the 340 newly defined terranes, we developed a revised set of 35 reconstructions, each associated with its own velocity model. Using this unique dataset we are now able to tackle major issues of modern geology, such as global sea-level variations and climate change. We started by studying one of the major unsolved issues of modern plate tectonics: the driving mechanism of plate motions. We observed that, throughout the Earth's history, plate rotation poles (describing plate motions across the Earth's surface) tend to follow a roughly linear distribution along a band going from the northern Pacific through northern South America, the central Atlantic, northern Africa and central Asia up to Japan. Basically, this means that plates tend to escape this median plane. In the absence of an unidentified methodological bias, we interpreted this as the potential secular influence of the Moon on plate motions. The oceanic realm is the cornerstone of our model and we took particular care to reconstruct it in detail. In this model, the oceanic crust is preserved from one reconstruction to the next. The crustal material is symbolised by synthetic isochrons whose ages are known. We also reconstruct the margins (active or passive), the mid-ocean ridges and the intra-oceanic subductions. Using this detailed oceanic dataset, we developed unique 3-D bathymetric models offering a better precision than all previously existing ones.
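Plate rotation poles describe plate motion as a rigid rotation on the sphere; as a minimal, generic illustration of that idea (not code from the PaleoDyn workflow), the snippet below rotates a point about an Euler pole using Rodrigues' rotation formula.

```python
# Rotate a geographic point about an Euler pole (rigid plate rotation on a
# sphere). Generic textbook computation, not part of the PaleoDyn tools;
# the pole, angle and point below are hypothetical examples.
import numpy as np

def to_xyz(lat_deg, lon_deg):
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def to_latlon(v):
    return np.degrees(np.arcsin(v[2])), np.degrees(np.arctan2(v[1], v[0]))

def rotate(point_xyz, pole_xyz, angle_deg):
    """Rodrigues' rotation of a unit vector about a unit rotation axis."""
    a = np.radians(angle_deg)
    k = pole_xyz / np.linalg.norm(pole_xyz)
    return (point_xyz * np.cos(a)
            + np.cross(k, point_xyz) * np.sin(a)
            + k * np.dot(k, point_xyz) * (1 - np.cos(a)))

pole  = to_xyz(60.0, -30.0)     # Euler pole at 60N, 30W
point = to_xyz(10.0,  40.0)     # point to be restored/rotated
print(to_latlon(rotate(point, pole, 20.0)))   # new latitude, longitude
```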

Relevance:

10.00%

Publisher:

Abstract:

The ultimate goal of any research in the mechanism/kinematics/design area may be called predictive design, i.e. the optimisation of mechanism proportions at the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through the development and refinement of numerical (computational) technology in order to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As a part of a systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions of the problem are obtained by adopting closed-form classical or modern algebraic solution methods or by using numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are: (ia) limitations on the number of design specifications and (iia) failure in handling design constraints, especially inequality constraints. The main drawbacks of the approximate synthesis formulations are: (ib) it is difficult to choose a proper initial linkage and (iib) it is hard to find more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints. Based on practical design needs, the mixed exact-approximate position synthesis, with two exact and an unlimited number of approximate positions, has also been developed; the solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. Through the literature research it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables, but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles from the mathematical research area of algebraic geometry for solving parametric algebraic systems of n equations and at least n+1 variables (parametric in the mathematical sense that all parameter values for which the system is solvable, including the degenerate cases, are considered). Adopting the developed solution method to solve the dyadic equations in direct polynomial form for two- to three-precision-point problems, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be solved.
The positive-dimensional solution sets associated with the poles might contain physically meaningful solutions in the form of optimal defect-free mechanisms. Traditionally, the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This results in optimal component design rather than optimal system-level design. Modern mechanism optimisation at the system level demands the integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated into mechanical system simulation techniques. The developed kinematic design method is based on combinations of the two-precision-point formulation and on optimisation (with mathematical programming techniques or by adopting optimisation methods based on probability and statistics) of substructures, using criteria calculated from the system-level response of multi-degree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, the drawbacks (ia)-(iib) have been eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the design method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when the developed kinematic design method is integrated with the mechanical system simulation techniques.
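As a concrete, generic illustration of the exact-synthesis idea of solving n equations in n unknowns (a classical textbook case, not the dyadic/algebraic-geometry formulation developed in the thesis), a three-precision-point four-bar function generation problem can be solved linearly via Freudenstein's equation:

```python
# Three-precision-point four-bar function generation via Freudenstein's
# equation: K1*cos(psi) - K2*cos(phi) + K3 = cos(phi - psi),
# with K1 = d/a, K2 = d/c, K3 = (a^2 - b^2 + c^2 + d^2) / (2*a*c).
# Classical textbook exact synthesis (3 equations, 3 unknowns); the angle
# pairs below are arbitrary example values, not from the thesis.
import numpy as np

phi = np.radians([30.0, 45.0, 60.0])    # input link angles at precision points
psi = np.radians([50.0, 60.0, 75.0])    # desired output link angles

A = np.column_stack([np.cos(psi), -np.cos(phi), np.ones(3)])
b = np.cos(phi - psi)
K1, K2, K3 = np.linalg.solve(A, b)      # exact synthesis: 3 linear equations

d = 1.0                                  # fix the frame (ground link) length
a = d / K1                               # input link
c = d / K2                               # output link
b_len = np.sqrt(a**2 + c**2 + d**2 - 2 * a * c * K3)   # coupler
print(f"a = {a:.3f}, b = {b_len:.3f}, c = {c:.3f}, d = {d:.3f}")
```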

Relevance:

10.00%

Publisher:

Abstract:

The geometric characterisation of tree orchards is a high-precision activity comprising the accurate measurement and knowledge of the geometry and structure of the trees. Different types of sensors can be used to perform this characterisation. In this work a terrestrial LIDAR sensor (SICK LMS200), whose emission source was a 905-nm pulsed laser diode, was used. Given the known dimensions of the laser beam cross-section (with diameters ranging from 12 mm at the point of emission to 47.2 mm at a distance of 8 m), and the known dimensions of the elements that make up the crops under study (flowers, leaves, fruits, branches, trunks), it was anticipated that, for much of the time, the laser beam would only partially hit a foreground target/object, with the consequent problem of mixed pixels or edge effects. Understanding what happens in such situations was the principal objective of this work. With this in mind, a series of tests was set up to determine the geometry of the emitted beam and the response of the sensor to different beam blockage scenarios. The main conclusions drawn from the results obtained were: (i) in a partial beam blockage scenario, the distance value given by the sensor depends more on the blocked radiant power than on the blocked surface area; (ii) there is an area of influence on the measurements, dependent on the percentage of blockage, which extends from 1.5 to 2.5 m with respect to the foreground target/object; if the laser beam impacts a second target/object located within this range, the measurement given by the sensor will be affected. To interpret the information obtained from the point clouds provided by LIDAR sensors, such as the volume occupied and the enclosing area, it is necessary to know the resolution and the process used to obtain this mesh of points, and also to be aware of the problem associated with mixed pixels.
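The quoted beam diameters (12 mm at emission, 47.2 mm at 8 m) imply a roughly linear beam growth of about 4.4 mrad. Assuming that linear model (an assumption for illustration, not a statement from the paper), the footprint at any range, and hence how easily small canopy elements are only partially covered, can be estimated as follows:

```python
# Beam footprint versus range, assuming linear growth between the two
# diameters quoted in the abstract (12 mm at 0 m, 47.2 mm at 8 m).
D0, D8, RANGE = 0.012, 0.0472, 8.0          # metres
growth = (D8 - D0) / RANGE                  # ~4.4 mrad equivalent full-angle

def beam_diameter(r):
    return D0 + growth * r

for r in (1.0, 2.0, 4.0, 8.0):
    d = beam_diameter(r)
    print(f"range {r:3.1f} m -> beam diameter {d*1000:5.1f} mm")
# A leaf a few centimetres wide can already be comparable to the footprint at
# short range, producing the mixed-pixel (edge) effect studied in the paper.
```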

Relevance:

10.00%

Publisher:

Abstract:

Electronic canopy characterization is an important issue in tree crop management. Ultrasonic and optical sensors are the most used for this purpose. The objective of this work was to assess the performance of an ultrasonic sensor under laboratory and field conditions in order to provide reliable estimations of distance measurements to apple tree canopies. To this purpose, a methodology has been designed to analyze sensor performance in relation to foliage ranging and to interferences with adjacent sensors when working simultaneously. Results show that the average error in distance measurement using the ultrasonic sensor in laboratory conditions is ±0.53 cm. However, the increase of variability in field conditions reduces the accuracy of this kind of sensors when estimating distances to canopies. The average error in such situations is ±5.11 cm. When analyzing interferences of adjacent sensors 30 cm apart, the average error is ±17.46 cm. When sensors are separated 60 cm, the average error is ±9.29 cm. The ultrasonic sensor tested has been proven to be suitable to estimate distances to the canopy in field conditions when sensors are 60 cm apart or more and could, therefore, be used in a system to estimate structural canopy parameters in precision horticulture.
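The abstract does not restate the ranging principle; ultrasonic sensors of this kind typically derive distance from the echo time of flight, distance = c·t/2, with c the speed of sound in air (about 343 m/s at 20 °C and slightly temperature dependent). A minimal sketch with hypothetical numbers:

```python
# Ultrasonic ranging: distance from echo time of flight, d = c * t / 2.
# The temperature correction below is the standard approximation for the
# speed of sound in air; the echo time is a hypothetical example value.

def speed_of_sound(temp_c):
    return 331.3 + 0.606 * temp_c          # m/s, approximate

def distance_m(echo_time_s, temp_c=20.0):
    return speed_of_sound(temp_c) * echo_time_s / 2.0

t_echo = 0.0070                            # s, hypothetical echo delay
print(f"estimated distance: {distance_m(t_echo)*100:.1f} cm")
```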