79 results for Verifications
Abstract:
Numerical simulation experiments give insight into the evolving energy partitioning during high-strain torsion experiments on calcite. Our numerical experiments are designed to derive a generic macroscopic grain-size-sensitive flow law capable of describing the full evolution from the transient regime to steady state. The transient regime is crucial for understanding the importance of microstructural processes that may lead to strain localization phenomena in deforming materials. This is particularly important in geological and geodynamic applications, where strain localization happens outside the time frame that can be observed under controlled laboratory conditions. Our method is based on an extension of the paleowattmeter approach to the transient regime. We add an empirical hardening law using the Ramberg-Osgood approximation and assess the experiments by an evolution test function of stored over dissipated energy (the lambda factor). Parameter studies of strain hardening, dislocation creep parameters, strain rates, temperature, and the lambda factor, as well as mesh sensitivity, are presented to explore the sensitivity of the newly derived transient/steady-state flow law. Our analysis can be seen as one of the first steps in a hybrid computational-laboratory-field modeling workflow. The analysis could be improved through independent verification by thermographic analysis in physical laboratory experiments, to independently assess the lambda-factor evolution under laboratory conditions.
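The two ingredients named above can be sketched in a few lines: a Ramberg-Osgood strain approximation and a toy lambda factor of stored over dissipated energy. All material parameters below are illustrative assumptions, not values calibrated for calcite, and the stored energy is crudely identified with the recoverable elastic part rather than the microstructurally stored energy of the paleowattmeter approach.

```python
import numpy as np

# Assumed, uncalibrated material parameters (illustration only):
E = 80e9             # Young's modulus [Pa]
sigma_y = 100e6      # reference (yield) stress [Pa]
alpha, n = 0.3, 5.0  # Ramberg-Osgood hardening parameters

def ramberg_osgood_strain(sigma):
    """Total strain: elastic part plus power-law hardening part."""
    return sigma / E + alpha * (sigma / E) * (sigma / sigma_y) ** (n - 1)

sigma = np.linspace(0.0, 150e6, 301)
eps = ramberg_osgood_strain(sigma)

# Energy partitioning along the loading path (trapezoidal rule):
total_work = float(np.sum(0.5 * (sigma[1:] + sigma[:-1]) * np.diff(eps)))
stored = sigma[-1] ** 2 / (2.0 * E)   # recoverable elastic energy (crude proxy)
dissipated = total_work - stored
lam = stored / dissipated             # toy "lambda factor"
```

For a hardening curve of this shape the toy lambda factor comes out between 0 and 1, i.e. most of the mechanical work is dissipated, which is the regime the test function is meant to track.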
Abstract:
The use of intensity-modulated radiotherapy (IMRT) treatments necessitates a significant amount of patient-specific quality assurance (QA). This research investigated the precision and accuracy of Kodak EDR2 film measurements for IMRT verifications, the use of comparisons between 2D dose calculations and measurements to improve treatment plan beam models, and the dosimetric impact of delivery errors. New measurement techniques and software were developed and used clinically at M. D. Anderson Cancer Center. The software implemented two new dose comparison parameters, the 2D normalized agreement test (NAT) and the scalar NAT index. A single-film calibration technique using multileaf collimator (MLC) delivery was developed. EDR2 film's optical density response was found to be sensitive to several factors: radiation time, length of time between exposure and processing, and phantom material. The precision of EDR2 film measurements was found to be better than 1%. For IMRT verification, EDR2 film measurements agreed with ion chamber results to 2%/2 mm accuracy for single-beam fluence map verifications and to 5%/2 mm for transverse plane measurements of complete plan dose distributions. The same system was used to quantitatively optimize the radiation field offset and MLC transmission beam modeling parameters for Varian MLCs. While scalar dose comparison metrics can work well for optimization purposes, the influence of external parameters on the dose discrepancies must be minimized. The ability of 2D verifications to detect delivery errors was tested with simulated data. The dosimetric characteristics of delivery errors were compared to patient-specific clinical IMRT verifications. For the clinical verifications, the NAT index and the percent of pixels failing the gamma index were exponentially distributed and dependent upon the measurement phantom but not the treatment site.
Delivery errors affecting all beams in the treatment plan were flagged by the NAT index, although delivery errors impacting only one beam could not be differentiated from routine clinical verification discrepancies. Clinical use of this system will flag outliers, allow physicists to examine their causes, and perhaps improve the level of agreement between radiation dose distribution measurements and calculations. The principles used to design and evaluate this system are extensible to future multidimensional dose measurements and comparisons.
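The "percent of pixels failing the gamma index" metric mentioned above can be sketched as a brute-force 2D comparison. This is a minimal illustration with a 3%/3 mm criterion and assumed grid spacing, not the NAT implementation from the dissertation; a pixel passes when some nearby calculated point agrees within the combined dose/distance tolerance.

```python
import numpy as np

def gamma_fail_fraction(measured, calculated, spacing_mm=1.0,
                        dose_crit=0.03, dta_mm=3.0):
    """Fraction of pixels with gamma > 1 (i.e. failing the criterion)."""
    ny, nx = measured.shape
    norm = measured.max()                       # global normalisation (assumed)
    search = int(np.ceil(dta_mm / spacing_mm))  # neighbourhood radius in pixels
    fails = 0
    for iy in range(ny):
        for ix in range(nx):
            best = np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    jy, jx = iy + dy, ix + dx
                    if not (0 <= jy < ny and 0 <= jx < nx):
                        continue
                    ddose = (calculated[jy, jx] - measured[iy, ix]) / (dose_crit * norm)
                    dist = spacing_mm * np.hypot(dy, dx) / dta_mm
                    best = min(best, ddose ** 2 + dist ** 2)
            if best > 1.0:   # gamma = sqrt(best) > 1 -> this pixel fails
                fails += 1
    return fails / (ny * nx)
```

For identical grids the fail fraction is zero; a uniform 50% dose error fails everywhere. Clinical reports typically quote this fail percentage per verification, which is the quantity found to be exponentially distributed in the abstract.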
Abstract:
With continual improvements in brachytherapy source designs and techniques, a method of 3D dosimetry for treatment dose verification would better ensure accurate patient radiotherapy treatment. This study first aimed to evaluate the 3D dose distributions of the low-dose-rate (LDR) Amersham 6711 OncoseedTM using PRESAGE® dosimeters, to establish PRESAGE® as a suitable brachytherapy dosimeter. The new AgX100 125I seed model (Theragenics Corporation) was then characterized using PRESAGE® following the TG-43 protocol. PRESAGE® dosimeters are solid, polyurethane-based 3D dosimeters doped with radiochromic leuco dyes that produce a linear optical density response to radiation dose. For this project, the radiochromic response in PRESAGE® was captured using optical-CT scanning (632 nm) and the final 3D dose matrix was reconstructed in MATLAB. An Amersham 6711 seed with an air-kerma strength of approximately 9 U was used to irradiate two dosimeters to 2 Gy and 11 Gy at 1 cm to evaluate dose rates in the r=1 cm to r=5 cm region. The dosimetry parameters were compared to the values published in the updated AAPM Report No. 51 (TG-43U1). An AgX100 seed with an air-kerma strength of about 6 U was used to irradiate two dosimeters to 3.6 Gy and 12.5 Gy at 1 cm. The dosimetry parameters for the AgX100 were compared to values measured in previous Monte Carlo and experimental studies. In general, the measured dose rate constant, anisotropy function, and radial dose function for the Amersham 6711 showed agreement better than 5% with consensus values in the r=1 to r=3 cm region. The dose rates and radial dose functions measured for the AgX100 agreed with the MCNPX and TLD-measured values within 3% in the r=1 to r=3 cm region. The measured anisotropy function in PRESAGE® showed relative differences of up to 9% from the MCNPX-calculated values.
It was determined that post-irradiation optical density change over several days was non-linear in different dose regions, and therefore the dose values in the r=4 to r=5 cm regions had higher uncertainty due to this effect. This study demonstrated that within the radial distance of 3 cm, brachytherapy dosimetry in PRESAGE® can be accurate within 5% as long as irradiation times are within 48 hours.
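The TG-43 comparison described above reduces, in its 1D (point-source) form, to a product of the air-kerma strength, the dose-rate constant, an inverse-square term, and the radial dose function. The sketch below illustrates that bookkeeping; the dose-rate constant and g(r) table are rough placeholder values for an I-125 seed, not the consensus data for either the 6711 or the AgX100.

```python
import numpy as np

LAMBDA = 0.965   # dose-rate constant [cGy h^-1 U^-1] (placeholder value)
R0 = 1.0         # TG-43 reference distance [cm]
# Placeholder radial dose function table g(r), linearly interpolated:
r_tab = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
g_tab = np.array([1.000, 0.819, 0.656, 0.512, 0.394])

def dose_rate(sk_U, r_cm, phi_an=1.0):
    """1D TG-43 dose rate [cGy/h]: S_K * Lambda * (r0/r)^2 * g(r) * phi_an."""
    g = np.interp(r_cm, r_tab, g_tab)
    return sk_U * LAMBDA * (R0 / r_cm) ** 2 * g * phi_an

# Rough irradiation time to deliver 2 Gy at 1 cm with a 9 U seed:
hours_for_2Gy = 200.0 / dose_rate(9.0, 1.0)   # 200 cGy over cGy/h
```

With these placeholder numbers the 2 Gy irradiation takes on the order of a day, consistent with the abstract's observation that keeping irradiation times within 48 hours limits the post-irradiation non-linearity.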
Abstract:
Stubacher Sonnblickkees (SSK) is located in the Hohe Tauern Range (Eastern Alps) in the south of Salzburg Province (Austria), in the Oberpinzgau region in the upper Stubach Valley. The glacier is situated at the main Alpine crest and faces east, starting at elevations close to 3050 m; in the 1980s it terminated at 2500 m a.s.l. It had an area of 1.7 km² at that time, compared with 1 km² in 2013. The glacier can be classified as a slope glacier, i.e. the relief is covered by a relatively thin ice sheet and there is no regular glacier tongue. The rough subglacial topography makes for a complex surface topography, with various concave and convex patterns. The main reason for selecting this glacier for mass balance observations (as early as 1963) was to test, on a glacier of complex shape, how the mass balance methods and conclusions derived during the more or less pioneering phase of glaciological investigations in the 1950s and 1960s could be applied. The decision was influenced by the fact that close to the SSK there was the Rudolfshütte, a hostel of the Austrian Alpine Club (OeAV), newly constructed in the 1950s to replace the old hut dating from 1874. The new Alpenhotel Rudolfshütte, which was run by the Slupetzky family from 1958 to 1970, was the base station for the long-term observations; the cable car to Rudolfshütte, operated by the Austrian Federal Railways (ÖBB), was a logistic advantage. Another factor in choosing SSK as a glaciological research site was the availability of discharge records of the catchment area from the Austrian Federal Railways, who had turned the nearby lake Weißsee ('White Lake') - a former natural lake - into a reservoir for their hydroelectric power plants.
In terms of regional climatic differences between the Central Alps in Tyrol and the Hohe Tauern, the latter experiences significantly higher precipitation, so one could expect new insights into the different responses of the two glaciers SSK and Hintereisferner (Ötztal Alps), where a mass balance series goes back to 1952. In 1966 another mass balance series, with an additional focus on runoff recordings, was initiated at Vernagtferner, near Hintereisferner, by the Commission of the Bavarian Academy of Sciences in Munich. The usual and necessary link to climate and climate change was provided by a weather station newly founded (by Heinz and Werner Slupetzky) at the Rudolfshütte in 1961, which ran until 1967. Along with the extension and enlargement of the hostel into the so-called Alpine Center Rudolfshütte of the OeAV, a climate observatory (suggested by Heinz Slupetzky) has been operating without interruption since 1980 under the responsibility of ZAMG and the Hydrological Service of Salzburg, providing long-term meteorological observations. The weather station is supported by the Berghotel Rudolfshütte (in 2004 the OeAV sold the hotel to a private owner) with accommodation and facilities. Direct yearly mass balance measurements were started in 1963, first for 3 years as part of a thesis project. In 1965 the project was incorporated into the Austrian glacier measurement sites within the International Hydrological Decade (IHD) 1965-1974 and was afterwards extended via the International Hydrological Program (IHP) 1975-1981. During both periods the main financial support came from the Hydrological Survey of Austria. After 1981 funds were provided by the Hydrological Service of the Federal Government of Salzburg. The research was conducted from 1965 onwards by Heinz Slupetzky from the (former) Department of Geography of the University of Salzburg.
These activities received better recognition when the High Alpine Research Station of the University of Salzburg was founded in 1982, which brought in additional funding from the University. With recent changes concerning Rudolfshütte, however, it became unfeasible to keep the research station going. Fortunately, at least the weather station at Rudolfshütte is still operating. In the pioneer years of the mass balance recordings at SSK, the main goal was to understand the influence of the complicated topography on the ablation and accumulation processes. With frequent strong southerly winds (foehn) on the one hand, and precipitation coming in with storms from the north to northwest on the other, snow drift is an important factor on the undulating glacier surface. This results in less snow cover in convex zones and in maximum accumulation in concave or flat areas. As a consequence of the accentuated topography, certain characteristic ablation and accumulation patterns can be observed during the summer season every year, and these have been regularly observed for many decades. The process of snow depletion (Ausaperung) runs through a series of stages (described by the AAR) every year. The sequence of stages until the end of the ablation season depends on the weather conditions in a balance year. One needs a strongly negative mass balance year at the beginning of glacier measurements to find out the regularities; 1965, the second year of observation, however, resulted in a very positive mass balance with very little ablation but heavy accumulation. To date it is the year with the absolute maximum positive balance in the entire mass balance series since 1959, probably since 1950. The highly complex ablation patterns required a large number of ablation stakes at the beginning of the research, and it took several years to develop a clearer idea of the density of measurement points necessary to ensure high accuracy.
A great number of snow pits and probing profiles (and additional measurements at crevasses) were necessary to map the accumulation area and its patterns. Mapping the snow depletion, especially at the end of the ablation season, when its outline coincides with the equilibrium line, provides the basic data for drawing contour lines of mass balance and for calculating the total mass balance (on a regular-shaped valley glacier there might be an equilibrium line following a contour line of elevation separating the accumulation area and the ablation area, but not at SSK). An example: in 1969/70, 54 ablation stakes and 22 snow pits were used on the 1.77 km² glacier surface. In the course of the study the consistency of the accumulation and ablation patterns could be used to reduce the number of measurement points. At the SSK the stratigraphic system, i.e. the natural balance year, is used instead of the usual hydrological year. From 1964 to 1981, the yearly mass balance was calculated by direct measurements. Based on these records of 17 years, a regression analysis between the specific net mass balance and the ratio of accumulation area to total area (AAR) has been used since then. The basic requirement was mapping the maximum snow depletion at the end of each balance year. There was the advantage of Heinz Slupetzky's detailed local and long-term experience, which ensured homogeneity of the series against individual influences on the mass balance calculations. Verifications took place as often as possible by means of independent geodetic methods, i.e. monoplotting, aerial and terrestrial photogrammetry, and more recently also the application of PHOTOMODELLER and laser scans. The semi-direct mass balance determinations used at SSK were tentatively compared with data from periods of mass/volume change, resulting in promising first results on the reliability of the method.
In recent years re-analyses of mass balance series have been conducted by the World Glacier Monitoring Service, and this will be done for SSK too. The methods developed at SSK also contribute to another objective, much discussed in the 1960s within the community, namely to achieve time- and labour-saving methods to ensure the continuation of long-term mass balance series. The regression relations were used to extrapolate the mass balance series back to 1959; the maximum depletion for those years could be reconstructed by means of photographs. R. Günther (1982) calculated the mass balance series of SSK back to 1950 by analysing the correlation between meteorological data and the mass balance; he found a strong statistical relation between measured and determined mass balance figures for SSK. In spite of the complex glacier topography, interesting empirical experience was gained from the mass balance data sets, giving a better understanding of the characteristics of the glacier type, mass balance and mass exchange. It turned out that there are distinct relations of the specific net balance, net accumulation (defined as Bc/S) and net ablation (Ba/S) to the AAR, resulting in characteristic so-called 'turnover curves'. The diagram of SSK represents the type of a glacier without a glacier tongue. Between 1964 and 1966, a basic method was developed, starting from the idea that instead of measuring for years to cover the range between extreme positive and extreme negative yearly balances, one could record the AAR/snow depletion (Ausaperung) during one or two summers. The new method was applied on Cathedral Massif Glacier, a cirque glacier with the same area as the Stubacher Sonnblickkees, in British Columbia, Canada, during the summers of 1977 and 1978. It returned exactly the expected relations, e.g. mass turnover curves, as found on SSK. The SSK was mapped several times at scales of 1:5000 to 1:10000.
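The semi-direct method described above amounts to a simple regression: fit specific net balance against the AAR over the directly measured years, then estimate the balance of other years from the mapped maximum snow depletion alone. The sketch below uses synthetic data pairs as an illustration; they are not the SSK series.

```python
import numpy as np

# Synthetic (AAR, net balance) pairs standing in for directly measured years:
aar = np.array([0.15, 0.30, 0.45, 0.55, 0.70, 0.85])  # accumulation-area ratio
b_n = np.array([-1400, -900, -450, -150, 350, 900])    # specific net balance [mm w.e.]

slope, intercept = np.polyfit(aar, b_n, 1)  # least-squares regression line

def net_balance_from_aar(aar_obs):
    """Estimate the specific net balance [mm w.e.] from a mapped AAR."""
    return slope * aar_obs + intercept

# AAR at which the regression predicts a balanced budget (b_n = 0):
aar_balanced = -intercept / slope
```

Once such a relation is established (and its homogeneity checked against independent geodetic volume changes), a single end-of-summer depletion map replaces a full stake-and-pit campaign, which is exactly the labour-saving motivation mentioned above.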
Length variations have been measured since 1960 within the OeAV glacier length measurement programme. Between 1965 and 1981, there was a mass gain of 10 million cubic metres. With a time lag of 10 years, this resulted in an advance until the mid-1980s. Since 1982 there has been a distinct mass loss, amounting to 35 million cubic metres by 2013. In recent years, the glacier has disintegrated faster, forced by the formation of a periglacial lake at the glacier terminus and also by the outcrops of rocks (typical for the slope glacier type), which have accelerated the melt. The formation of this lake is well documented. The glacier has retreated by some 600 m since 1981. Since August 2002, a runoff gauge installed by the Hydrographical Service of Salzburg has recorded the discharge of the main part of SSK at the outlet of the new Unterer Eisboden See. The annual reports - submitted from 1982 on as a contractual obligation to the Hydrological Service of Salzburg - document the ongoing processes on the one hand, and on the other emphasize the mass balance of SSK and outline the climatological reasons, mainly based on the met data of the Rudolfshütte observatory. There is an additional focus on estimating the annual water balance in the catchment area of the lake. There are certain preconditions for the water balance equation in the area. Runoff is recorded by the ÖBB power stations; the mass balance of the now approximately 20% glaciated area (mainly the Sonnblickkees) is measured; and the change of the snow and firn patches and their water content is estimated as well as possible. (Nowadays laser scanning and ground radar are available to measure the snow pack.) There is a net of three precipitation gauges plus the recordings at Rudolfshütte. Evaporation is of minor importance. The long-term annual mean runoff depth in the catchment area is around 3,000 mm/year. The precipitation gauges have measured deficits between 10% and 35%, on average probably 25% to 30%.
That means that the real precipitation in the catchment area of Weißsee (at elevations between 2,250 and 3,000 m) is on the order of 3,200 to 3,400 mm a year. The mass balance record of SSK was the first established in the Hohe Tauern region (which, since 1983, is part of the Hohe Tauern National Park in Salzburg) and is one of the longest measurement series worldwide. Great efforts are under way to continue the series, to safeguard it against interruption and to guarantee long-term monitoring of the mass balance and volume change of SSK (until the glacier is completely gone, which seems realistic in the near future as a result of ongoing global warming). Heinz Slupetzky, March 2014
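The water-balance bookkeeping behind these precipitation figures can be written out directly: P = R + E + dS, with the gauge undercatch corrected as P_true = P_measured / (1 - deficit). Runoff depth and the deficit range are taken from the text; the evaporation, storage-change and gauge values below are illustrative assumptions.

```python
# Water balance P = R + E + dS for the Weissee catchment (sketch):
runoff_mm = 3000.0          # long-term mean runoff depth [mm/yr] (from text)
evaporation_mm = 100.0      # minor term (assumed magnitude)
storage_change_mm = 150.0   # glacier/snow storage change [mm/yr] (assumed)
precip_balance = runoff_mm + evaporation_mm + storage_change_mm

# Gauge undercatch correction:
measured_precip = 2400.0    # gauge catch [mm/yr] (illustrative)
deficit = 0.27              # 25-30 % undercatch (from text)
precip_corrected = measured_precip / (1.0 - deficit)
```

Both estimates land in the 3,200-3,400 mm/yr range quoted above, which is the kind of closure check the annual reports rely on.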
Abstract:
In this work, a unified algorithm-architecture-circuit co-design environment for complex FPGA system development is presented. The main objective is to find an efficient methodology for designing a configurable, optimized FPGA system with as little effort as possible in the verification stage, so as to shorten the development period. The design process of a proposed high-performance FFT/iFFT processor for the Multiband Orthogonal Frequency Division Multiplexing Ultra-Wideband (MB-OFDM UWB) system is given as an example to demonstrate the proposed methodology. This design methodology is tested and considered suitable for almost all types of complex FPGA system design and verification.
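A standard step in such algorithm-to-circuit verification flows is checking the hardware FFT against a trusted software "golden model". The recursive radix-2 FFT below is a generic textbook reference model (an assumption, not the paper's processor architecture), compared here against NumPy's FFT on a 128-point input, the transform size used in MB-OFDM UWB.

```python
import numpy as np

def fft_radix2(x):
    """Recursive decimation-in-time FFT; len(x) must be a power of two."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    tw = np.exp(-2j * np.pi * np.arange(n // 2) / n) * odd  # twiddle factors
    return np.concatenate([even + tw, even - tw])

# Golden-model comparison on a random 128-point input:
x = np.random.default_rng(0).standard_normal(128)
max_err = np.max(np.abs(fft_radix2(x) - np.fft.fft(x)))
```

In a co-design flow, the same reference vectors would then be driven through the RTL simulation and the fixed-point outputs compared against this model within a quantization tolerance.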
Abstract:
SUMMARY: The construction of very long railway tunnels has boomed in recent years. In Spain, projects of this nature have been undertaken without a complete, proven methodology for their execution. The geometric, observational and working conditions in tunnels mean that the methodologies applied in other engineering projects are not applicable, for the following reasons: separation of the exterior and interior networks of the tunnels due to the different nature of the observables; interior geometry that is always unfavourable to the requirements of classical observation; poor visibility inside the tunnel; errors that increase as drilling advances; and movements of the tunnel itself during construction caused by active geodynamics. The geodetic observation patterns in use must be revised when very long tunnels are built. This work establishes a methodology for the design of exterior networks. ABSTRACT: The construction of long railway tunnels has attracted great interest in recent years. In Spain it is necessary to address projects of this nature, but there is no corresponding methodological framework supporting them. The observational and working geometry of tunnels means that former methodologies may be unsuitable in this case: the observation of the exterior and interior geodetic networks of the tunnel differs in nature. Visibility conditions in the interior of the tunnels, regardless of the geometry, are not the most advantageous for observation, owing to the production system and the natural conditions of the tunnels. Errors increase as the drilling of the tunnel progresses, as it becomes problematic to perform continuous verifications along the itinerary itself. Moreover, inherent tunnel movements due to active geodynamics must also be considered.
Therefore patterns for geodetic and topographic observations have to be reviewed when very long tunnels are constructed.
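The statement that errors grow as drilling advances can be made concrete with a toy open-traverse model: each angular measurement error rotates all subsequent legs, so the transverse uncertainty at the face accumulates with the number of stations. The instrument precision and leg length below are illustrative assumptions, not values from this work.

```python
import numpy as np

sigma_beta = np.deg2rad(2.0 / 3600.0)  # 2 arc-second angle std dev (assumed)
leg_m = 200.0                          # traverse leg length [m] (assumed)

def transverse_sigma(n_legs):
    """Approximate 1-sigma transverse error at the traverse end [m].

    Each of the n independent angle errors acts over its remaining
    lever arm to the tunnel face; the contributions add in quadrature.
    """
    levers = leg_m * np.arange(n_legs, 0, -1)
    return sigma_beta * np.sqrt(np.sum(levers ** 2))
```

With these numbers the transverse uncertainty after 10 km of traverse reaches decimetre level, which is why exterior network design and periodic independent checks matter for long tunnels.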
Abstract:
Conventional dual-rail precharge logic suffers from difficult implementation of the dual-rail structure needed to obtain strict compensation between the counterpart rails. As a lightweight and high-speed dual-rail style, balanced cell-based dual-rail logic (BCDL) uses synchronised compound gates with a global precharge signal to provide high resistance against differential power or electromagnetic analyses. BCDL can be realised from generic field programmable gate array (FPGA) design flows with constraints. However, routing remains a concern because of the deficient flexibility of routing control, which unfavourably results in bias between complementary nets in security-sensitive parts. In this article, based on a routing repair technique, novel verifications of the routing effect are presented. An 8-bit simplified Advanced Encryption Standard (AES) co-processor, constructed on block random access memory (RAM)-based BCDL, is implemented in Xilinx Virtex-5 FPGAs. Since imbalanced routings are the major defect in BCDL, the authors can rule out other influences and fairly quantify the security variants. A series of asymptotic correlation electromagnetic (EM) analyses are launched on a group of circuits with consecutive routing schemes in order to verify the routing impact on side-channel analyses. After repairing the non-identical routings, mutual information analyses are executed to further validate the concrete security increase obtained from identical routing pairs in BCDL.
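The correlation analyses mentioned above follow a standard recipe: for each key guess, correlate the measured traces with a Hamming-weight leakage hypothesis and pick the guess with the highest correlation. The sketch below simulates this on synthetic data; SBOX is an identity placeholder rather than the real AES S-box, and an actual evaluation would use EM traces measured from the FPGA rather than generated leakage.

```python
import numpy as np

SBOX = list(range(256))  # placeholder permutation (assumption, not AES)

def hamming_weight(v):
    return bin(v).count("1")

def cpa_best_key(plaintexts, traces):
    """Return the key guess whose leakage hypothesis correlates best."""
    corrs = []
    for k in range(256):
        hyp = np.array([hamming_weight(SBOX[p ^ k]) for p in plaintexts])
        corrs.append(abs(np.corrcoef(hyp, traces)[0, 1]))
    return int(np.argmax(corrs))

# Simulated acquisition: Hamming-weight leakage plus Gaussian noise.
rng = np.random.default_rng(1)
pts = rng.integers(0, 256, 2000)
true_key = 0x2B
leakage = np.array([hamming_weight(SBOX[p ^ true_key]) for p in pts])
traces = leakage + rng.normal(0.0, 1.0, len(pts))
```

In the routing study, the interesting quantity is how many traces such an attack needs before the correct key emerges: better-balanced complementary routing pushes that number up.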
Abstract:
The verification of structural safety, both of structures whose design allows a certain degree of deterioration and of deteriorated existing structures, requires strength models that take the effects of deterioration into account. In the case of reinforcement corrosion in reinforced concrete structures, the strength depends on multiple factors, such as the cross-section of the corroded steel, the stress-strain diagram of the corroded steel, the bond between concrete and corroded steel, and the cracking or spalling of the concrete due to the expansion of corrosion products. In this regard, the transfer of forces across the contact surface between concrete and steel, the bond, is one of the most important aspects to consider and is the basis of the behaviour of reinforced concrete as a structural element. Bond must ensure the anchorage of the reinforcement and transmit the tangential stress that arises in it as a consequence of the variation of internal forces along a structural element. As a consequence of reinforcement corrosion, the development of bond is altered and, therefore, so is the transfer of longitudinal stress. This doctoral thesis addresses the ultimate-limit-state behaviour of bond in structural concrete with corroded reinforcement. The main objective is to obtain a sufficiently realistic and reliable model for the assessment of bond with corroded reinforcement within the framework of the structural safety verification of reinforced concrete elements with corroded reinforcement. To this end, an experimental programme of eccentric pull-out tests was carried out on different specimens, some without corrosion and others subjected to natural as well as accelerated corrosion processes, with different degrees of deterioration.
This type of bond test reproduces realistically and reliably the bond stresses in the anchorage zone. Furthermore, to carry out these tests, a data acquisition system was developed in addition to the test procedure, including the use of fibre-optic sensors with Bragg gratings embedded in the reinforcement to determine the representative bond parameters in structural concrete with corroded reinforcement. Moreover, the compilation of bond data for corroded reinforcement from the scientific literature, together with the results of the present investigation and the identification of the relevant variables in bond behaviour with sound and corroded reinforcement, served to obtain a realistic and reliable formulation for the joint assessment of bond with sound and corroded reinforcement by means of multiple regression models. The proposed formulation has been validated using statistical criteria and compared with other formulations proposed in the scientific literature. In addition, an analysis of the influential variables of the proposed formulation has been carried out. A simple and efficient numerical model, validated against some of the tests performed in this thesis, has also been obtained to simulate bond with sound and corroded reinforcement. Finally, a procedure is presented for the assessment of corrosion-damaged beams by the stress-field method, which includes the assessment of bond using the formulation proposed in this doctoral thesis. The conclusions reached in this work have made it possible to assess bond with corroded reinforcement realistically and reliably.
Likewise, it has been possible to include bond assessment within the framework of the structural safety verification of reinforced concrete elements deteriorated by corrosion. ABSTRACT: Structural safety verification, both of structures whose design allows a certain degree of deterioration and of deteriorated existing structures, needs strength models that factor in the effects of deterioration. In the case of corrosion of steel bars in reinforced concrete structures, the resistance depends on many factors, such as the remaining cross-section of the corroded reinforcement bars, the stress-strain diagram of the steel, the concrete-reinforcement bond, and corrosion-induced concrete cracking or spalling. Accordingly, the force transfer through the contact surface between concrete and reinforcement, the bond, is one of the most important aspects to consider and is the basis of the structural performance of reinforced concrete. Bond must assure the anchorage of the reinforcement and transmit the shear stresses that arise as a consequence of the varying internal forces along a structural element. As a consequence of corrosion, the bond development may be affected and hence the transfer of longitudinal stresses. This PhD thesis deals with the ultimate-limit-state bond behaviour in structural concrete with corroded steel bars. The main objective is to obtain a realistic and reliable model for the assessment of bond within the context of structural safety verifications of reinforced concrete members with corroded steel bars. In that context, an experimental programme of eccentric pull-out tests was conducted on different specimens, some without corrosion and others subjected to accelerated or natural corrosion, with different corrosion degrees. This type of bond test reproduces in a realistic and reliable way the bond stresses in the anchorage zone.
Moreover, conducting these tests required the development of both a test procedure and a data acquisition system, including the use of an embedded fibre-optic sensing system with fibre Bragg grating sensors, to obtain the representative parameters of bond strength in structural concrete with corroded steel bars. Furthermore, the compilation of data from bond studies with corroded steel bars in the scientific literature, including the tests conducted in the present study, along with the identification of the relevant variables influencing bond behaviour for both corroded and non-corroded steel bars, was used to obtain a realistic and reliable formulation for bond assessment of corroded and non-corroded steel bars by multiple linear regression analysis. The proposed formulation was validated with a number of statistical criteria and compared to other models from the scientific literature. Moreover, an analysis of the influencing variables of the proposed formulation has been performed. Also, a simplified and efficient numerical model has been obtained and validated with several tests performed in this PhD thesis for simulating the bond of corroded and non-corroded steel bars. Finally, a proposal for the assessment of corrosion-damaged beams with stress field models, including bond assessment with the proposed formulation, is presented. The conclusions reached in this work allow a realistic and reliable bond assessment of corroded steel bars. Furthermore, bond assessment has been included within the context of structural safety verifications of corrosion-damaged reinforced concrete elements.
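The multiple-linear-regression step described above can be sketched generically: regress bond strength on candidate explanatory variables, including the corrosion level, and read the fitted coefficients. The variables and data below are synthetic illustrations, not the thesis database or its proposed formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
fc = rng.uniform(25, 50, n)          # concrete strength [MPa] (synthetic)
c_over_d = rng.uniform(1.0, 3.0, n)  # cover-to-bar-diameter ratio (synthetic)
x_corr = rng.uniform(0.0, 0.1, n)    # corrosion level, mass-loss fraction

# Synthetic "measured" bond strength [MPa] with assumed true coefficients:
tau = 0.25 * fc + 1.5 * c_over_d - 30.0 * x_corr + rng.normal(0, 0.3, n)

# Ordinary least squares: [intercept, fc, c/d, corrosion] coefficients.
X = np.column_stack([np.ones(n), fc, c_over_d, x_corr])
beta, *_ = np.linalg.lstsq(X, tau, rcond=None)
```

In the thesis workflow, such a fit over pooled literature and laboratory data would then be screened with statistical criteria (significance of each variable, residual analysis) before being adopted as an assessment formula; here the negative corrosion coefficient simply mirrors the expected bond loss.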
Abstract:
Over four hundred years ago, Sir Walter Raleigh asked his mathematical assistant to find formulas for the number of cannonballs in regularly stacked piles. These investigations aroused the curiosity of the astronomer Johannes Kepler and led to a problem that went centuries without a solution: why is the familiar cannonball stack the most efficient arrangement possible? Here we discuss the solution that Hales found in 1998. Almost every part of the 282-page proof relies on long computer verifications. Random matrix theory was developed by physicists to describe the spectra of complex nuclei. In particular, the statistical fluctuations of the eigenvalues (“the energy levels”) follow certain universal laws based on symmetry types. We describe these and then discuss the remarkable appearance of these laws for the zeros of the Riemann zeta function (which is the generating function for the prime numbers and is the last special function from the last century that is not understood today). Explaining this phenomenon is a central problem. These topics are distinct, so we present them separately with their own introductory remarks.
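The universality claim about eigenvalue fluctuations is easy to probe numerically: sample Hermitian matrices from the Gaussian Unitary Ensemble (GUE) and look at the normalised spacings between neighbouring eigenvalues, which exhibit level repulsion (small spacings are rare), in contrast to Poisson statistics. The matrix sizes below are kept small for speed, and the unfolding (normalising the mean spacing to one within the spectral bulk) is deliberately crude.

```python
import numpy as np

rng = np.random.default_rng(0)

def gue_spacings(n=200, trials=50):
    """Normalised nearest-neighbour eigenvalue spacings of GUE matrices."""
    out = []
    for _ in range(trials):
        a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        h = (a + a.conj().T) / 2           # Hermitian (GUE) matrix
        ev = np.linalg.eigvalsh(h)         # sorted real eigenvalues
        bulk = ev[n // 4: 3 * n // 4]      # keep the bulk of the spectrum
        s = np.diff(bulk)
        out.append(s / s.mean())           # crude unfolding: mean spacing -> 1
    return np.concatenate(out)

s = gue_spacings()
frac_small = np.mean(s < 0.1)   # level repulsion makes this tiny
```

A histogram of `s` closely follows the Wigner surmise for the unitary class, p(s) = (32/pi^2) s^2 exp(-4 s^2 / pi), and the same statistics appear empirically in the normalised spacings of high Riemann zeta zeros.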
Resumo:
Tactile sensors play an important role in robotic manipulation for performing dexterous and complex tasks. This paper presents a novel control framework for dexterous manipulation with multi-fingered robotic hands using feedback from tactile and visual sensors. The framework permits the definition of new visual controllers that track the path of the object's motion while taking into account both the dynamics model of the robot hand and the grasping force at the fingertips, under a hybrid control scheme. In addition, the proposed general method employs optimal control to obtain the desired behaviour in the joint space of the fingers, based on a specified cost function that determines how the control effort is distributed over the joints of the robotic hand. Finally, the authors show experimental verifications of some of the controllers derived from the framework on a real robotic manipulation system.
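The role of the cost function in distributing control effort can be sketched with the standard weighted minimum-effort solution. This is not the paper's actual controller; the Jacobian, task velocity, and weight matrix below are hypothetical, and the closed form shown is the generic solution of minimizing u'Wu subject to a linear task constraint.

```python
import numpy as np

def weighted_min_effort(J, v, W):
    """Joint command u achieving the task velocity v = J @ u while
    minimizing the effort cost u.T @ W @ u (W symmetric positive
    definite). Closed form: u = W^-1 J^T (J W^-1 J^T)^-1 v."""
    Winv = np.linalg.inv(W)
    JWJ = J @ Winv @ J.T
    return Winv @ J.T @ np.linalg.solve(JWJ, v)

# Hypothetical 2-DOF fingertip task for a 4-joint finger; joint 3 is
# penalized heavily, so the solution shifts effort to the other joints.
J = np.array([[1.0, 0.5, 0.2, 0.1],
              [0.0, 1.0, 0.5, 0.2]])
v = np.array([0.1, 0.05])
W = np.diag([1.0, 1.0, 10.0, 1.0])   # effort weighting over the joints

u = weighted_min_effort(J, v, W)
print(u)
print(J @ u)   # reproduces the commanded task velocity v
```

Changing the diagonal of W is exactly the knob the abstract describes: the task is met either way, but the weights decide which joints do the work.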
Resumo:
This paper presents a new framework based on optimal control for defining dynamic visual controllers that guide any serial-link structure. The proposed general method employs optimal control to obtain the desired behaviour in the joint space, based on a specified cost function that determines how the control effort is distributed over the joints. The approach allows the development of new direct visual controllers for any mechanical joint system with redundancy. Finally, the authors show experimental results and verifications of some controllers derived from the framework on a real robotic system.
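For a redundant system, the standard way to spend the leftover degrees of freedom without disturbing the visual task is null-space projection. The sketch below is a generic illustration, not the paper's controller: the 3-DOF task, 7-joint arm, and secondary posture motion are all hypothetical.

```python
import numpy as np

def redundant_control(J, v, qdot_secondary):
    """Primary task: achieve v = J @ qdot. The remaining redundancy is
    used for a secondary joint-space motion, projected into the null
    space of J so it cannot disturb the primary task."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector of J
    return J_pinv @ v + N @ qdot_secondary

# Hypothetical 7-joint arm tracking a 3-DOF visual feature velocity.
rng = np.random.default_rng(1)
J = rng.standard_normal((3, 7))
v = np.array([0.02, -0.01, 0.03])
posture = -0.1 * rng.standard_normal(7)   # secondary: drift toward a posture

qdot = redundant_control(J, v, posture)
print(np.allclose(J @ qdot, v))   # primary task still satisfied exactly
```

The cost function of the abstract generalizes this: a weighted pseudoinverse changes *which* joint motion realizes the same task.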
Resumo:
This layer is a georeferenced raster image of the historic paper map entitled: Seattle Harbor : Puget Sound Washington territory, issued May 1870 C.P. Patterson, superintendant; verification J.E. Hilgard, assistant in charge of the office; triangulation by J. S. Lawson assistant in 1874 based upon the primary triangulation by George Davidson, assistant in 1855-6; topography and hydrography by J.S. Lawson, assistant in 1874 & 5; resurvey of city of Seattle and water front by assist. J.J. Gilbert in 1886; additions by asst. Pratt in 1889; verifications of hydrology by Lieut. Comdr. W. H. Brownson U.S.N. inspector of hydrography. It was published by United States Coast and Geodetic Survey in July 1889. Scale 1:20,000. The image inside the map neatline is georeferenced to the surface of the earth and fit to the Washington State Plane North Coordinate System HARN NAD83 (in Feet) (Fipszone 4601). All map collar and inset information is also available as part of the raster image, including any inset maps, profiles, statistical tables, directories, text, illustrations, index maps, legends, or other information associated with the principal map. This map shows coastal features such as lighthouses, rocks, channels, points, coves, islands, bottom soil types, flats, wharves, and more. Includes also selected land features such as roads, railroads, drainage, land cover, selected buildings, towns, and more. Relief shown by contours and spot heights; depths by soundings. Includes notes, tables, and list of authorities. This layer is part of a selection of digitally scanned and georeferenced historic maps from The Harvard Map Collection as part of the Imaging the Urban Environment project. Maps selected for this project represent major urban areas and cities of the world, at various time periods. These maps typically portray both natural and manmade features at a large scale. The selection represents a range of regions, originators, ground condition dates, scales, and purposes.
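Fitting a scanned raster to a plane coordinate system, as described above, is conventionally expressed as a six-parameter affine transform (the "world file" convention used with georeferenced images). The parameter values below are made up for illustration; only the form of the transform is standard.

```python
def pixel_to_map(col, row, A, D, B, E, C, F):
    """Six-parameter affine transform (world-file convention):
    A: x pixel size; E: y pixel size (negative, since rows grow downward);
    D, B: rotation/skew terms; C, F: map coordinates of the centre of
    the upper-left pixel."""
    x = A * col + B * row + C
    y = D * col + E * row + F
    return x, y

# Hypothetical parameters: ~2 ft per pixel, no rotation, invented
# State Plane (feet) coordinates for the upper-left pixel.
A, D, B, E = 2.0, 0.0, 0.0, -2.0
C, F = 1_260_000.0, 230_000.0

print(pixel_to_map(0, 0, A, D, B, E, C, F))       # upper-left pixel
print(pixel_to_map(500, 1000, A, D, B, E, C, F))  # 500 cols right, 1000 rows down
```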
Resumo:
This thesis describes research on the development of a novel hardwired tactile sensing system tailored to a next generation of surgical robotic and clinical devices, namely a steerable endoscope with tactile feedback and a surface plate for assessing patient posture and balance. Two case studies are examined. The first is a one-dimensional sensor for the steerable endoscope that retrieves shape and 'touch' information. The second is a two-dimensional surface that interprets the three-dimensional motion of a moving contacting load. The approach can retrieve information from distributive tactile sensing surfaces of different configurations, and can interpret both dynamic and static disturbances. This novel approach to sensing has the potential to discriminate contact and palpation in minimally invasive surgery (MIS) tools, and posture and balance in patients. The hardwired technology uses an embedded system based on a Field Programmable Gate Array (FPGA) as the platform for performing the sensory signal processing in real time. High-speed, robust operation is an advantage of this system, enabling the versatile applications involving dynamic real-time interpretation described in this research. The sensory signal processing uses neural networks to derive information from the input patterns of the contacting surface. Three neural network architectures, namely single, multiple, and cascaded, were introduced in an attempt to find the optimum solution for discriminating the contacting outputs. These architectures were modelled and implemented on the FPGA. With the recent introduction of modern digital design flows and synthesis tools that take a high-level behavioural specification of the sensory processing, fast prototyping of the neural network function can be achieved easily. This thesis outlines the challenges of implementing these networks and verifying their performance.
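The idea of distributive tactile sensing — recovering the position of a contacting load from a handful of coupled sensor readings — can be sketched with a minimal single-layer readout, loosely standing in for the simplest of the three architectures. Everything here is hypothetical: the Gaussian response model, the sensor count, and the least-squares training all replace the thesis's actual sensors and FPGA networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D distributive sensing: 8 sensors along a beam respond
# to a contacting load with a smooth, position-dependent profile.
CENTRES = np.linspace(0.0, 1.0, 8)

def sensor_readings(position):
    """Assumed response model: each sensor sees a Gaussian bump centred
    on the load position (width chosen so neighbouring sensors overlap)."""
    return np.exp(-((position - CENTRES) ** 2) / 0.02)

# Training set: known load positions and the patterns they produce.
positions = rng.uniform(0.05, 0.95, 500)
X = np.array([sensor_readings(p) for p in positions])
X = np.hstack([X, np.ones((len(X), 1))])       # bias column

# Single-layer linear readout trained by least squares.
w, *_ = np.linalg.lstsq(X, positions, rcond=None)

def predict(position):
    """Estimate the load position from the sensor pattern alone."""
    x = np.append(sensor_readings(position), 1.0)
    return float(x @ w)

for p in (0.2, 0.5, 0.8):
    print(f"true {p:.2f} -> predicted {predict(p):.3f}")
```

On an FPGA this readout reduces to a fixed set of multiply-accumulates per sample, which is what makes the real-time, hardwired interpretation described above feasible.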
Resumo:
COSTA, Umberto Souza; MOREIRA, Anamaria Martins; MUSICANTE, Martin A.; SOUZA NETO, Plácido A. JCML: A specification language for the runtime verification of Java Card programs. Science of Computer Programming. [S.l]: [s.n], 2010.