990 results for Improved line tracker
Abstract:
An analysis of large deformations of flexible membrane structures within tension field theory is considered. A modification of the finite element procedure by Roddeman et al. (Roddeman, D. G., Drukker, J., Oomens, C. W. J., Janssen, J. D., 1987, ASME J. Appl. Mech. 54, pp. 884-892) is proposed to study the wrinkling behavior of a membrane element. The state of stress in the element is determined through a modified deformation gradient corresponding to a fictive nonwrinkled surface. The new model uses a continuously modified deformation gradient to capture the location and orientation of wrinkles more precisely. It is argued that the fictive nonwrinkled surface may be looked upon as an everywhere-taut surface in the limit as the minor (tensile) principal stresses over the wrinkled portions go to zero. Accordingly, the modified deformation gradient is thought of as the limit of a sequence of everywhere-differentiable tensors. Under dynamic excitations, the governing equations are weakly projected to arrive at a system of nonlinear ordinary differential equations that is solved using different integration schemes. It is concluded that implicit integrators work much better than explicit ones in the present context.
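The closing point about time integration can be illustrated independently of the membrane formulation. The sketch below is not the authors' model; it integrates a generic stiff two-degree-of-freedom system of the form M u'' + K u = f(t), of the kind produced by the weak projection, with one explicit and one implicit SciPy solver. The stiffness matrix, forcing and tolerances are illustrative assumptions.

```python
# Minimal sketch (not the authors' membrane model): explicit vs implicit
# integration of a stiff system M u'' + K u = f(t) with unit masses.
import numpy as np
from scipy.integrate import solve_ivp

# Assumed toy stiffness with one very stiff mode, standing in for the sharp
# stiffness changes a taut/wrinkled state switch produces.
K = np.array([[1.0e6, -1.0e3],
              [-1.0e3, 1.0e2]])

def rhs(t, y):
    u, v = y[:2], y[2:]
    f = np.array([np.sin(10.0 * t), 0.0])   # illustrative dynamic excitation
    return np.concatenate([v, f - K @ u])   # unit mass matrix assumed

y0 = np.zeros(4)
for method in ("RK45", "Radau"):            # explicit vs implicit scheme
    sol = solve_ivp(rhs, (0.0, 0.1), y0, method=method, rtol=1e-6, atol=1e-9)
    print(f"{method}: {sol.nfev} right-hand-side evaluations, success={sol.success}")
```

On a stiff system of this kind the explicit scheme needs far more right-hand-side evaluations than the implicit one, which is the behaviour the abstract reports for the wrinkling problem.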
Abstract:
Improved forecasting of urban rail patronage is essential for effective policy development and efficient planning of new rail infrastructure. Past modelling and forecasting of urban rail patronage have been based on legacy modelling approaches and often conducted at the general level of public transport demand, rather than being specific to urban rail. This project canvassed current Australian practice and international best practice to develop and estimate time series and cross-sectional models of rail patronage for Australian mainland state capital cities. This involved a large online survey of rail riders and non-riders for each of the state capital cities, resulting in a comprehensive database of respondent socio-economic profiles, travel experience, and attitudes to rail and other modes of travel, together with stated preference responses to a wide range of urban travel scenarios. Estimation of the models demonstrated their ability to provide information on the major influences on the urban rail travel decision. Rail fares, congestion and rail service supply all have a strong influence on rail patronage, while less significant factors such as fuel price and access to a motor vehicle are also influential. Of note, too, is the relative homogeneity of rail user profiles across the state capitals: rail users tend to have higher incomes and education levels, and are also younger and more likely to be in full-time employment than non-rail users. The project analysis reported here represents only a small proportion of what could be accomplished with the survey database. More comprehensive investigation was beyond the scope of the project and has been left for future work.
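As an illustration of the kind of cross-sectional model described (not the project's actual specification or data), the sketch below fits a log-log patronage model by least squares on synthetic data, so the fitted coefficients read directly as fare, service-supply and fuel-price elasticities. All variable names and magnitudes are hypothetical.

```python
# Illustrative log-log (constant-elasticity) patronage model fitted to synthetic
# data; variables, magnitudes and "true" elasticities are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 200
fare    = rng.uniform(2.0, 8.0, n)     # fare per trip (hypothetical units)
service = rng.uniform(4.0, 20.0, n)    # trains per hour (hypothetical)
fuel    = rng.uniform(1.2, 2.2, n)     # fuel price (hypothetical)

# Synthetic relationship: patronage falls with fares, rises with service supply
# and fuel price, plus noise.
log_q = 5.0 - 0.4 * np.log(fare) + 0.6 * np.log(service) + 0.2 * np.log(fuel) \
        + rng.normal(0.0, 0.1, n)

X = np.column_stack([np.ones(n), np.log(fare), np.log(service), np.log(fuel)])
beta, *_ = np.linalg.lstsq(X, log_q, rcond=None)
print("estimated elasticities (fare, service, fuel):", np.round(beta[1:], 2))
```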
Abstract:
With technology scaling, vulnerability to soft errors in random logic is increasing. There is a need for on-line error detection and protection for logic gates even at sea level. The error checker is the key element of an on-line detection mechanism. We compare three different checkers for error detection from the point of view of area, power and false error detection rates. We find that the double sampling checker (used in Razor) is the simplest and most area- and power-efficient, but suffers from very high false detection rates of 1.15 times the actual error rate. We also find that the alternative approaches of triple sampling and the integrate and sample (I&S) method can be designed to have zero false detection rates, but at increased area, power and implementation complexity. The triple sampling method has about 1.74 times the area and twice the power of the double sampling method and also needs a complex clock generation scheme. The I&S method needs about 16% more power with 0.58 times the area of double sampling, but comes with more stringent implementation constraints as it requires detection of small voltage swings.
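A behavioural sketch makes the false-detection mechanism of double sampling concrete: the data line is sampled at the clock edge and again after a short delay, and any mismatch raises the error flag, so a legitimate transition that lands inside that window is flagged as a false error. The timing values and error rate below are illustrative assumptions, not the paper's measured figures.

```python
# Behavioural model of a double-sampling checker: sample at the clock edge and
# again after SHADOW_DELAY; any mismatch raises the error flag.  A clean but
# late data transition inside the window therefore causes a false detection.
import random

SHADOW_DELAY = 0.2        # detection window (fraction of clock period, assumed)
SOFT_ERROR_RATE = 1e-3    # assumed per-cycle soft-error probability

def checker_flags_error(transition_time, soft_error):
    if soft_error:
        return True                               # genuine upset caught
    return 0.0 < transition_time < SHADOW_DELAY   # false detection

random.seed(1)
true_detections = false_detections = 0
for _ in range(100_000):
    soft_error = random.random() < SOFT_ERROR_RATE
    t_switch = random.uniform(-0.5, 0.5)          # data transition time vs clock edge
    if checker_flags_error(t_switch, soft_error):
        if soft_error:
            true_detections += 1
        else:
            false_detections += 1
print("true detections:", true_detections, "false detections:", false_detections)
```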
Abstract:
Enhanced Scan design can significantly improve the fault coverage of two-pattern delay tests at the cost of exorbitantly high area overhead. The redundant flip-flops introduced in the scan chains have traditionally only been used to launch the two-pattern delay test inputs, not to capture test results. This paper presents a new, much lower cost partial Enhanced Scan methodology with both improved controllability and observability. Facilitating observation of some hard-to-observe internal nodes by capturing their responses in the already available and underutilized redundant flip-flops improves delay fault coverage at minimal, almost negligible cost. Experimental results on ISCAS'89 benchmark circuits show significant improvement in transition delay fault (TDF) coverage for this new partial Enhanced Scan methodology.
Abstract:
The problem of automatic melody line identification in a MIDI file plays an important role in taking query-by-humming (QBH) systems to the next level. We present here a novel algorithm to identify the melody line in a polyphonic MIDI file. A note pruning and track/channel ranking method is used to identify the melody line. We use results from musicology to derive simple heuristics for the note pruning stage, which improves the robustness of the algorithm by discarding "spurious" notes. A ranking based on the melodic information in each track/channel enables us to choose the melody line accurately. Our algorithm makes no assumptions about performer-specific MIDI parameters, is simple, and achieves an accuracy of 97% in identifying the melody line correctly. The algorithm is currently used in a QBH system built in our lab.
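The prune-and-rank structure of the algorithm can be sketched as follows; the specific pruning threshold and ranking features here are illustrative assumptions, not the authors' published heuristics.

```python
# Sketch of the prune-and-rank idea with assumed heuristics.
from collections import defaultdict

# Hypothetical note list: (track, onset_sec, duration_sec, midi_pitch).
notes = [
    (0, 0.0, 0.05, 40), (0, 0.5, 0.40, 64), (0, 1.0, 0.45, 67),
    (1, 0.0, 0.50, 36), (1, 0.5, 0.50, 36), (1, 1.0, 0.50, 43),
]

# Note pruning: discard very short "spurious" notes (threshold is an assumption).
pruned = [n for n in notes if n[2] >= 0.1]

def melodic_score(track_notes):
    # Two simple proxies for melodic information: overall pitch height and
    # the amount of pitch movement between successive notes.
    pitches = [pitch for (_, _, _, pitch) in sorted(track_notes, key=lambda n: n[1])]
    mean_pitch = sum(pitches) / len(pitches)
    movement = sum(abs(a - b) for a, b in zip(pitches, pitches[1:]))
    return mean_pitch + movement

by_track = defaultdict(list)
for n in pruned:
    by_track[n[0]].append(n)

melody_track = max(by_track, key=lambda t: melodic_score(by_track[t]))
print("melody line assigned to track", melody_track)
```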
Abstract:
The Transition Radiation Tracker (TRT) of the ATLAS experiment at the LHC is part of the Inner Detector. It is designed as a robust and powerful gaseous detector that provides tracking through individual drift tubes (straws) as well as particle identification via transition radiation (TR) detection. The straw tubes are operated with Xe-CO2-O2 70/27/3, a gas that combines the advantages of efficient TR absorption, a short electron drift time and minimal ageing effects. The modules of the barrel part of the TRT were built in the United States, while the end-cap wheels are assembled at two Russian institutes. Acceptance tests of barrel modules and end-cap wheels are performed at CERN before assembly and integration with the Semiconductor Tracker (SCT) and the Pixel Detector. This thesis first describes simulations of the TRT straw tube. The argon-based acceptance gas mixture as well as two xenon-based operating gases are examined for their properties. Drift velocities and Townsend coefficients are computed with the program Magboltz and used to study electron drift and multiplication in the straw using the software Garfield. The inclusion of Penning transfers in the avalanche process leads to remarkable agreement with experimental data. A high level of cleanliness in the TRT's acceptance test gas system is indispensable. To monitor gas purity, a small straw tube detector has been constructed and extensively used to study the ageing behaviour of the straw tube in Ar-CO2. A variety of ageing tests are presented and discussed. Acceptance tests for the TRT survey dimensions, wire tension, gas-tightness, high-voltage stability and gas gain uniformity along each individual straw. The thesis gives details on acceptance criteria and measurement methods in the case of the end-cap wheels. Special focus is put on wire tension and straw straightness. The effect of geometrically deformed straws on gas gain and energy resolution is examined in an experimental setup and compared to simulation studies. An overview of the most important results from the end-cap wheels tested up to this point is presented.
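As a worked illustration of how a straw-tube gas gain follows from the Townsend coefficient (the quantity computed with Magboltz in the thesis), the sketch below integrates an assumed Korff-type alpha(E) along the coaxial field of a straw. The wire radius, voltage and Korff parameters are illustrative, not the thesis values.

```python
# Gas gain from the Townsend coefficient in a straw: G = exp( integral of
# alpha(E(r)) dr ) with the coaxial field E(r) = V / (r ln(b/a)).  The Korff-type
# parameterisation alpha/p = A exp(-B p / E) and all numbers are illustrative.
import numpy as np

a = 0.0015   # anode wire radius [cm] (illustrative)
b = 0.2      # straw inner radius [cm] (illustrative)
V = 1450.0   # anode voltage [V] (illustrative)
p = 760.0    # gas pressure [Torr]
A = 14.0     # assumed Korff parameter [1/(cm Torr)]
B = 180.0    # assumed Korff parameter [V/(cm Torr)]

r = np.linspace(a, b, 20000)
E = V / (r * np.log(b / a))             # radial field [V/cm]
alpha = p * A * np.exp(-B * p / E)      # first Townsend coefficient [1/cm]
integral = np.sum(0.5 * (alpha[1:] + alpha[:-1]) * np.diff(r))   # trapezoid rule
print(f"estimated gas gain ~ {np.exp(integral):.1e}")
```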
Abstract:
Boron neutron capture therapy (BNCT) is a form of chemically targeted radiotherapy that utilises the high neutron capture cross-section of the boron-10 isotope to achieve a preferential dose increase in the tumour. BNCT dosimetry poses a special challenge because the radiation dose absorbed by the irradiated tissues consists of several different dose components. Dosimetry is important as the effect of the radiation on the tissue is correlated with the radiation dose. Consistent and reliable radiation dose delivery and dosimetry are thus basic requirements for radiotherapy. The international recommendations are not directly applicable to BNCT dosimetry. The existing dosimetry guidance for BNCT provides recommendations but also calls for investigation of complementary methods for comparison and improved accuracy. In this thesis the quality assurance and stability measurements of the neutron beam monitors used in dose delivery are presented. The beam monitors were found not to be affected by the presence of a phantom in the beam, and the effect of the reactor core power distribution was less than 1%. The weekly stability test with activation detectors has been generally reproducible within the recommended tolerance value of 2%. An established toolkit for epithermal neutron beams for determination of the dose components is presented and applied in an international dosimetric intercomparison. The quantities measured by the groups in the intercomparison (neutron flux, fast neutron and photon dose) were generally in agreement within the stated uncertainties. However, the uncertainties were large, ranging from 3-30% (1 standard deviation), emphasising the importance of dosimetric intercomparisons if clinical data are to be compared between different centres. Measurements with the Exradin type 2M ionisation chamber have been repeated in the epithermal neutron beam in the same measurement configuration over the course of 10 years. The presented results exclude severe sensitivity changes to thermal neutrons that have been reported for this type of chamber. Microdosimetry and polymer gel dosimetry are studied as complementary methods for epithermal neutron beam dosimetry. For microdosimetry, the comparison of results with ionisation chambers and computer simulation showed that the photon dose measured with microdosimetry was lower than with the two other methods, although the disagreement was within the uncertainties. For the neutron dose, the simulation and microdosimetry results agreed within 10%, while the ionisation chamber technique gave 10-30% lower neutron dose rates than the two other methods. The response of the BANG-3 gel was found to be linear for both photon and epithermal neutron beam irradiation. The dose distribution normalised to the dose maximum measured by MAGIC polymer gel was found to agree well with the simulated result near the dose maximum, while the spatial difference between the measured and simulated 30% isodose lines was more than 1 cm. In both the BANG-3 and MAGIC gel studies, the interpretation of the results was complicated by the presence of high-LET radiation.
Abstract:
Dehydroamino acids are important precursors for the synthesis of a number of unnatural amino acids and are structural components of many biologically active peptide derivatives. However, efficient synthetic procedures for their production in large amounts and without side reactions are limited. We report here an improved procedure for the synthesis of dehydroalanine and dehydroaminobutyric acid from the carbonate derivatives of serine and threonine using TBAF. The anti-selective E2 elimination of the carbonate derivatives of serine and threonine using TBAF is milder and more efficient than other available procedures. The elimination reaction is completed in less than 10 min with the various carbonate derivatives studied, and the methodology is very efficient for the synthesis of dehydroamino acids and dehydropeptides. The procedure thus provides easy access to key synthetic precursors and can be used to introduce interesting structural elements into designed peptides.
Abstract:
Scene understanding has been investigated mainly from a visual information point of view. Recently, depth has provided a wealth of extra information, allowing more geometric knowledge to be fused into scene understanding. Yet to form a holistic view, especially in robotic applications, one can create even more data by interacting with the world. In fact, humans, when growing up, seem to investigate the world around them heavily through haptic exploration. We show an application of haptic exploration on a humanoid robot in cooperation with a learning method for object segmentation. The actions, performed consecutively, improve the segmentation of objects in the scene.
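The core idea can be sketched in a few lines (this is not the paper's learning method): after a push, scene points whose observed displacement is coherent are grouped into one object hypothesis, assuming point correspondences between the before and after views. The data and threshold below are synthetic.

```python
# Synthetic sketch: group points into an object by their motion after a push,
# assuming point correspondences between the "before" and "after" observations.
import numpy as np

rng = np.random.default_rng(0)
background = rng.uniform(0.0, 1.0, (200, 3))
pushed_obj = rng.uniform(0.0, 0.2, (50, 3)) + np.array([0.6, 0.6, 0.0])

before = np.vstack([background, pushed_obj])
after = before.copy()
after[200:] += np.array([0.05, 0.0, 0.0])   # the pushed object translates

displacement = np.linalg.norm(after - before, axis=1)
object_mask = displacement > 0.01           # motion threshold (assumed)
print("points segmented as the pushed object:", int(object_mask.sum()))
```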
Abstract:
Background: This multicentre, open-label, randomized, controlled phase II study evaluated cilengitide in combination with cetuximab and platinum-based chemotherapy, compared with cetuximab and chemotherapy alone, as first-line treatment of patients with advanced non-small-cell lung cancer (NSCLC). Patients and methods: Patients were randomized 1:1:1 to receive cetuximab plus platinum-based chemotherapy alone (control), or combined with cilengitide 2000 mg 1×/week i.v. (CIL-once) or 2×/week i.v. (CIL-twice). A protocol amendment limited enrolment to patients with epidermal growth factor receptor (EGFR) histoscore ≥200 and closed the CIL-twice arm for practical feasibility issues. The primary end point was progression-free survival (PFS; independent read); secondary end points included overall survival (OS), safety, and biomarker analyses. A comparison between the CIL-once and control arms is reported, both for the total cohorts and for patients with EGFR histoscore ≥200. Results: There were 85 patients in the CIL-once group and 84 in the control group. PFS (independent read) was 6.2 versus 5.0 months for CIL-once versus control [hazard ratio (HR) 0.72; P = 0.085]; for patients with EGFR histoscore ≥200, PFS was 6.8 versus 5.6 months, respectively (HR 0.57; P = 0.0446). Median OS was 13.6 months for CIL-once versus 9.7 months for control (HR 0.81; P = 0.265). In patients with EGFR histoscore ≥200, OS was 13.2 versus 11.8 months, respectively (HR 0.95; P = 0.855). No major differences in adverse events between CIL-once and control were reported; nausea (59% versus 56%, respectively) and neutropenia (54% versus 46%, respectively) were the most frequent. There was no increased incidence of thromboembolic events or haemorrhage in cilengitide-treated patients. αvβ3 and αvβ5 expression was neither a predictive nor a prognostic indicator. Conclusions: The addition of cilengitide to cetuximab/chemotherapy indicated potential clinical activity, with a trend for a PFS difference in the independent-read analysis. However, the observed inconsistencies across end points suggest that additional investigations are required to substantiate a potential role of other integrin inhibitors in NSCLC treatment.
Abstract:
New stars form in dense interstellar clouds of gas and dust called molecular clouds. The actual sites where the process of star formation takes place are the dense clumps and cores deeply embedded in molecular clouds. The details of the star formation process are complex and not completely understood. Thus, determining the physical and chemical properties of molecular cloud cores is necessary for a better understanding of how stars are formed. Some of the main features of the origin of low-mass stars, like the Sun, are already relatively well-known, though many details of the process are still under debate. The mechanism through which high-mass stars form, on the other hand, is poorly understood. Although it is likely that the formation of high-mass stars shares many properties similar to those of low-mass stars, the very first steps of the evolutionary sequence are unclear. Observational studies of star formation are carried out particularly at infrared, submillimetre, millimetre, and radio wavelengths. Much of our knowledge about the early stages of star formation in our Milky Way galaxy is obtained through molecular spectral line and dust continuum observations. The continuum emission of cold dust is one of the best tracers of the column density of molecular hydrogen, the main constituent of molecular clouds. Consequently, dust continuum observations provide a powerful tool to map large portions across molecular clouds, and to identify the dense star-forming sites within them. Molecular line observations, on the other hand, provide information on the gas kinematics and temperature. Together, these two observational tools provide an efficient way to study the dense interstellar gas and the associated dust that form new stars. The properties of highly obscured young stars can be further examined through radio continuum observations at centimetre wavelengths. For example, radio continuum emission carries useful information on conditions in the protostar+disk interaction region where protostellar jets are launched. In this PhD thesis, we study the physical and chemical properties of dense clumps and cores in both low- and high-mass star-forming regions. The sources are mainly studied in a statistical sense, but also in more detail. In this way, we are able to examine the general characteristics of the early stages of star formation, cloud properties on large scales (such as fragmentation), and some of the initial conditions of the collapse process that leads to the formation of a star. The studies presented in this thesis are mainly based on molecular line and dust continuum observations. These are combined with archival observations at infrared wavelengths in order to study the protostellar content of the cloud cores. In addition, centimetre radio continuum emission from young stellar objects (YSOs; i.e., protostars and pre-main sequence stars) is studied in this thesis to determine their evolutionary stages. 
The main results of this thesis are as follows: i) filamentary and sheet-like molecular cloud structures, such as infrared dark clouds (IRDCs), are likely to be caused by supersonic turbulence but their fragmentation at the scale of cores could be due to gravo-thermal instability; ii) the core evolution in the Orion B9 star-forming region appears to be dynamic and the role played by slow ambipolar diffusion in the formation and collapse of the cores may not be significant; iii) the study of the R CrA star-forming region suggests that the centimetre radio emission properties of a YSO are likely to change with its evolutionary stage; iv) the IRDC G304.74+01.32 contains candidate high-mass starless cores which may represent the very first steps of high-mass star and star cluster formation; v) SiO outflow signatures are seen in several high-mass star-forming regions which suggest that high-mass stars form in a similar way as their low-mass counterparts, i.e., via disk accretion. The results presented in this thesis provide constraints on the initial conditions and early stages of both low- and high-mass star formation. In particular, this thesis presents several observational results on the early stages of clustered star formation, which is the dominant mode of star formation in our Galaxy.
Abstract:
In the title compound, C30H24Cl2N2O3, the two quinoline ring systems are almost planar [maximum deviations = 0.029 (2) and 0.018 (3) Å] and the dihedral angle between them is 4.17 (8)°. The dihedral angle between the phenyl ring and its attached quinoline ring is 69.06 (13)°. The packing is stabilized by C-H⋯O, C-H⋯N, weak π-π stacking [centroid-centroid distances = 3.7985 (16) and 3.7662 (17) Å] and C-H⋯π interactions.
Abstract:
Stress- and strain-controlled tests of heat-treated high-strength rail steel (Australian Standard AS1085.1) have been performed in order to improve the characterisation of the material's ratcheting and fatigue wear behaviour. The hardness of the rail head material has also been studied, and it has been found that hardness reduces considerably below four millimetres from the rail top surface. Historically, researchers have used test coupons with circular cross-sections to conduct cyclic load tests. Such test coupons, typically five millimetres in gauge diameter and ten millimetres in grip diameter, are usually taken from the rail head sample. When there is considerable variation of material properties over the cross-section, it becomes likely that localised properties of the rail material will be missed. In another case from the literature, disks 47 mm in diameter for a twin-disk rolling contact test machine were obtained directly from the rail sample and used to validate ratcheting and rolling contact fatigue wear models. The question arises: how accurate are such tests, especially when large material property gradients exist? In this research paper, the effects of rail sampling location on the ratcheting behaviour of AS1085.1 rail steel were investigated using rectangular-shaped specimens obtained at four different depths to observe their respective cyclic plasticity behaviour. The microstructural features of the test coupons were also analysed, especially the pearlite inter-lamellar spacing, which showed strong correlation with both the hardness and the cyclic plasticity behaviour of the material. This work ultimately provides new data and testing methodology to aid the selection of valid parameters for material constitutive models to better understand rail surface ratcheting and wear.
Abstract:
This paper considers the dynamic modelling and motion control of a Surface Effect Ship (SES) for safer transfer of personnel and equipment between a vessel and an offshore wind turbine. The control system designed is referred to as the Boarding Control System (BCS). The performance of this system is investigated for a specific wind-farm service vessel, the Wave Craft. On an SES, the pressurized air cushion supports the majority of the weight of the vessel. The control problem considered relates to the actuation of the cushion pressure such that wave-induced vessel motions are minimized. Results are given through simulation and through model- and full-scale experimental testing.
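A toy version of the control problem (not the actual BCS or Wave Craft model) is sketched below: a one-degree-of-freedom heave model in which the cushion pressure is commanded by proportional-derivative feedback to counteract a sinusoidal wave excitation. All parameters are hypothetical.

```python
# Toy sketch of the control idea (not the Wave Craft BCS): a one-DOF heave model
# where cushion pressure is actuated by PD feedback against a sinusoidal wave
# excitation.  All parameters are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 2.0e4, 5.0e3, 4.0e5        # mass [kg], damping, hydrostatic stiffness
A_c = 20.0                           # cushion area [m^2]
Kp, Kd = 3.0e4, 1.0e4                # PD gains on heave and heave rate

def wave_force(t):
    return 5.0e4 * np.sin(1.2 * t)   # wave-induced excitation [N]

def heave(t, y, controlled):
    z, zdot = y
    dp = -(Kp * z + Kd * zdot) if controlled else 0.0   # cushion pressure command [Pa]
    zddot = (wave_force(t) + A_c * dp - c * zdot - k * z) / m
    return [zdot, zddot]

t_eval = np.linspace(0.0, 60.0, 2000)
for controlled in (False, True):
    sol = solve_ivp(heave, (0.0, 60.0), [0.0, 0.0], t_eval=t_eval, args=(controlled,))
    print(f"controlled={controlled}: max |heave| = {np.abs(sol.y[0]).max():.3f} m")
```

Running the sketch shows the actuated cushion pressure reducing the heave amplitude relative to the uncontrolled case, which is the qualitative behaviour the BCS targets.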
Abstract:
Evaluation and design of shore protection works against tsunamis assumes considerable importance in view of the impact of the tsunami of 26 December 2004 in India and other countries in Asia. The absence of proper guidelines made matters worse and contributed to the magnitude of the damage that occurred. Surveys of the damage indicated that scour resulting from high velocities is one of the prime causes of damage to simple structures. Sea walls were found in some cases to have helped minimize the damage. The objective of this paper is to suggest that designing shoreline protection systems for the expected wave heights that are generated, and using flexible systems such as geocells, is likely to give better protection. The protection systems can be designed to withstand the wave forces corresponding to different probabilities of incidence. A design approach for a geocell protection system is suggested and illustrated with reference to wave height data from the east coast of India.
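A small worked example of designing for a chosen probability of incidence (the numbers are hypothetical, not east-coast-of-India data): the encounter probability P = 1 - (1 - 1/T)^L over a design life of L years links the accepted risk to a return period T, and an assumed Gumbel fit of annual-maximum wave heights converts T into a design wave height.

```python
# Choosing a design wave height for a target probability of incidence.
# Encounter probability over the design life: P = 1 - (1 - 1/T)^L.
# The Gumbel parameters below are assumed, purely for illustration.
import math

design_life = 50            # years
accepted_risk = 0.10        # 10% chance of exceedance over the design life

# Return period implied by the accepted encounter probability.
T = 1.0 / (1.0 - (1.0 - accepted_risk) ** (1.0 / design_life))

# Assumed Gumbel fit of annual-maximum nearshore wave height [m].
mu, beta = 3.0, 0.6
p_annual = 1.0 - 1.0 / T                       # annual non-exceedance probability
H_design = mu - beta * math.log(-math.log(p_annual))
print(f"return period ~ {T:.0f} years, design wave height ~ {H_design:.1f} m")
```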