856 results for Tilting and cotilting modules
Abstract:
There is much common ground between the areas of coding theory and systems theory. Fitzpatrick has shown that a Gröbner basis approach leads to efficient algorithms in the decoding of Reed-Solomon codes and in scalar interpolation and partial realization. This thesis simultaneously generalizes and simplifies that approach and presents applications to discrete-time modeling, multivariable interpolation and list decoding. Gröbner basis theory has come into its own in the context of software and algorithm development. By generalizing the concept of polynomial degree, term orders are provided for multivariable polynomial rings and free modules over polynomial rings. The orders are not, in general, unique, and this adds, in no small way, to the power and flexibility of the technique. As well as being generating sets for ideals or modules, Gröbner bases always contain an element which is minimal with respect to the corresponding term order. Central to this thesis is a general algorithm, valid for any term order, that produces a Gröbner basis for the solution module (or ideal) of elements satisfying a sequence of generalized congruences. These congruences, based on shifts and homomorphisms, are applicable to a wide variety of problems, including key equations and interpolations. At the core of the algorithm is an incremental step. Iterating this step lends a recursive/iterative character to the algorithm. As a consequence, not all of the input to the algorithm need be available from the start, and different "paths" can be taken to reach the final solution. The existence of a suitable chain of modules satisfying the criteria of the incremental step is a prerequisite for applying the algorithm.
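For readers unfamiliar with term orders, the following minimal sketch (using SymPy, not the thesis's congruence-based algorithm) computes Gröbner bases of the same ideal under two different term orders; the generators are arbitrary examples, and the point is only that the basis, and hence its order-minimal element, depends on the chosen order.

```python
# Minimal illustration: Groebner bases of one ideal under two term orders.
# Not the thesis's algorithm; the generators below are arbitrary examples.
from sympy import symbols, groebner

x, y = symbols('x y')
ideal = [x**2 + y**2 - 1, x*y - 1]

for order in ('lex', 'grevlex'):
    G = groebner(ideal, x, y, order=order)
    print(order, list(G.exprs))   # the basis (and its minimal element) changes with the order
```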
Abstract:
The electric car, the all-electric aircraft and the requirements of renewable energy are examples of technologies needed to address the global problems of global warming and carbon emissions. Power electronics and packaged modules are fundamental to underpinning these technologies, and with the diverse requirements for electrical configurations and the range of environmental conditions, time to market is paramount for module manufacturers and systems designers alike. This paper details some of the results from a major UK project into the reliability of power electronic modules using physics-of-failure techniques. It presents a design methodology, together with results that demonstrate enhanced product design with improved reliability, performance and value within acceptable time scales.
Abstract:
Simulation of the autoclave manufacturing technique for composites can yield a preliminary estimation of the induced residual thermal stresses and deformations that affect component fatigue life and the tolerances required for assembly. In this paper, an approach is proposed to simulate the autoclave manufacturing technique for unidirectional composites. The proposed approach consists of three modules. The first module is a thermo-chemical model that estimates the temperature and degree-of-cure distributions in the composite part during the cure cycle. The second and third modules are a sequential stress analysis using FE-Implicit and FE-Explicit, respectively. A user-material subroutine is used to model the viscoelastic properties of the material based on micromechanics theory.
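As an illustration of the kind of quantity the thermo-chemical module computes, the sketch below integrates a degree-of-cure evolution over a simplified cure cycle, assuming a generic Kamal-type autocatalytic kinetics law with hypothetical constants; the actual resin kinetics, heat transfer and FE coupling used in the paper are not reproduced.

```python
# Sketch of a degree-of-cure calculation under an assumed Kamal-type law:
# d(alpha)/dt = (k1 + k2*alpha**m) * (1 - alpha)**n, k_i = A_i*exp(-E_i/(R*T)).
# All constants are hypothetical placeholders.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314                      # J/(mol K)
A1, E1 = 2.0e4, 7.0e4          # hypothetical pre-exponentials and activation energies
A2, E2 = 3.0e4, 6.0e4
m, n = 0.5, 1.5

def cycle_temperature(t):
    """Simplified cure cycle: ramp from 300 K to 450 K over 1 h, then hold."""
    return 300.0 + min(t, 3600.0) * 150.0 / 3600.0

def cure_rate(t, alpha):
    T = cycle_temperature(t)
    k1 = A1 * np.exp(-E1 / (R * T))
    k2 = A2 * np.exp(-E2 / (R * T))
    return (k1 + k2 * alpha[0]**m) * (1.0 - alpha[0])**n

sol = solve_ivp(cure_rate, (0.0, 7200.0), [0.0], max_step=10.0)
print(f"degree of cure after 2 h: {sol.y[0, -1]:.2f}")
```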
Abstract:
Virtual manufacturing of composites can yield an early estimation of the induced residual thermal stresses that affect component fatigue life, and of the deformations that affect the tolerances required for assembly. Based on these estimations, the designer can make early decisions regarding changes in part design or material properties, which can help to reduce cost. In this paper, an approach is proposed to simulate the autoclave manufacturing technique for unidirectional composites. The proposed approach consists of three modules. The first module is a thermo-chemical model that estimates the temperature and degree-of-cure distributions in the composite part during the cure cycle. The second and third modules are stress analyses using FE-Implicit and FE-Explicit, respectively. A user-material subroutine is used to model the viscoelastic properties of the material based on micromechanical theory. The estimated deformation of the composite part can be corrected during the autoclave process by modifying the process-tool design. The deformed composite surface is sent to CATIA for design modification of the process-tool.
Abstract:
The renewed concern in assessing risks and consequences from technological hazards in industrial and urban areas continues to emphasize the development of local-scale consequence analysis (CA) modelling tools able to predict short-term pollution episodes and exposure effects on humans and the environment in the case of accidents involving hazardous gases (hazmat). In this context, the main objective of this thesis is the development and validation of the EFfects of Released Hazardous gAses (EFRHA) model. This modelling tool is designed to simulate the outflow and atmospheric dispersion of heavy and passive hazmat gases in complex and built-up areas, and to estimate the exposure consequences of short-term pollution episodes in accordance with regulatory/safety threshold limits. The model comprises five main modules based on up-to-date methods: meteorological, terrain, source term, dispersion, and effects modules. Accident scenarios with different initial physical states can be examined. The dispersion module, considered the core of the developed tool, comprises a shallow-layer modelling approach capable of accounting for the main influence of obstacles on the hazmat gas dispersion phenomena. Model validation includes qualitative and quantitative analyses of the main outputs by comparing modelled results against measurements and/or modelled databases. The preliminary analysis of the meteorological and source term modules against outputs from extensively validated models shows a consistent description of ambient conditions and of the variation of the hazmat gas release. Dispersion is compared against measured observations in obstructed and unobstructed areas for different release and dispersion scenarios. The performance validation exercise yielded acceptable agreement, showing a reasonable numerical representation of the measured features. In general, quality metrics are within or close to the acceptance limits recommended for 'non-CFD models', demonstrating the model's capability to reasonably predict the accidental release and atmospheric dispersion of hazmat gases in industrial and urban areas. The EFRHA model was also applied to a particular case study, the Estarreja Chemical Complex (ECC), for a set of accidental release scenarios within a CA scope. The results show the magnitude of potential effects on the surrounding populated area and the influence of the type of accident and the environment on the main outputs. Overall, the present thesis shows that the EFRHA model can be used as a straightforward tool to support CA studies in the scope of training and planning, but also to support decision and emergency response in the case of accidental releases of hazmat gases in industrial and built-up areas.
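As a point of reference only, the sketch below evaluates a steady-state Gaussian plume for a continuous ground-level release of a passive gas; it is not the EFRHA shallow-layer dense-gas formulation, and the Briggs-style dispersion coefficients are a textbook assumption for neutral stability.

```python
# Illustrative passive-dispersion baseline: steady-state Gaussian plume for a
# continuous ground-level release. Not the EFRHA shallow-layer approach.
import numpy as np

def plume_concentration(q, u, x, y, z, h=0.0):
    """q: emission rate (kg/s), u: wind speed (m/s), x,y,z: receptor coords (m), h: release height (m)."""
    sigma_y = 0.08 * x * (1.0 + 0.0001 * x) ** -0.5   # Briggs rural, neutral stability
    sigma_z = 0.06 * x * (1.0 + 0.0015 * x) ** -0.5
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))  # ground reflection term
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# 1 kg/s release, 3 m/s wind, receptor 500 m downwind on the plume axis at breathing height
print(f"{plume_concentration(1.0, 3.0, 500.0, 0.0, 1.5):.2e} kg/m^3")
```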
Abstract:
A principal objective of software engineering is to produce complex, large-scale, reliable software within a reasonable time. Object-oriented (OO) technology has provided sound concepts, modelling techniques and programming techniques that have enabled the development of complex applications in both academia and industry. This experience has, however, revealed weaknesses of the object paradigm (for example, code scattering and the traceability problem). Aspect-oriented (AO) programming offers a simple solution to the limitations of OO programming, such as the problem of crosscutting concerns. These crosscutting concerns manifest as the same code being scattered across several modules of the system, or as several pieces of code being tangled within a single module. This new way of programming makes it possible to implement each concern independently of the others and then to assemble them according to well-defined rules. AO programming therefore promises better productivity, better code reuse and better adaptability of code to change. This new approach quickly spread across the whole software development process, with the aim of preserving modularity and traceability, two important properties of high-quality software. However, AO technology presents many challenges. Reasoning about, specifying and verifying AO programs is difficult, all the more so as these programs evolve over time. Consequently, modular reasoning about these programs is required; otherwise they would have to be re-examined in their entirety each time a component is changed or added. It is, however, well known in the literature that modular reasoning about AO programs is difficult, since the applied aspects often change the behaviour of their base components [47]. The same difficulties are present in the specification and verification phases of the software development process. To the best of our knowledge, modular specification and modular verification are weakly covered and constitute a very interesting research area. Likewise, interactions between aspects are a serious problem in the aspect community. To face these problems, we have chosen to use category theory and algebraic specification techniques. To provide a solution to the problems cited above, we used the work of Wiels [110] and other contributions such as those described in the book [25]. We assume that the system under development is already decomposed into aspects and classes. The first contribution of our thesis is the extension of algebraic specification techniques to the notion of aspect. Second, we defined a logic, LA, which is used in the body of specifications to describe the behaviour of these components. The third contribution is the definition of the weaving operator, which corresponds to the interconnection relation between aspect modules and class modules. The fourth contribution concerns the development of a prevention mechanism that prevents undesirable interactions in aspect-oriented systems.
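To make the notions of a crosscutting concern and of weaving concrete, the toy sketch below weaves a logging concern into a class with a simple decorator-based weaver in Python; it only illustrates the idea and is unrelated to the thesis's categorical weaving operator, the logic LA, or algebraic specifications.

```python
# Toy illustration of a crosscutting concern: a logging "advice" is written once
# and woven into every public method of a class, instead of being scattered
# through each method body.
import functools

def logged(func):
    """Advice applied around a method (the crosscutting concern)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"entering {func.__qualname__}")
        result = func(*args, **kwargs)
        print(f"leaving {func.__qualname__}")
        return result
    return wrapper

def weave(cls, advice, pointcut):
    """Apply `advice` to every method of `cls` whose name matches `pointcut`."""
    for name, attr in list(vars(cls).items()):
        if callable(attr) and pointcut(name):
            setattr(cls, name, advice(attr))
    return cls

class Account:
    def __init__(self, balance):
        self.balance = balance
    def deposit(self, amount):
        self.balance += amount

weave(Account, logged, pointcut=lambda name: not name.startswith("_"))
Account(100).deposit(25)   # prints entering/leaving messages around deposit
```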
Abstract:
Globalization has been accompanied by the rapid spread of infectious diseases and further strain on working conditions for health workers globally. Post-SARS, Canadian occupational health and infection control researchers came together to study how to better protect health workers, and found that training was indeed perceived as key to a positive safety culture. This led to developing information and communication technology (ICT) tools. The research conducted also showed the need for better workplace inspections, so a workplace audit tool was also developed to supplement worker questionnaires and the ICT tools. When invited to join Ecuadorean colleagues to promote occupational health and infection control, these tools were collectively adapted and improved, including face-to-face as well as online problem-based learning scenarios. The South African government then invited the team to work with local colleagues to improve occupational health and infection control, resulting in an improved web-based health information system to track incidents, exposures, and occupational injury and diseases. As the H1N1 pandemic struck, the online infection control course was adapted and translated into Spanish, as was a novel skill-building learning tool that permits health workers to practice selecting personal protective equipment. This tool was originally developed in collaboration with countries from the Caribbean region and the Pan American Health Organization (PAHO). Research from these experiences led to a strengthened focus on building the capacity of health and safety committees, and new modules are thus being created, informed by that work. The products developed have been widely heralded as innovative and interactive, leading to their inclusion in "toolkits" used internationally. The tools used in Canada were substantially improved through the collaborative adaptation process for South and Central America and South Africa. This international collaboration between occupational health and infection control researchers led to the improvement of the research framework and the development of tools, guidelines and information systems. Furthermore, the research and knowledge-transfer experience highlighted the value of partnership between Northern and Southern researchers in terms of sharing resources, experiences and knowledge.
Abstract:
A new wire mechanism called the Redundant Drive Wire Mechanism (RDWM) is proposed. The purpose of this paper is to build up the theory of an RDWM capable of both fast motion and fine motion. First, the basic concepts of the proposed mechanism are presented. Second, the vector closure condition for the proposed mechanism is developed. Next, we present the basic equations and propose the basic structure of the RDWM with the Internal DOF module, Double Actuation Modules and Precision Modules, together with the properties of the mechanism. Finally, we conduct simulations to show the validity of the RDWM.
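A generic way to test a vector closure condition for a wire-driven mechanism (wires can only pull, so the wire direction vectors must positively span the task space) is sketched below as a linear-programming feasibility check; this is a standard formulation given for illustration, not the paper's specific RDWM derivation.

```python
# Generic vector-closure check: closure holds if the structure matrix A has
# full rank and there exist strictly positive tensions t with A t = 0.
import numpy as np
from scipy.optimize import linprog

def has_vector_closure(A, t_min=1.0):
    """A: (dof x n_wires) matrix whose columns are unit wire direction vectors."""
    dof, n = A.shape
    if np.linalg.matrix_rank(A) < dof:
        return False
    # Feasibility problem: find t >= t_min with A t = 0 (objective is irrelevant).
    res = linprog(c=np.zeros(n), A_eq=A, b_eq=np.zeros(dof),
                  bounds=[(t_min, None)] * n, method="highs")
    return res.success

# Planar point mass pulled by three wires 120 degrees apart: closure holds.
A = np.array([[1.0, -0.5, -0.5],
              [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2]])
print(has_vector_closure(A))          # True
print(has_vector_closure(A[:, :2]))   # False: two pulling wires cannot resist every wrench
```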
Abstract:
The present study examines knowledge of the discourse-appropriateness of Clitic Right Dislocation (CLRD) in a population of heritage speakers (HSs) and Spanish-dominant native speakers in order to test the predictions of the Interface Hypothesis (IH; Sorace 2011). The IH predicts that speakers in language contact situations will experience difficulties integrating information at the interface of the syntax and discourse modules. CLRD relates a dislocated constituent to a discourse antecedent, requiring the integration of syntax and pragmatics. Results from an acceptability judgment task did not support the predictions of the IH. No statistical differences between the HSs' performance and that of L1-dominant native speakers were found when participants were presented with an offline task. Thus, our study did not find any evidence of "incomplete acquisition" (Montrul 2008) as it pertains to this specific linguistic structure.
Abstract:
Modular product architectures have generated numerous benefits for companies in terms of cost, lead-time and quality. The defined interfaces and the modules' properties decrease the effort needed to develop new product variants and provide an opportunity to perform parallel tasks in design, manufacturing and assembly. The background of this thesis is that companies perform verifications (tests, inspections and controls) of products late, when most of the parts have been assembled. This extends the lead-time to delivery and erodes the benefits of a modular product architecture, specifically when the verifications are extensive and the frequency of detected defects is high. Due to the number of product variants obtained from the modular product architecture, verifications must handle a wide range of equipment, instructions and goal values to ensure that high-quality products can be delivered. As a result, the total benefits of a modular product architecture are difficult to achieve. This thesis describes a method for planning and performing verifications within a modular product architecture. The method supports companies by utilizing the defined modules for verification already at module level, so-called MPV (Module Property Verification). With MPV, defects are detected earlier than with verification of a complete product, and the number of verifications is decreased. The MPV method is built up of three phases. In Phase A, candidate modules are evaluated on the basis of the costs and lead-time of the verifications and of the repair of defects. An MPV-index is obtained which quantifies the module and indicates whether the module should be verified at product level or by MPV. In Phase B, the interface interaction between the modules is evaluated, as well as the distribution of properties among the modules. The purpose is to evaluate the extent to which supplementary verifications at product level are needed. Phase C supports the selection of the final verification strategy. The cost and lead-time of the supplementary verifications are considered together with the results from Phases A and B. The MPV method is based on a set of qualitative and quantitative measures and tools which provide an overview and support the achievement of cost- and time-efficient, company-specific verifications. A practical application in industry shows how the MPV method can be used, and the subsequent benefits.
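The abstract does not state the MPV-index formula, so the sketch below uses a purely hypothetical weighted ratio of product-level to module-level verification cost and lead-time merely to illustrate the kind of quantity Phase A produces; the field names and weighting are assumptions.

```python
# Hypothetical placeholder for a Phase A style evaluation: compare the cost and
# lead-time of verifying/repairing at module level against product level.
from dataclasses import dataclass

@dataclass
class ModuleData:
    cost_module_level: float      # verification + repair cost if done on the module
    cost_product_level: float     # verification + repair cost if deferred to product level
    time_module_level: float      # lead-time impact at module level (hours)
    time_product_level: float     # lead-time impact at product level (hours)

def mpv_index(m: ModuleData, cost_weight: float = 0.5) -> float:
    """Hypothetical index: values above 1 would favour verifying at module level (MPV)."""
    cost_ratio = m.cost_product_level / m.cost_module_level
    time_ratio = m.time_product_level / m.time_module_level
    return cost_weight * cost_ratio + (1.0 - cost_weight) * time_ratio

candidate = ModuleData(cost_module_level=800.0, cost_product_level=2400.0,
                       time_module_level=2.0, time_product_level=8.0)
print(f"MPV-index: {mpv_index(candidate):.1f}")  # 3.5 -> verify at module level
```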
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Background: Tobacco and cannabis use are strongly interrelated, but current national and international cessation programs typically focus on one substance, and address the other substance either only marginally or not at all. This study aimed to identify the demand for, and describe the development and content of, the first integrative group cessation program for co-smokers of cigarettes and cannabis. Methods: First, a preliminary study using expert interviews, user focus groups with (ex-)smokers, and an online survey was conducted to investigate the demand for, and potential content of, an integrative smoking cessation program (ISCP) for tobacco and cannabis co-smokers. This study revealed that both experts and co-smokers considered an ISCP to be useful but expected only modest levels of readiness for participation. Based on the findings of the preliminary study, an interdisciplinary expert team developed a course concept and a recruitment strategy. The developed group cessation program is based on current treatment techniques (such as motivational interviewing, cognitive behavioural therapy, and self-control training) and is structured into six course sessions. The program was evaluated regarding its acceptability among participants and course instructors. Results: Both the participants and the course instructors evaluated the course positively. Participants and instructors especially appreciated the group discussions and the modules aimed at developing personal strategies that could be applied during simultaneous cessation of tobacco and cannabis, such as dealing with craving, withdrawal, and high-risk situations. Conclusions: There is a clear demand for a double cessation program for co-users of cigarettes and cannabis, and the first group cessation program tailored for these users has been developed and evaluated for acceptability. In the near future, the feasibility of the program will be evaluated. Trial registration: Current Controlled Trials ISRCTN15248397
Abstract:
The Northern Apennines (NA) chain is the expression of the active plate margin between Europe and Adria. Given the low convergence rates and the moderate seismic activity, ambiguities still occur in defining a seismotectonic framework, and many different scenarios have been proposed for the evolution of the mountain front. Unlike older models that describe the mountain front as an active thrust at the surface, a recently proposed scenario describes it as the frontal limb of a long-wavelength fold (> 150 km) formed by a thrust fault tipped at around 17 km depth and considered to be the active subduction boundary. East of Bologna, this frontal limb is remarkably straight and its surface is riddled with small but pervasive high-angle normal faults. West of Bologna, however, some recesses are visible along strike of the mountain front: these perturbations seem due to the presence of shorter-wavelength (15 to 25 km along strike) structures showing both NE- and NW-vergence. The Pleistocene activity of these structures has already been suggested, but no quantitative reconstructions are available in the literature. This research investigates the tectonic geomorphology of the NA mountain front with the specific aim of quantifying active deformation and inferring possible deep causes of both the short- and long-wavelength structures. This study documents the presence of a network of active extensional faults in the foothills south and east of Bologna. For these structures, the strain rate has been measured to find a constant throw-to-length relationship, and the slip rates have been compared with measured rates of erosion. Fluvial geomorphology and quantitative analysis of the topography document in detail the active tectonics of two growing domal structures (the Castelvetro - Vignola foothills and the Ghiardo plateau) embedded in the mountain front west of Bologna. Here, tilting and river incision rates (interpreted as long-term uplift rates) have been measured at the mountain front and in the Enza and Panaro valleys, respectively, using a well-defined stratigraphy of Pleistocene to Holocene river terraces and alluvial fan deposits as growth strata, together with seismic reflection profile relationships. The geometry and uplift rates of the anticlines constrain a simple trishear fault-propagation folding model that inverts for blind thrust ramp depth, dip, and slip. Topographic swath profiles and the steepness index of river longitudinal profiles that traverse the anticlines are consistent with the stratigraphy, structures, aquifer geometry, and seismic reflection profiles. Available focal mechanisms of earthquakes with magnitudes between Mw 4.1 and 5.4, obtained from a dataset of the instrumental seismicity for the last 30 years, evidence a clear vertical separation at around 15 km between shallow extensional and deeper compressional hypocenters along the mountain front and adjacent foothills. In summary, the studied anticlines appear to grow at rates slower than the growth rate of the longer-wavelength structure that defines the mountain front of the NA. The domal structures show evidence of NW-verging deformation and reactivation of older (late Neogene) thrusts. The reconstructed river incision rates, together with rates from several other rivers along a 250 km wide stretch of the NA mountain front recently made available in the literature, all indicate a general increase from the Middle to the Late Pleistocene. This suggests focusing of deformation along a deep structure, as confirmed by the deep compressional seismicity. The maximum rate is, however, not constant along the mountain front, but varies from 0.2 mm/yr in the west to more than 2.2 mm/yr in the eastern sector, suggesting a similar (eastward-increasing) trend of the Apenninic subduction.
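As background for the steepness-index analysis mentioned above, the sketch below recovers a normalized channel steepness k_sn from synthetic slope-area data using the standard relation S = k_sn * A^(-theta_ref) with a fixed reference concavity; the data and parameter values are illustrative, not taken from the thesis.

```python
# Sketch of extracting a normalized channel steepness index from slope-area
# data with a fixed reference concavity. The data below are synthetic.
import numpy as np

theta_ref = 0.45                      # conventional reference concavity
area = np.logspace(6, 9, 50)          # drainage area (m^2), synthetic
k_sn_true = 120.0
slope = k_sn_true * area**(-theta_ref) * np.exp(0.05 * np.random.randn(area.size))

# With concavity fixed, log S + theta_ref * log A = log k_sn for every point.
log_ksn = np.mean(np.log(slope) + theta_ref * np.log(area))
print(f"recovered k_sn ~ {np.exp(log_ksn):.0f}")
```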
Abstract:
Minerals isostructural with sapphirine-1A, sapphirine-2M, and surinamite are closely related chain silicates that pose nomenclature problems because of the large number of sites and potential constituents, including several (Be, B, As, Sb) that are rare or absent in other chain silicates. Our recommended nomenclature for the sapphirine group (formerly the aenigmatite group) makes extensive use of precedent, but applies the rules to all known natural compositions, with flexibility to allow for as yet undiscovered compositions such as those reported in synthetic materials. These minerals are part of a polysomatic series composed of pyroxene or pyroxene-like and spinel modules, and thus we recommend that the sapphirine supergroup should encompass the polysomatic series. The first level in the classification is based on the polysome, i.e. each group within the supergroup corresponds to a single polysome. At the second level, the sapphirine group is divided into subgroups according to the occupancy of the two largest M sites, namely sapphirine (Mg), aenigmatite (Na), and rhonite (Ca). Classification at the third level is based on the occupancy of the smallest M site with the most shared edges, M7, at which the dominant cation is most often Ti (aenigmatite, rhonite, makarochkinite), Fe(3+) (wilkinsonite, dorrite, hogtuvaite) or Al (sapphirine, khmaralite); much less common are Cr (krinovite) and Sb (welshite). At the fourth level, the two most polymerized T sites are considered together, e.g. ordering of Be at these sites distinguishes hogtuvaite, makarochkinite and khmaralite. Classification at the fifth level is based on X(Mg) = Mg/(Mg + Fe(2+)) at the M sites (excluding the two largest and M7). In principle, this criterion could be expanded to include other divalent cations at these sites, e.g. Mn. To date, most minerals have been found to be either Mg-dominant (X(Mg) > 0.5) or Fe(2+)-dominant (X(Mg) < 0.5) at these M sites. However, X(Mg) ranges from 1.00 to 0.03 in material described as rhonite, i.e. there are two species present, one Mg-dominant, the other Fe(2+)-dominant. Three other potentially new species are an Mg-dominant analogue of wilkinsonite; rhonite in the Allende meteorite, which is distinguished from rhonite and dorrite in that Mg rather than Ti or Fe(3+) is dominant at M7; and an Al-dominant analogue of sapphirine, in which Al > Si at the two most polymerized T sites vs. Al < Si in sapphirine. Further splitting of the supergroup based on occupancies other than those specified above is not recommended.
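The hierarchical decision logic described above can be summarized schematically; the sketch below is a simplified, hypothetical encoding of the second, third and fifth levels (subgroup from the two largest M sites, dominant M7 cation, and X(Mg)), not the full recommended nomenclature.

```python
# Schematic, simplified encoding of part of the classification hierarchy:
# subgroup from the two largest M sites, dominant M7 cation, and X(Mg).
def classify_sapphirine_group(largest_M_dominant, M7_dominant, mg, fe2):
    subgroup = {"Mg": "sapphirine subgroup",
                "Na": "aenigmatite subgroup",
                "Ca": "rhonite subgroup"}.get(largest_M_dominant, "unclassified")
    x_mg = mg / (mg + fe2)                 # X(Mg) over the relevant M sites
    return {"subgroup": subgroup,
            "M7 dominant cation": M7_dominant,
            "X(Mg)": round(x_mg, 2),
            "Mg- or Fe2+-dominant": "Mg" if x_mg > 0.5 else "Fe2+"}

# e.g. a Ca-subgroup composition with Ti dominant at M7 and X(Mg) = 0.75
print(classify_sapphirine_group("Ca", "Ti", mg=3.0, fe2=1.0))
```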
Abstract:
BACKGROUND: The robotics-assisted tilt table (RATT), which includes actuators for tilting and cyclical leg movement, is used for rehabilitation of severely disabled neurological patients. Following further engineering development of the system, i.e. the addition of force sensors and visual bio-feedback, patients can actively participate in exercise testing and training on the device. Peak cardiopulmonary performance parameters were previously investigated, but it is also important to compare submaximal parameters with those of standard devices. The aim of this study was to evaluate the feasibility of the RATT for estimation of submaximal exercise thresholds by comparison with a cycle ergometer and a treadmill. METHODS: 17 healthy subjects randomly performed six maximal individualized incremental exercise tests, with two tests on each of the three exercise modalities. The ventilatory anaerobic threshold (VAT) and respiratory compensation point (RCP) were determined from breath-by-breath data. RESULTS: VAT and RCP on the RATT were lower than on the cycle ergometer and the treadmill: oxygen uptake (V'O2) at VAT was [mean (SD)] 1.2 (0.3), 1.5 (0.4) and 1.6 (0.5) L/min, respectively (p < 0.001); V'O2 at RCP was 1.7 (0.4), 2.3 (0.8) and 2.6 (0.9) L/min, respectively (p = 0.001). High correlations for VAT and RCP were found between the RATT and the cycle ergometer and between the RATT and the treadmill (R in the range 0.69-0.80). VAT and RCP demonstrated excellent test-retest reliability for all three devices (ICC from 0.81 to 0.98). Mean differences between the test and retest values on each device were close to zero. The ventilatory equivalent for O2 at VAT was similar for the RATT and the cycle ergometer, and both were higher than for the treadmill. The ventilatory equivalent for CO2 at RCP was similar for all devices. Ventilatory equivalent parameters demonstrated fair-to-excellent reliability and repeatability. CONCLUSIONS: It is feasible to use the RATT for estimation of submaximal exercise thresholds: VAT and RCP on the RATT were lower than on the cycle ergometer and the treadmill, but there were high correlations between the RATT and the cycle ergometer and between the RATT and the treadmill. Repeatability and test-retest reliability of all submaximal threshold parameters from the RATT were comparable to those of standard devices.
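The abstract does not state how VAT was detected from the breath-by-breath data; one common approach is the V-slope method, sketched below on synthetic data as a two-segment linear fit of VCO2 against VO2, purely for illustration.

```python
# Illustrative V-slope style detection: find the breakpoint in the VCO2-vs-VO2
# relation that minimizes the total error of a two-segment linear fit.
import numpy as np

def v_slope_threshold(vo2, vco2):
    """Return the VO2 value at the best two-segment breakpoint."""
    order = np.argsort(vo2)
    vo2, vco2 = vo2[order], vco2[order]
    best_sse, best_vo2 = np.inf, None
    for i in range(5, len(vo2) - 5):          # keep a few points in each segment
        sse = 0.0
        for x, y in ((vo2[:i], vco2[:i]), (vo2[i:], vco2[i:])):
            coeffs = np.polyfit(x, y, 1)
            sse += np.sum((y - np.polyval(coeffs, x)) ** 2)
        if sse < best_sse:
            best_sse, best_vo2 = sse, vo2[i]
    return best_vo2

# Synthetic test: slope changes from 0.9 to 1.3 at VO2 = 1.2 L/min
vo2 = np.linspace(0.4, 2.4, 80)
vco2 = np.where(vo2 < 1.2, 0.9 * vo2, 0.9 * 1.2 + 1.3 * (vo2 - 1.2))
vco2 = vco2 + 0.02 * np.random.randn(vo2.size)
print(f"estimated VAT at VO2 ~ {v_slope_threshold(vo2, vco2):.2f} L/min")
```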