956 results for semi-physical simulation


Relevance:

30.00%

Publisher:

Abstract:

Design verification in the digital domain, using model-based principles, is a key research objective to address the industrial requirement for reduced physical testing and prototyping. For complex assemblies, the verification of design and the associated production methods is currently fragmented, prolonged and sub-optimal, as it uses digital and physical verification stages that are deployed in a sequential manner using multiple systems. This paper describes a novel, hybrid design verification methodology that integrates model-based variability analysis with measurement data of assemblies, in order to reduce simulation uncertainty and allow early design verification from the perspective of satisfying key assembly criteria.
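The hybrid idea described here, conditioning a model-based variability simulation on measurement data from the real assembly, can be sketched with a toy Monte Carlo tolerance stack-up. All dimensions, tolerances and the measured value below are invented for illustration, not taken from the paper:

```python
import random
import statistics

random.seed(1)

def simulate_gap(n=100_000):
    """Monte Carlo tolerance stack-up: gap = slot - (part_a + part_b)."""
    samples = []
    for _ in range(n):
        part_a = random.gauss(10.0, 0.05)   # nominal 10 mm, sigma 0.05 mm
        part_b = random.gauss(10.0, 0.05)
        slot   = random.gauss(20.3, 0.08)
        samples.append((part_a, slot - (part_a + part_b)))
    return samples

samples = simulate_gap()

# Prior prediction of the assembly gap (model-based variability alone).
prior_gaps = [g for _, g in samples]

# Hybrid step: a measurement of part_a on the real assembly
# (hypothetical value 10.06 +/- 0.01 mm) conditions the simulation,
# narrowing the predicted gap distribution.
measured, meas_sigma = 10.06, 0.01
posterior_gaps = [g for a, g in samples if abs(a - measured) < 2 * meas_sigma]

print(statistics.stdev(prior_gaps), statistics.stdev(posterior_gaps))
```

The conditioned (posterior) spread is smaller than the purely model-based one, which is the sense in which measurement data reduce simulation uncertainty.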

Relevance:

30.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 60K15, 60K20, 60G20, 60J75, 60J80, 60J85, 60-08, 90B15.

Relevance:

30.00%

Publisher:

Abstract:

Every space launch increases the overall amount of space debris. Satellites have limited awareness of nearby objects that might pose a collision hazard. Astrometric, radiometric, and thermal models for the study of space debris in low-Earth orbit have been developed. This model-based approach proposes analysis methods that provide increased Local Area Awareness for satellites in low-Earth and geostationary orbit. Local Area Awareness is defined as the ability to detect, characterize, and extract useful information regarding resident space objects as they move through the space environment surrounding a spacecraft. The study of space debris is of critical importance to all space-faring nations. Characterization efforts are proposed using long-wave infrared sensors for space-based observations of debris objects in low-Earth orbit. Long-wave infrared sensors are commercially available and do not require solar illumination of the target, since the received signal is temperature dependent. The characterization of debris objects through passive imaging techniques allows for further studies into the origination, specifications, and future trajectory of debris objects. Conclusions are made regarding the aforementioned thermal analysis as a function of debris orbit, geometry, orientation with respect to time, and material properties. Development of a thermal model permits the characterization of debris objects based upon their received long-wave infrared signals. Information regarding the material type, size, and tumble rate of the observed debris objects is extracted. This investigation proposes the utilization of long-wave infrared radiometric models of typical debris to develop techniques for the detection and characterization of debris objects via signal analysis of unresolved imagery. Knowledge regarding the orbital type and semi-major axis of the observed debris object is extracted via astrometric analysis.
This knowledge may aid in constraining the admissible region for the initial orbit determination process. The resultant orbital information is then fused with the radiometric characterization analysis, enabling further characterization of the observed debris object. This fused analysis, yielding orbital, material, and thermal properties, significantly increases a satellite's Local Area Awareness via an intimate understanding of the debris environment surrounding the spacecraft.
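The temperature dependence of the received long-wave infrared signal follows from Planck's law. A minimal sketch, using standard physical constants and an assumed 8–14 µm LWIR band, shows the in-band blackbody radiance rising with object temperature, which is why no solar illumination is needed:

```python
import math

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck(wavelength, temp):
    """Blackbody spectral radiance [W sr^-1 m^-3] at wavelength [m], temp [K]."""
    return (2.0 * H * C**2 / wavelength**5
            / (math.exp(H * C / (wavelength * KB * temp)) - 1.0))

def band_radiance(temp, lo=8e-6, hi=14e-6, steps=500):
    """In-band radiance [W sr^-1 m^-2] over the LWIR window, midpoint rule."""
    dw = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        w = lo + (i + 0.5) * dw
        total += planck(w, temp) * dw
    return total

# A 300 K object radiates noticeably more in-band than a 250 K one.
print(band_radiance(250.0), band_radiance(300.0))
```

A real debris object would add emissivity, geometry and range factors; this isolates only the temperature dependence the abstract relies on.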

Relevance:

30.00%

Publisher:

Abstract:

Physical property data, particularly the frequency-dependent magnetic susceptibility in depth and time, show (semi-)cyclic behaviour, which we ascribe to millennial-scale climate variability also seen in the Black Sea region and large parts of the Northern Hemisphere.

Relevance:

30.00%

Publisher:

Abstract:

Integrated interpretation of multi-beam bathymetric, sediment-penetrating acoustic (PARASOUND) and seismic data shows a multiple slope failure on the northern European continental margin, north of Spitsbergen. The first slide event occurred during MIS 3, around 30 cal. ka BP, and was characterised by highly dynamic and rapid evacuation of ca. 1250 km³ of sediment from the lower to the upper part of the continental slope. During this event, headwalls up to 1600 m high were created, and ca. 1150 km³ of material from hemi-pelagic sediments and from the lower pre-existing trough mouth fan was entrained and transported into the semi-enclosed Sophia Basin. This megaslide event was followed by a secondary evacuation of material to the Nansen Basin by funnelling of the debris through the channel between Polarstern Seamount and the adjacent continental slope. The main slide debris is overlain by a set of fining-upward sequences, evidence for the associated suspension cloud and subsequent minor failure events. Subsequent adjustment of the eastern headwalls led to failure of rather soft sediments and the creation of smaller debris flows that followed the surficial topography of the main slide. Discharge of the Hinlopen ice stream during the Last Glacial Maximum and the following deglaciation draped the central headwalls and created a fan deposit of glacigenic debris flows.

Relevance:

30.00%

Publisher:

Abstract:

The problem addressed in this thesis is that a considerable proportion of students around the world attend school in inadequate facilities, which is detrimental to the students’ learning outcomes. The overall objective of this thesis is to develop a methodology, with a novel approach to involving teachers, to generate a valuable basis for decisions regarding the design and improvement of the physical school environment, based on the expressed needs of a specific school, municipality, or district as well as evidence from existing research. Three studies have been conducted to fulfil the objective: (1) a systematic literature review and the development of a theoretical model for analysing the role of the physical environment in schools; (2) semi-structured interviews with teachers to elicit their conceptions of the physical school environment; (3) a stated preference study with an experimental design, administered as an online survey. Wordings from the transcripts of the interview study were used when designing the survey form. The aim of the stated preference study was to examine the usability of the method when applied in this new context of the physical school environment. The result is a methodology with a mixed-method chain in which the first step involves a broad investigation of the specific circumstances and conceptions of the specific school, municipality, or district. The second step is to use the developed theoretical model and the results from the literature study to analyse the results from the first step and transform them into a format that fits the design of a stated preference study. The final step is a refined version of the procedure of the performed stated preference study.

Relevance:

30.00%

Publisher:

Abstract:

Phase change problems arise in many practical applications such as air-conditioning and refrigeration, thermal energy storage systems and thermal management of electronic devices. The physical phenomena in such applications are complex and are often difficult to study in detail with the help of experimental techniques alone. The efforts to improve computational techniques for analyzing two-phase flow problems with phase change are therefore gaining momentum. The development of numerical methods for multiphase flow has been motivated generally by the need to account more accurately for (a) large topological changes such as phase breakup and merging, (b) sharp representation of the interface and its discontinuous properties and (c) accurate and mass conserving motion of the interface. In addition to these considerations, numerical simulation of multiphase flow with phase change introduces additional challenges related to discontinuities in the velocity and the temperature fields. Moreover, the velocity field is no longer divergence free. For phase change problems, the focus of developmental efforts has thus been on numerically attaining a proper conservation of energy across the interface, in addition to the accurate treatment of mass and momentum fluxes as well as the associated interface advection. Among the initial efforts related to the simulation of bubble growth in film boiling applications, the work in \cite{Welch1995} was based on an interface tracking method using a moving unstructured mesh. That study considered moderate interfacial deformations. A similar problem was subsequently studied using moving, boundary fitted grids \cite{Son1997}, again for regimes of relatively small topological changes.
A hybrid interface tracking method with a moving interface grid overlapping a static Eulerian grid was developed \cite{Juric1998} for the computation of a range of phase change problems including three-dimensional film boiling \cite{esmaeeli2004computations}, multimode two-dimensional pool boiling \cite{Esmaeeli2004} and film boiling on horizontal cylinders \cite{Esmaeeli2004a}. The handling of interface merging and pinch-off, however, remains a challenge with methods that explicitly track the interface. As large topological changes are crucial for phase change problems, attention has turned in recent years to front capturing methods utilizing implicit interfaces that are more effective in treating complex interface deformations. The VOF (Volume of Fluid) method was adopted in \cite{Welch2000} to simulate the one-dimensional Stefan problem and the two-dimensional film boiling problem. The approach employed a specific model for mass transfer across the interface involving a mass source term within cells containing the interface. This VOF based approach was further coupled with the level set method in \cite{Son1998}, employing a smeared-out Heaviside function to avoid the numerical instability related to the source term. The coupled level set, volume of fluid method and the diffused interface approach were used for film boiling with water and R134a at the near-critical pressure condition \cite{Tomar2005}. The effects of superheat and saturation pressure on the frequency of bubble formation were analyzed with this approach. The work in \cite{Gibou2007} used the ghost fluid and the level set methods for phase change simulations. A similar approach was adopted in \cite{Son2008} to study various boiling problems including three-dimensional film boiling on a horizontal cylinder, nucleate boiling in a microcavity \cite{lee2010numerical} and flow boiling in a finned microchannel \cite{lee2012direct}.
The work in \cite{tanguy2007level} also used the ghost fluid method and proposed an improved algorithm based on enforcing the continuity and divergence-free conditions for the extended velocity field. The work in \cite{sato2013sharp} employed a multiphase model based on volume fraction with an interface sharpening scheme and derived a phase change model based on local interface area and mass flux. Among the front capturing methods, sharp interface methods have been found to be particularly effective both for implementing sharp jumps and for resolving the interfacial velocity field. However, sharp velocity jumps render the solution susceptible to erroneous oscillations in pressure and also lead to spurious interface velocities. To implement phase change, the work in \cite{Hardt2008} employed point mass source terms derived from a physical basis for the evaporating mass flux. To avoid numerical instability, the authors smeared the mass source by solving a pseudo time-step diffusion equation. This measure however led to mass conservation issues due to non-symmetric integration over the distributed mass source region. The problem of spurious pressure oscillations related to point mass sources was also investigated in \cite{Schlottke2008}. Although that method is based on the VOF, the large pressure peaks associated with the sharp mass source were observed to be similar to those for the interface tracking method. Such spurious fluctuations in pressure are essentially undesirable because their effect is globally transmitted in incompressible flow. Hence, the pressure field arising from phase change needs to be computed with greater accuracy than is reported in the current literature. The accuracy of interface advection in the presence of interfacial mass flux (mass flux conservation) has been discussed in \cite{tanguy2007level,tanguy2014benchmarks}. The authors found that the method of extending one phase velocity to the entire domain suggested by Nguyen et al.
in \cite{nguyen2001boundary} suffers from a lack of mass flux conservation when the density difference is high. To improve the solution, the authors impose a divergence-free condition on the extended velocity field by solving a constant coefficient Poisson equation. The approach has shown good results for an enclosed bubble or droplet, but it is not general for more complex flows and requires the additional solution of a linear system of equations. In the current thesis, an improved approach that addresses both the numerical oscillation of pressure and the spurious interface velocity field is presented, featuring (i) continuous velocity and density fields within a thin interfacial region and (ii) temporal velocity correction steps to avoid an unphysical pressure source term. I also propose a general (iii) mass flux projection correction for improved mass flux conservation. The pressure and the temperature gradient jump conditions are treated sharply. A series of one-dimensional and two-dimensional problems are solved to verify the performance of the new algorithm. Two-dimensional and cylindrical film boiling problems are also demonstrated and show good qualitative agreement with experimental observations and heat transfer correlations. Finally, a study on Taylor bubble flow with heat transfer and phase change in a small vertical tube in axisymmetric coordinates is carried out using the new multiphase, phase change method.
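The one-dimensional Stefan problem used above as a verification case has a classical similarity solution: the interface moves as X(t) = 2λ√(αt), where λ satisfies the transcendental equation λ exp(λ²) erf(λ) = St/√π for Stefan number St = c_p ΔT / L. A minimal sketch solving it by bisection (the material constants are illustrative, not values from the thesis):

```python
import math

def stefan_lambda(stefan_number, lo=1e-6, hi=5.0):
    """Solve lam * exp(lam^2) * erf(lam) = St / sqrt(pi) by bisection."""
    target = stefan_number / math.sqrt(math.pi)
    f = lambda lam: lam * math.exp(lam**2) * math.erf(lam) - target
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def interface_position(t, alpha, lam):
    """Melt-front location X(t) = 2 * lam * sqrt(alpha * t)."""
    return 2.0 * lam * math.sqrt(alpha * t)

# Illustrative water-like numbers: St = cp * dT / L.
St = 4186.0 * 10.0 / 3.34e5      # ~0.125
lam = stefan_lambda(St)
alpha = 1.4e-7                   # thermal diffusivity [m^2/s]
print(lam, interface_position(3600.0, alpha, lam))
```

Any numerical phase-change scheme of the kind surveyed above can be checked against this exact front position.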

Relevance:

30.00%

Publisher:

Abstract:

Background: Physical activity in children with intellectual disabilities is a neglected area of study, which is most apparent in relation to physical activity measurement research. Although objective measures, specifically accelerometers, are widely used in research involving children with intellectual disabilities, existing research is based on measurement methods and data interpretation techniques generalised from typically developing children. However, due to physiological and biomechanical differences between these populations, questions have been raised in the existing literature on the validity of generalising data interpretation techniques from typically developing children to children with intellectual disabilities. Therefore, there is a need to conduct population-specific measurement research for children with intellectual disabilities and develop valid methods to interpret accelerometer data, which will increase our understanding of physical activity in this population.

Methods: Study 1: A systematic review was initially conducted to increase the knowledge base on how accelerometers were used within existing physical activity research involving children with intellectual disabilities and to identify important areas for future research. A systematic search strategy was used to identify relevant articles which used accelerometry-based monitors to quantify activity levels in ambulatory children with intellectual disabilities. Based on best practice guidelines, a novel form was developed to extract data based on 17 research components of accelerometer use. Accelerometer use in relation to best practice guidelines was calculated using percentage scores on a study-by-study and component-by-component basis. Study 2: To investigate the effect of data interpretation methods on the estimation of physical activity intensity in children with intellectual disabilities, a secondary data analysis was conducted.
Nine existing sets of child-specific ActiGraph intensity cut points were applied to accelerometer data collected from 10 children with intellectual disabilities during an activity session. Four one-way repeated measures ANOVAs were used to examine differences in estimated time spent in sedentary, moderate, vigorous, and moderate to vigorous intensity activity. Post-hoc pairwise comparisons with Bonferroni adjustments were additionally used to identify where significant differences occurred. Study 3: The feasibility of a laboratory-based calibration protocol developed for typically developing children was investigated in children with intellectual disabilities. Specifically, the feasibility of activities, measurements, and recruitment was investigated. Five children with intellectual disabilities and five typically developing children participated in 14 treadmill-based and free-living activities. In addition, resting energy expenditure was measured and a treadmill-based graded exercise test was used to assess cardiorespiratory fitness. Breath-by-breath respiratory gas exchange and accelerometry were continually measured during all activities. Feasibility was assessed using observations, activity completion rates, and respiratory data. Study 4: Thirty-six children with intellectual disabilities participated in a semi-structured school-based physical activity session to calibrate accelerometry for the estimation of physical activity intensity. Participants wore a hip-mounted ActiGraph wGT3X+ accelerometer, with direct observation (SOFIT) used as the criterion measure. Receiver operating characteristic curve analyses were conducted to determine the optimal accelerometer cut points for sedentary, moderate, and vigorous intensity physical activity.
Study 5: To cross-validate the calibrated cut points and compare classification accuracy with existing cut points developed in typically developing children, a sub-sample of 14 children with intellectual disabilities who participated in the school-based sessions, as described in Study 4, was included in this study. To examine the validity, classification agreement was investigated between the criterion measure of SOFIT and each set of cut points using sensitivity, specificity, total agreement, and Cohen’s kappa scores.

Results: Study 1: Ten full-text articles were included in this review. The percentage of review criteria met ranged from 12% to 47%. Various methods of accelerometer use were reported, with most use decisions not based on population-specific research. A lack of measurement research, specifically the calibration/validation of accelerometers for children with intellectual disabilities, is limiting the ability of researchers to make appropriate and valid accelerometer use decisions. Study 2: The choice of cut points had significant and clinically meaningful effects on the estimation of physical activity intensity and sedentary behaviour. For the 71-minute session, estimates of time spent in each intensity ranged between cut points from: sedentary = 9.50 (± 4.97) to 31.90 (± 6.77) minutes; moderate = 8.10 (± 4.07) to 40.40 (± 5.74) minutes; vigorous = 0.00 (± 0.00) to 17.40 (± 6.54) minutes; and moderate to vigorous = 8.80 (± 4.64) to 46.50 (± 6.02) minutes. Study 3: All typically developing participants and one participant with intellectual disabilities completed the protocol. No participant met the maximal criteria for the graded exercise test or attained a steady state during the resting measurements. Limitations were identified with the usability of the respiratory gas exchange equipment and the validity of measurements. The school-based recruitment strategy was not effective, with a participation rate of 6%.
Therefore, a laboratory-based calibration protocol was not feasible for children with intellectual disabilities. Study 4: The optimal vertical axis cut points (cpm) were ≤ 507 (sedentary), 1008−2300 (moderate), and ≥ 2301 (vigorous). Sensitivity scores ranged from 81−88%, specificity 81−85%, and AUC .87−.94. The optimal vector magnitude cut points (cpm) were ≤ 1863 (sedentary), ≥ 2610 (moderate) and ≥ 4215 (vigorous). Sensitivity scores ranged from 80−86%, specificity 77−82%, and AUC .86−.92. Therefore, the vertical axis cut points provide a higher level of accuracy in comparison to the vector magnitude cut points. Study 5: Substantial to excellent classification agreement was found for the calibrated cut points. The calibrated sedentary cut point (κ = .66) provided comparable classification agreement with existing cut points (κ = .55−.67). However, the existing moderate and vigorous cut points demonstrated low sensitivity (0.33−33.33% and 1.33−53.00%, respectively) and disproportionately high specificity (75.44−98.12% and 94.61−100.00%, respectively), indicating that cut points developed in typically developing children are too high to accurately classify physical activity intensity in children with intellectual disabilities.

Conclusions: The studies reported in this thesis are the first to calibrate and validate accelerometry for the estimation of physical activity intensity in children with intellectual disabilities. In comparison with typically developing children, children with intellectual disabilities require lower cut points for the classification of moderate and vigorous intensity activity. Therefore, generalising existing cut points to children with intellectual disabilities will underestimate physical activity and introduce systematic measurement error, which could be a contributing factor to the low levels of physical activity reported for children with intellectual disabilities in previous research.
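The ROC-style calibration used in Studies 4 and 5 can be sketched as choosing the counts-per-minute threshold that maximises Youden's J (sensitivity + specificity − 1) against a direct-observation criterion. The data below are synthetic stand-ins, not the study's measurements:

```python
import random

random.seed(7)

# Synthetic epochs standing in for accelerometer counts-per-minute (cpm),
# each labelled by direct observation: sedentary (0) or active (1).
sedentary = [max(random.gauss(300, 150), 0.0) for _ in range(200)]
active    = [max(random.gauss(1500, 500), 0.0) for _ in range(200)]
data = [(c, 0) for c in sedentary] + [(c, 1) for c in active]

def youden_cut_point(data):
    """Pick the cpm threshold maximising sensitivity + specificity - 1."""
    best_cut, best_j = None, -1.0
    for cut in sorted({cpm for cpm, _ in data}):
        tp = sum(1 for cpm, y in data if y == 1 and cpm >= cut)
        fn = sum(1 for cpm, y in data if y == 1 and cpm < cut)
        tn = sum(1 for cpm, y in data if y == 0 and cpm < cut)
        fp = sum(1 for cpm, y in data if y == 0 and cpm >= cut)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        if sens + spec - 1.0 > best_j:
            best_cut, best_j = cut, sens + spec - 1.0
    return best_cut, best_j

cut, j = youden_cut_point(data)
print(cut, j)
```

The same sweep, run once per intensity boundary, yields a set of population-specific cut points like those reported for the vertical axis and vector magnitude.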

Relevance:

30.00%

Publisher:

Abstract:

Eutrophication is a natural process of nutrient accumulation in water bodies that has been accelerated by human activities, mainly those related to agriculture, industry and the inadequate disposal of domestic sewage. The enrichment of water bodies with nutrients, mainly nitrogen and phosphorus, and the consequent proliferation of algae and cyanobacteria can compromise the quality of the water for public supply, for fish farming and for other uses. The problem becomes more critical where water is naturally scarce, as in the semi-arid region of the Brazilian northeast. Given this problem, this work aimed to evaluate the trophic state of six reservoirs of the Seridó River basin in Rio Grande do Norte, and also to estimate the phosphorus loading capacity of the reservoirs and the associated risk probabilities, based on the limits established by resolution Conama 357/05. The results demonstrate that the six reservoirs are eutrophic, with concentrations of total phosphorus and chlorophyll a in the water above 50 and 12 μg l-1, respectively. The results show that there is spatial homogeneity in the trophic state of the reservoirs, but a significant interannual variation as a function of the increase in nutrient concentrations and the decrease in water transparency with the reduction of the volume of water stored in the reservoirs. The results of the stochastic risk simulation show that the reservoirs could receive from 72 to 216 kg of P annually, assuming a 10% risk of increasing the annual mean concentrations of total phosphorus in the water of these reservoirs by more than 30 μg l-1. This load could be raised to up to 360 kg of P per year if the managers accept a 10% risk of increasing the annual mean concentrations of total phosphorus in the waters of these reservoirs by more than 50 μg l-1.
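The stochastic load-risk estimate can be sketched as a Monte Carlo exceedance probability. In the sketch below, the coefficient linking annual phosphorus load to the increase in mean concentration is drawn from a lognormal distribution whose scale is invented for illustration, not fitted to these reservoirs:

```python
import math
import random

random.seed(3)

def exceedance_risk(load_kg, threshold=30.0, n=50_000):
    """Probability that an annual P load raises the mean total-P
    concentration by more than `threshold` ug/L.

    The response coefficient (ug/L per kg of P) is sampled lognormally;
    median 0.12 and sigma 0.4 are hypothetical illustration values.
    """
    exceed = 0
    for _ in range(n):
        coeff = random.lognormvariate(math.log(0.12), 0.4)
        if load_kg * coeff > threshold:
            exceed += 1
    return exceed / n

# Risk grows with the allowed load: managers pick the load whose risk ~ 10%.
print(exceedance_risk(150), exceedance_risk(400))
```

Inverting this curve (finding the load whose risk equals an accepted 10%) is the sense in which the study reports a loading capacity of 72 to 216 kg of P per year.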

Relevance:

30.00%

Publisher:

Abstract:

The Techniques d’inhalothérapie program, revised by the ministère de l’Éducation, was implemented in 1997 following a competency-based approach. This new program proposed task-centred competencies, independent of the target patient population, which fragmented the content specific to the different populations, notably paediatric patients. For several years, the various evaluation bodies of program 141 – Techniques d’inhalothérapie have identified shortcomings related to the development of competencies and to exposure to learning situations specific to paediatric patients. The integration of clinical immersion simulation was therefore considered as a possible solution: first, to mobilise prior knowledge related to paediatric patients that had been fragmented across the program, and second, to provide uniform exposure to the clinical situations considered essential in the training of respiratory therapists at the Cégep de Sherbrooke. Clinical immersion simulation is recognised as a pedagogical activity that supports the development of clinical reasoning, enhances clinical preparation and fosters professional confidence in students in initial training. However, is it a tool that enables the transfer of prior knowledge? Across the four components of the clinical immersion simulation activity, does one component favour this knowledge transfer more than the others? A mixed design combining quantitative and qualitative data collection made it possible to analyse the perceptions of graduating students in Techniques d’inhalothérapie, as well as those of teachers and respiratory therapists, regarding the attributes of the four components of clinical immersion simulation that favour the transfer of prior knowledge.
A questionnaire and semi-structured interviews, including viewing of the clinical immersion simulation activities, enabled this data collection. According to the perceptions of all participants in this exploratory research, the transfer of prior knowledge is present within the four components of clinical immersion simulation: briefing, action, observation and, particularly, debriefing. It is nevertheless difficult to isolate a single component, since the qualitative data reveal firm perceptions of the attributes of each component that favour transfer, together with arguments raising the different learner styles. Knowing that clinical immersion simulation is not a monolithic pedagogical activity, is it reasonable to believe that it mobilises features related to different learning styles across its components? Is there a link between the transfer of prior knowledge and learning styles? Although this exploratory research suggests that a knowledge-transfer process exists within the four components of clinical immersion simulation, the research results cannot be generalised, notably because of the number of participants. Further research on the subject will therefore be relevant.

Relevance:

30.00%

Publisher:

Abstract:

Receiving personalised feedback on body mass index and other health risk indicators may prompt behaviour change. Few studies have investigated men’s reactions to receiving objective feedback on such measures and detailed information on physical activity and sedentary time. The aim of my research was to understand the meanings different forms of objective feedback have for overweight/obese men, and to explore whether these varied between groups. Participants took part in Football Fans in Training, a gender-sensitised weight-loss programme delivered via Scottish Professional Football Clubs. Semi-structured interviews were conducted with 28 men, purposively sampled from four clubs, to investigate the experiences of men who achieved and did not achieve their 5% weight loss target. Data were analysed using the principles of thematic analysis and interpreted through Self-Determination Theory and sociological understandings of masculinity. Several factors were vital in supporting a ‘motivational climate’ in which men could feel ‘at ease’ and adopt self-regulation strategies: the ‘place’ was described as motivating, whereas the ‘people’ (other men ‘like them’; fieldwork staff; community coaches) provided supportive and facilitative roles. Men who achieved greater weight loss were more likely to describe being motivated as a consequence of receiving information on their objective health risk indicators. They continued using self-monitoring technologies after the programme because they found them enjoyable, or they had redefined themselves by integrating new-found activities into their lives and no longer relied on external technologies/feedback. They were more likely to see post-programme feedback as confirmation of success, so long as they could fully interpret the information. Men who did not achieve their 5% weight loss reported no longer being motivated to continue their activity levels or to self-monitor them with a pedometer.
For these men, social support within the programme appeared more important. They were also less positive about objective post-programme feedback, which confirmed their lack of success and therefore had less utility as a motivational tool. Providing different forms of objective feedback to men within an environment that has intrinsic value (e.g. a football club setting) and is congruent with common cultural constructions of masculinity appears more conducive to health behaviour change.

Relevance:

30.00%

Publisher:

Abstract:

Developments in theory and experiment have raised the prospect of an electronic technology based on the discrete nature of electron tunnelling through a potential barrier. This thesis deals with novel design and analysis tools developed to study such systems. Possible devices include those constructed from ultrasmall normal tunnelling junctions. These exhibit charging effects including the Coulomb blockade and correlated electron tunnelling. They allow transistor-like control of the transfer of single carriers, and present the prospect of digital systems operating at the information theoretic limit. As such, they are often referred to as single electronic devices. Single electronic devices exhibit self-quantising logic and good structural tolerance. Their speed, immunity to thermal noise, and operating voltage all scale beneficially with junction capacitance. For ultrasmall junctions the possibility of room temperature operation at sub-picosecond timescales seems feasible. However, they are sensitive to external charge, whether from trapping-detrapping events, externally gated potentials, or system cross-talk. Quantum effects such as macroscopic quantum tunnelling of charge may degrade performance. Finally, any practical system will be complex and spatially extended (amplifying the above problems), and prone to fabrication imperfection. This summarises why new design and analysis tools are required. Simulation tools are developed, concentrating on the basic building blocks of single electronic systems: the tunnelling junction array and the gated turnstile device. Three main points are considered: the best method of estimating capacitance values from the physical system geometry; the mathematical model which should represent electron tunnelling based on these data; and the application of this model to the investigation of single electronic systems.
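The beneficial scaling with junction capacitance follows from the single-electron charging energy E_C = e²/2C, which must dominate thermal fluctuations for the Coulomb blockade to be observable. A minimal sketch (the requirement E_C > 10 kT is a common rule of thumb, not a figure from the thesis):

```python
E_CHARGE = 1.602176634e-19   # elementary charge [C]
K_B = 1.380649e-23           # Boltzmann constant [J/K]

def charging_energy(capacitance):
    """Single-electron charging energy E_C = e^2 / (2C), in joules."""
    return E_CHARGE**2 / (2.0 * capacitance)

def blockade_temperature(capacitance, margin=10.0):
    """Rough temperature below which the Coulomb blockade is observable,
    taking E_C > margin * kT as the visibility condition."""
    return charging_energy(capacitance) / (margin * K_B)

# Charging effects scale as 1/C: an ultrasmall 1 aF junction vs a 1 fF one.
for cap in (1e-18, 1e-15):
    print(cap, charging_energy(cap), blockade_temperature(cap))
```

Reducing C by three orders of magnitude raises E_C and the usable temperature by the same factor, which is why ultrasmall junctions bring room temperature operation within reach.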

Relevance:

30.00%

Publisher:

Abstract:

Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Water-level monitoring networks can give information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physical mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied in a case study in a Guarani Aquifer System (GAS) outcrop area located in the southeastern part of Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage like the GAS.
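The time-series side of such a framework can be illustrated with the simplest case: an AR(1) model fitted to a monitored well record and used for multi-step forecasting. The series below is synthetic and the parameters are invented, standing in for one well of a monitoring network:

```python
import random

random.seed(5)

# Synthetic monthly water-table depths [m] from an AR(1) process
# around a mean, mimicking 20 years of monitoring at one well.
true_mean, true_phi, noise = 12.0, 0.85, 0.3
levels = [true_mean]
for _ in range(240):
    levels.append(true_mean + true_phi * (levels[-1] - true_mean)
                  + random.gauss(0.0, noise))

def fit_ar1(series):
    """Least-squares estimates of the mean and the lag-1 coefficient."""
    mean = sum(series) / len(series)
    x = [v - mean for v in series]
    num = sum(x[i] * x[i + 1] for i in range(len(x) - 1))
    den = sum(v * v for v in x[:-1])
    return mean, num / den

def forecast(last, mean, phi, steps):
    """Multi-step forecast: decays geometrically toward the mean."""
    out, level = [], last
    for _ in range(steps):
        level = mean + phi * (level - mean)
        out.append(level)
    return out

mean, phi = fit_ar1(levels)
print(mean, phi, forecast(levels[-1], mean, phi, 3))
```

In the full methodology, the fitted parameters (or the predictions) from many wells would then be interpolated spatially with geostatistics.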

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The performance, energy-efficiency and cost improvements due to traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects regarding size, speed and power. Continued Moore's Law scaling will not come from technology scaling alone; it must involve improvements to design tools and the development of new disruptive technologies such as 3D integration. 3D integration offers potential improvements to interconnect power and delay by translating the routing problem into a third dimension, and facilitates transistor density scaling independent of technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously integrated high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high-performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory and communication walls. However, the promising improvements to performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment required to develop the technology and in the increased complexity of design. Two main limitations of 3D IC technology have been heat removal and TSV reliability. Transistor stacking increases power density, current density and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical through-silicon vias (TSVs) that create new points of failure in the chip and require the development of new BEOL technologies.
Although these issues can be controlled to some extent using thermal- and reliability-aware physical and architectural 3D design techniques, high-performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs. A new paradigm is being put forth which integrates the computational, electrical, physical, thermal and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs due to a lack of understanding of cross-domain interactions and their impacts on the feasibility region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance and energy-efficient configurations. Although the co-design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, the inter-layer coupling and higher degree of connectivity between components exacerbate the interdependence between architectural parameters, physical design parameters and the multitude of metrics of interest to the designer (i.e., power, performance, temperature and reliability). In this dissertation we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally, we propose an approach for design space exploration and modeling within the new Co-Design paradigm, and discuss possible avenues for future improvement of this work.
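The heat-removal problem driving this co-design argument can be seen in a back-of-the-envelope 1D thermal model: every layer's heat must cross all the thermal resistances between it and the heat sink, so layers farther from the sink run progressively hotter. The layer powers and resistances below are made-up illustrative numbers, not values from the dissertation:

```python
def stack_temperatures(powers, resistances, t_amb=45.0):
    """Steady-state layer temperatures of a 3D stack cooled from one side.

    powers      -- W dissipated in each layer, index 0 nearest the heat sink
    resistances -- K/W thermal resistance between each layer and the one
                   below it (resistances[0] connects layer 0 to the sink)
    """
    temps = []
    t = t_amb
    remaining = sum(powers)            # heat still to be carried to the sink
    for p, r in zip(powers, resistances):
        t += r * remaining             # heat from layers i..n-1 crosses r_i
        temps.append(t)
        remaining -= p
    return temps
```

With two 10 W layers and 1 K/W per interface, the far layer already sits 30 K above ambient; embedded cooling such as MF channels attacks this by inserting a low-resistance heat path between the layers instead of forcing all heat through the package.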

Relevância:

30.00% 30.00%

Publicador:

Resumo:

As the semiconductor industry struggles to maintain its momentum along the path of Moore's Law, three-dimensional integrated circuit (3D IC) technology has emerged as a promising solution to achieve higher integration density, better performance, and lower power consumption. However, despite its significant improvement in electrical performance, 3D IC technology presents several serious physical design challenges. In this dissertation, we investigate physical design methodologies for 3D ICs with a primary focus on two areas: low-power 3D clock tree design, and reliability degradation modeling and management. Clock trees are essential parts of digital systems and dissipate a large amount of power due to their high capacitive loads. The majority of existing 3D clock tree designs focus on minimizing the total wire length, which produces sub-optimal results for power optimization. In this dissertation, we formulate a 3D clock tree design flow which directly optimizes for clock power. We also investigate a design methodology for clock gating a 3D clock tree, which uses shutdown gates to selectively turn off unnecessary clock activity. Unlike in 2D ICs, where shutdown gates are commonly assumed to be cheap and thus applicable at every clock node, shutdown gates in 3D ICs introduce additional control TSVs, which compete with clock TSVs for placement resources. We explore design methodologies that produce the optimal allocation and placement of clock and control TSVs so that clock power is minimized. We show that the proposed synthesis flow saves significant clock power while accounting for the available TSV placement area. Vertical integration also brings new reliability challenges, including TSV electromigration (EM) and several other reliability loss mechanisms caused by TSV-induced stress. These reliability loss models involve complex inter-dependencies between electrical and thermal conditions, which have not been investigated in the past.
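The clock-power objective behind the gating flow described above reduces to dynamic switching power, P = α·C·V²·f, summed over the tree's capacitive nodes: a shutdown gate lowers the activity factor α of its subtree at the cost of extra control routing. A toy estimator (node capacitances and activity factors are made-up numbers, not the dissertation's benchmarks) shows the effect:

```python
def clock_power(node_caps, v_dd, freq, activities):
    """Dynamic clock power: sum of alpha * C * V^2 * f over clock nodes.

    node_caps  -- capacitive load of each clock node (F)
    v_dd       -- supply voltage (V)
    freq       -- clock frequency (Hz)
    activities -- switching activity per node (1.0 = always clocked;
                  lower when a shutdown gate disables the subtree)
    """
    return sum(a * c * v_dd ** 2 * freq
               for c, a in zip(node_caps, activities))
```

Gating two of four equal nodes down to 25% activity cuts the toy tree's clock power by almost 40%, which is why the TSV cost of the control signals is worth optimizing against.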
In this dissertation we set up an electrical/thermal/reliability co-simulation framework to capture the transient behavior of reliability loss in 3D ICs. We further derive and validate an analytical reliability objective function that can be integrated into the 3D placement design flow. The reliability-aware placement scheme enables co-design and co-optimization of both electrical and reliability properties, improving both the circuit's performance and its lifetime. Our electrical/reliability co-design scheme avoids unnecessary design cycles and the application of ad-hoc fixes that lead to sub-optimal performance. Vertical integration also enables stacking DRAM on top of the CPU, providing high bandwidth and short latency. However, non-uniform voltage fluctuations and local thermal hotspots in the CPU layers couple into the DRAM layers, causing a non-uniform distribution of bit-cell leakage (and thereby bit flips). We propose a performance-power-resilience simulation framework to capture DRAM soft errors in 3D multi-core CPU systems. In addition, a dynamic resilience management (DRM) scheme is investigated, which adaptively tunes the CPU's operating points to adjust the DRAM's voltage noise and thermal conditions at runtime. The DRM uses dynamic frequency scaling to achieve a resilience borrow-in strategy, which effectively enhances the DRAM's resilience without sacrificing performance. The proposed physical design methodologies should act as important building blocks for 3D ICs and push 3D ICs toward mainstream acceptance in the near future.
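The DRM loop described above can be pictured as a simple feedback controller: when a DRAM-side stress proxy (here, local temperature) exceeds its budget, CPU frequency is scaled down, and it is restored once slack returns. Everything below (thresholds, step size, hysteresis, the choice of proxy) is a hypothetical illustration of the idea, not the dissertation's actual controller:

```python
def drm_step(freq_ghz, dram_temp_c, temp_limit_c,
             f_min=1.0, f_max=2.5, step=0.1, hysteresis=5.0):
    """One step of a toy dynamic-resilience-management loop.

    Lowers CPU frequency while the DRAM temperature proxy is over
    budget ("borrowing" resilience from performance), holds inside a
    hysteresis band, and raises frequency again once the proxy drops
    below the budget by `hysteresis` degrees.
    """
    if dram_temp_c > temp_limit_c:
        freq_ghz = max(f_min, freq_ghz - step)   # throttle to cool the DRAM
    elif dram_temp_c < temp_limit_c - hysteresis:
        freq_ghz = min(f_max, freq_ghz + step)   # recover performance
    return round(freq_ghz, 3)
```

Calling this once per control interval yields the borrow-in behavior: short frequency reductions during hotspots in exchange for lower DRAM leakage and bit-flip rates, with no change when the system is already at its frequency bound.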