942 results for "Non-uniform heat intensity"


Relevance: 30.00%

Abstract:

Recent technological developments have made it possible to design various microdevices in which fluid flow and heat transfer are involved. For the proper design of such systems, the governing physics needs to be investigated. Because complex geometries at micro scales are difficult to study experimentally, computational tools have been developed to analyze and simulate flow and heat transfer in microgeometries. However, conventional numerical methods based on the Navier-Stokes equations fail to predict some aspects of microflows, such as the nonlinear pressure distribution, increased mass flow rate, slip flow, and temperature jump at the solid boundaries. This necessitates the development of new computational methods, grounded in kinetic theory, that are both accurate and computationally efficient. In this study, the lattice Boltzmann method (LBM) was used to investigate flow and heat transfer in micro-sized geometries. The LBM is based on the Boltzmann equation, which is valid over the whole rarefaction regime encountered in microflows. Results were obtained for isothermal channel flows at Knudsen numbers above 0.01 and at different pressure ratios. LBM solutions for micro-Couette and micro-Poiseuille flow were found to be in good agreement, for both pressure distribution and velocity field, with analytical solutions valid in the slip flow regime (0.01 < Kn < 0.1) and with direct simulation Monte Carlo solutions valid in the transition regime (0.1 < Kn < 10). The isothermal LBM was then extended to simulate flows with heat transfer. The method was first validated for continuum channel flows with and without constrictions by comparing the thermal LBM results against accurate analytical solutions and finite element results. Finally, the thermal LBM was extended to include rarefaction effects and was used to analyze gas flow in microchannels. The major finding of this research is that the newly developed particle-based method described here can be used as an alternative numerical tool to study non-continuum effects observed in micro-electro-mechanical systems (MEMS).
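As a rough illustration of the numerical approach, the sketch below implements a minimal D2Q9 BGK lattice Boltzmann solver for body-force-driven channel (Poiseuille-type) flow with half-way bounce-back walls. It is a continuum, no-slip toy model only: the slip and rarefaction corrections, thermal populations, and pressure-ratio boundary conditions described in the abstract are not reproduced, and all dimensions and parameters are arbitrary.

```python
import numpy as np

# Minimal D2Q9 BGK lattice Boltzmann sketch: body-force-driven flow in a 2-D
# channel with bounce-back (no-slip) walls. Illustrative only; the
# slip/rarefaction and thermal extensions discussed above are omitted.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])        # lattice velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)                  # lattice weights
nx, ny = 200, 40        # channel length x height (lattice units)
tau = 0.9               # relaxation time; kinematic viscosity nu = (tau - 0.5)/3
g = 1e-6                # body force standing in for the pressure gradient
cx, cy = c[:, 0, None, None], c[:, 1, None, None]

f = np.ones((9, nx, ny)) * w[:, None, None]               # uniform initial density

def equilibrium(rho, ux, uy):
    cu = 3.0 * (cx * ux + cy * uy)
    return w[:, None, None] * rho * (1 + cu + 0.5 * cu**2 - 1.5 * (ux**2 + uy**2))

for step in range(5000):
    rho = f.sum(axis=0)
    ux = (f * cx).sum(axis=0) / rho
    uy = (f * cy).sum(axis=0) / rho
    # BGK collision plus a simple first-order body-force term
    f += -(f - equilibrium(rho, ux, uy)) / tau + 3.0 * w[:, None, None] * cx * g
    # streaming (periodic in x; wall populations corrected next)
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    # bounce-back at the bottom (y = 0) and top (y = ny - 1) walls
    f[[2, 5, 6], :, 0] = f[[4, 7, 8], :, 0]
    f[[4, 7, 8], :, -1] = f[[2, 5, 6], :, -1]

print("centreline velocity:", ux[nx // 2, ny // 2])       # parabolic-like profile
```

A slip-flow variant of this sketch would replace pure bounce-back with a Knudsen-number-dependent blend of bounce-back and specular reflection at the walls, which is one common way rarefaction is introduced into the LBM.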

Relevance: 30.00%

Abstract:

In response to a crime epidemic afflicting Latin America since the early 1990s, several countries in the region have resorted to using heavy-force police or military units to physically retake territories under the de facto control of non-State criminal or insurgent groups. After a period of territorial control, the heavy forces hand law enforcement functions in the retaken territories over to regular police officers, in the hope that the territories and their populations will remain under the control of the state. To varying degrees of intensity and consistency, Brazil, Colombia, Mexico, and Jamaica have adopted such policies since the mid-1990s. During such operations, governments need to pursue two interrelated objectives: to better establish the state's physical presence, and to realign the allegiance of the population in those areas toward the state and away from non-State criminal entities. From the perspective of law enforcement, such operations entail several critical decisions and junctures. One is whether or not to announce the force insertion in advance: the decision trades off the element of surprise and the ability to capture key leaders of the criminal organizations against the ability to minimize civilian casualties and force levels; advance warning, however, may allow criminals to go to ground and escape capture. Governments must therefore decide whether they merely seek to displace criminal groups to other areas or to maximize their ability to decapitate the criminal groups' leadership. Intelligence flows rarely come from the population; often, rival criminal groups are the best source of intelligence. However, cooperation between the State and such groups that goes beyond using the vetted intelligence they provide, such as State tolerance for militias, compromises the rule-of-law integrity of the State and can ultimately eviscerate even the public safety gains. Sustaining security after the initial clearing operations is at times even more challenging than conducting those operations. Unlike the heavy forces, traditional police forces, especially if designed as community police, have the capacity to develop the community's trust and ultimately to focus on crime prevention, but developing such trust often takes a long time. To develop the community's trust, regular police forces need to conduct frequent on-foot patrols with intensive, nonthreatening interactions with the population and to minimize the use of force. Moreover, sufficiently robust patrol units need to be placed in designated beats for a substantial amount of time, often at least a year. Establishing oversight mechanisms, including joint police-citizens' boards, further facilitates building trust in the police among the community. After disruption of the established criminal order, street crime often rises significantly, and both the heavy-force and community-police units often struggle to contain it. The increase in street crime alienates the population of the retaken territory from the State, so developing a capacity to address street crime is critical. Moreover, community police units tend to be vulnerable, especially initially, to efforts by displaced criminals to reoccupy the cleared territories. Losing a cleared territory back to criminal groups is extremely costly in terms of forfeiting any established trust and being able to recover it.
Rather than operating on an a priori determined handover schedule, a careful assessment of the relative strength of the regular police and the criminal groups after the clearing operations is likely to be a better guide for timing the handover from heavy forces to regular police units. Cleared territories often experience not only a peace dividend but also a peace deficit, in the form of a rise in new serious crime (in addition to street crime). Newly valuable land and other previously inaccessible resources can lead to land speculation and forced displacement, and various other forms of new crime can also rise significantly. Community police forces often struggle to cope with such crime, especially as it is frequently linked to legal business. Such new crime often receives little to no attention in the design of the operations to retake territories from criminal groups, but without an effective response to it, the public safety gains of the clearing operations can be altogether lost.

Relevance: 30.00%

Abstract:

Detection canines represent the fastest and most versatile means of illicit material detection. This research endeavor, in its simplest form, is the improvement of detection canines through training, training aids, and calibration. This study focuses on developing a universal calibration compound with which all detection canines, regardless of target substance, can be tested daily to ensure that they are working within acceptable parameters. Surrogate continuation aids (SCAs) were developed for peroxide-based explosives, along with the validation of the SCAs already developed within the International Forensic Research Institute (IFRI) prototype surrogate explosives kit. Storage parameters of the SCAs were evaluated in order to give the detection canine community recommendations on the training aid storage solution that best minimizes the likelihood of contamination. Two commonly used and accepted canine imprinting methods were also evaluated for the speed with which canines are trained and for their reliability. As a result of this study, SCAs have been developed for explosive detection canine use covering peroxide-based explosives, TNT-based explosives, nitroglycerin-based explosives, tagged explosives, plasticized explosives, and smokeless powders. Through the use of these surrogate continuation aids, a more uniform and reliable system of training can be implemented in the field than is currently used. By examining the storage parameters of the SCAs, an ideal storage system using three levels of containment was developed to reduce possible contamination. The developed calibration compound will ease growing concerns over the legality and reliability of detection canine use by documenting the daily working parameters of the canine, allowing the Daubert rules of evidence admissibility to be applied. Canine field testing showed that the IFRI SCAs outperform other commercially available training aids on the market. Additionally, of the imprinting methods tested, no difference was found in the speed with which the canines were trained or in their reliability to detect illicit materials. Therefore, if the recommendations from this study are followed, the detection canine community will benefit greatly from the use of scientifically validated training techniques and training aids.

Relevance: 30.00%

Abstract:

We describe a low-energy glow-discharge process, using a reactive ion etching system, that enables non-circular device patterns, such as squares or hexagons, to be formed from a precursor array of uniform circular openings in polymethyl methacrylate (PMMA) defined by electron beam lithography. This technique is of particular interest for the fabrication of bit-patterned magnetic recording media, where close-packed square magnetic bits may improve recording performance. The process and the results of generating close-packed square patterns by self-limiting low-energy glow discharge are investigated. Dense magnetic arrays formed by electrochemical deposition of nickel over the self-limited molds are demonstrated.

Relevance: 30.00%

Abstract:

It has been estimated that one in four adults leads a sedentary lifestyle. In addition, obesity appears to be increasing across the life span. Understanding how to promote more active lifestyles, by identifying the lifestyle behaviors of active individuals and potential predictors of physical activity (PA), is of great importance to the nation's health. Seven hundred and seventy-seven college students were surveyed to investigate the relationship between nutrition-related variables (i.e., dietary restraint, nutrition knowledge, food choice, and body weight concerns) and PA. In this study, over half of the students reported doing 30 minutes of moderate-intensity PA daily. Vigorously active males and females chose low-fat foods more often than the less active group. Exercisers and non-exercisers had similar nutrition knowledge. The results of this study suggest that students who are more active are more conscious about making healthier food choices.

Relevance: 30.00%

Abstract:

One of the major problems in the analysis of beams whose moment of inertia varies along their length is finding the fixed-end moments, stiffness, and carry-over factors. To determine the fixed-end moments, the non-prismatic member must be treated as composed of a large number of small sections, each with a constant moment of inertia, and the M/EI values found for each individual section. This process is time-consuming for designers and structural engineers. The objective of this thesis is to design a computer program that simplifies this repetitive process, obtaining the final moments and shears in continuous non-prismatic beams rapidly and effectively. For this purpose, Professor Hardy Cross's Column Analogy and Moment Distribution methods have been used as the basis of the computer solution. The program has been designed specifically to analyze continuous beams of up to four spans of any length, composed of symmetrical members with rectangular cross sections and a rectilinear variation of the moment of inertia. Any load or combination of uniform and concentrated loads can be considered. Finally, sample problems are solved with the new computer program and with traditional methods to determine the accuracy and applicability of the program.
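For illustration, the sketch below carries out the fixed-end-moment step numerically for a fixed-fixed non-prismatic member under a uniform load, using the column analogy with the member split into many constant-EI slices. It is a minimal stand-in, not the thesis program: the taper, load, and section values are invented, moment distribution is not included, and signs follow the usual column-analogy convention.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal integration, kept explicit to avoid any library assumptions."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def fixed_end_moments(L, EI, w_load, n=2000):
    """Fixed-end moments of a fixed-fixed non-prismatic beam under a uniformly
    distributed load w_load, via the column analogy: the member is split into n
    slices of constant EI, mirroring the hand procedure described above."""
    x = np.linspace(0.0, L, n)
    width = 1.0 / EI(x)                       # analogous-column width = 1/EI
    Ms = w_load * x * (L - x) / 2.0           # simple-beam (released) moment diagram

    A  = trapezoid(width, x)                  # area of the analogous column
    xb = trapezoid(width * x, x) / A          # its centroid
    Ic = trapezoid(width * (x - xb) ** 2, x)  # its second moment of area
    P  = trapezoid(Ms * width, x)             # elastic "load" on the column
    Mc = trapezoid(Ms * width * (x - xb), x)  # moment of that load about the centroid

    sigma = lambda xi: P / A + Mc * (xi - xb) / Ic   # indeterminate moment at xi
    # Final end moment = simple-beam moment (zero at the ends) minus sigma.
    return -sigma(0.0), -sigma(L)

L, w_load = 8.0, 10.0                         # span (m) and uniform load (kN/m)
# Prismatic check: both magnitudes should equal w*L^2/12 = 53.33 kN*m.
print(fixed_end_moments(L, lambda x: np.full_like(x, 5.0e4), w_load))
# Rectangular section whose depth tapers linearly from h0 to h1 (I varies as h^3).
E, b, h0, h1 = 2.0e7, 0.30, 0.40, 0.80
EI_taper = lambda x: E * b * (h0 + (h1 - h0) * x / L) ** 3 / 12.0
print(fixed_end_moments(L, EI_taper, w_load))
```

The prismatic case reproduces the textbook value wL²/12, which is a convenient check before applying the same routine to the tapered member.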

Relevance: 30.00%

Abstract:

A uniform chronology for foraminifera-based sea surface temperature records has been established for more than 120 sediment cores obtained from the equatorial and eastern Atlantic up to the Arctic Ocean. The chronostratigraphy of the last 30,000 years is based mainly on published δ18O records and 14C ages from accelerator mass spectrometry, converted into calendar-year ages. This high-precision age control provides the database necessary for a uniform reconstruction of the climate interval of the Last Glacial Maximum within the GLAMAP-2000 project.

Relevance: 30.00%

Abstract:

Permafrost degradation influences the morphology, biogeochemical cycling and hydrology of Arctic landscapes over a range of time scales. To reconstruct temporal patterns of early to late Holocene permafrost and thermokarst dynamics, site-specific palaeo-records are needed. Here we present a multi-proxy study of a 350-cm-long permafrost core from a drained lake basin on the northern Seward Peninsula, Alaska, revealing Lateglacial to Holocene thermokarst lake dynamics in a central location of Beringia. Use of radiocarbon dating, micropalaeontology (ostracods and testaceans), sedimentology (grain-size analyses, magnetic susceptibility, tephra analyses), geochemistry (total nitrogen and carbon, total organic carbon, δ13Corg) and stable water isotopes (δ18O, δD, d-excess) of ground ice allowed the reconstruction of several distinct thermokarst lake phases. These include a pre-lacustrine environment at the base of the core characterized by the Devil Mountain Maar tephra (22 800±280 cal. a BP, Unit A), which has vertically subsided in places due to subsequent development of a deep thermokarst lake that initiated around 11 800 cal. a BP (Unit B). At about 9000 cal. a BP this lake transitioned from a stable depositional environment to a very dynamic lake system (Unit C) characterized by fluctuating lake levels, potentially intermediate wetland development, and expansion and erosion of shore deposits. Complete drainage of this lake occurred at 1060 cal. a BP, including post-drainage sediment freezing from the top down to 154 cm and gradual accumulation of terrestrial peat (Unit D), as well as uniform upward talik refreezing. This core-based reconstruction of multiple thermokarst lake generations since 11 800 cal. a BP improves our understanding of the temporal scales of thermokarst lake development from initiation to drainage, demonstrates complex landscape evolution in the ice-rich permafrost regions of Central Beringia during the Lateglacial and Holocene, and enhances our understanding of biogeochemical cycles in thermokarst-affected regions of the Arctic.

Relevance: 30.00%

Abstract:

Atomisation of an aqueous solution for tablet film coating is a complex process, with multiple factors determining droplet formation and properties. The importance of droplet size for an efficient process and a high-quality final product has been noted in the literature, with smaller droplets reported to produce smoother, more homogeneous coatings whilst avoiding the risk of damage through over-wetting of the tablet core. In this work the effect of droplet size on tablet film coat characteristics was investigated using X-ray microcomputed tomography (XμCT) and confocal laser scanning microscopy (CLSM). A quality-by-design approach utilising design of experiments (DOE) was used to optimise the conditions necessary for production of droplets at a small (20 μm) and a large (70 μm) droplet size. Droplet size distribution was measured using real-time laser diffraction, and the volume median diameter was taken as the response. The DOE yielded information on the relationship that three critical process parameters (pump rate, atomisation pressure and coating-polymer concentration) had with droplet size. The model generated was robust, scoring highly for model fit (R² = 0.977), predictability (Q² = 0.837), validity and reproducibility. Modelling confirmed that all parameters had either a linear or a quadratic effect on droplet size and revealed an interaction between pump rate and atomisation pressure. Fluidised bed coating of tablet cores was performed with either small or large droplets, followed by CLSM and XμCT imaging. Addition of commonly used contrast materials to the coating solution improved visualisation of the coating by XμCT, showing the coat as a discrete section of the overall tablet. Imaging provided qualitative and quantitative evidence that smaller droplets form thinner, more uniform and less porous film coats.
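To illustrate the kind of modelling step described above, the sketch below fits a quadratic response-surface model, with a pump-rate by atomisation-pressure interaction, to a face-centred central composite design in three coded factors. The run layout and droplet sizes are invented for the example; only the general form of the analysis (a designed experiment plus least-squares response-surface fit) mirrors the study.

```python
import numpy as np
from itertools import product

# Illustrative response-surface fit. Factors (coded -1..+1): pump rate,
# atomisation pressure, polymer concentration. Responses are fabricated for
# the example; the study's actual data are not reproduced here.
corners = np.array(list(product([-1, 1], repeat=3)), dtype=float)   # 2^3 factorial
axials  = np.array([[s if i == j else 0 for j in range(3)]
                    for i in range(3) for s in (-1, 1)], dtype=float)
centres = np.zeros((2, 3))
X = np.vstack([corners, axials, centres])     # face-centred central composite design
p, a, c = X.T

rng = np.random.default_rng(0)
y = 55 + 18*p - 12*a + 4*c - 6*p*a + 5*p**2 + rng.normal(0, 1.5, len(X))  # droplet size, um (fake)

# Model: intercept, linear effects, pump x pressure interaction, quadratic terms
D = np.column_stack([np.ones(len(y)), p, a, c, p*a, p**2, a**2, c**2])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
y_hat = D @ beta
r2 = 1 - ((y - y_hat)**2).sum() / ((y - y.mean())**2).sum()
print("coefficients:", np.round(beta, 2))
print("R^2:", round(r2, 3))
```

A dedicated DOE package would additionally report predictability (Q²), validity and reproducibility; the sketch stops at the coefficient estimates and ordinary R².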

Relevance: 30.00%

Abstract:

As the world population grows past seven billion and global challenges persist, including resource availability, biodiversity loss, climate change and human well-being, a new science is required that can address the integrated nature of these challenges and the multiple scales on which they are manifest. Sustainability science has emerged to fill this role. In the fifteen years since it was first called for in the pages of Science, it has rapidly matured; however, its place in the history of science and the way it is practiced today must be continually evaluated. In Part I, two chapters address this theoretical and practical grounding. Part II transitions to the applied practice of sustainability science in addressing the urban heat island (UHI) challenge, wherein the climate of urban areas is warmer than that of their surrounding rural environs. The UHI has become increasingly important within the earth sciences given the increased focus on climate change and because the majority of humans now live in urban areas.

In Chapter 2, a novel contribution to the historical context of sustainability is argued. Sustainability, as a concept characterizing the relationship between humans and nature, emerged in the mid-to-late 20th century as a response to the same findings used to characterize the Anthropocene. Evidence is provided that sustainability emerged from the human-nature relationships that came before it, was enabled by technology and a reorientation of world-view, and is unique in its global boundary, systematic approach, and ambition for both well-being and the continued availability of resources and Earth-system function. Sustainability is, further, an ambition with wide appeal, making it one of the first normative concepts of the Anthropocene.

Despite its widespread emergence and adoption, sustainability science continues to suffer from definitional ambiguity within the academy. In Chapter 3, a review of efforts to provide direction and structure to the science reveals a continuum of approaches, anchored at either end by differing visions of how the science interfaces with practice (solutions). At one end, basic science of societally defined problems informs decisions about possible solutions and their application. At the other end, applied research directly affects the options available to decision makers. While this dichotomy is clear in the literature, survey data suggest that it is not as apparent in the minds of practitioners.

In Chapter 4, the UHI is first addressed at the synoptic mesoscale. Urban climate is the most immediate manifestation of the warming global climate for the majority of people on earth. Nearly half of those people live in small to medium-sized cities, an understudied scale in urban climate research, so widespread characterization would be useful to decision makers in planning and design. Using a multi-method approach, the mesoscale UHI in the study region is characterized and its secular trend over the last sixty years evaluated. Under isolated ideal conditions, the findings indicate a UHI of 5.3 ± 0.97 °C in the study area, the magnitude of which is growing over time.

Although urban heat islands (UHIs) are well studied, no panaceas exist for local-scale mitigation and adaptation, so continued attention to characterizing the phenomenon in urban centers of different scales around the globe is required. In Chapter 5, a local-scale analysis of the canopy-layer and surface UHI in a medium-sized city in North Carolina, USA is conducted using multiple methods, including stationary urban sensors, mobile transects and remote sensing. Focusing on the ideal conditions for UHI development during an anticyclonic summer heat event, the study observes a range of UHI intensities depending on the method of observation: 8.7 °C from the stationary urban sensors, 6.9 °C from mobile transects, and 2.2 °C from remote sensing. Additional attention is paid to the diurnal dynamics of the UHI and its correlation with vegetation indices, dewpoint and albedo. Evapotranspiration is shown to drive the dynamics in the study region.

Finally, recognizing that a bridge must be established between the physical science community studying the urban heat island effect and the planning community and decision makers implementing urban form and development policies, Chapter 6 evaluates multiple urban form characterization methods. The methods evaluated include local climate zones (LCZ), National Land Cover Database (NLCD) classes and urban cluster analysis (UCA), assessed for their utility in describing the distribution of the UHI based on three standard observation types: 1) fixed urban temperature sensors, 2) mobile transects and 3) remote sensing. Bivariate, regression and ANOVA tests are used to conduct the analyses. Findings indicate that the NLCD classes are best correlated with the UHI intensity and distribution in the study area. Further, while the UCA method is not useful directly, the variables included in the method are predictive in regression analysis, so the potential exists for a better model design. Land cover variables including albedo, impervious surface fraction and pervious surface fraction are found to dominate the distribution of the UHI in the study area regardless of observation method.
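The sketch below shows, in hypothetical form, the shape of the regression step described above: ordinary least squares of UHI intensity on land-cover variables at sensor sites. All site values are invented, variable names are placeholders, and only the general form of the analysis is implied by the abstract.

```python
import numpy as np

# Hypothetical illustration: OLS of UHI intensity on land-cover variables.
# The site data are fabricated; they do not come from the dissertation.
rng = np.random.default_rng(1)
n = 30
albedo     = rng.uniform(0.10, 0.35, n)
impervious = rng.uniform(0.05, 0.95, n)     # impervious surface fraction
pervious   = 1.0 - impervious               # pervious fraction (collinear with impervious)
uhi = 1.0 + 6.0 * impervious - 8.0 * albedo + rng.normal(0, 0.4, n)   # fake response, degC

# Pervious fraction is excluded from the design matrix because it is exactly
# collinear with impervious fraction in this toy dataset.
X = np.column_stack([np.ones(n), albedo, impervious])
beta, *_ = np.linalg.lstsq(X, uhi, rcond=None)
pred = X @ beta
r2 = 1 - ((uhi - pred)**2).sum() / ((uhi - uhi.mean())**2).sum()
print("intercept, albedo, impervious:", beta.round(2), " R^2:", round(r2, 3))
```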

Chapter 7 provides a summary of findings and offers a brief analysis of their implications both for the scientific discourse generally and for the study area specifically. In general, the work undertaken does not achieve the full ambition of sustainability science; additional work is required to translate findings into practice and to more fully evaluate their adoption. The implications for planning and development in the local region are addressed in the context of a major light-rail infrastructure project, including several systems-level considerations such as human health and development. Finally, several avenues for future work are outlined. Within the theoretical development of sustainability science, these include more robust evaluations of theoretical and actual practice. Within the UHI context, they include the development of an integrated urban form characterization model, application of the study methodology in other geographic areas and at different scales, and the use of novel experimental methods including distributed sensor networks and citizen science.

Relevance: 30.00%

Abstract:

Background: Sickle Cell Disease (SCD) is a genetic hematological disorder that affects more than 7 million people globally (NHLBI, 2009). It is estimated that 50% of adults with SCD experience pain on most days, with one-third experiencing chronic pain daily (Smith et al., 2008). Persons with SCD also experience higher levels of pain catastrophizing (feelings of helplessness, pain rumination and magnification) than persons with other chronic pain conditions, and pain catastrophizing is associated with increases in pain intensity, pain behavior, analgesic consumption, and frequency and duration of hospital visits, and with reduced daily activities (Sullivan, Bishop, & Pivik, 1995; Keefe et al., 2000; Gil et al., 1992, 1993). Effective interventions are therefore needed that can successfully be used to manage pain and pain-related outcomes (e.g., pain catastrophizing) in persons with SCD. A review of the literature revealed limited information regarding the feasibility and efficacy of non-pharmacological approaches to pain in persons with SCD, finding an average effect size of .33 for pain reduction across measurable non-pharmacological studies. Second, a prospective study of persons with SCD who received care for a vaso-occlusive crisis (VOC; N = 95) found (1) high levels of patient-reported depression (29%) and anxiety (34%), and (2) that unemployment was significantly associated with an increased frequency of acute care encounters and hospital admissions per person. Research suggests that one promising category of non-pharmacological interventions for managing both the physical and the affective components of pain is mindfulness-based interventions (MBIs; Thompson et al., 2010; Cox et al., 2013). The primary goal of this dissertation was thus to develop and test the feasibility, acceptability, and efficacy of a telephonic MBI for pain catastrophizing in persons with SCD and chronic pain.

Methods: First, a telephonic MBI was developed through an informal process involving iterative feedback from patients, clinical experts in SCD and pain management, social workers, psychologists, and mindfulness clinicians. Through this process, relevant topics and skills were selected and adapted for each MBI session. Second, a pilot randomized controlled trial was conducted to test the feasibility, acceptability, and efficacy of the telephonic MBI for pain catastrophizing in persons with SCD and chronic pain. Acceptability and feasibility were determined by assessing recruitment, attrition, dropout, and refusal rates (including refusal reasons), along with semi-structured interviews with nine randomly selected patients at the end of the study. Participants completed assessments at baseline and at Weeks 1, 3, and 6 to assess the efficacy of the intervention in decreasing pain catastrophizing and other pain-related outcomes.

Results: A telephonic MBI is feasible and acceptable for persons with SCD and chronic pain. Seventy-eight patients with SCD and chronic pain were approached, and 76% (N = 60) were enrolled and randomized. The MBI attendance rate, with approximately 57% of participants completing at least four mindfulness sessions, was deemed acceptable, and participants who received the telephonic MBI described it in post-intervention interviews as acceptable and as easy to access and consume. The amount of missing data was undesirable (MBI condition, 40%; control condition, 25%), but fell within the range of missing outcome data expected for an RCT with multiple follow-up assessments. The efficacy of the MBI on pain catastrophizing could not be determined because of the small sample size and the degree of missing data, but trajectory analyses conducted for the MBI condition alone trended in the right direction, and the change in pain catastrophizing approached statistical significance.

Conclusion: Overall, the results showed that a telephonic, group-based MBI is acceptable and feasible for persons with SCD and chronic pain. Though the study was neither able to determine treatment efficacy nor powered to detect a statistically significant difference between conditions, participants described the intervention as acceptable, and the observed effect sizes for the MBI condition suggested large effects of the MBI on pain catastrophizing, mental health, and physical health. Replication of this MBI study with a larger sample size, an active control group, and additional assessments at the end of each week (Weeks 1 through 6) is needed to determine treatment efficacy. Many lessons were learned that will guide the development of future studies, including which MBI strategies were most helpful, methods to encourage continued participation, and how to improve data capture.

Relevance: 30.00%

Abstract:

This thesis deals with the evaporation of non-ideal liquid mixtures using a multicomponent mass transfer approach. It develops the concept of evaporation maps as a convenient way of representing the dynamic composition changes of ternary mixtures during an evaporation process. Evaporation maps represent the residual composition of evaporating ternary non-ideal mixtures over the full range of composition, and are analogous to the residue curve maps commonly used for simple distillation processes. The evaporation process initially considered in this work involves gas-phase-limited evaporation from a liquid or wetted-solid surface over which a gas flows at known conditions. Evaporation may occur into a pure inert gas, or into one pre-loaded with a known fraction of one of the ternary components. To explore multicomponent mass-transfer effects, a model is developed that uses an exact solution of the Maxwell-Stefan equations for mass transfer in the gas film, with a lumped approach applied to the liquid phase. Solutions to the evaporation model take the form of trajectories in temperature-composition space, which are then projected onto a ternary diagram to form the map. Novel algorithms are developed for the computation of pseudo-azeotropes in the evaporating mixture, and for the calculation of the multicomponent wet-bulb temperature at a given liquid composition. A numerical continuation method is used to track the bifurcations that occur in the evaporation maps, with the composition of one component in the pre-loaded gas as the bifurcation parameter. The bifurcation diagrams can, in principle, be used to determine the gas composition required to produce a specific terminal composition in the liquid. A simple homotopy method is developed to track the locations of the various possible pseudo-azeotropes in the mixture. The stability of pseudo-azeotropes in the gas-phase-limited case is examined using a linearized analysis of the governing equations. Algorithms for calculating separation boundaries in the evaporation maps are developed using an optimization-based method, as well as a method employing eigenvectors derived from the linearized analysis. The flexure of the wet-bulb temperature surface is explored, and it is shown that evaporation trajectories cross ridges and valleys, so that ridges and valleys of the surface do not coincide with separation boundaries. Finally, the assumption of gas-phase-limited mass transfer is relaxed by employing a model that includes diffusion in the liquid phase, and a finite-volume method is used to solve the resulting system of partial differential equations. The evaporation trajectories of the distributed model reduce to those of the lumped (gas-phase-limited) model as the diffusivity in the liquid increases; under the same gas-phase conditions, the permissible terminal compositions of the distributed and lumped models are the same.
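As a much-simplified illustration of how such composition trajectories can be generated, the sketch below integrates residue-curve-style dynamics dx_i/dτ = x_i − y_i for a ternary mixture at fixed temperature with ideal (Raoult's law) vapour-liquid behaviour, using SciPy's solve_ivp. The thesis model is considerably richer (Maxwell-Stefan gas-film transport, non-ideal thermodynamics, a wet-bulb energy balance); none of that is reproduced, and the Antoine-style constants below are purely illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simplified stand-in for an evaporation-trajectory calculation: residue-curve
# dynamics dx/dtau = x - y at constant temperature with ideal (Raoult) VLE.
Psat = lambda T, A, B, C: 10 ** (A - B / (T + C))   # saturation pressure, mmHg
ABC = np.array([[7.11, 1211, 226],    # component 1 (acetone-like, illustrative)
                [8.08, 1730, 233],    # component 2 (water-like, illustrative)
                [7.87, 1473, 230]])   # component 3 (methanol-like, illustrative)
T, P = 60.0, 760.0                    # degC, mmHg

def rhs(tau, x):
    x = np.clip(x, 0.0, None)
    x = x / x.sum()                   # keep mole fractions normalised
    y = x * Psat(T, *ABC.T) / P       # Raoult's law vapour composition
    y = y / y.sum()
    return x - y                      # residual liquid enriches in the heavy component

x0 = np.array([0.3, 0.4, 0.3])        # initial liquid composition
sol = solve_ivp(rhs, [0.0, 10.0], x0, max_step=0.05)
print(sol.y[:, -1].round(3))          # terminal (least-volatile-rich) composition
```

In the thesis framework, the right-hand side would instead come from the Maxwell-Stefan fluxes through the gas film evaluated at the multicomponent wet-bulb temperature, and the trajectory would be projected onto a ternary diagram to build the evaporation map.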

Relevance: 30.00%

Abstract:

The development of non-equilibrium group IV nanoscale alloys is critical to achieving new functionalities, such as the formation of a direct bandgap in a conventionally indirect-bandgap elemental semiconductor. Here, we describe the fabrication of uniform-diameter, direct-bandgap Ge1-xSnx alloy nanowires, with Sn incorporation up to 9.2 at.%, far in excess of the equilibrium solubility of Sn in bulk Ge, through a conventional catalytic bottom-up growth paradigm using noble metal and metal alloy catalysts. Metal alloy catalysts permitted a greater inclusion of Sn in Ge nanowires than conventional Au catalysts when used during vapour-liquid-solid growth. The addition of an annealing step close to the Ge-Sn eutectic temperature (230 °C) during cool-down further facilitated the excessive dissolution of Sn in the nanowires. Sn was distributed throughout the Ge nanowire lattice with no metallic Sn segregation or precipitation at the surface or within the bulk of the nanowires. The non-equilibrium incorporation of Sn into the Ge nanowires can be understood in terms of a kinetic trapping model for impurity incorporation at the triple-phase boundary during growth.

Relevance: 30.00%

Abstract:

In the engineering design of structural shapes, flat-plate analysis results can be generalized to predict the behavior of complete structural shapes. Accordingly, the purpose of this project is to analyze a thin flat plate under conductive heat transfer and to simulate the temperature distribution, thermal stresses, total displacements, and buckling deformations. The current approach in such cases has been the Finite Element Method (FEM), which is based on the construction of a conforming mesh. In contrast, this project uses the mesh-free Scan Solve Method, which eliminates the meshing limitation by using a non-conforming mesh. I implemented this modeling process by developing numerical algorithms and software tools to model thermally induced buckling. In addition, a convergence analysis was carried out and the results were compared with FEM. In conclusion, the results demonstrate that the method gives solutions similar in quality to FEM while being less computationally time-consuming.
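As a small stand-in for the temperature-distribution step only, the sketch below solves steady two-dimensional conduction (Laplace's equation) on a rectangular plate by Jacobi iteration on a regular grid. It does not implement the mesh-free Scan Solve Method or the FEM comparison described above, and the grid size, boundary temperatures, and thermal expansion coefficient are arbitrary.

```python
import numpy as np

# Steady 2-D conduction on a thin plate via Jacobi iteration (illustrative only;
# the thesis uses the mesh-free Scan Solve Method, not this finite-difference solve).
nx, ny = 80, 40
T = np.zeros((ny, nx))
T[0, :], T[-1, :] = 100.0, 20.0      # hot top edge, cool bottom edge (degC)
T[:, 0], T[:, -1] = 20.0, 20.0       # side edges held at 20 degC

for _ in range(20_000):              # Laplace equation: each interior point -> mean of neighbours
    T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:])

# A thermal-stress/buckling analysis would take this field as input; with uniform
# properties, a rough in-plane thermal strain estimate is alpha * (T - T_ref).
alpha, T_ref = 1.2e-5, 20.0
eps_th = alpha * (T - T_ref)
print(round(float(T[ny // 2, nx // 2]), 2), float(eps_th.max()))
```

In the project itself, the resulting temperature field feeds the thermal-stress, displacement, and buckling computations, which are carried out with the Scan Solve discretization rather than the simple grid used here.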