956 results for Madelung constant
Abstract:
Forced convection heat transfer in a micro-channel filled with a porous material saturated with rarefied gas, with internal heat generation, is studied analytically in this work. The study is performed by analysing the boundary conditions for constant wall heat flux under local thermal non-equilibrium (LTNE) conditions. Invoking velocity slip and temperature jump, the thermal behaviour of the porous-fluid system is studied under thermally and hydrodynamically fully-developed conditions. The flow inside the porous material is modelled by the Darcy–Brinkman equation. Exact solutions are obtained for both the fluid and solid temperature distributions for the two primary approaches, models A and B, using constant wall heat flux boundary conditions. The temperature distributions and Nusselt numbers for models A and B are compared, and the limiting cases resulting in the convergence or divergence of the two models are also discussed. The effects of pertinent parameters such as the fluid-to-solid effective thermal conductivity ratio, Biot number, Darcy number, velocity slip and temperature jump coefficients, and fluid and solid internal heat generation are also discussed. The results indicate that the Nusselt number decreases as the thermal conductivity ratio increases for both models. This contrasts with previous studies, which for model A reported that the Nusselt number increases with the thermal conductivity ratio. The Biot number and thermal conductivity ratio are found to have substantial effects on the role of the temperature jump coefficient in controlling the Nusselt number for models A and B. The Nusselt numbers calculated using model A change drastically with the variation of solid internal heat generation. In contrast, the Nusselt numbers obtained for model B show a weak dependency on the variation of internal heat generation. The velocity slip coefficient has no noticeable effect on the Nusselt numbers for either model. The difference between the Nusselt numbers calculated using the two models decreases with an increase of the temperature jump coefficient.
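For reference, the Darcy–Brinkman momentum equation invoked in this abstract takes, in its conventional fully-developed channel-flow form (symbols here are the standard ones, not taken from the paper itself):

```latex
\mu_{\mathrm{eff}}\,\frac{d^{2}u}{dy^{2}} \;-\; \frac{\mu_{f}}{K}\,u \;-\; \frac{dp}{dx} \;=\; 0
```

where \(\mu_{\mathrm{eff}}\) is the effective viscosity of the porous medium, \(\mu_{f}\) the fluid viscosity, \(K\) the permeability (entering the Darcy number \(Da = K/H^{2}\) for a channel of half-height \(H\)), and \(dp/dx\) the applied pressure gradient.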
Abstract:
Not Available
Abstract:
The temperature of the mantle and the rate of melt production are parameters which play important roles in controlling the style of crustal accretion along mid-ocean ridges. To investigate the variability in crustal accretion that develops in response to variations in mantle temperature, we have conducted a geophysical investigation of the Southeast Indian Ridge (SEIR) between the Amsterdam hotspot and the Australian-Antarctic Discordance (88 degrees E-118 degrees E). The spreading center deepens by 2100 m from west to east within the study area. Despite a uniform, intermediate spreading rate (69-75 mm/yr), the SEIR exhibits the range in axial morphology displayed by the East Pacific Rise and the Mid-Atlantic Ridge (MAR) and usually associated with variations in spreading rate. The spreading center is characterized by an axial high west of 102 degrees 45'E, whereas an axial valley is prevalent east of this longitude. Neither the deepening of the ridge axis nor the general evolution of axial morphology from an axial high to a rift valley is uniform. A region of intermediate morphology separates axial highs and MAR-like rift valleys. Local transitions in axial morphology occur in three areas along the ridge axis. The increase in axial depth toward the Australian-Antarctic Discordance may be explained by the thinning of the oceanic crust by ~4 km and the change in axial topography. The long-wavelength changes observed along the SEIR can be attributed to a gradient in mantle temperature between regions influenced by the Amsterdam and Kerguelen hot spots and the Australian-Antarctic Discordance. However, local processes, perhaps associated with a heterogeneous mantle or along-axis asthenospheric flow, may give rise to local transitions in axial topography and depth anomalies.
Abstract:
This study focuses on the export performance of the 2004 EU enlargement economies between 1990 and 2013. The long time span analysed makes it possible to capture different stages in the relationship of these new members with the EU, before and after accession. The study is based on the Constant Market Share methodology of decomposing an ex-post country's export performance into different effects. Two different Constant Market Share Analyses (CMSA) were selected in order to disentangle, for the exports of the new members to the EU15, (i) the growth rate of exports and (ii) the growth rate of exports relative to the world. Both approaches are applied to manufactured products, first without disaggregating results by sectors and then grouping all products into two different classifications of sectors: one considering the technological intensity of manufactured exports and another evaluating the specialization factors of the products exported. Results provide information not only on the ten economies' export performance as a group but also individually, and on the importance of each EU15 destination market to the export performance of these countries.
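As a minimal illustration of the Constant Market Share idea, the classic two-term variant splits an export change into a world-growth effect and a competitiveness (share) effect. This is a generic textbook sketch with hypothetical numbers, not the specific CMSA variants used in the study:

```python
# Minimal two-term Constant Market Share (CMS) decomposition sketch.
# q = country's exports to a market, Q = total imports of that market,
# s = q / Q = market share. Exact identity:
#   q1 - q0 = s0 * (Q1 - Q0)  +  (s1 - s0) * Q1
#             (growth effect)    (competitiveness effect)

def cms_decompose(q0, q1, Q0, Q1):
    s0, s1 = q0 / Q0, q1 / Q1
    growth_effect = s0 * (Q1 - Q0)     # share held constant at base year
    competitiveness = (s1 - s0) * Q1   # change in share, valued at final demand
    return growth_effect, competitiveness

# Hypothetical figures (billions of euros):
g, c = cms_decompose(q0=10.0, q1=15.0, Q0=200.0, Q1=250.0)
assert abs((g + c) - (15.0 - 10.0)) < 1e-9  # effects sum to the actual change
```

Because the identity is exact, the two effects always add up to the observed export change, which is what makes the ex-post attribution meaningful.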
Abstract:
Traditionally, densities of newly built roadways are checked by direct sampling (cores) or by nuclear density gauge measurements. For roadway engineers, the density of asphalt pavement surfaces is essential to determine pavement quality. Unfortunately, field measurements of density by direct sampling or by nuclear measurement are slow processes. Therefore, I have explored the use of rapidly deployed ground penetrating radar (GPR) as an alternative means of determining pavement quality. The dielectric constant of the pavement surface may be a substructure parameter that correlates with pavement density, and it can be used as a proxy when the density of the asphalt is not known from nuclear or destructive methods. The dielectric constant of the asphalt can be determined using GPR. In order to use GPR for evaluation of road surface quality, the relationship between the dielectric constants of asphalt and their densities must be established. Field measurements of GPR were taken at four highway sites in Houghton and Keweenaw Counties, Michigan, where density values were also obtained using nuclear methods in the field. Laboratory studies involved asphalt samples taken from the field sites and samples created in the laboratory. These were tested in various ways, including density, thickness, and time domain reflectometry (TDR). In the field, GPR data were acquired using a 1000 MHz air-launched unit and a ground-coupled unit at 200 and 500 MHz. The equipment was owned and operated by the Michigan Department of Transportation (MDOT) and was available for this study for a total of four days during summer 2005 and spring 2006. The analysis of the reflected waveforms included "routine" processing for velocity using commercial software and direct evaluation of reflection coefficients to determine a dielectric constant. The dielectric constants computed from velocities do not agree well with those obtained from reflection coefficients. Perhaps due to the limited range of asphalt types studied, no correlation between density and dielectric constant was evident. Laboratory measurements were taken with samples removed from the field and samples created for this study. Samples from the field were studied using TDR in order to obtain the dielectric constant directly, and these correlated well with the estimates made from reflection coefficients. Samples created in the laboratory were measured using 1000 MHz air-launched GPR and 400 MHz ground-coupled GPR, each under both wet and dry conditions. On the basis of these observations, I conclude that the dielectric constant of asphalt can be reliably measured from waveform amplitude analysis of GPR data, based on the consistent agreement with that obtained in the laboratory using TDR. Because of the uniformity of the asphalts studied here, any correlation between dielectric constant and density is not yet apparent.
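The two estimation routes mentioned above (reflection coefficients and velocities) can be sketched with the standard formulas commonly used for air-launched GPR surveys; the function names and the metal-plate normalization convention are illustrative assumptions, not details taken from this study:

```python
def dielectric_from_reflection(A_surface, A_metal):
    """Estimate a surface dielectric constant from an air-launched GPR trace.

    R = A_surface / A_metal is the surface reflection amplitude normalized by
    a metal-plate (perfect-reflector) calibration shot. At normal incidence
    over a non-magnetic half-space:  eps = ((1 + R) / (1 - R)) ** 2.
    """
    R = abs(A_surface / A_metal)
    return ((1.0 + R) / (1.0 - R)) ** 2

def dielectric_from_velocity(v):
    """eps = (c / v) ** 2 for a low-loss medium, with c and v in m/ns."""
    c = 0.2998  # speed of light in free space, m/ns
    return (c / v) ** 2

# Example: a normalized reflection amplitude of 0.5 implies
# eps = (1.5 / 0.5) ** 2 = 9
assert abs(dielectric_from_reflection(0.5, 1.0) - 9.0) < 1e-9
```

Comparing the outputs of these two routes for the same pavement is exactly the consistency check the abstract reports as problematic for the velocity-based estimates.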
Abstract:
Thermoelectric generators (TEGs) are solid-state devices that can be used for the direct conversion between heat and electricity. These devices are an attractive option for generating clean energy from heat. There are two modes of operation for TEGs: constant heat and constant temperature. It is well known that under constant temperature operation, TEGs have a maximum power point lying at half the open circuit voltage of the TEG, for a particular temperature. This work aimed to investigate the position of the maximum power point for Bismuth Telluride TEGs working under constant heat conditions, i.e. the heat supplied to the TEG is fixed, but the temperature across the TEG can vary depending on its operating conditions. It was found that under constant heat operation, the maximum power point for a TEG is greater than half the open circuit voltage of the TEG.
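The constant-temperature baseline mentioned above follows from modelling the TEG as a Thevenin source; a short numerical sweep confirms the maximum power point at half the open circuit voltage. The numbers are illustrative, not from the paper; under constant heat, Voc itself shifts with the operating point, which is why the optimum moves above Voc/2:

```python
# For a TEG at a fixed temperature difference, the electrical side behaves as
# a Thevenin source: open-circuit voltage Voc and internal resistance Rint.
# Power delivered to the load at terminal voltage V is
#   P(V) = V * (Voc - V) / Rint,  maximised at V = Voc / 2.
def load_power(V, Voc, Rint):
    return V * (Voc - V) / Rint

Voc, Rint = 4.0, 2.0                       # hypothetical device values
voltages = [i * Voc / 1000 for i in range(1001)]
V_mpp = max(voltages, key=lambda V: load_power(V, Voc, Rint))
assert abs(V_mpp - Voc / 2) < 1e-9         # maximum power point at half Voc
```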
Abstract:
Our goal in this paper is to extend previous results obtained for Newtonian and second-grade fluids to third-grade fluids in the case of an axisymmetric, straight, rigid and impermeable tube with constant cross-section, using a one-dimensional hierarchical model based on the Cosserat theory related to fluid dynamics. In this way we can reduce the full three-dimensional system of equations for the axisymmetric unsteady motion of a non-Newtonian incompressible third-grade fluid to a system of equations depending on time and on a single spatial variable. Some numerical simulations for the volume flow rate and the wall shear stress are presented.
Abstract:
The world's third largest producer of fresh fruit, Brazil stands out in the agricultural market for its tropical climate, favourable to the production of many fruits. Melon and mango are examples of fresh fruits with large export volumes. The states of Ceará and Rio Grande do Norte account for most of Brazilian melon production, while the European Union market accounts for almost all imports of Brazilian melon. The objective of this research is to analyse the competitiveness and market shares of Brazilian melon in the world market over the period 2003 to 2011, based on the Constant Market Share model. The results show that the two sub-periods analysed moved in different directions. In the first sub-period, export growth was driven by the growth of world trade and by the competitiveness factor; in the second, a fall mainly in competitiveness caused the decline in exports of the fruit produced in Brazil.
Abstract:
The thermal behaviour of halloysite fully expanded with hydrazine-hydrate has been investigated in nitrogen atmosphere under dynamic heating and at a constant, pre-set decomposition rate of 0.15 mg min-1. Under controlled-rate thermal analysis (CRTA) conditions it was possible to resolve the closely overlapping decomposition stages and to distinguish between adsorbed and bonded reagent. Three types of bonded reagent could be identified. The loosely bonded reagent, amounting to 0.20 mol hydrazine-hydrate per mol inner surface hydroxyl, is connected to the internal and external surfaces of the expanded mineral and is present as a space filler between the sheets of the delaminated mineral. The strongly bonded (intercalated) hydrazine-hydrate is connected to the kaolinite inner surface OH groups by the formation of hydrogen bonds. Based on the thermoanalytical results, two different types of bonded reagent could be distinguished in the complex. Type 1 reagent (approx. 0.06 mol hydrazine-hydrate/mol inner surface OH) is liberated between 77 and 103°C. Type 2 reagent is lost between 103 and 227°C, corresponding to a quantity of 0.36 mol hydrazine/mol inner surface OH. When heating the complex to 77°C under CRTA conditions a new reflection appears in the XRD pattern with a d-value of 9.6 Å, in addition to the 10.2 Å reflection. This new reflection disappears in contact with moist air, and the complex re-expands to the original d-value of 10.2 Å within a few hours. The appearance of the 9.6 Å reflection is interpreted as the expansion of kaolinite with hydrazine alone, while the 10.2 Å one is due to expansion with hydrazine-hydrate. FTIR (DRIFT) spectroscopic results showed that the treated mineral, after intercalation/deintercalation and heat treatment to 300°C, is slightly more ordered than the original (untreated) clay.
Abstract:
A new method for estimating the time to colonization by Methicillin-resistant Staphylococcus aureus (MRSA) in patients is developed in this paper. The time to MRSA colonization is modelled using a Bayesian smoothing approach for the hazard function. Two prior models are discussed: the first difference prior and the second difference prior. The second difference prior model gives smoother estimates of the hazard function and, when applied to data from an intensive care unit (ICU), clearly shows an increasing hazard up to day 13, then a decreasing hazard. The results demonstrate that the hazard is not constant and provide a useful quantification of the effect of length of stay on the risk of MRSA colonization.
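The effect of a second-difference prior can be illustrated with its penalized least-squares analogue: the prior penalizes the squared second differences of the hazard, so the posterior mode trades fidelity to the data against curvature. This is a generic sketch (with made-up numbers and a hypothetical smoothing weight), not the paper's full Bayesian model:

```python
import numpy as np

def smooth_second_difference(y, lam=10.0):
    """Posterior-mode analogue of a second-difference (RW2) prior:
    minimise ||y - h||^2 + lam * ||D2 h||^2, where D2 takes second
    differences h[t+1] - 2*h[t] + h[t-1]. Larger lam => smoother curve."""
    n = len(y)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i], D2[i, i + 1], D2[i, i + 2] = 1.0, -2.0, 1.0
    # Normal equations of the quadratic objective:
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, np.asarray(y, float))

# Noisy rise-then-fall "hazard" (hypothetical numbers, mimicking the ICU
# pattern of risk increasing to a peak and then declining):
y = np.array([0.1, 0.15, 0.3, 0.25, 0.5, 0.6, 0.55, 0.4, 0.3, 0.2])
h = smooth_second_difference(y, lam=10.0)
# The smoothed curve is never rougher than the data in the penalised sense:
assert np.sum(np.diff(h, 2) ** 2) <= np.sum(np.diff(y, 2) ** 2)
```

A first-difference prior would penalize `h[t+1] - h[t]` instead, pulling the estimate toward a locally constant rather than locally linear hazard, which is why it yields less smooth curves.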
Abstract:
The dynamic interaction between building systems and the external climate is extremely complex, involving a large number of difficult-to-predict variables. In order to study the impact of global warming on the built environment, the use of building simulation techniques together with forecast weather data is often necessary. Since all building simulation programs require hourly meteorological input data for thermal comfort and energy evaluation, the provision of suitable weather data becomes critical. Based on a review of existing weather data generation models, this paper presents an effective method to generate approximate future hourly weather data suitable for studying the impact of global warming. Depending on the level of information available for the prediction of future weather conditions, it is shown that either the method of retaining the current level, the constant offset method, or the diurnal modelling method may be used to generate the future hourly variation of an individual weather parameter. An example of the application of this method to different global warming scenarios in Australia is presented. Since there is no reliable projection of possible changes in air humidity, solar radiation or wind characteristics, as a first approximation these parameters have been assumed to remain at current levels. A sensitivity test of their impact on building energy performance shows that there is generally a good linear relationship between building cooling load and changes in solar radiation, relative humidity or wind speed.
Abstract:
The release of ultrafine particles (UFP) from laser printers and office equipment was analyzed using a particle counter (FMPS; Fast Mobility Particle Sizer) with a high time resolution, as well as appropriate mathematical models. Measurements were carried out in a 1 m³ chamber, a 24 m³ chamber and an office. The time-dependent emission rates were calculated for these environments using a deconvolution model, after which the total amount of emitted particles was calculated. The total amounts of released particles were found to be independent of the environmental parameters and therefore, in principle, suitable for the comparison of different printers. On the basis of the time-dependent emission rates, "initial burst" emitters and constant emitters could also be distinguished. In the case of an "initial burst" emitter, the comparison with other devices is generally affected by strong variations between individual measurements. When conducting exposure assessments for UFP in an office, the spatial distribution of the particles also had to be considered. In this work, the spatial distribution was predicted on a case-by-case basis using CFD simulation.
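The deconvolution of emission rates from measured concentrations rests on a well-mixed single-zone mass balance, V dC/dt = E(t) - k V C. Inverting this for E(t) is a short calculation; the following sketch uses hypothetical parameter values and a simplified lumped loss rate, not the specific model of the study:

```python
def emission_rate(C, dt, V, k):
    """Recover a time-dependent emission rate E(t) [particles/min] from a
    well-mixed single-zone balance  V * dC/dt = E(t) - k * V * C, where C is
    the measured number concentration [particles/m^3], V the chamber volume
    [m^3] and k the total first-order loss rate (ventilation + deposition,
    1/min). Uses a forward finite difference for dC/dt."""
    return [V * ((C[i + 1] - C[i]) / dt + k * C[i]) for i in range(len(C) - 1)]

# Consistency check: forward-simulate a constant source, then invert it.
V, k, E_true, dt = 1.0, 0.5, 1000.0, 0.1   # hypothetical 1 m^3 chamber
C = [0.0]
for _ in range(50):
    C.append(C[-1] + dt * (E_true / V - k * C[-1]))
E_hat = emission_rate(C, dt, V, k)
assert all(abs(e - E_true) < 1e-6 for e in E_hat)
```

Integrating E(t) over the print job then gives the total emitted particle number, the quantity the study found to be independent of the test environment.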
Abstract:
Design as seen from the designer's perspective is a series of amazing imaginative jumps or creative leaps. But design as seen by the design historian is a smooth progression or evolution of ideas, such that they seem self-evident and inevitable after the event. Yet the next step is anything but obvious for the artist/creator/inventor/designer stuck at the point just before the creative leap. They know where they have come from and have a general sense of where they are going, but often do not have a precise target or goal. This is why it is misleading to talk of design as a problem-solving activity; it is better defined as a problem-finding activity. This has been very frustrating for those trying to assist the design process with computer-based, problem-solving techniques. By the time the problem has been defined, it has been solved. Indeed, the solution is often the very definition of the problem. Design must be creative, or it is mere imitation. But since this crucial creative leap seems inevitable after the event, the question arises: can we find some way of searching the space ahead? Of course, there are serious problems of knowing what we are looking for and of the vastness of the search space. It may be better to discard altogether the term "searching" in the context of the design process. Conceptual analogies such as search, search spaces and fitness landscapes aim to elucidate the design process. However, the vastness of the multidimensional spaces involved makes these analogies misguided, and they thereby actually further confound the issue. The term "search" becomes a misnomer, since it carries connotations that imply it is possible to find what you are looking for. In such vast spaces the term must be discarded. Thus, any attempt at searching for the highest peak in the fitness landscape as an optimal solution is also meaningless. Furthermore, even the very existence of a fitness landscape is fallacious.
Although alternatives in the same region of the vast space can be compared to one another, distant alternatives will stem from radically different roots and will therefore not be comparable in any straightforward manner (Janssen 2000). Nevertheless, we still have the tantalizing possibility that if a creative idea seems inevitable after the event, then somehow the process might be reversed. This may be as improbable as attempting to reverse time. A more helpful analogy is from nature, where it is generally assumed that the process of evolution is not long-term goal-directed or teleological. Dennett points out a common misunderstanding of Darwinism: the idea that evolution by natural selection is a procedure for producing human beings. Evolution can have produced humankind by an algorithmic process without its being true that evolution is an algorithm for producing us. If we were to wind the tape of life back and run this algorithm again, the likelihood of "us" being created again is infinitesimally small (Gould 1989; Dennett 1995). Nevertheless, Mother Nature has proved a remarkably successful, resourceful, and imaginative inventor, generating a constant flow of incredible new design ideas to fire our imagination. Hence the current interest in the potential of the evolutionary paradigm in design. These evolutionary methods are frequently based on techniques such as evolutionary algorithms, which are usually thought of as search algorithms. It is necessary to abandon such connections with searching and see the evolutionary algorithm as a direct analogy with the evolutionary processes of nature. The process of natural selection can generate a wealth of alternative experiments, and the better ones survive. There is no one solution, there is no optimal solution, but there is continuous experiment. Nature is profligate with her prototyping and ruthless in her elimination of less successful experiments.
Most importantly, nature has all the time in the world. As designers we cannot afford such profligate prototyping and ruthless experiment, nor can we operate on the time scale of the natural design process. Instead, we can use the computer to compress space and time and to perform virtual prototyping and evaluation before committing ourselves to actual prototypes. This is the hypothesis underlying the evolutionary paradigm in design (1992, 1995).
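The generate-variants-and-select loop described above, variation without a fixed target followed by ruthless elimination, can be sketched as a minimal evolutionary algorithm. The toy fitness function and parameter values below are purely illustrative:

```python
import random

def evolve(population, evaluate, mutate, generations=100, keep=5):
    """A minimal (mu + lambda) evolutionary loop: generate variant
    offspring, keep the better experiments, discard the rest, repeat.
    There is no single 'solution', only a population of survivors."""
    for _ in range(generations):
        offspring = [mutate(p) for p in population]
        population = sorted(population + offspring,
                            key=evaluate, reverse=True)[:keep]
    return population

# Toy illustration: candidates are numbers; the fitness is made up.
random.seed(0)
fitness = lambda x: -(x - 3.0) ** 2        # illustrative only
initial = [random.uniform(-5, 5) for _ in range(5)]
survivors = evolve(initial, fitness, lambda x: x + random.gauss(0, 0.5))
# Because parents survive alongside offspring, the best score never worsens:
assert fitness(survivors[0]) >= max(fitness(x) for x in initial)
```

Note that nothing in the loop "searches" for a known target; it only compares nearby alternatives and keeps the better experiments, which matches the essay's preferred reading of the evolutionary analogy.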
Abstract:
Aim: To review the titles, roles and scope of practice of Advanced Practice Nurses internationally.
Background: There is a worldwide shortage of nurses, but there is also an increased demand for nurses with enhanced skills who can manage a more diverse, complex and acutely ill patient population than ever before. As a result, a variety of advanced practice nursing roles has evolved around the world. The differences in nomenclature have led to confusion over the roles, scope of practice and professional boundaries of nurses in an international context.
Method: CINAHL, Medline, and the Cochrane Database of Systematic Reviews were searched from 1987 to 2008. Information was also obtained through government health and professional organisation websites. All information in the literature regarding the current and past status and nomenclature of advanced practice nursing was considered relevant.
Findings: There are many names for Advanced Practice Nurses, and although many of these roles are similar in function, they often carry different titles.
Conclusion: Advanced Practice Nurses are critical for the future, provide cost-effective care and are highly regarded by patients/clients. They will be a constant and permanent feature of future health care provision. However, clarification regarding their classification and regulation is necessary in some countries.