Abstract:
We compute how bulk loops renormalize both bulk and brane effective interactions for codimension-two branes in 6D gauged chiral supergravity, as functions of the brane tension and brane-localized flux. We do so by explicitly integrating out hyper- and gauge-multiplets in 6D gauged chiral supergravity compactified to 4D on a flux-stabilized 2D rugby-ball geometry, specializing the results of a companion paper, arXiv:1210.3753, to the supersymmetric case. While the brane back-reaction generically breaks supersymmetry, we show that the bulk supersymmetry can be preserved if the amount of brane-localized flux is related in a specific BPS-like way to the brane tension, and verify that the loop corrections to the brane curvature vanish in this special case. In these systems it is the brane-bulk couplings that fix the size of the extra dimensions, and we show that in some circumstances the bulk geometry dynamically adjusts to ensure the supersymmetric BPS-like condition is automatically satisfied. We investigate the robustness of this residual supersymmetry to loops of non-supersymmetric matter on the branes, and show that supersymmetry-breaking effects can enter only through effective brane-bulk interactions involving at least two derivatives. We comment on the relevance of this calculation to proposed applications of codimension-two 6D models to solutions of the hierarchy and cosmological constant problems. © 2013 SISSA.
Abstract:
This study focuses on the export performance of the 2004 EU enlargement economies between 1990 and 2013. The long time span analysed makes it possible to capture different stages in the relationship of these new members with the EU before and after accession. The study is based on the Constant Market Share methodology of decomposing a country's ex-post export performance into different effects. Two different Constant Market Share Analyses (CMSA) were selected in order to disentangle, for the exports of the new members to the EU15, (i) the growth rate of exports and (ii) the growth rate of exports relative to the world. Both approaches are applied to manufactured products, first without disaggregating results by sector and then grouping all products into two different classifications of sectors: one considering the technological intensity of manufactured exports and another evaluating the specialization factors of the products exported. The results provide information not only on the export performance of the ten economies as a group but also on each economy individually, and on the importance of each EU15 destination market to the export performance of these countries.
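The decomposition described above can be illustrated with a minimal one-level Constant Market Share sketch, which splits a country's export change into a world-growth effect, a commodity-composition effect and a competitiveness residual. The sector labels and trade values below are illustrative only, not data from the study.

```python
def cms_decompose(q0, q1, w0, w1):
    """One-level CMS decomposition.
    q0, q1: country exports per sector in periods 0 and 1.
    w0, w1: world (reference) exports per sector."""
    r = sum(w1.values()) / sum(w0.values()) - 1.0          # overall world growth rate
    growth = r * sum(q0.values())                          # world-growth effect
    commodity = sum(((w1[i] / w0[i] - 1.0) - r) * q0[i] for i in q0)
    competitiveness = sum(
        (q1[i] - q0[i]) - (w1[i] / w0[i] - 1.0) * q0[i] for i in q0
    )
    return growth, commodity, competitiveness

# Illustrative two-sector example (values are made up).
q0 = {"machinery": 50.0, "textiles": 30.0}
q1 = {"machinery": 70.0, "textiles": 33.0}
w0 = {"machinery": 1000.0, "textiles": 800.0}
w1 = {"machinery": 1300.0, "textiles": 880.0}
effects = cms_decompose(q0, q1, w0, w1)
```

By construction the three effects sum exactly to the total export change, which is what makes the decomposition an accounting identity rather than a statistical model.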
Abstract:
Traditionally, densities of newly built roadways are checked by direct sampling (cores) or by nuclear density gauge measurements. For roadway engineers, the density of asphalt pavement surfaces is essential to determining pavement quality. Unfortunately, field measurements of density by direct sampling or by nuclear measurement are slow processes. Therefore, I have explored the use of rapidly deployed ground penetrating radar (GPR) as an alternative means of determining pavement quality. The dielectric constant of the pavement surface may be a parameter that correlates with pavement density, and it can be used as a proxy when the density of the asphalt is not known from nuclear or destructive methods. In order to use GPR to evaluate road surface quality, the relationship between the dielectric constants of asphalts and their densities must be established. Field measurements of GPR were taken at four highway sites in Houghton and Keweenaw Counties, Michigan, where density values were also obtained using nuclear methods in the field. Laboratory studies involved asphalt samples taken from the field sites and samples created in the laboratory. These were tested in various ways, including density, thickness, and time domain reflectometry (TDR). In the field, GPR data were acquired using a 1000 MHz air-launched unit and a ground-coupled unit at 200 and 500 MHz. The equipment was owned and operated by the Michigan Department of Transportation (MDOT) and available for this study for a total of four days during summer 2005 and spring 2006. The analysis of the reflected waveforms included “routine” processing for velocity using commercial software and direct evaluation of reflection coefficients to determine a dielectric constant. The dielectric constants computed from velocities do not agree well with those obtained from reflection coefficients.
Perhaps due to the limited range of asphalt types studied, no correlation between density and dielectric constant was evident. Laboratory measurements were taken with samples removed from the field and samples created for this study. Samples from the field were studied using TDR in order to obtain the dielectric constant directly, and these values correlated well with the estimates made from reflection coefficients. Samples created in the laboratory were measured using 1000 MHz air-launched GPR and 400 MHz ground-coupled GPR, each under both wet and dry conditions. On the basis of these observations, I conclude that the dielectric constant of asphalt can be reliably measured from waveform amplitude analysis of GPR data, based on the consistent agreement with values obtained in the laboratory using TDR. Because of the uniformity of the asphalts studied here, any correlation between dielectric constant and density is not yet apparent.
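The two routes to a dielectric constant compared above can be sketched with the standard textbook relations: for an air-launched antenna, the surface reflection amplitude normalised by a metal-plate (perfect reflector) amplitude gives a reflection coefficient R and eps_r = ((1 + R)/(1 - R))^2, while a known layer thickness and two-way travel time give a velocity and eps_r = (c/v)^2. The amplitudes, thickness, and travel time below are illustrative numbers, not measurements from this study.

```python
C_AIR = 0.2998  # speed of light in air, m/ns

def eps_from_reflection(a_surface, a_metal_plate):
    """Surface-reflection route: R from amplitudes, then eps_r = ((1+R)/(1-R))^2."""
    r = a_surface / a_metal_plate
    return ((1.0 + r) / (1.0 - r)) ** 2

def eps_from_velocity(thickness_m, two_way_time_ns):
    """Velocity route: v = 2 * thickness / two-way time, then eps_r = (c/v)^2."""
    v = 2.0 * thickness_m / two_way_time_ns  # m/ns
    return (C_AIR / v) ** 2
```

In principle the two estimates should agree for the same pavement; the thesis reports that in practice they did not, which is exactly the kind of discrepancy this comparison exposes.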
Abstract:
In a previous contribution [Appl. Opt. 51, 8599 (2012)], a coauthor of this work presented a method for reconstructing the wavefront aberration from tangential refractive power data measured using dynamic skiascopy. Here we propose a new regularized least squares method in which the wavefront is reconstructed using not only tangential but also sagittal curvature data. We show that our new method provides improved reconstruction quality for typical and also for highly aberrated wavefronts, under a wide range of experimental error levels. Our method may be applied to any type of wavefront sensor (not only dynamic skiascopy) able to measure either just tangential or tangential plus sagittal curvature data.
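The generic shape of a regularized least squares reconstruction of this kind can be sketched as follows: a design matrix A maps wavefront coefficients to the measured curvature data b, a smoothing operator L penalizes rough solutions, and the estimate solves the penalized normal equations. The matrices here are illustrative stand-ins, not the paper's actual operators.

```python
import numpy as np

def regularized_lsq(A, b, L, lam):
    """Solve min_x ||A x - b||^2 + lam * ||L x||^2
    via the normal equations (A^T A + lam L^T L) x = A^T b."""
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

# Tiny illustrative call: with A = L = I the solution is b / (1 + lam).
x = regularized_lsq(np.eye(2), np.array([2.0, 4.0]), np.eye(2), 1.0)
```

Adding sagittal data amounts to stacking extra rows onto A and b, which better conditions the inversion; that is the structural reason combined tangential-plus-sagittal reconstruction can outperform tangential-only reconstruction.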
Abstract:
Thermoelectric generators (TEGs) are solid-state devices that can be used for the direct conversion between heat and electricity. These devices are an attractive option for generating clean energy from heat. There are two modes of operation for TEGs: constant heat and constant temperature. It is well known that for constant-temperature operation, TEGs have a maximum power point lying at half the open-circuit voltage of the TEG for a given temperature. This work aimed to investigate the position of the maximum power point for bismuth telluride TEGs working under constant-heat conditions, i.e. the heat supplied to the TEG is fixed but the temperature across the TEG can vary depending on its operating conditions. It was found that for constant-heat operation, the maximum power point for a TEG is greater than half the open-circuit voltage of the TEG.
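The constant-temperature baseline stated above follows from treating the TEG as a Thevenin source: with a fixed temperature difference, the load power V*(Voc - V)/R_int peaks at exactly half the open-circuit voltage. The sketch below verifies this numerically; Voc and R_int are illustrative values, not measured device parameters, and the constant-heat case (where Voc itself shifts with the load) is what moves the peak above Voc/2.

```python
import numpy as np

Voc, R_int = 4.0, 2.0                # illustrative open-circuit voltage (V) and internal resistance (ohm)
V = np.linspace(0.0, Voc, 4001)      # sweep of load voltages
P = V * (Voc - V) / R_int            # power delivered to the load at fixed temperature
V_mpp = V[np.argmax(P)]              # numerically recovered maximum power point
```

Under constant heat, a larger load current increases Peltier heat pumping, changes the temperature difference, and therefore changes Voc during the sweep, which is why the simple Voc/2 rule no longer holds there.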
Abstract:
Our goal in this paper is to extend previous results obtained for Newtonian and second-grade fluids to third-grade fluids in the case of an axisymmetric, straight, rigid and impermeable tube with constant cross-section, using a one-dimensional hierarchical model based on the Cosserat theory related to fluid dynamics. In this way we can reduce the full three-dimensional system of equations for the axisymmetric unsteady motion of a non-Newtonian incompressible third-grade fluid to a system of equations depending on time and on a single spatial variable. Some numerical simulations for the volume flow rate and the wall shear stress are presented.
Abstract:
We prove a theorem on homotheties between two given tangent sphere bundles SrM of a Riemannian manifold (M,g) of dimension ≥ 3, assuming different variable radius functions r and weighted Sasaki metrics induced by the conformal class of g. New examples are given of manifolds with constant positive or constant negative scalar curvature which are not Einstein. Recalling results on the associated almost complex structure I^G and symplectic structure ω^G on the manifold TM, generalizing the well-known structure of Sasaki by admitting weights and connections with torsion, we compute the Chern and Stiefel-Whitney characteristic classes of the manifolds TM and SrM.
Abstract:
The third largest producer of fresh fruit in the world, Brazil stands out in the agricultural market for its tropical climate, favourable to the production of many fruits. Melon and mango are examples of fresh fruits with high export volumes. The states of Ceará and Rio Grande do Norte are responsible for most of Brazilian melon production, while the European Union market accounts for almost all imports of Brazilian melon. The objective of this research is to analyse the competitiveness and market shares of Brazilian melon in the world market over the period 2003 to 2011, based on the Constant Market Share model. The results show the difference in direction between the sub-periods analysed. In the first sub-period, export growth was driven by the growth of world trade and by the competitiveness factor; in the second period, by contrast, a fall mainly in competitiveness caused the decline in exports of the fruit produced in Brazil.
Abstract:
The thermal behaviour of halloysite fully expanded with hydrazine-hydrate has been investigated in nitrogen atmosphere under dynamic heating and at a constant, pre-set decomposition rate of 0.15 mg min-1. Under controlled-rate thermal analysis (CRTA) conditions it was possible to resolve the closely overlapping decomposition stages and to distinguish between adsorbed and bonded reagent. Three types of bonded reagent could be identified. The loosely bonded reagent, amounting to 0.20 mol hydrazine-hydrate per mol inner surface hydroxyl, is connected to the internal and external surfaces of the expanded mineral and is present as a space filler between the sheets of the delaminated mineral. The strongly bonded (intercalated) hydrazine-hydrate is connected to the kaolinite inner surface OH groups by the formation of hydrogen bonds. Based on the thermoanalytical results, two different types of bonded reagent could be distinguished in the complex. Type 1 reagent (approx. 0.06 mol hydrazine-hydrate/mol inner surface OH) is liberated between 77 and 103°C. Type 2 reagent is lost between 103 and 227°C, corresponding to a quantity of 0.36 mol hydrazine/mol inner surface OH. When heating the complex to 77°C under CRTA conditions, a new reflection appears in the XRD pattern with a d-value of 9.6 Å, in addition to the 10.2 Å reflection. This new reflection disappears in contact with moist air and the complex re-expands to the original d-value of 10.2 Å within a few hours. The appearance of the 9.6 Å reflection is interpreted as the expansion of kaolinite with hydrazine alone, while the 10.2 Å one is due to expansion with hydrazine-hydrate. FTIR (DRIFT) spectroscopic results showed that the treated mineral, after intercalation/deintercalation and heat treatment to 300°C, is slightly more ordered than the original (untreated) clay.
Abstract:
A new method for estimating the time to colonization with methicillin-resistant Staphylococcus aureus (MRSA) in patients is developed in this paper. The time to colonization of MRSA is modelled using a Bayesian smoothing approach for the hazard function. Two prior models are discussed in this paper: the first difference prior and the second difference prior. The second difference prior model gives smoother estimates of the hazard function and, when applied to data from an intensive care unit (ICU), clearly shows an increasing hazard up to day 13, then a decreasing hazard. The results clearly demonstrate that the hazard is not constant and provide a useful quantification of the effect of length of stay on the risk of MRSA colonization.
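The smoothing effect of a second-difference prior has a simple frequentist analogue: the posterior mean under a Gaussian second-difference prior coincides with a Whittaker-type smoother that minimizes ||h - y||^2 + lam * ||D2 h||^2, where D2 is the second-difference operator. The sketch below shows this analogue on raw daily hazard values; the numbers are illustrative, not the ICU data, and this is not the paper's full Bayesian model.

```python
import numpy as np

def second_difference_smooth(y, lam):
    """Smooth a sequence of raw daily hazard estimates y with a
    second-difference (roughness) penalty of weight lam."""
    n = len(y)
    d2 = np.diff(np.eye(n), n=2, axis=0)     # (n-2) x n second-difference operator
    lhs = np.eye(n) + lam * d2.T @ d2
    return np.linalg.solve(lhs, np.asarray(y, dtype=float))

# Illustrative raw hazard: rises then falls, with noise.
raw = [0.05, 0.09, 0.16, 0.22, 0.18, 0.20, 0.12, 0.08]
smoothed = second_difference_smooth(raw, 5.0)
```

Because D2 annihilates linear trends, the penalty shrinks the estimate toward locally linear behaviour rather than toward a constant, which is why it can still capture the rise-then-fall shape reported for the ICU data.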
Abstract:
The dynamic interaction between building systems and the external climate is extremely complex, involving a large number of difficult-to-predict variables. In order to study the impact of global warming on the built environment, the use of building simulation techniques together with forecast weather data is often necessary. Since all building simulation programs require hourly meteorological input data for their thermal comfort and energy evaluations, the provision of suitable weather data becomes critical. Based on a review of the existing weather data generation models, this paper presents an effective method to generate approximate future hourly weather data suitable for the study of the impact of global warming. Depending on the level of information available for the prediction of future weather conditions, it is shown that the method of retaining current levels, the constant offset method or the diurnal modelling method may be used to generate the future hourly variation of an individual weather parameter. An example of the application of this method to different global warming scenarios in Australia is presented. Since there is no reliable projection of possible changes in air humidity, solar radiation or wind characteristics, as a first approximation these parameters have been assumed to remain at current levels. A sensitivity test of their impact on building energy performance shows that there is generally a good linear relationship between building cooling load and changes in solar radiation, relative humidity or wind speed.
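The simplest of the three approaches named above, the constant offset method, can be sketched directly: every hour of a current reference series is shifted by a scenario's projected mean change, leaving the diurnal shape untouched. The temperatures and the 2.0 °C offset below are illustrative numbers, not projections from the paper.

```python
def constant_offset(hourly_values, offset):
    """Constant offset method: apply a scenario's projected mean change
    uniformly to every hour of a reference weather series."""
    return [v + offset for v in hourly_values]

# A few sample hours of dry-bulb temperature (deg C), illustrative only.
current_hours = [14.0, 13.5, 13.2, 15.8, 19.1, 21.4]
future_hours = constant_offset(current_hours, 2.0)
```

The diurnal modelling method would instead vary the shift across the day (for example, warming nights more than afternoons), which requires the extra scenario information the paper notes is not always available.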
Abstract:
The release of ultrafine particles (UFP) from laser printers and office equipment was analyzed using a particle counter (FMPS; Fast Mobility Particle Sizer) with a high time resolution, as well as appropriate mathematical models. Measurements were carried out in a 1 m³ chamber, a 24 m³ chamber and an office. The time-dependent emission rates were calculated for these environments using a deconvolution model, after which the total amount of emitted particles was calculated. The total amounts of released particles were found to be independent of the environmental parameters and therefore, in principle, they were appropriate for the comparison of different printers. On the basis of the time-dependent emission rates, “initial burst” emitters and constant emitters could also be distinguished. In the case of an “initial burst” emitter, the comparison to other devices is generally affected by strong variations between individual measurements. When conducting exposure assessments for UFP in an office, the spatial distribution of the particles also had to be considered. In this work, the spatial distribution was predicted on a case-by-case basis, using CFD simulation.
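A deconvolution of the kind described above can be sketched from the well-mixed chamber mass balance V dC/dt = S(t) - a V C, where C is the measured number concentration, V the chamber volume, a a lumped loss rate (air exchange plus deposition), and S(t) the time-dependent emission rate to be recovered. The values below (V, a, source strength) are illustrative, not the measured printer data, and the paper's actual model may differ in detail.

```python
def emission_rate(C, dt, V, a):
    """Invert the well-mixed mass balance V dC/dt = S(t) - a*V*C
    with a forward difference for dC/dt, returning S at each step."""
    return [V * ((C[i + 1] - C[i]) / dt + a * C[i]) for i in range(len(C) - 1)]

# Forward-simulate a constant source, then recover it from C alone.
V, a, dt, S_true = 1.0, 0.5, 1.0, 1.0e5   # m^3, 1/s, s, particles/s
C = [0.0]
for _ in range(20):
    C.append(C[-1] + dt * (S_true / V - a * C[-1]))

S_hat = emission_rate(C, dt, V, a)
total_emitted = sum(s * dt for s in S_hat)   # total particle count over the run
```

Integrating S(t) over time gives the total emitted particle count, which is the environment-independent quantity the study uses to compare printers.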
Abstract:
Design as seen from the designer's perspective is a series of amazing imaginative jumps or creative leaps. But design as seen by the design historian is a smooth progression or evolution of ideas that seem self-evident and inevitable after the event. Yet the next step is anything but obvious for the artist/creator/inventor/designer stuck at the point just before the creative leap. They know where they have come from and have a general sense of where they are going, but often do not have a precise target or goal. This is why it is misleading to talk of design as a problem-solving activity; it is better defined as a problem-finding activity. This has been very frustrating for those trying to assist the design process with computer-based, problem-solving techniques. By the time the problem has been defined, it has been solved. Indeed the solution is often the very definition of the problem. Design must be creative, or it is mere imitation. But since this crucial creative leap seems inevitable after the event, the question must arise: can we find some way of searching the space ahead? Of course there are serious problems, both of knowing what we are looking for and of the vastness of the search space. It may be better to discard altogether the term "searching" in the context of the design process. Conceptual analogies such as search, search spaces and fitness landscapes aim to elucidate the design process. However, the vastness of the multidimensional spaces involved makes these analogies misguided, and they thereby actually further confound the issue. The term search becomes a misnomer, since it carries connotations implying that it is possible to find what you are looking for. In such vast spaces the term search must be discarded. Thus any attempt at searching for the highest peak in the fitness landscape as an optimal solution is also meaningless. Furthermore, even the very existence of a fitness landscape is fallacious.
Although alternatives in the same region of the vast space can be compared to one another, distant alternatives will stem from radically different roots and will therefore not be comparable in any straightforward manner (Janssen 2000). Nevertheless we still have this tantalizing possibility: if a creative idea seems inevitable after the event, then somehow might the process be reversed? This may be as improbable as attempting to reverse time. A more helpful analogy is from nature, where it is generally assumed that the process of evolution is not long-term goal-directed or teleological. Dennett points out a common misunderstanding of Darwinism: the idea that evolution by natural selection is a procedure for producing human beings. Evolution can have produced humankind by an algorithmic process without its being true that evolution is an algorithm for producing us. If we were to wind the tape of life back and run this algorithm again, the likelihood of "us" being created again is infinitesimally small (Gould 1989; Dennett 1995). But nevertheless Mother Nature has proved a remarkably successful, resourceful, and imaginative inventor, generating a constant flow of incredible new design ideas to fire our imagination. Hence the current interest in the potential of the evolutionary paradigm in design. These evolutionary methods are frequently based on techniques such as the application of evolutionary algorithms, which are usually thought of as search algorithms. It is necessary to abandon such connections with searching and to see the evolutionary algorithm as a direct analogy with the evolutionary processes of nature. The process of natural selection can generate a wealth of alternative experiments, and the better ones survive. There is no one solution, there is no optimal solution, but there is continuous experiment. Nature is profligate with her prototyping and ruthless in her elimination of less successful experiments.
Most importantly, nature has all the time in the world. As designers we cannot afford prototyping and ruthless experiment, nor can we operate on the time scale of the natural design process. Instead we can use the computer to compress space and time and to perform virtual prototyping and evaluation before committing ourselves to actual prototypes. This is the hypothesis underlying the evolutionary paradigm in design (1992, 1995).
Abstract:
Aim: To review the titles, roles and scope of practice of Advanced Practice Nurses internationally.----- Background: There is a worldwide shortage of nurses, but there is also an increased demand for nurses with enhanced skills who can manage a more diverse, complex and acutely ill patient population than ever before. As a result, a variety of nurses in advanced practice positions has evolved around the world. The differences in nomenclature have led to confusion over the roles, scope of practice and professional boundaries of nurses in an international context.----- Method: CINAHL, Medline, and the Cochrane Database of Systematic Reviews were searched from 1987 to 2008. Information was also obtained through government health and professional organisation websites. All information in the literature regarding current and past status, and nomenclature, of advanced practice nursing was considered relevant.----- Findings: There are many names for Advanced Practice Nurses, and although many of these roles are similar in their function, they can often have different titles.----- Conclusion: Advanced Practice Nurses are critical for the future, provide cost-effective care and are highly regarded by patients/clients. They will be a constant and permanent feature of future health care provision. However, clarification regarding their classification and regulation is necessary in some countries.