937 results for Bose-Einstein condensation statistical model


Relevance:

100.00%

Publisher:

Abstract:

We investigate the bilayer pre-transition exhibited by some lipids at temperatures below their main phase transition, which is generally associated with the formation of periodic ripples in the membrane. Experimentally, we focus on the anionic lipid dipalmitoylphosphatidylglycerol (DPPG) at different ionic strengths, and on the neutral lipid dipalmitoylphosphatidylcholine (DPPC). From the analysis of differential scanning calorimetry traces of the two lipids, we find that both the pre- and main transitions are part of the same melting process. Electron spin resonance of spin labels and excitation generalized polarization of Laurdan reveal the coexistence of gel and fluid domains at temperatures between the pre- and main transitions of both lipids, reinforcing the first finding. The melting process of DPPG at low ionic strength is also found to be less cooperative than that of DPPC. On the theoretical side, we introduce a statistical model in which a next-nearest-neighbor competing interaction is added to the usual two-state model. For the first time, modulated phases (ordered and disordered lipids periodically aligned) emerge between the gel and fluid phases as a natural consequence of the competition between lipid-lipid interactions. (C) 2009 Elsevier B.V. All rights reserved.
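The competing-interaction idea in this abstract can be illustrated with a minimal ANNNI-style sketch (not the authors' actual model): a one-dimensional two-state chain with a ferromagnetic nearest-neighbour coupling J1 and a competing next-nearest-neighbour coupling J2 < 0, sampled by Metropolis Monte Carlo. All parameter values are illustrative.

```python
import math
import random

def energy(s, J1=1.0, J2=-0.5, h=0.0):
    """Energy of a periodic two-state chain (s[i] = +1 gel, -1 fluid).

    J1 > 0 favours like neighbours; J2 < 0 on next-nearest neighbours
    competes with it, which is what allows modulated (periodic) phases.
    """
    n = len(s)
    e = 0.0
    for i in range(n):
        e -= J1 * s[i] * s[(i + 1) % n]   # nearest-neighbour term
        e -= J2 * s[i] * s[(i + 2) % n]   # competing next-nearest-neighbour term
        e -= h * s[i]                     # external field / chemical potential
    return e

def metropolis_sweep(s, T, J1=1.0, J2=-0.5, h=0.0):
    """One Metropolis sweep at temperature T (k_B = 1)."""
    n = len(s)
    for _ in range(n):
        i = random.randrange(n)
        e_old = energy(s, J1, J2, h)
        s[i] = -s[i]                      # trial flip
        d_e = energy(s, J1, J2, h) - e_old
        if d_e > 0 and random.random() >= math.exp(-d_e / T):
            s[i] = -s[i]                  # reject and restore
    return s

random.seed(0)
chain = [1] * 24
for _ in range(200):
    metropolis_sweep(chain, T=0.5, J2=-0.6)
```

With these couplings a period-four pattern (+1, +1, -1, -1) has lower energy than the uniform chain once |J2| exceeds J1/2, which is the simplest sense in which competition produces modulation.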

Abstract:

In this Letter we deal with a nonlinear Schrödinger equation with a chaotic, random, or nonperiodic cubic nonlinearity. Our goal is to study the soliton evolution when the strength of the nonlinearity is perturbed in the space and time coordinates, and to check the soliton's robustness under these conditions. We show that a chaotic perturbation is more effective in destroying the soliton behavior than a random or nonperiodic perturbation. For a real system, the perturbation can be related to, e.g., impurities in crystalline structures, or coupling to a thermal reservoir which, on average, enhances the nonlinearity. We also discuss the relevance of such random perturbations to the dynamics of Bose-Einstein condensates and their collective excitations and transport. (C) 2010 Elsevier B.V. All rights reserved.
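A standard way to simulate this kind of problem is the split-step Fourier method. The sketch below propagates i u_t + ½ u_xx + g(x)|u|²u = 0 starting from a bright-soliton profile, with a weak random spatial perturbation of the nonlinearity g; grid size, perturbation strength, and seed are illustrative, not the Letter's actual parameters.

```python
import numpy as np

def split_step_nls(u0, g, dt, steps, L=40.0):
    """Split-step Fourier integration of i u_t + 0.5 u_xx + g(x)|u|^2 u = 0."""
    n = u0.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)         # angular wavenumbers
    u = u0.astype(complex)
    for _ in range(steps):
        u *= np.exp(1j * g * np.abs(u) ** 2 * dt / 2)  # half nonlinear step
        u = np.fft.ifft(np.exp(-1j * 0.5 * k ** 2 * dt) * np.fft.fft(u))  # linear step
        u *= np.exp(1j * g * np.abs(u) ** 2 * dt / 2)  # half nonlinear step
    return u

n, L = 256, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
u0 = 1.0 / np.cosh(x)                   # bright soliton of the unperturbed (g = 1) equation
rng = np.random.default_rng(1)
g = 1.0 + 0.1 * rng.standard_normal(n)  # weakly random nonlinearity (illustrative strength)

u = split_step_nls(u0, g, dt=0.01, steps=500)
dx = L / n
norm0 = (np.abs(u0) ** 2).sum() * dx
norm1 = (np.abs(u) ** 2).sum() * dx
```

Both sub-steps are unitary, so the wave norm is conserved to machine precision; watching the peak amplitude and width of |u| over longer runs is one way to quantify how robust the soliton is to the perturbation.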

Abstract:

The reactions induced by the weakly bound (6)Li projectile interacting with the intermediate-mass target (59)Co were investigated. Singles measurements of light charged particles and alpha-d coincidence measurements were performed at the near-barrier energies E(lab) = 17.4, 21.5, 25.5 and 29.6 MeV. The main contributions of the different competing mechanisms are discussed. A statistical model analysis, Continuum-Discretized Coupled-Channels (CDCC) calculations, and two-body kinematics were used as tools to disentangle the main components of these mechanisms. A significant contribution of the direct breakup was observed through the difference between the experimental sequential breakup cross section and the CDCC prediction for the non-capture breakup cross section. (C) 2009 Elsevier B.V. All rights reserved.

Abstract:

The problem of resonant generation of nonground-state condensates is addressed with the aim of resolving the seeming paradox that arises when one resorts to the adiabatic representation. In this picture, the eigenvalues and eigenfunctions of a time-dependent Gross-Pitaevskii Hamiltonian are also functions of time. Since the level energies vary in time, no definite transition frequency can be introduced; hence no external modulation with a fixed frequency can be made resonant, and the resonant generation of adiabatic coherent modes appears impossible. However, this paradox occurs only in the frame of the adiabatic picture. It is shown that no paradox exists in the properly formulated diabatic representation, where the resonant generation of diabatic coherent modes is a well defined phenomenon. As an example, equations are derived that describe the generation of diabatic coherent modes by the combined resonant modulation of the trapping potential and the atomic scattering length.

Abstract:

Local influence diagnostics based on estimating equations, in which a gradient vector derived from any fit function plays the central role, are developed for repeated measures regression analysis. Our proposal generalizes tools used in other studies (Cook, 1986; Cadigan and Farrell, 2002) by considering local influence diagnostics for a statistical model whose estimation involves an estimating equation in which the observations are not necessarily independent of each other. The measures of local influence are illustrated with simulated data sets to assess influential observations, and applications using real data are presented. (C) 2010 Elsevier B.V. All rights reserved.
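The classical special case that local influence diagnostics generalize is Cook's (1986 and earlier) case-deletion influence for ordinary least squares. The sketch below computes standard Cook's distances on synthetic data with one planted outlier; it is a baseline illustration, not the paper's estimating-equation generalization.

```python
import numpy as np

def cooks_distance(X, y):
    """Classical Cook's distance for OLS, one value per observation."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T            # hat (leverage) matrix
    h = np.diag(H)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta                                 # residuals
    s2 = e @ e / (n - p)                             # residual variance estimate
    return (e ** 2 / (p * s2)) * h / (1 - h) ** 2

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(0, 0.2, 30)
y[5] += 5.0                                          # plant one gross outlier
X = np.column_stack([np.ones(30), x])
D = cooks_distance(X, y)
```

The planted outlier dominates the distances; local influence methods replace deletion with a smooth perturbation of case weights and read influence off the curvature of a displacement function.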

Abstract:

Allyl 1-naphthyl ethers are useful compounds for different purposes, but reported methods to synthesize them require long reaction times. In this work, we obtained allyl 1-naphthyl ether in good yield using an ultrasound-assisted method with a reaction time of 1 h. A central composite design was used to obtain a statistical model and a response surface (p < 0.05; R(2) = 0.970; R(adj)(2) = 0.949; R(pred)(2) = 0.818) that can predict the optimal conditions to maximize the yield, which were validated experimentally. (C) 2010 Elsevier B.V. All rights reserved.
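The central composite design workflow can be sketched as follows: generate the design points in coded units, fit a full second-order response surface by least squares, and solve for the stationary point. The two factors and the response function here are hypothetical stand-ins, not the paper's actual reaction settings.

```python
import numpy as np

# Central composite design in two coded factors: 2^2 factorial, axial, and centre runs.
a = np.sqrt(2.0)
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],   # factorial points
                   [-a, 0], [a, 0], [0, -a], [0, a],     # axial points
                   [0, 0]])                              # centre point

def true_yield(x1, x2):
    # Hypothetical smooth yield response with optimum at (0.3, -0.2).
    return 80.0 - 2.0 * (x1 - 0.3) ** 2 - 3.0 * (x2 + 0.2) ** 2

y = np.array([true_yield(x1, x2) for x1, x2 in design])

# Fit y = b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2.
x1, x2 = design[:, 0], design[:, 1]
M = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
b0, b1, b2, b11, b22, b12 = np.linalg.lstsq(M, y, rcond=None)[0]

# Stationary point of the fitted surface: set the gradient to zero.
opt = np.linalg.solve([[2 * b11, b12], [b12, 2 * b22]], [-b1, -b2])
```

In a real study the responses would carry experimental noise and the predicted optimum would be confirmed by a validation run, as the abstract reports.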

Abstract:

In this project, two broad facets of the design of a methodology for performance optimization of indexable carbide inserts were examined: physical destructive testing and software simulation. For the physical testing, statistical research techniques were used to design the methodology. A five-step method, beginning with problem definition and proceeding through system identification, statistical model formation, data collection, and statistical analysis of results, was elaborated upon in depth. The set-up and execution of an experiment with a compression machine were examined, together with roadblocks to quality data collection and possible solutions to them. A 2^k factorial design was illustrated and recommended for process improvement. Instances of first-order and second-order response surface analyses were encountered. In the case of curvature, a test for curvature significance with center-point analysis was recommended. Process optimization with the method of steepest ascent and central composite designs, or process robustness studies based on response surface analysis, were also recommended. For the simulation testing, the AdvantEdge program was identified as the most widely used software for tool development. Challenges to the efficient application of this software were identified and possible solutions proposed. In conclusion, both software simulation and physical testing were recommended to meet the objective of the project.
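The recommended 2^k factorial design can be sketched in a few lines: enumerate all runs in coded ±1 units and estimate each main effect as the mean response at the high level minus the mean at the low level. The response function below is a hypothetical linear stand-in.

```python
from itertools import product

def factorial_design(k):
    """All 2^k runs in coded units (-1 / +1 for each of the k factors)."""
    return [list(run) for run in product((-1, 1), repeat=k)]

def main_effect(design, y, j):
    """Main effect of factor j: mean response at +1 minus mean at -1."""
    hi = [yi for run, yi in zip(design, y) if run[j] == 1]
    lo = [yi for run, yi in zip(design, y) if run[j] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Hypothetical insert-performance response: factor 0 strong, factor 1 weak, factor 2 inert.
design = factorial_design(3)
y = [10 + 3 * a + 1 * b + 0 * c for a, b, c in design]

effects = [main_effect(design, y, j) for j in range(3)]
```

Adding centre points to this design allows the curvature test mentioned in the abstract; significant curvature is the cue to move to a second-order (central composite) design.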

Abstract:

The aim of the study was to see if any relationship between government spending and unemployment could be found empirically. To test whether government spending affects unemployment, a statistical model was applied to data from Sweden. The data were quarterly, from 1994 until 2012; unit-root tests were conducted and the variables were transformed to first differences to ensure stationarity. This transformation changed the variables to growth rates, which meant that the interpretation deviated a little from the original goal. Other studies reviewed indicate that output increases when government spending increases and/or taxes decrease, and that unemployment decreases when the government spending/GDP ratio increases. Some studies also indicate that, with an already large government sector, increasing spending could have a negative effect on output. The model was a VAR model with unemployment, output, the interest rate, taxes, and government spending; also included were a linear trend and three quarterly dummies. The model used 7 lags. The result was not statistically significant for most lags but indicated that, as the government spending growth rate increases, holding everything else constant, the unemployment growth rate increases. The result for taxes was even less statistically significant and indicates no relationship with unemployment. Post-estimation tests indicate problems with non-normality in the model, so the results should be interpreted with some scepticism.
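A VAR of this kind is estimated equation by equation with OLS. The sketch below fits a bivariate VAR(1) to simulated data and recovers the coefficient matrix; the study's actual specification (five variables, 7 lags, trend and seasonal dummies, first-differenced data, e.g. via np.diff(Y, axis=0)) is a direct extension of the same regression.

```python
import numpy as np

def fit_var1(Y):
    """OLS fit of a VAR(1), y_t = c + A y_{t-1} + e_t, for a (T, m) array Y."""
    Z = np.column_stack([np.ones(len(Y) - 1), Y[:-1]])  # regressors: intercept + one lag
    B = np.linalg.lstsq(Z, Y[1:], rcond=None)[0]
    return B[0], B[1:].T                                # intercept c, coefficient matrix A

rng = np.random.default_rng(42)
A_true = np.array([[0.5, 0.1],
                   [-0.2, 0.3]])                        # stable (eigenvalues inside unit circle)
T, m = 4000, 2
Y = np.zeros((T, m))
for t in range(1, T):
    Y[t] = A_true @ Y[t - 1] + rng.normal(0, 0.1, m)    # simulate the process

c_hat, A_hat = fit_var1(Y)
```

With stationary (e.g. first-differenced) data and enough observations, the OLS estimates converge to the true dynamics; impulse responses and Granger-causality tests are then read off the fitted A matrices.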

Abstract:

We study the quantum dynamics of a two-mode Bose-Einstein condensate in a time-dependent symmetric double-well potential using analytical and numerical methods. The effects of internal degrees of freedom on the visibility of interference fringes during a stage of ballistic expansion are investigated varying particle number, nonlinear interaction sign and strength, as well as tunneling coupling. Expressions for the phase resolution are derived and the possible enhancement due to squeezing is discussed. In particular, the role of the superfluid-Mott insulator crossover and its analog for attractive interactions is recognized.
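The mean-field limit of a two-mode condensate in a double well reduces to the well-known Josephson-type equations for the population imbalance z and relative phase phi (dimensionless Smerzi-style two-mode model; parameters below are illustrative, and the full quantum dynamics studied in the paper goes beyond this limit).

```python
import math

def deriv(z, phi, lam):
    """Two-mode (Josephson) equations: lam is the scaled interaction strength."""
    dz = -math.sqrt(1 - z * z) * math.sin(phi)
    dphi = lam * z + z / math.sqrt(1 - z * z) * math.cos(phi)
    return dz, dphi

def rk4_step(z, phi, lam, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = deriv(z, phi, lam)
    k2 = deriv(z + dt * k1[0] / 2, phi + dt * k1[1] / 2, lam)
    k3 = deriv(z + dt * k2[0] / 2, phi + dt * k2[1] / 2, lam)
    k4 = deriv(z + dt * k3[0], phi + dt * k3[1], lam)
    z += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    phi += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return z, phi

def hamiltonian(z, phi, lam):
    """Conserved classical energy of the two-mode model."""
    return lam * z * z / 2 - math.sqrt(1 - z * z) * math.cos(phi)

lam, dt = 1.0, 0.001
z, phi = 0.3, 0.0                 # weak imbalance, below self-trapping
E0 = hamiltonian(z, phi, lam)
zs = []
for _ in range(10000):            # integrate to t = 10
    z, phi = rk4_step(z, phi, lam, dt)
    zs.append(z)
```

For this weak interaction the imbalance undergoes Josephson oscillations about zero; pushing lam far above the self-trapping threshold instead locks z to one sign, the attractive/repulsive analogue of the crossover the abstract discusses.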

Abstract:

Background: Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike’s information criterion using h-likelihood to select the best fitting model. Methods: We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike’s information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Results: Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. 
Using Akaike’s information criterion, the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for both micro- and macro-environmental sensitivities exists. Conclusion: The algorithm and model selection criterion presented here can contribute to a better understanding of the genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires, each with 100 offspring.
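The model-selection step can be illustrated with a toy version of the same logic: when residual variance genuinely differs between groups (micro-environmental sensitivity present), AIC favours the model with group-specific variances over a common-variance model. This is a plain maximum-likelihood sketch, not the paper's h-likelihood construction for double hierarchical GLMs.

```python
import math
import random

def gauss_loglik(xs, mu, var):
    """Gaussian log-likelihood of xs under mean mu and variance var."""
    n = len(xs)
    ss = sum((x - mu) ** 2 for x in xs)
    return -0.5 * n * math.log(2 * math.pi * var) - ss / (2 * var)

def aic(loglik, k):
    """Akaike's information criterion: 2k - 2 logL (lower is better)."""
    return 2 * k - 2 * loglik

def mle(xs):
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var

random.seed(7)
g1 = [random.gauss(0, 1.0) for _ in range(500)]   # low residual variance group
g2 = [random.gauss(0, 2.0) for _ in range(500)]   # high residual variance group

# Model 1: a single common residual variance (no variance heterogeneity).
mu_all, var_all = mle(g1 + g2)
aic1 = aic(gauss_loglik(g1 + g2, mu_all, var_all), k=2)

# Model 2: group-specific residual variances (heterogeneity present).
(m1, v1), (m2, v2) = mle(g1), mle(g2)
aic2 = aic(gauss_loglik(g1, m1, v1) + gauss_loglik(g2, m2, v2), k=4)
```

The two extra parameters of model 2 are penalized by AIC, but with genuine heteroscedasticity the likelihood gain dominates, so the heterogeneous model is selected, mirroring the simulation result in the abstract.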

Abstract:

Regardless of the technical procedure used in signalling corporate collapse, the bottom line rests on the predictive power of the corresponding statistical model. In that regard, it is imperative to empirically test the model using a data sample of both collapsed and non-collapsed companies. A superior model is one that successfully classifies collapsed and non-collapsed companies in their respective categories with a high degree of accuracy. Empirical studies of this nature have thus far done one of two things. (1) Some have classified companies based on a specific statistical modelling process. (2) Some have classified companies based on two (sometimes – but rarely – more than two) independent statistical modelling processes for the purposes of comparing one with the other. In the latter case, the mindset of the researchers has been – invariably – to pitch one procedure against the other. This paper raises the question, why pitch one statistical process against another; why not make the two procedures work together? As such, this paper puts forward an innovative dual-classification scheme for signalling corporate collapse: dual in the sense that it relies on two statistical procedures concurrently. Using a data sample of Australian publicly listed companies, the proposed scheme is tested against the traditional approach taken thus far in the pertinent literature. The results demonstrate that the proposed dual-classification scheme signals collapse with a higher degree of accuracy.
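One simple way to make two classification procedures "work together", sketched below on synthetic two-feature data, is to run both concurrently and signal collapse only when they agree. The two classifiers (nearest centroid and a small logistic regression) and the agreement rule are illustrative stand-ins; the paper's actual dual scheme and Australian company data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical financial-ratio data: collapsed firms (label 1) drawn
# from a shifted distribution relative to non-collapsed firms (label 0).
n = 200
X = np.vstack([rng.normal(0.0, 1.0, (n, 2)),
               rng.normal(2.5, 1.0, (n, 2))])
y = np.r_[np.zeros(n), np.ones(n)]

def nearest_centroid(X, y):
    """Procedure 1: assign each point to the nearer class mean."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return (d1 < d0).astype(int)

def logistic_regression(X, y, lr=0.1, iters=500):
    """Procedure 2: logistic regression fitted by gradient ascent."""
    Z = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Z.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Z @ w))
        w += lr * Z.T @ (y - p) / len(y)   # gradient of the log-likelihood
    return (1.0 / (1.0 + np.exp(-Z @ w)) > 0.5).astype(int)

pred_a = nearest_centroid(X, y)
pred_b = logistic_regression(X, y)
# Dual scheme (one possible rule): signal collapse only when both procedures agree.
pred_dual = ((pred_a == 1) & (pred_b == 1)).astype(int)
accuracy = float((pred_dual == y).mean())
```

An agreement rule trades a few missed collapses for fewer false alarms; out-of-sample evaluation on held-back companies would be needed to claim the accuracy gain the paper reports.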

Abstract:

The assessment of the direct and indirect requirements for energy is known as embodied energy analysis. For buildings, the direct energy includes that used primarily on site, while the indirect energy includes primarily the energy required for the manufacture of building materials. This thesis is concerned with the completeness and reliability of embodied energy analysis methods. Previous methods tend to address either one of these issues, but not both at the same time. Industry-based methods are incomplete. National statistical methods, while comprehensive, are a ‘black box’ and are subject to errors. A new hybrid embodied energy analysis method is derived to optimise the benefits of previous methods while minimising their flaws. In industry-based studies, known as ‘process analyses’, the energy embodied in a product is traced laboriously upstream by examining the inputs to each preceding process towards raw materials. Process analyses can be significantly incomplete, due to increasing complexity. The other major embodied energy analysis method, ‘input-output analysis’, comprises the use of national statistics. While the input-output framework is comprehensive, many inherent assumptions make the results unreliable. Hybrid analysis methods involve the combination of the two major embodied energy analysis methods discussed above, either based on process analysis or input-output analysis. The intention in both hybrid analysis methods is to reduce errors associated with the two major methods on which they are based. However, the problems inherent to each of the original methods tend to remain, to some degree, in the associated hybrid versions. Process-based hybrid analyses tend to be incomplete, due to the exclusions associated with the process analysis framework. However, input-output-based hybrid analyses tend to be unreliable because the substitution of process analysis data into the input-output framework causes unwanted indirect effects. 
A key deficiency in previous input-output-based hybrid analysis methods is that the input-output model is a ‘black box’, since important flows of goods and services with respect to the embodied energy of a sector cannot be readily identified. A new input-output-based hybrid analysis method was therefore developed, requiring the decomposition of the input-output model into mutually exclusive components (ie, ‘direct energy paths’). A direct energy path represents a discrete energy requirement, possibly occurring one or more transactions upstream from the process under consideration. For example, the energy required directly to manufacture the steel used in the construction of a building would represent a direct energy path of one non-energy transaction in length. A direct energy path comprises a ‘product quantity’ (for example, the total tonnes of cement used) and a ‘direct energy intensity’ (for example, the energy required directly for cement manufacture, per tonne). The input-output model was decomposed into direct energy paths for the ‘residential building construction’ sector. It was shown that 592 direct energy paths were required to describe 90% of the overall total energy intensity for ‘residential building construction’. By extracting direct energy paths using yet smaller threshold values, they were shown to be mutually exclusive. Consequently, the modification of direct energy paths using process analysis data does not cause unwanted indirect effects. A non-standard individual residential building was then selected to demonstrate the benefits of the new input-output-based hybrid analysis method in cases where the products of a sector may not be similar. Particular direct energy paths were modified with case specific process analysis data. Product quantities and direct energy intensities were derived and used to modify some of the direct energy paths. 
The intention of this demonstration was to determine whether 90% of the total embodied energy calculated for the building could comprise the process analysis data normally collected for the building. However, it was found that only 51% of the total comprised normally collected process analysis data. The integration of process analysis data with 90% of the direct energy paths by value was unsuccessful because:
• typically only one of the direct energy path components was modified using process analysis data (ie, either the product quantity or the direct energy intensity);
• of the complexity of the paths derived for ‘residential building construction’; and
• of the lack of reliable and consistent process analysis data from industry, for both product quantities and direct energy intensities.
While the input-output model used was the best available for Australia, many errors were likely to be carried through to the direct energy paths for ‘residential building construction’. Consequently, both the value and relative importance of the direct energy paths for ‘residential building construction’ were generally found to be a poor model for the demonstration building. This was expected. Nevertheless, in the absence of better data from industry, the input-output data is likely to remain the most appropriate for completing the framework of embodied energy analyses of many types of products—even in non-standard cases. ‘Residential building construction’ was one of the 22 most complex Australian economic sectors (ie, comprising those requiring between 592 and 3215 direct energy paths to describe 90% of their total energy intensities). Consequently, for the other 87 non-energy sectors of the Australian economy, the input-output-based hybrid analysis method is likely to produce more reliable results than those calculated for the demonstration building using the direct energy paths for ‘residential building construction’.
For more complex sectors than ‘residential building construction’, the new input-output-based hybrid analysis method derived here allows available process analysis data to be integrated with the input-output data in a comprehensive framework. The proportion of the result comprising the more reliable process analysis data can be calculated and used as a measure of the reliability of the result for that product or part of the product being analysed (for example, a building material or component). To ensure that future applications of the new input-output-based hybrid analysis method produce reliable results, new sources of process analysis data are required, including for such processes as services (for example, ‘banking’) and processes involving the transformation of basic materials into complex products (for example, steel and copper into an electric motor). However, even considering the limitations of the demonstration described above, the new input-output-based hybrid analysis method developed achieved the aim of the thesis: to develop a new embodied energy analysis method that allows reliable process analysis data to be integrated into the comprehensive, yet unreliable, input-output framework. Plain language summary Embodied energy analysis comprises the assessment of the direct and indirect energy requirements associated with a process. For example, the construction of a building requires the manufacture of steel structural members, and thus indirectly requires the energy used directly and indirectly in their manufacture. Embodied energy is an important measure of ecological sustainability because energy is used in virtually every human activity and many of these activities are interrelated. This thesis is concerned with the relationship between the completeness of embodied energy analysis methods and their reliability. However, previous industry-based methods, while reliable, are incomplete. 
Previous national statistical methods, while comprehensive, are a ‘black box’ subject to errors. A new method is derived, involving the decomposition of the comprehensive national statistical model into components that can be modified discretely using the more reliable industry data, and is demonstrated for an individual building. The demonstration failed to integrate enough industry data into the national statistical model, due to the unexpected complexity of the national statistical data and the lack of available industry data regarding energy and non-energy product requirements. These unique findings highlight the flaws in previous methods. Reliable process analysis and input-output data are required, particularly for those processes that were unable to be examined in the demonstration of the new embodied energy analysis method. This includes the energy requirements of services sectors, such as banking, and processes involving the transformation of basic materials into complex products, such as refrigerators. The application of the new method to less complex products, such as individual building materials or components, is likely to be more successful than to the residential building demonstration.
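The input-output arithmetic behind the thesis can be sketched on a toy economy: total energy intensities come from the Leontief inverse, f(I − A)⁻¹, and the "direct energy path" decomposition corresponds to the terms of the power series f + fA + fA² + …, i.e. energy required zero, one, two, … transactions upstream. The matrix and intensities below are invented illustrative numbers, not Australian data.

```python
import numpy as np

# Toy 3-sector technology matrix: A[i, j] = dollars of input i per dollar of output j.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.30],
              [0.05, 0.10, 0.10]])
# Hypothetical direct energy intensity of each sector (GJ per dollar of output).
f = np.array([2.0, 0.5, 1.0])

# Total energy intensity via the Leontief inverse: f (I - A)^-1.
total = f @ np.linalg.inv(np.eye(3) - A)

# The same total as a sum over upstream stages (the path decomposition):
# f counts on-site energy, f A the energy to make direct inputs, and so on.
series = np.zeros(3)
term = f.copy()
for _ in range(200):        # the series converges since the spectral radius of A < 1
    series += term
    term = term @ A

stage0 = f                  # energy used directly on site
stage1 = f @ A              # e.g. energy to manufacture the materials used
```

In the thesis, individual terms of this expansion (split further into product quantities and direct energy intensities) are the mutually exclusive components that can be replaced one by one with more reliable process analysis data.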

Abstract:

Latin American countries passed from predominantly rural to predominantly urban within a few decades. The level of urbanisation in Brazil progressed from 36% in 1950 to 50% in 1970, escalating to 85% in 2005. This rapid transformation resulted in many social problems, as cities were not able to provide appropriate housing and infrastructure for the growing population. In response, the Brazilian Ministry for Cities created, in 2005, the National System for Social Housing, with the goals of establishing guidelines at the Federal level and of building capacity for, and funding, social housing projects at the State and Local levels. This paper presents research developed in the city of Gramado, Brazil, as part of the Local Social Housing Plan process, with the goal of producing innovative tools to help social housing planning and management. It proposes and tests a methodology to locate and characterise/rank housing deficiencies across the city, combining GIS and fractal geometry analysis. Fractal measurements, such as fractal dimension and lacunarity, are able to differentiate urban morphology; integrated with infrastructure and socio-economic spatial indicators, they can be used to estimate housing problems and help to target, classify and schedule actions to improve housing in cities and regions. Gramado was divided into a grid of 1,000 cells. For each cell, the following indicators were measured: average household income, the percentage of road length that is paved (as a proxy for the availability of infrastructure such as water and sewage), and the fractal dimension and lacunarity of the spatial distribution of dwellings. A statistical model combining those measurements was produced using a sample of 10% of the cells, divided into five housing standards (from high-income/low-density dwellings to slum dwellings). The estimation of the location and level of social housing deficiencies across the whole region using the model achieved high correlations with the real situation. 
Simple, and based on easily accessible and inexpensive data, the method also helped to overcome the lack of information and the fragmented knowledge of local housing conditions among local professionals.
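The fractal dimension indicator used per grid cell is typically estimated by box counting: divide the cell into s × s boxes at several scales and regress log N(s) on log s, where N(s) is the number of occupied boxes. The sketch below verifies the estimator on two reference sets (a line and a filled square); it illustrates the measurement only, and lacunarity (the gap-structure statistic also used in the paper) is not implemented here.

```python
import numpy as np

def box_counting_dimension(points, sizes=(2, 4, 8, 16, 32)):
    """Box-counting dimension of a point set in the unit square.

    For each grid size s, count boxes containing at least one point,
    then take the least-squares slope of log N(s) against log s.
    """
    points = np.asarray(points)
    counts = []
    for s in sizes:
        boxes = np.floor(np.clip(points, 0, 1 - 1e-9) * s).astype(int)
        counts.append(len({tuple(b) for b in boxes}))
    return np.polyfit(np.log(sizes), np.log(counts), 1)[0]

# Dense diagonal line: expected dimension ~ 1.
t = np.linspace(0, 1, 20000)
line = np.column_stack([t, t])

# Dense regular filling of the square: expected dimension ~ 2.
g = (np.arange(256) + 0.5) / 256
square = np.array([(xi, yi) for xi in g for yi in g])

d_line = box_counting_dimension(line)
d_square = box_counting_dimension(square)
```

Applied to the dwelling footprints inside each of the 1,000 cells, intermediate dimensions between these two extremes discriminate sparse, irregular settlement patterns from compact formal urban fabric.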

Abstract:

We present a method for foreground/background separation of audio using a background modelling technique. The technique models the background in an online, unsupervised, and adaptive fashion, and is designed for application to long term surveillance and monitoring problems. The background is determined using a statistical method to model the states of the audio over time. In addition, three methods are used to increase the accuracy of background modelling in complex audio environments. Such environments can cause the failure of the statistical model to accurately capture the background states. An entropy-based approach is used to unify background representations fragmented over multiple states of the statistical model. The approach successfully unifies such background states, resulting in a more robust background model. We adaptively adjust the number of states considered background according to background complexity, resulting in the more accurate classification of background models. Finally, we use an auxiliary model cache to retain potential background states in the system. This prevents the deletion of such states due to a rapid influx of observed states that can occur for highly dynamic sections of the audio signal. The separation algorithm was successfully applied to a number of audio environments representing monitoring applications.
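The flavour of this kind of online, adaptive background modelling can be conveyed with a simplified multi-state model on a one-dimensional audio feature (e.g. frame energy), in the spirit of adaptive mixture background models. This is a stripped-down sketch: the paper's entropy-based state unification, adaptive state count, and auxiliary model cache are not implemented, and all thresholds are illustrative.

```python
class AudioBackgroundModel:
    """Online background/foreground labelling of a 1-D audio feature."""

    def __init__(self, alpha=0.05, match_sigmas=2.5, bg_weight=0.7):
        self.states = []            # each state: [weight, mean, variance]
        self.alpha = alpha          # learning rate for the online updates
        self.match_sigmas = match_sigmas
        self.bg_weight = bg_weight  # cumulative weight treated as background

    def update(self, x):
        """Absorb one observation; return True if it matches the background."""
        a = self.alpha
        matched = None
        for st in self.states:      # match against an existing state first
            if abs(x - st[1]) <= self.match_sigmas * st[2] ** 0.5:
                matched = st
                break
        for st in self.states:
            st[0] *= (1 - a)        # decay all state weights
        if matched is None:
            self.states.append([a, x, 1.0])   # unseen sound: spawn a new state
            matched = self.states[-1]
        else:                        # exponentially weighted state update
            matched[0] += a
            matched[1] += a * (x - matched[1])
            matched[2] += a * ((x - matched[1]) ** 2 - matched[2])
            matched[2] = max(matched[2], 1e-4)
        total = sum(st[0] for st in self.states)
        # Background = the highest-weight states covering bg_weight of total weight.
        ranked = sorted(self.states, key=lambda st: -st[0])
        cum, background = 0.0, []
        for st in ranked:
            background.append(st)
            cum += st[0] / total
            if cum >= self.bg_weight:
                break
        return matched in background

model = AudioBackgroundModel()
# Quiet, persistent hum near 0.0 with two loud transient events at ~10 (synthetic).
stream = [0.0, 0.1, -0.1, 0.05] * 50 + [10.0, 10.2] + [0.0, 0.1] * 10
labels = [model.update(x) for x in stream]
```

After a warm-up period the persistent hum is absorbed into a dominant background state, while the transient spikes spawn low-weight states and are labelled foreground, which is the basic separation behaviour the full system refines.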

Abstract:

Species that have temperature-dependent sex determination (TSD) often produce highly skewed offspring sex ratios, contrary to long-standing theoretical predictions. This ecological enigma has provoked concern that climate change may induce the production of single-sex generations and hence lead to population extirpation. All species of sea turtles exhibit TSD, many are already endangered, and most already produce sex ratios skewed toward the sex produced at warmer temperatures (females). We tracked male loggerhead turtles (Caretta caretta) from Zakynthos, Greece, throughout the entire interval between successive breeding seasons and identified individuals on their breeding grounds, using photoidentification, to determine breeding periodicity and operational sex ratios. Males returned to breed at least twice as frequently as females. We estimated that the hatchling sex ratio of 70:30 female to male for this rookery will translate into an overall operational sex ratio (OSR) (i.e., the ratio of the total numbers of males and females breeding each year) of close to 50:50 female to male. We followed three male turtles for between 10 and 12 months, during which time they all traveled back to the breeding grounds. Flipper tagging revealed that the proportions of females returning to nest after intervals of 1, 2, 3, and 4 years were 0.21, 0.38, 0.29, and 0.12, respectively (mean interval 2.3 years). A further nine male turtles were tracked for short periods to determine their departure dates from the breeding grounds. These departure dates were combined with a photoidentification data set of 165 individuals identified on in-water transect surveys at the start of the breeding season to develop a statistical model of the population dynamics. This model produced a maximum likelihood estimate that males visit the breeding site 2.6 times more often than females (95% CI: 2.1, 3.1), which was consistent with the data from satellite tracking and flipper tagging. 
Increased frequency of male breeding will help ameliorate female-biased hatchling sex ratios. Combined with the ability of males to fertilize the eggs of many females and for females to store sperm to fertilize many clutches, our results imply that effects of climate change on the viability of sea turtle populations are likely to be less acute than previously suspected.
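The arithmetic linking the hatchling ratio to the operational sex ratio is worth making explicit: weighting each sex by how often it breeds, a 70:30 female-biased hatchling ratio combined with males breeding about 2.6 times as often yields an OSR of roughly 47:53, i.e. close to 50:50 as the abstract states. A small check using the study's reported numbers:

```python
def operational_sex_ratio(female_frac, male_breeding_multiple):
    """Fraction of the annual breeding population that is female, when males
    return to breed `male_breeding_multiple` times as often as females."""
    f = female_frac
    effective_males = (1.0 - female_frac) * male_breeding_multiple
    return f / (f + effective_males)

# Reported values: 70:30 hatchling sex ratio, males breeding ~2.6x as often.
osr_female = operational_sex_ratio(0.70, 2.6)   # 0.70 / (0.70 + 0.78)
```

So the breeding-frequency difference alone nearly cancels the hatchling-level female bias, which is the quantitative core of the paper's argument about climate-change impacts.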