939 results for Time-variable gravity
Abstract:
A new 3D implementation of a hybrid model based on the analogy with two-phase hydrodynamics has been developed for the simulation of liquids at the microscale. The idea of the method is to smoothly combine the atomistic description in the molecular dynamics zone with the Landau-Lifshitz fluctuating hydrodynamics representation in the rest of the system, within the framework of macroscopic conservation laws, through the use of a single "zoom-in" user-defined function s, which has the meaning of a partial concentration in the two-phase analogy model. In comparison with our previous works, the implementation has been extended to full 3D simulations for a range of atomistic models in GROMACS, from argon to water, under equilibrium conditions with a constant or spatially variable function s. Preliminary results of simulating the diffusion of a small peptide in water are also reported.
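The abstract does not reproduce the coupling equations, but the role of the "zoom-in" function s can be illustrated with a minimal sketch: a hypothetical one-dimensional blend in which each cell's flux is a convex combination of the atomistic (MD) and fluctuating-hydrodynamics (FH) estimates, weighted by s(x). The function names, the linear ramp for s, and the random stand-in fluxes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def zoom_in_s(x, x_center, r_md, ramp):
    """Illustrative 'zoom-in' weight s(x): 1 inside the MD zone
    (|x - x_center| < r_md), 0 far from it, with a linear ramp between."""
    d = np.abs(x - x_center)
    return np.clip((r_md + ramp - d) / ramp, 0.0, 1.0)

def blended_flux(flux_md, flux_fh, s):
    """Convex combination of the MD and FH flux estimates; in the
    two-phase analogy, s acts as the partial concentration of the
    'atomistic phase' in each cell."""
    return s * flux_md + (1.0 - s) * flux_fh

# Toy usage on a 1D grid of cells.
x = np.linspace(-5.0, 5.0, 101)
s = zoom_in_s(x, x_center=0.0, r_md=1.0, ramp=2.0)
rng = np.random.default_rng(0)
flux_md = rng.normal(0.0, 0.1, x.size)  # stand-in for per-cell MD fluxes
flux_fh = rng.normal(0.0, 0.1, x.size)  # stand-in for per-cell FH fluxes
total = blended_flux(flux_md, flux_fh, s)
```

A constant s recovers a fixed mixing ratio everywhere, while a spatially variable s, as in the abstract, concentrates atomistic detail in the region of interest.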
Abstract:
This paper presents a Variable Neighbourhood Search (VNS) approach for solving the Maximum Set Splitting Problem (MSSP). The algorithm forms a system of neighbourhoods based on changing the component of an increasing number of elements. An efficient local search procedure swaps the components of pairs of elements and yields relatively short running times. Numerical experiments are performed on instances known in the literature: minimum hitting set and Steiner triple systems. Computational results show that the proposed VNS achieves all optimal or best known solutions in short time. The experiments indicate that the VNS compares favorably with other methods previously used for solving the MSSP. ACM Computing Classification System (1998): I.2.8.
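For readers unfamiliar with the MSSP, the following sketch (hypothetical, not the authors' code) shows the objective being optimised and the kind of pair-swap move the local search uses: elements carry a binary component label, a set counts as split when it contains elements of both components, and a move exchanges the labels of two elements from opposite components.

```python
from itertools import product

def split_count(sets, label):
    """Objective: the number of sets containing elements of both components."""
    return sum(
        1 for s in sets
        if any(label[e] == 0 for e in s) and any(label[e] == 1 for e in s)
    )

def pair_swap_local_search(sets, label):
    """First-improvement local search: swap the component labels of a pair
    of elements from opposite components for as long as that helps."""
    best = split_count(sets, label)
    improved = True
    while improved:
        improved = False
        zeros = [e for e in label if label[e] == 0]
        ones = [e for e in label if label[e] == 1]
        for a, b in product(zeros, ones):
            label[a], label[b] = 1, 0        # tentative swap
            value = split_count(sets, label)
            if value > best:
                best, improved = value, True
                break                        # rescan from the new solution
            label[a], label[b] = 0, 1        # undo
    return label, best

# Toy instance: 4 elements, 3 sets.
sets = [{1, 2}, {2, 3}, {1, 3, 4}]
label = {1: 0, 2: 0, 3: 1, 4: 1}
label, best = pair_swap_local_search(sets, label)
```

The VNS layer described in the abstract would sit on top of such a local search, perturbing the labels of an increasing number of elements between descents.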
Abstract:
In this paper a Variable Neighborhood Search (VNS) algorithm for solving the Capacitated Single Allocation Hub Location Problem (CSAHLP) is presented. CSAHLP consists of two subproblems: the first is choosing a set of hubs from all nodes in a network, while the second is finding the optimal allocation of non-hubs to hubs once a set of hubs is known. The VNS algorithm was used for the first subproblem, while the CPLEX solver was used for the second. Computational results demonstrate that the proposed algorithm reached optimal solutions, in short computational time, on all 20 test instances for which optimal solutions are known.
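The abstract names the metaheuristic but not its structure; a generic VNS skeleton of the kind typically paired with an exact solver for the inner subproblem looks as follows. This is a sketch under standard assumptions (shake in neighbourhood k, descend, move or enlarge k); in the paper's setting, evaluate would stand in for the CPLEX allocation subproblem.

```python
import random

def vns(initial, k_max, shake, local_search, evaluate, iterations=100):
    """Generic Variable Neighborhood Search skeleton: shake the incumbent
    in neighborhood k, apply local search, accept improvements (resetting
    k to 1), otherwise move to the next neighborhood."""
    best, best_val = initial, evaluate(initial)
    for _ in range(iterations):
        k = 1
        while k <= k_max:
            candidate = local_search(shake(best, k))
            value = evaluate(candidate)
            if value < best_val:             # minimisation
                best, best_val = candidate, value
                k = 1
            else:
                k += 1
    return best, best_val

# Toy usage: minimise the number of ones in a bit vector.
random.seed(0)

def shake(sol, k):
    out = sol[:]
    for i in random.sample(range(len(out)), k):
        out[i] ^= 1                          # flip k random bits
    return out

def local_search(sol):
    return [0 for _ in sol]                  # trivial descent for the toy objective

best, value = vns([1, 0, 1, 1], k_max=3, shake=shake,
                  local_search=local_search, evaluate=sum)
```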
Abstract:
This research focuses on automatically adapting a search engine's size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computing resources to or from the engine. Our contribution is an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three aspects, broken out into three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP), and the Regrouping Order Problem (ROP). CNP is the problem of determining, in light of changes in the query workload, the ideal number of processors p to keep active in the search engine at any given time. NGP arises once a change in the number of processors has been decided: it must then be determined which groups of search data will be distributed across the processors. ROP is the problem of redistributing this data onto the processors while keeping the engine responsive and minimising both the switchover time and the incurred network load. We propose solutions for each of these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP, we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, compared with computing the index from scratch, the incremental NGP algorithm speeds up index computation 2-10 times while maintaining similar search performance. The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine the new size of the search engine. Combined, these algorithms yield an adaptive algorithm that adjusts the search engine size under a variable workload.
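The abstract does not specify the CNP algorithm; purely as an illustration, a deterministic threshold controller of the following shape captures the idea of re-evaluating load and choosing a new processor count. The utilisation thresholds and the capacity-per-processor figure are invented for the example.

```python
import math

def choose_processor_count(query_rate, per_proc_capacity, current,
                           low=0.4, high=0.8, min_procs=1, max_procs=64):
    """Deterministic sizing rule (illustrative): resize only when the
    utilisation of the active processors leaves the [low, high] band,
    then target the middle of the band to damp oscillation."""
    utilisation = query_rate / (current * per_proc_capacity)
    if low <= utilisation <= high:
        return current                       # load is acceptable; no resize
    target = query_rate / (per_proc_capacity * (low + high) / 2)
    return max(min_procs, min(max_procs, math.ceil(target)))

# e.g. 1200 queries/s at 100 queries/s per processor, currently 8 active
new_p = choose_processor_count(1200, 100, current=8)   # -> 20
```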
Abstract:
Chronic bronchopulmonary bacterial infections remain the most common cause of morbidity and mortality among patients with cystic fibrosis (CF). Recent community sequencing work has shown that the bacterial community in the CF lung is polymicrobial. Identifying bacteria in the CF lung through sequencing can be costly and is not practical for many laboratories. Molecular techniques such as terminal restriction fragment length polymorphism or amplicon length heterogeneity-polymerase chain reaction (LH-PCR) can give many laboratories the ability to study CF bacterial communities without costly sequencing. The aim of this study was to determine whether LH-PCR with multiple hypervariable regions of the 16S rRNA gene could be used to identify organisms found in sputum DNA, and whether LH-PCR could be used to observe the dynamics of lung infections over time. Nineteen samples were analysed with the V1 and V1_V2 regions of the 16S rRNA gene. Based on the amplicon size present in the V1_V2 region, Pseudomonas aeruginosa was confirmed to be present in all 19 patient samples. The V1 region provided a higher power of discrimination between the bacterial profiles of patients. Both regions were able to identify trends in the bacterial population over time. LH profiles showed that the CF lung community is dynamic and that changes in the community may in part be driven by the patient's antibiotic treatment. LH-PCR is a tool that is well suited for studying bacterial communities and their dynamics.
Abstract:
This study investigated time-use of elementary music teachers and elementary classroom teachers to determine: (1) whether there was a relationship between grade level, time of day, and day of the week and teachers' time-use in teaching, monitoring, and non-curricular activities, and (2) whether ethnicity, training, and years of experience affect teacher time-use. Sixty-nine music teachers and 55 classroom teachers participated. A MANOVA was used to examine the hypothesized relationship. ANOVA results were significant for time spent teaching, monitoring, and non-curricular activities. An independent t test revealed a significant difference between the two groups of teachers for teaching (t(302) = 5.20, p < .001): music teachers spent more time actively teaching than did classroom teachers. There was a significant difference for monitoring (t(302) = 13.62, p < .001): classroom teachers allocated more time to monitoring than did music teachers. A significant difference was also found for non-curricular activities (t(302) = 7.03, p < .001): music teachers spent more time in this category of activities than did classroom teachers. Analyses of the activities subsumed under the major categories indicated significant differences between elementary music teachers and elementary classroom teachers, overall, in subject matter (p < .001), discussion (p < .05), school-wide activities (p < .001), seatwork (p < .001), giving directions (p < .001), changing activities (p < .001), lunch (p < .05), planning (p < .001), and interruptions (p < .001). Analyses of the effects of ethnicity, training, degree, and experience indicated a significant main effect of ethnicity (F(2, 116) = 4.22, p < .017): time-use for black non-Hispanic teachers was higher than time-use for Hispanic and white non-Hispanic teachers. Analyses of time-use by grade showed no increase for either group as grade level increased. A statistically significant Wilks's Lambda (F(1, 294) = .917, p < .013) was found for the independent variable day of the week. ANOVA indicated that elementary classroom teachers monitored more on Thursdays and Fridays; music teachers allocated more time to non-curricular activities on Fridays.
Abstract:
The standard highway assignment model in the Florida Standard Urban Transportation Modeling Structure (FSUTMS) is based on the equilibrium traffic assignment method. This method involves running several iterations of all-or-nothing capacity-restraint assignment with an adjustment of travel time to reflect the delays encountered in the associated iteration. The iterative link time adjustment process is accomplished through the Bureau of Public Roads (BPR) volume-delay equation. Since FSUTMS' traffic assignment procedure outputs daily volumes and the input capacities are given in hourly volumes, it is necessary to convert the hourly capacities to their daily equivalents when computing the volume-to-capacity ratios used in the BPR function. The conversion is accomplished by dividing the hourly capacity by a factor called the peak-to-daily ratio, referred to as CONFAC in FSUTMS. The ratio is computed as the highest hourly volume of a day divided by the corresponding total daily volume. While several studies have indicated that CONFAC is a decreasing function of the level of congestion, a constant value is used for each facility type in the current version of FSUTMS. This ignores the different congestion level associated with each roadway and is believed to be one of the culprits of traffic assignment errors. Traffic count data from across the state of Florida were used to calibrate CONFACs as a function of a congestion measure using the weighted least squares method. The calibrated functions were then implemented in FSUTMS through a procedure that takes advantage of the iterative nature of FSUTMS' equilibrium assignment method. The assignment results based on constant and variable CONFACs were then compared against ground counts for three selected networks. It was found that the accuracy of the two assignments was not significantly different, and that the hypothesized improvement in assignment results from the variable CONFAC model was therefore not empirically evident. It was recognized that many other factors beyond the scope and control of this study could contribute to this finding. It was recommended that further studies focus on the use of the variable CONFAC model with recalibrated parameters for the BPR function and/or with other forms of volume-delay functions.
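The BPR volume-delay relation referred to above is standard; written together with the CONFAC conversion the abstract describes, it takes the following form. The coefficient values α = 0.15 and β = 4 are the common defaults, quoted here for illustration rather than taken from this study.

```latex
% BPR volume-delay function, with the daily capacity obtained from the
% hourly capacity via CONFAC (peak hourly volume / total daily volume).
\[
  t \;=\; t_0 \left[ 1 + \alpha \left( \frac{V_{\mathrm{daily}}}{C_{\mathrm{daily}}} \right)^{\beta} \right],
  \qquad
  C_{\mathrm{daily}} \;=\; \frac{C_{\mathrm{hourly}}}{\mathrm{CONFAC}},
  \qquad
  \alpha = 0.15,\; \beta = 4 \ \text{(common defaults)}.
\]
```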
Abstract:
Space-for-time substitution is often used in predictive models because long-term time-series data are not available. Critics of this method suggest factors other than the target driver may affect ecosystem response and could vary spatially, producing misleading results. Monitoring data from the Florida Everglades were used to test whether spatial data can be substituted for temporal data in forecasting models. Spatial models that predicted bluefin killifish (Lucania goodei) population response to a drying event performed comparably and sometimes better than temporal models. Models worked best when results were not extrapolated beyond the range of variation encompassed by the original dataset. These results were compared to other studies to determine whether ecosystem features influence whether space-for-time substitution is feasible. Taken in the context of other studies, these results suggest space-for-time substitution may work best in ecosystems with low beta-diversity, high connectivity between sites, and small lag in organismal response to the driver variable.
Abstract:
The Last Interglacial (LIG, 129-116 thousand years before present, ka) represents a test bed for climate model feedbacks in warmer-than-present high-latitude regions. However, mainly because aligning palaeoclimatic archives of different types and from different parts of the world is not trivial, a spatio-temporal picture of LIG temperature changes is difficult to obtain. Here, we have selected 47 polar ice core and sub-polar marine sediment records and developed a strategy to align them onto the recent AICC2012 ice core chronology. We provide the first compilation of high-latitude temperature changes across the LIG associated with a coherent temporal framework built between ice core and marine sediment records. Our new data synthesis highlights non-synchronous maximum temperature changes between the two hemispheres, with the Southern Ocean and Antarctic records showing an early warming compared to North Atlantic records. We also observe that warmer-than-present-day conditions persisted for a longer period in the southern high latitudes than in the northern high latitudes. Finally, the amplitude of the temperature changes recorded at the onset and demise of the LIG is larger at high northern latitudes than at high southern latitudes. We have also compiled four data-based time slices of temperature anomalies (relative to present-day conditions) at 115 ka, 120 ka, 125 ka, and 130 ka, and quantitatively estimated temperature uncertainties that include relative dating errors. This provides an improved benchmark for more robust model-data comparisons. The surface temperature simulated by two General Circulation Models (CCSM3 and HadCM3) for 130 ka and 125 ka is compared to the corresponding time-slice data synthesis. This comparison shows that the models predict warmer-than-present conditions earlier than documented in the North Atlantic, while neither model is able to reproduce the reconstructed early Southern Ocean and Antarctic warming. Our results highlight the importance of producing a sequence of time slices rather than a single time slice averaging the LIG climate conditions.
Abstract:
The South Pacific is a sensitive location for the variability of the global oceanic thermohaline circulation, given that deep waters from the Atlantic Ocean, the Southern Ocean, and the Pacific Basin are exchanged there. Here we reconstruct the deep water circulation of the central South Pacific for the last two glacial cycles (from 240,000 years ago to the Holocene) based on radiogenic neodymium (Nd) and lead (Pb) isotope records, complemented by benthic stable carbon isotope data, obtained from two sediment cores located on the flanks of the East Pacific Rise. The records show small but consistent glacial/interglacial changes in all three isotopic systems, with interglacial average values of -5.8 and 18.757 for εNd and 206Pb/204Pb, respectively, whereas the glacial averages are -5.3 and 18.744. Comparison of this variability of Circumpolar Deep Water (CDW) with previously published records along the pathway of the global thermohaline circulation is consistent with reduced admixture of North Atlantic Deep Water to CDW during cold stages. The absolute values and amplitudes of the benthic δ13C variations are essentially indistinguishable from other records of the Southern Hemisphere and confirm that the low central South Pacific sedimentation rates did not result in a significant reduction of the amplitude of any of the measured proxies. In addition, the combined detrital Nd and strontium (87Sr/86Sr) isotope signatures imply that Australian and New Zealand dust has remained the principal contributor of lithogenic material to the central South Pacific.
Abstract:
This study contributes to the literature on gravity analysis by explicitly incorporating both most favored nation (MFN) rates and regional trade agreement (RTA) rates. Our gravity equation accounts for the fact that not all exporters utilize RTA schemes, even when exporting to their RTA partners. We apply tariff line-level data on worldwide trade to this gravity equation. As a result, we find a significantly negative coefficient for the (log) ratio of RTA rates to MFN rates. From a quantitative point of view, we show that in the first year of the Japan-Australia Economic Partnership (i.e., 2015), exports from Australia to Japan are expected to increase by 6% compared with exports in 2014. Furthermore, we show that, owing to the subsequent reduction in RTA rates, the magnitude of the trade-creation effect of the tariff reductions gradually rises over time.
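The abstract does not reproduce the estimating equation; an illustrative gravity specification consistent with its description would include the log ratio of applied tariff factors, with the usual gravity controls absorbed into fixed effects. The notation below is hypothetical, not the authors' exact specification.

```latex
% X_{ijk}: exports of product k from country i to country j;
% t^{RTA}, t^{MFN}: applied preferential and MFN tariff rates;
% the delta terms are fixed effects absorbing standard gravity controls.
% The abstract reports a significantly negative estimate of beta.
\[
  \ln X_{ijk}
    \;=\; \beta \,
      \ln\!\left( \frac{1 + t^{\mathrm{RTA}}_{ijk}}{1 + t^{\mathrm{MFN}}_{jk}} \right)
      \;+\; \delta_{ik} + \delta_{jk} + \delta_{ij} + \varepsilon_{ijk}
\]
```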
Abstract:
Oxygen and carbon isotope measurements were carried out on tests of the planktic foraminifer N. pachyderma (sin.) from eight sediment cores taken from the eastern Arctic Ocean, the Fram Strait, and the Iceland Sea, in order to reconstruct Arctic Ocean and Norwegian-Greenland Sea circulation patterns and ice covers during the last 130,000 years. In addition, the influence of ice, temperature, and salinity effects on the isotopic signal was quantified. Isotope measurements on foraminifers from sediment surface samples were used to elucidate the ecology of N. pachyderma (sin.). Changes in the oxygen and carbon isotope composition of N. pachyderma (sin.) from sediment surface samples document the horizontal and vertical changes of water mass boundaries controlled by water temperature and salinity, because N. pachyderma (sin.) shows drastic changes in depth habitat depending on the water mass properties. It could be shown that a regional and spatial apparent increase of the ice effect occurred in the investigated areas, especially during Termination I through direct advection of meltwater from nearby continents, and during terminations and interglacials through the supply of isotopically light river water. A northward-proceeding overprint of the 'global' ice effect, increasing from the Norwegian-Greenland Sea to the Arctic Ocean, could not be demonstrated. By means of a model, the influence of temperature and salinity on the global ice volume signal during the last 130,000 years was assessed. In combination with the results of this study, the model was the basis for a reconstruction of the paleoceanographic development of the Arctic Ocean and the Norwegian-Greenland Sea during this time interval. The conception of a relatively thick and permanent sea ice cover in the Nordic Seas during glacial times should be replaced by the model of a seasonally and regionally highly variable ice cover. Only during isotope stage 5e may local deep water formation have occurred in the Fram Strait.
Abstract:
The inorganic silicate fraction extracted from bulk pelagic sediments from the North Pacific Ocean is eolian dust. It monitors the composition of continental crust exposed to erosion in Asia. 176Lu/177Hf ratios of modern dust are subchondritic, between 0.011 and 0.016, but slightly elevated with respect to immature sediments. Modern dust samples display a large range in Hf isotopic composition (IC), -4.70 < εHf < +16.45, which encompasses that observed for the time series of DSDP cores 885/886 and piston core LL44-GPC3 extending back to the late Cretaceous. Hafnium and neodymium isotopic results are consistent with a dominantly binary mixture of dust contributed from island arc volcanic material and dust from central Asia. The Hf-Nd isotopic correlation for all modern dust samples, εHf = 0.78 εNd + 5.66 (n = 22, R**2 = 0.79), is flatter than those reported so far for terrestrial reservoirs. Moreover, the variability in εHf of Asian dust exceeds that predicted on the basis of the corresponding εNd values (-4.76 < εHf < +2.5; -10.96 < εNd < -10.1). This is attributed to: (1) the fixing of an important unradiogenic fraction of Hf in zircons, balanced by radiogenic Hf that is mobile in the erosional cycle, (2) the elevated Lu/Hf ratio in chemical sediments which, given time, results in a Hf signature that is radiogenic compared with the Hf expected from its corresponding Nd isotopic components, and (3) the possibility that diagenetic resetting of marine sediments may incorporate a significant radiogenic Hf component into diagenetically grown minerals such as illite. Together, these processes may explain the variability and more radiogenic character of Hf isotopes when compared to the Nd isotopic signatures of Asian dust. The Hf-Nd isotope time series of eolian dust are consistent with the results for modern dust except for two samples that have extremely radiogenic Hf for their Nd (εHf = +8.6 and +10.3, εNd = -9.5 and -9.8). These data may point to a source contribution of dust unresolved by Nd and Pb isotopes. The Hf IC of eolian dust input to the oceans may be more variable and more radiogenic than previously anticipated. The Hf signature of Pacific seawater, however, has varied little over the past 20 Myr, especially across the drastic increase of the eolian dust flux from Asia around 3.5 Ma. Therefore, continental contributions to seawater Hf appear to be riverine rather than eolian. Current predictions regarding the relative proportions of source components to seawater Hf must account for the presence of a variable and radiogenic continental component. Data on the IC and flux of river-dissolved Hf to the oceans are urgently required to better estimate contributions to seawater Hf. This would then permit the use of Hf isotopes as a monitor of past changes in erosion.
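For reference, the epsilon notation used in this and the preceding abstracts expresses the deviation of a sample's isotope ratio from the chondritic uniform reservoir (CHUR) in parts per ten thousand; for hafnium:

```latex
% Epsilon notation for radiogenic isotopes, shown for Hf; epsilon-Nd is
% defined analogously with the 143Nd/144Nd ratio.
\[
  \varepsilon_{\mathrm{Hf}}
    \;=\; \left(
        \frac{\left(^{176}\mathrm{Hf}/^{177}\mathrm{Hf}\right)_{\mathrm{sample}}}
             {\left(^{176}\mathrm{Hf}/^{177}\mathrm{Hf}\right)_{\mathrm{CHUR}}}
        \;-\; 1
      \right) \times 10^{4}
\]
```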