917 results for "Sum of logistics"
Abstract:
Motivated by a recent claim by Muller et al. (2010 Nature 463 926-9) that an atom interferometer can serve as an atom clock to measure the gravitational redshift with unprecedented accuracy, we provide a representation-free description of the Kasevich-Chu interferometer based on operator algebra. We use this framework to show that the operator product determining the number of atoms at the exit ports of the interferometer is a c-number phase factor whose phase is the sum of only two phases: one due to the acceleration of the phases of the laser pulses, and the other due to the acceleration of the atom. This formulation brings out most clearly that this interferometer is an accelerometer or a gravimeter. Moreover, we point out that in different representations of quantum mechanics, such as the position or the momentum representation, the phase shift appears to originate from different physical phenomena. Because of this representation dependence, conclusions concerning an enhanced accuracy derived in a specific representation are unfounded.
Abstract:
Some of the biggest challenges for intermodal transport competitiveness are the extra handling costs and pre- and post-haulage costs. This paper investigates the use of Intermodal High Capacity Transport (IHCT) in the intermodal transport chain in general and in pre- and post-haulage in particular. The aim is not only to measure the cost reductions from using larger vehicles but also to understand how better management of inbound flows, through increased integration of logistics processes, can increase the efficiency of the last mile. The paper analyses the simultaneous haulage of two 40-foot containers as part of an intermodal transport chain. Data were collected from a demonstration project in Sweden, where permission was obtained to use longer vehicles on an approved route to and from the nearest intermodal terminal. Results indicate substantial cost savings from using longer vehicles for pre- and post-haulage. In addition, the business model, whereby the shipper purchased its own chassis and permission was obtained to access the terminal after hours to collect pre-loaded chassis, brought additional cost and planning benefits. The total cost saving was significant and potentially eliminates the cost deficit associated with the last mile.
Abstract:
Life Cycle Climate Performance (LCCP) is an evaluation method by which heating, ventilation, air conditioning and refrigeration systems can be evaluated for their global warming impact over the course of their complete life cycle. LCCP is more inclusive than previous metrics such as the Total Equivalent Warming Impact. It is calculated as the sum of direct and indirect emissions generated over the lifetime of the system “from cradle to grave”. Direct emissions include all effects from the release of refrigerants into the atmosphere during the lifetime of the system, including annual leakage and losses during the disposal of the unit. The indirect emissions include emissions from the energy consumption during the manufacturing process, lifetime operation, and disposal of the system. This thesis proposes a standardized approach to the use of LCCP and traceable data sources for all aspects of the calculation. An equation is proposed that unifies the efforts of previous researchers. Data sources are recommended for average values for all LCCP inputs. A residential heat pump sample problem is presented illustrating the methodology. The heat pump is evaluated at five U.S. locations in different climate zones. An Excel tool was developed for residential heat pumps using the proposed method. The primary factor in the LCCP calculation is the energy consumption of the system. The effects of advanced vapor compression cycles are then investigated for heat pump applications. Advanced cycle options attempt to reduce the energy consumption in various ways. There are three categories of advanced cycle options: subcooling cycles, expansion loss recovery cycles and multi-stage cycles. The cycles selected for study are the suction line heat exchanger cycle, the expander cycle, the ejector cycle, and the vapor injection cycle. The cycles are modeled using Engineering Equation Solver and the results are applied to the LCCP methodology.
The expander cycle, ejector cycle and vapor injection cycle are effective in reducing the LCCP of a residential heat pump by 5.6%, 8.2% and 10.5%, respectively, in Phoenix, AZ. The advanced cycles are also evaluated with low-GWP refrigerants and are capable of reducing the LCCP of a residential heat pump by 13.7%, 16.3% and 18.6% using a refrigerant with a GWP of 10. To meet the U.S. Department of Energy's goal of reducing residential energy use by 40% by 2025, with a proportional reduction in all other categories of residential energy consumption, a reduction in the energy consumption of a residential heat pump of 34.8% with a refrigerant GWP of 10 is necessary for Phoenix, AZ. A combination of advanced cycles, control options and low-GWP refrigerants is necessary to meet this goal.
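The sum described above (direct refrigerant emissions plus indirect energy-related emissions) can be sketched as follows. This is a minimal illustration with hypothetical parameter names, not the standardized equation or Excel tool from the thesis:

```python
def lccp(charge_kg, annual_leak_rate, eol_loss_rate, lifetime_yr, gwp,
         annual_energy_kwh, grid_co2_kg_per_kwh, manufacturing_co2_kg):
    """Simplified LCCP sketch (kg CO2-equivalent); names are illustrative."""
    # Direct: refrigerant released through annual leakage over the lifetime
    # plus the loss at end-of-life disposal, weighted by its GWP.
    direct = (charge_kg * annual_leak_rate * lifetime_yr
              + charge_kg * eol_loss_rate) * gwp
    # Indirect: emissions from lifetime energy consumption plus
    # manufacturing/disposal of the system itself.
    indirect = (annual_energy_kwh * lifetime_yr * grid_co2_kg_per_kwh
                + manufacturing_co2_kg)
    return direct + indirect
```

Because the indirect term usually dominates, reducing annual energy consumption (as the advanced cycles do) has the largest effect on the total.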
Abstract:
The time-mean Argo float displacements and the World Ocean Atlas 2009 temperature–salinity climatology are used to obtain the total, top-to-bottom, mass transports. Outside of an equatorial band, the total transports are the sum of the vertical integrals of the geostrophic and wind-driven Ekman currents. However, these transports are generally divergent, and to obtain a mass-conserving circulation, a Poisson equation is solved for the streamfunction with Dirichlet boundary conditions at solid boundaries. The value of the streamfunction on islands is also part of the unknowns. This study presents and discusses an energetic circulation in three basins: the North Atlantic, the North Pacific, and the Southern Ocean. This global method leads to new estimates of the time-mean western Eulerian boundary current transport maxima of 97 Sverdrups (Sv; 1 Sv ≡ 10⁶ m³ s⁻¹) at 60°W for the Gulf Stream, 84 Sv at 157°E for the Kuroshio, 80 Sv for the Agulhas Current between 32° and 36°S, and finally 175 Sv for the Antarctic Circumpolar Current at Drake Passage. Although the large-scale structure and boundaries of the interior gyres are well predicted by the Sverdrup relation, the transports derived from the wind stress curl are lower than the observed transports in the interior by roughly a factor of 2, suggesting an important contribution of the bottom torques. With additional Argo displacement data, the errors caused by the presence of remaining transient terms at the 1000-db reference level will continue to decrease, allowing this method to produce increasingly accurate results in the future.
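The mass-conserving step (a Poisson solve for the streamfunction with Dirichlet boundary conditions) can be illustrated with a minimal sketch: a Gauss-Seidel iteration on a unit square with ψ = 0 on the solid boundary. The grid, forcing, and solver choice are illustrative, not those of the study:

```python
import math

def solve_poisson(rhs, n, iters=2000):
    """Gauss-Seidel solve of  laplacian(psi) = rhs  on an (n+1)x(n+1) grid
    over the unit square, Dirichlet psi = 0 on the boundary (h = 1/n)."""
    h = 1.0 / n
    psi = [[0.0] * (n + 1) for _ in range(n + 1)]
    for _ in range(iters):
        for i in range(1, n):
            for j in range(1, n):
                # 5-point stencil solved for the centre value
                psi[i][j] = 0.25 * (psi[i + 1][j] + psi[i - 1][j]
                                    + psi[i][j + 1] + psi[i][j - 1]
                                    - h * h * rhs[i][j])
    return psi

# Verify against a known solution: psi* = sin(pi x) sin(pi y),
# for which laplacian(psi*) = -2 pi^2 sin(pi x) sin(pi y).
n = 16
f = [[-2.0 * math.pi ** 2
      * math.sin(math.pi * i / n) * math.sin(math.pi * j / n)
      for j in range(n + 1)] for i in range(n + 1)]
psi = solve_poisson(f, n)
```

In the study the right-hand side is the divergence of the raw transports, and transports between two points are then read off as streamfunction differences.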
Abstract:
Hebb proposed that synapses between neurons that fire synchronously are strengthened, forming cell assemblies and phase sequences. The former, on a shorter scale, are ensembles of synchronized cells that function transiently as a closed processing system; the latter, on a larger scale, correspond to the sequential activation of cell assemblies able to represent percepts and behaviors. Nowadays, the recording of large neuronal populations allows for the detection of multiple cell assemblies. Within Hebb's theory, the next logical step is the analysis of phase sequences. Here we detected phase sequences as consecutive assembly activation patterns, and then analyzed their graph attributes in relation to behavior. We investigated action potentials recorded from the adult rat hippocampus and neocortex before, during and after novel object exploration (experimental periods). Within assembly graphs, each assembly corresponded to a node, and each edge corresponded to the temporal sequence of consecutive node activations. The sum of all assembly activations was proportional to firing rates, but the activity of individual assemblies was not. Assembly repertoire was stable across experimental periods, suggesting that novel experience does not create new assemblies in the adult rat. Assembly graph attributes, on the other hand, varied significantly across behavioral states and experimental periods, and were separable enough to correctly classify experimental periods (Naïve Bayes classifier; maximum AUROCs ranging from 0.55 to 0.99) and behavioral states (waking, slow wave sleep, and rapid eye movement sleep; maximum AUROCs ranging from 0.64 to 0.98). Our findings agree with Hebb's view that assemblies correspond to primitive building blocks of representation, nearly unchanged in the adult, while phase sequences are labile across behavioral states and change after novel experience. The results are compatible with a role for phase sequences in behavior and cognition.
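The construction of assembly graphs described above (each assembly a node, each consecutive activation pair an edge) can be sketched as follows. This is a simplified illustration, not the authors' pipeline; assembly labels are assumed to be precomputed:

```python
from collections import defaultdict

def assembly_graph(activations):
    """Build a directed graph from a phase sequence: each assembly is a
    node, each consecutive activation pair (a, b) becomes an edge a -> b
    with a count (edge weight)."""
    edges = defaultdict(int)
    for a, b in zip(activations, activations[1:]):
        if a != b:                      # ignore immediate self-repeats
            edges[(a, b)] += 1
    nodes = set(activations)
    return nodes, dict(edges)

# Toy phase sequence of assembly activations over time
nodes, edges = assembly_graph([1, 2, 1, 3, 3, 2])
```

Graph attributes (degree distributions, path lengths, etc.) would then be computed on these nodes and weighted edges and compared across behavioral states.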
Abstract:
Mechanistic models used for prediction should be parsimonious, as models which are over-parameterised may have poor predictive performance. Determining whether a model is parsimonious requires comparisons with alternative model formulations with differing levels of complexity. However, creating alternative formulations for large mechanistic models is often problematic, and usually time-consuming. Consequently, few are ever investigated. In this paper, we present an approach which rapidly generates reduced model formulations by replacing a model’s variables with constants. These reduced alternatives can be compared to the original model, using data-based model selection criteria, to assist in the identification of potentially unnecessary model complexity, and thereby inform reformulation of the model. To illustrate the approach, we present its application to a published radiocaesium plant-uptake model, which predicts uptake on the basis of soil characteristics (e.g. pH, organic matter content, clay content). A total of 1024 reduced model formulations were generated, and ranked according to five model selection criteria: Residual Sum of Squares (RSS), AICc, BIC, MDL and ICOMP. The lowest scores for RSS and AICc occurred for the same reduced model, in which pH-dependent model components were replaced. The lowest scores for BIC, MDL and ICOMP occurred for a further reduced model, in which model components related to the distinction between adsorption on clay and organic surfaces were replaced. Both these reduced models had a lower RSS for the parameterisation dataset than the original model. As a test of their predictive performance, the original model and the two reduced models outlined above were used to predict an independent dataset. The reduced models have lower prediction sums of squares than the original model, suggesting that the latter may be overfitted.
The approach presented has the potential to inform model development by rapidly creating a class of alternative model formulations, which can be compared.
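The enumeration step can be sketched as follows: each variable is either kept or replaced by a constant, giving 2^k alternatives (1024 for k = 10, as above), which can then be ranked by a criterion such as AICc. Variable names are illustrative, not those of the radiocaesium model:

```python
import itertools
import math

def aicc(rss, n, k):
    """Small-sample corrected AIC for a least-squares fit with n data
    points and k estimated parameters."""
    return n * math.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def reduced_formulations(variables):
    """Enumerate all formulations in which each variable is either kept
    or replaced by a constant: 2 ** len(variables) alternatives."""
    for mask in itertools.product([True, False], repeat=len(variables)):
        yield {v for v, keep in zip(variables, mask) if keep}

# Illustrative 3-variable example: 2**3 = 8 reduced formulations
models = list(reduced_formulations(["pH", "organic_matter", "clay"]))
```

Each formulation would be refitted to the parameterisation dataset, its RSS computed, and the criteria compared across all alternatives.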
Abstract:
The energy of a symmetric matrix is the sum of the absolute values of its eigenvalues. We introduce a lower bound for the energy of a symmetric matrix partitioned into blocks. This bound is related to the spectrum of its quotient matrix. Furthermore, we study necessary conditions for equality. Applications to the energy of the generalized composition of a family of arbitrary graphs are obtained. A lower bound for the energy of a graph with a bridge is given. Some computational experiments are presented in order to show that, in some cases, the obtained lower bound is incomparable with the well-known lower bound $2\sqrt{m}$, where $m$ is the number of edges of the graph.
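The definitions above can be illustrated directly. This sketch is not from the paper; the star K_{1,4} is a standard example that attains the bound $2\sqrt{m}$ with equality:

```python
import numpy as np

def graph_energy(adj):
    """Energy of a graph: the sum of the absolute values of the
    eigenvalues of its (symmetric) adjacency matrix."""
    return float(np.sum(np.abs(np.linalg.eigvalsh(adj))))

# Star K_{1,4}: centre vertex connected to 4 leaves, so m = 4 edges.
star = np.zeros((5, 5))
star[0, 1:] = star[1:, 0] = 1.0
m = int(star.sum() / 2)
```

The star's nonzero eigenvalues are ±2, so its energy is 4 = 2√m, showing the bound is tight for some graphs.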
Abstract:
In this paper, an agricultural waste, Canarium schweinfurthii, was explored for the sequestering of Fe and Pb ions from wastewater after carbonization and chemical treatment at 400 °C. Optimum times of 30 and 150 min, with percentage removals of 95 and 98% at optimum pH values of 2 and 6, were obtained for Fe and Pb ions, respectively. The kinetics followed a pseudo-first-order model, the sum of absolute errors (EABS) between Qe and Qc being lower for it than for the pseudo-second-order model. Parameters evaluated from the isotherm equations (Freundlich and Langmuir) showed that KL and Q0 were higher for Fe than for Pb, and that R² was higher for the Langmuir than for the Freundlich model. The study reveals the suitability of the adsorbent for sequestering Fe and Pb ions from industrial wastewater.
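The EABS comparison between kinetic models can be sketched as follows. This is illustrative only; the integrated pseudo-first-order (Lagergren) form is assumed:

```python
import math

def eabs(q_exp, q_calc):
    """Sum of absolute errors between experimental uptake values (Qe)
    and model-calculated values (Qc)."""
    return sum(abs(e - c) for e, c in zip(q_exp, q_calc))

def pseudo_first_order(qe, k1, times):
    """Integrated Lagergren pseudo-first-order form:
    Qt = Qe * (1 - exp(-k1 * t))."""
    return [qe * (1.0 - math.exp(-k1 * t)) for t in times]
```

The model with the lower EABS against the measured uptake data is taken to describe the kinetics better.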
Abstract:
DnaD is a primosomal protein that remodels supercoiled plasmids. It binds to supercoiled forms and converts them to open forms without nicking. During this remodeling process, all the writhe is converted to twist and the plasmids are held around the periphery of large scaffolds made up of DnaD molecules. This DNA-remodeling function is the sum of a scaffold-forming activity on the N-terminal domain and a DNA-dependent oligomerization activity on the C-terminal domain. We have determined the crystal structure of the scaffold-forming N-terminal domain, which reveals a winged-helix architecture, with additional structural elements extending from both N- and C-termini. Four monomers form two dimers that join into a tetramer. The N-terminal extension mediates dimerization and tetramerization, with extensive interactions and distinct interfaces. The wings and helices of the winged-helix domains remain exposed on the surface of the tetramer. Structure-guided mutagenesis and atomic force microscopy imaging indicate that these elements, together with the C-terminal extension, are involved in scaffold formation. Based upon our data, we propose a model for the DnaD-mediated scaffold formation.
Abstract:
Apparent digestibility coefficients (ADC) of dry matter, crude protein (CP), and amino acids (AA) were evaluated in diets with six rendered by-products used to feed juvenile Pacific white shrimp: two poultry meals (poultry meal 1, 69% CP; poultry meal 2, 72% CP), two feather meals (89% CP), one blood meal (96% CP), and one pork meal (57% CP). Experimental diets were formulated with 30% of the test ingredient and 70% of a commercial diet, supplemented with 1% chromium oxide as an inert marker. AA contents in ingredients, diets, leached diets, and feces were determined by high-performance liquid chromatography. Preprandial AA losses attributed to leaching were higher in the blood meal diet (15%) and pork meal diet (10%). Poultry meal diets 1 and 2 showed mean AA losses of 3% and 5%, respectively, while the reference diet had a mean AA leaching of 6%. The AA with the highest leaching rates were lysine (21%), methionine (15%), and histidine (12%). The ADC of dry matter was highest for poultry meals 1 (70%) and 2 (73%), followed by pork meal (69%), feather meals (61%), and blood meal (57%). The digestibility of CP was highest for the poultry meals (78–80%), followed by pork meal (76%), and blood meal and feather meals (65–67%). The digestibility of CP in the reference diet (83%) was higher than that observed for all the animal by-product meals except the poultry meals. The ADC of the sum of AA, adjusted for nutrient leaching, ranged from 65% for blood meal to 80% for poultry meals.
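With an inert marker such as chromium oxide, apparent digestibility is conventionally obtained from the standard indirect relation, sketched below. Function and argument names are illustrative, and the study's additional adjustment for preprandial leaching is not reproduced:

```python
def adc_percent(marker_diet, marker_feces, nutrient_diet, nutrient_feces):
    """Apparent digestibility coefficient (%) from the standard
    inert-marker relation:
    ADC = 100 * (1 - (marker in diet / marker in feces)
                     * (nutrient in feces / nutrient in diet))."""
    return 100.0 * (1.0 - (marker_diet / marker_feces)
                    * (nutrient_feces / nutrient_diet))
```

Because the marker is indigestible, its concentration rises in the feces; the ratio of marker concentrations scales the fecal nutrient level back to the amount of diet actually consumed.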
Abstract:
Aiming to obtain empirical models for the estimation of Syrah leaf area, a set of 210 fruiting shoots was randomly collected during the 2013 growing season in an adult experimental vineyard located in Lisbon, Portugal. Samples of 30 fruiting shoots were taken periodically from the stage of visible inflorescences to veraison (7 sampling dates). At the lab, primary and lateral leaves from each shoot were separated and numbered according to node insertion. For each leaf, the length of the central and lateral veins was recorded, and the leaf area was then measured with a leaf area meter. For single leaf area estimation, the best statistical model uses as explanatory variable the sum of the lengths of the two lateral leaf veins. For the estimation of leaf area per shoot, the approach of Lopes & Pinto (2005) was followed, based on 3 explanatory variables: the number of primary leaves and the areas of the largest and smallest leaves. The best statistical model for the estimation of primary leaf area per shoot uses a calculated variable obtained from the average of the largest and smallest primary leaf areas multiplied by the number of primary leaves. For lateral leaf area estimation, another model using the same type of calculated variable is also presented. All models explain a very high proportion of the variability in leaf area. Our results confirm the previously reported strong importance of the three measured variables (number of leaves and areas of the largest and smallest leaf) as predictors of shoot leaf area. The proposed models can be used to accurately predict Syrah primary and secondary leaf area per shoot at any phase of the growing cycle. They are inexpensive, practical, non-destructive methods that do not require specialized staff or expensive equipment.
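The shoot-level predictor described above can be sketched as follows. The regression coefficients a and b are hypothetical placeholders, not the published fitted values:

```python
def shoot_leaf_area(n_leaves, area_largest, area_smallest, a=1.0, b=0.0):
    """Estimate primary leaf area per shoot in the style of
    Lopes & Pinto (2005): a regression on the calculated variable
    (mean of largest and smallest leaf areas) * (number of leaves).
    Coefficients a and b are illustrative placeholders."""
    x = 0.5 * (area_largest + area_smallest) * n_leaves
    return a * x + b
```

In practice a and b would be estimated by fitting against the measured leaf areas from the 210-shoot dataset; an analogous model with the same calculated variable handles lateral leaf area.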
Abstract:
Obnoxious single facility location models aim to find the best location for an undesired facility. "Undesired" is usually expressed in relation to so-called demand points that represent locations hindered by the facility. Because obnoxious facility location models are, as a rule, multimodal, the standard techniques of convex analysis used for locating desirable facilities in the plane may be trapped in local optima instead of the desired global optimum. It is assumed that having more optima coincides with being harder to solve. In this thesis the multimodality of obnoxious single facility location models is investigated in order to determine which models pose challenging facility location problems and which are suitable for site selection. Selected for this purpose are the obnoxious facility models that appear to be most important in the literature: the maximin model, which maximizes the minimum distance from a demand point to the obnoxious facility; the maxisum model, which maximizes the sum of distances from the demand points to the facility; and the minisum model, which minimizes the sum of damage of the facility to the demand points. All models are measured with Euclidean distances, and some also with the rectilinear distance metric. Furthermore, a suitable algorithm is selected for testing multimodality; of the algorithms tested in this thesis, Multistart is the most appropriate. A small numerical experiment shows that maximin models have on average the most optima, of which the model locating an obnoxious line segment has the most. Maxisum models, by contrast, have few optima and are thus not very hard to solve. Among the minisum models, those with the most optima are the models that take wind into account. In general, the generic models have fewer optima than the weighted versions. Models measured with the rectilinear norm have more solutions than the same models measured with the Euclidean norm.
This can be explained for the maximin models in the numerical example because the shape of the norm coincides with a bound of the feasible area, so not all solutions are distinct optima. The difference found in the number of optima of the maxisum and minisum models cannot be explained by this phenomenon.
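The Multistart idea used for testing multimodality can be sketched as a generic random-restart local search on a toy maximin objective. Step sizes, iteration counts, and the demand-point layout are illustrative:

```python
import random

def multistart(objective, bounds, n_starts=50, iters=200, step=0.1, seed=1):
    """Multistart: repeat a simple stochastic local search (maximisation)
    from random starting points and keep the best optimum found."""
    rng = random.Random(seed)
    best_x, best_f = None, float("-inf")
    for _ in range(n_starts):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        f = objective(x)
        for _ in range(iters):
            # propose a small perturbation, clipped to the feasible box
            cand = [min(max(xi + rng.gauss(0.0, step), lo), hi)
                    for xi, (lo, hi) in zip(x, bounds)]
            fc = objective(cand)
            if fc > f:          # accept only improvements (local search)
                x, f = cand, fc
        if f > best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Toy maximin objective: maximise the minimum Euclidean distance
# from the facility to two demand points in the unit square.
demand = [(0.0, 0.0), (1.0, 1.0)]
maximin = lambda p: min(((p[0] - dx) ** 2 + (p[1] - dy) ** 2) ** 0.5
                        for dx, dy in demand)
best_x, best_f = multistart(maximin, [(0.0, 1.0), (0.0, 1.0)])
```

Counting how many distinct local optima the restarts land on gives the multimodality measure used to compare the models; here the two global optima are the corners (1, 0) and (0, 1).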
Abstract:
Nowadays, the development of photovoltaic (PV) technology is consolidated as a source of renewable energy. Research into maximizing the energy efficiency of PV plants is today a major challenge. The main requirement for this purpose is to know the performance of each of the PV modules that make up the PV field in real time. In this respect, a PLC-communications-based Smart Monitoring and Communications Module, able to monitor the operating parameters of each panel at PV level, has been developed at the University of Malaga. With this device one can check whether any of the panels is suffering any type of performance degradation due to a malfunction or partial shadowing of its surface. Since these fluctuations in the electricity production of a single panel affect the overall sum of all panels that form a string, it is necessary to isolate the problem and route the energy through alternative paths in the case of a PV panel array configuration.
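The motivation for per-panel monitoring and rerouting can be sketched with a simplified series-string model (numbers and names are illustrative; real mismatch behavior also involves bypass diodes and I-V curves):

```python
def string_power(module_currents, voltage_per_module):
    """Simplified series string: the string current is limited by the
    weakest (e.g. partially shaded) module, so one bad panel drags
    down the output of the whole string."""
    i_string = min(module_currents)
    return i_string * voltage_per_module * len(module_currents)

def independent_power(module_currents, voltage_per_module):
    """If each module could deliver its own power (per-panel routing),
    the total is simply the sum of the individual contributions."""
    return sum(i * voltage_per_module for i in module_currents)
```

Comparing the two totals for a string with one shaded module shows why isolating the faulty panel and rerouting recovers most of the lost production.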
Abstract:
An accurate and easy method for the extraction, cleanup, and HRGC-HRMS analysis of dioxin-like PCBs (DL-PCBs) in low-volume serum samples (1 mL) was developed. Serum samples were extracted several times using n-hexane and purified by acid washing. Recovery rates of labeled congeners ranged from 70 to 110%, and the limits of detection were below 1 pg/g on a lipid basis. Although human studies are limited and contradictory, several studies have shown that DL-PCBs can have adverse effects on the male reproductive system. The method was therefore applied to 21 serum samples from male patients attending fertility clinics. The total levels obtained for the patients ranged from 6.90 to 84.1 pg WHO-TEQ/g lipid, with a mean value of 20.3 pg WHO-TEQ/g lipid. The predominant PCBs (the sum of PCB 118, 156, and 105) contributed 67% to the mean concentration of total DL-PCBs in the samples analyzed.
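The WHO-TEQ totals quoted above are, by definition, TEF-weighted sums of congener concentrations. A minimal sketch follows; the TEF values shown are the WHO 2005 mono-ortho PCB factors, included here for illustration only:

```python
# WHO 2005 toxic equivalency factors for the mono-ortho DL-PCBs
# reported in the abstract (illustrative subset of the full TEF table).
TEF = {"PCB105": 3e-5, "PCB118": 3e-5, "PCB156": 3e-5}

def who_teq(concentrations_pg_per_g):
    """Toxic equivalents (pg TEQ/g): the sum over congeners of
    concentration times its toxic equivalency factor."""
    return sum(c * TEF[name] for name, c in concentrations_pg_per_g.items())
```

A full calculation would include all twelve DL-PCB congeners with their respective TEFs.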
Abstract:
Two trends are emerging from modern electric power systems: the growth of renewable (e.g., solar and wind) generation, and the integration of information technologies and advanced power electronics. The former introduces large, rapid, and random fluctuations in power supply, demand, frequency, and voltage, which become a major challenge for real-time operation of power systems. The latter creates a tremendous number of controllable intelligent endpoints such as smart buildings and appliances, electric vehicles, energy storage devices, and power electronic devices that can sense, compute, communicate, and actuate. Most of these endpoints are distributed on the load side of power systems, in contrast to traditional control resources such as centralized bulk generators. This thesis focuses on controlling power systems in real time, using these load side resources. Specifically, it studies two problems.
(1) Distributed load-side frequency control: We establish a mathematical framework to design distributed frequency control algorithms for flexible electric loads. In this framework, we formulate a category of optimization problems, called optimal load control (OLC), to incorporate the goals of frequency control, such as balancing power supply and demand, restoring frequency to its nominal value, restoring inter-area power flows, etc., in a way that minimizes total disutility for the loads to participate in frequency control by deviating from their nominal power usage. By exploiting distributed algorithms to solve OLC and analyzing convergence of these algorithms, we design distributed load-side controllers and prove stability of closed-loop power systems governed by these controllers. This general framework is adapted and applied to different types of power systems described by different models, or to achieve different levels of control goals under different operation scenarios. We first consider a dynamically coherent power system which can be equivalently modeled with a single synchronous machine. We then extend our framework to a multi-machine power network, where we consider primary and secondary frequency controls, linear and nonlinear power flow models, and the interactions between generator dynamics and load control.
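The core OLC idea, loads locally minimizing disutility while a frequency-like signal integrates the remaining supply-demand mismatch, can be sketched with a toy primal-dual iteration. Quadratic disutilities d_i^2 / (2 * alpha_i) and a one-bus abstraction are assumed; this is not the multi-machine model of the thesis:

```python
def olc(alphas, imbalance, gamma=0.1, iters=500):
    """Dual-decomposition sketch of optimal load control: each load i
    chooses d_i to minimise d_i**2 / (2 * alphas[i]) - w * d_i given the
    frequency-like signal w, and w integrates the residual mismatch."""
    w = 0.0
    d = [0.0] * len(alphas)
    for _ in range(iters):
        d = [a * w for a in alphas]          # local best response of each load
        w += gamma * (imbalance - sum(d))    # dual (frequency) update
    return w, d

# Two loads with flexibility weights 1 and 3 share a 4-unit imbalance.
w, d = olc([1.0, 3.0], 4.0)
```

At the fixed point the imbalance is shared in proportion to each load's flexibility (d_i = alpha_i * P / sum(alpha)), mirroring how frequency deviation acts as the coordinating dual variable in OLC.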
(2) Two-timescale voltage control: The voltage of a power distribution system must be maintained closely around its nominal value in real time, even in the presence of highly volatile power supply or demand. For this purpose, we jointly control two types of reactive power sources: a capacitor operating at a slow timescale, and a power electronic device, such as a smart inverter or a D-STATCOM, operating at a fast timescale. Their control actions are solved from optimal power flow problems at two timescales. Specifically, the slow-timescale problem is a chance-constrained optimization, which minimizes power loss and regulates the voltage at the current time instant while limiting the probability of future voltage violations due to stochastic changes in power supply or demand. This control framework forms the basis of an optimal sizing problem, which determines the installation capacities of the control devices by minimizing the sum of power loss and capital cost. We develop computationally efficient heuristics to solve the optimal sizing problem and implement real-time control. Numerical experiments show that the proposed sizing and control schemes significantly improve the reliability of voltage control with a moderate increase in cost.