863 results for multi-stage fixed costs


Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Abstract:

Nowadays, the rise of non-recurring engineering (NRE) costs associated with complexity is becoming a major factor in SoC design, limiting both scaling opportunities and the flexibility advantages offered by the integration of complex computational units. The introduction of embedded programmable elements can represent an appealing solution, able both to guarantee the desired flexibility and upgradability and to widen the SoC market. In particular, embedded FPGA (eFPGA) cores can provide bit-level optimization for those applications that benefit from synthesis, at the cost of performance penalties and area overhead with respect to standard-cell ASIC implementations. In this scenario, this thesis proposes a design methodology for a synthesizable programmable device designed to be embedded in a SoC. A soft-core eFPGA is presented and analyzed in terms of the opportunities offered by a fully synthesizable approach, following an implementation flow based on standard-cell methodology. A key point of the proposed eFPGA template is that it adopts a Multi-Stage Switching Network (MSSN) as the foundation of the programmable interconnect, since the MSSN can be efficiently synthesized and optimized through a standard-cell implementation flow while ensuring an intrinsically congestion-free network topology. The flexibility potential of the eFPGA has been evaluated using different technology libraries (STMicroelectronics CMOS 65nm and BCD9s 0.11μm) through a design space exploration of area-speed-leakage tradeoffs, enabled by the full synthesizability of the template. Since the most relevant disadvantage of the adopted soft approach, compared to a hard core, is its performance overhead, the eFPGA analysis targets small area budgets.
The configuration bitstream is generated by a custom CAD flow environment, which has enabled functional verification and performance evaluation through an application-aware analysis.

Abstract:

Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints: limited resources to implement savings retrofits, various suppliers in the market, and alternative project financing. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state and local agencies must determine how to implement projects at lowest cost. The most common methods of implementation planning are suboptimal with respect to cost. Federal, state and local agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches to improve the traditional energy conservation models. Public buildings in regions with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code upon any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies concerned with making the most cost-effective selection leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2).
An additional model leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings strongly influence ECM project selection and the amount of the appropriation requested. A proposed risk-aversion method imposes a minimum on the number of projects completed in each stage. A comparative method using Conditional Value at Risk is also analyzed, and time consistency is addressed. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach uses the inequalities of McCormick (1976) to re-express constraints involving products of binary variables as an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single-congressional-appropriation framework.
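The McCormick (1976) linearization mentioned above replaces a product of binary decision variables z = x·y with linear inequalities that are exact at 0/1 points. A minimal sketch of the idea (variable names are illustrative, not taken from the dissertation):

```python
from itertools import product

def mccormick_feasible(x, y, z):
    """Check the McCormick envelope constraints for z = x * y.
    For binary x and y these four linear inequalities are an exact
    linearization: the only feasible z is the product x * y."""
    return z <= x and z <= y and z >= x + y - 1 and z >= 0

# At every binary point, the constraints admit exactly z = x * y.
for x, y in product((0, 1), repeat=2):
    feasible = [z for z in (0, 1) if mccormick_feasible(x, y, z)]
    assert feasible == [x * y]
```

Because the inequalities are linear, a solver can treat z as a continuous variable while still recovering the product exactly, which keeps the multi-stage model a mixed integer *linear* program.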

Abstract:

Metaphor is a multi-stage programming language extension to an imperative, object-oriented language in the style of C# or Java. This paper discusses some issues we faced when applying multi-stage language design concepts to an imperative base language and run-time environment. The issues range from dealing with pervasive references and open code to garbage collection and implementing cross-stage persistence.
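The abstract does not show Metaphor syntax; as a language-neutral illustration of what multi-stage programming buys, here is the classic power-function specialization sketched in Python, with string-based code generation standing in for Metaphor's typed code values (all names hypothetical):

```python
def gen_power(n):
    """Stage 1: build the source of a function specialized to exponent n.
    The generated string plays the role of 'open code' being assembled."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    return f"def power_{n}(x):\n    return {body}\n"

# Stage 2: compile and run the residual program.
src = gen_power(3)
namespace = {}
exec(compile(src, "<staged>", "exec"), namespace)
power_3 = namespace["power_3"]
assert power_3(2) == 8  # the residual body is x * x * x
```

A staged language like Metaphor makes this safe by typing the code values and handling cross-stage persistence of variables, which raw string generation cannot guarantee.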

Abstract:

A new set of primitive extraterrestrial materials collected in the Earth's stratosphere includes Chondritic Porous Aggregates (CPAs) [1]. CPAs have a complex and variable mineralogy [1-3] that includes 'organic compounds' [4,5] and poorly graphitised carbon (PGC) [6]. This study presents a continuation of our detailed Analytical Electron Microscope study of carbon-rich CPA W7029*A from the JSC Cosmic Dust Collection. This CPA is an uncontaminated sample that survived atmospheric entry without appreciable alteration [7] and contains ~44% carbonaceous material. The carbonaceous composition of selected particles was confirmed by Electron Energy Loss Spectroscopy and Selected Area Electron Diffraction (SAED). Possible carbonaceous contaminants introduced by specimen preparation techniques are easily distinguished from indigenous CPA carbon particles [8] and do not bias our interpretations.

Abstract:

In order to establish the influence of the drying air characteristics on the drying performance and fluidization quality of bovine intestine for pet food, several drying tests were carried out in a laboratory-scale, heat pump assisted fluid bed dryer equipped with a continuous monitoring system. Bovine intestine samples were dried at atmospheric pressure at temperatures below and above the material's freezing point. The drying characteristics were investigated over the temperature range −10 to 25 °C and airflow velocities of 1.5–2.5 m/s. Some experiments were conducted as single-temperature drying experiments and others as two-stage drying experiments employing two temperatures. An Arrhenius-type equation was used to interpret the influence of the drying air temperature on the effective diffusivity, calculated with the method of slopes in terms of activation energy, which was found to be sensitive to temperature. The effective moisture diffusion coefficient was determined by the Fickian method, assuming one-dimensional moisture movement, both for moisture removal by evaporation and for combined sublimation and evaporation. Correlations expressing the effective moisture diffusivity as a function of drying temperature are reported. Bovine particles were characterized according to the Geldart classification, and the minimum fluidization velocity was calculated using the Ergun equation and a generalized equation for all drying conditions at the beginning and end of the trials. Wallis' model was used to categorize the stability of the fluidization at the beginning and end of drying for each trial. The determined Wallis values were positive at the beginning and end of all trials, indicating stable fluidization for each drying condition.
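The Arrhenius-type analysis above amounts to reading the activation energy off the slope of ln(D_eff) versus 1/T, since D_eff = D0·exp(−Ea/(R·T)). A minimal sketch with invented diffusivity values (not data from the study):

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def activation_energy(T1, D1, T2, D2):
    """Ea from two (temperature [K], effective diffusivity) points:
    the slope of ln(D_eff) vs 1/T equals -Ea/R."""
    slope = (math.log(D2) - math.log(D1)) / (1.0 / T2 - 1.0 / T1)
    return -R * slope  # J/mol

# Hypothetical example: diffusivity rising from 2e-10 to 8e-10 m^2/s
# between -10 °C (263.15 K) and 25 °C (298.15 K).
Ea = activation_energy(263.15, 2e-10, 298.15, 8e-10)
```

With more than two temperatures, Ea would instead come from a least-squares fit of ln(D_eff) against 1/T, which is what the "method of slopes" implies.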

Abstract:

We present new evidence for sector collapses of the South Soufrière Hills (SSH) edifice, Montserrat, during the mid-Pleistocene. High-resolution geophysical data provide evidence for sector collapse, producing an approximately 1 km³ submarine collapse deposit to the south of SSH. Sedimentological and geochemical analyses of submarine deposits sampled by sediment cores suggest that they were formed by large multi-stage flank failures of the subaerial SSH edifice into the sea. This work identifies two distinct geochemical suites within the SSH succession on the basis of trace-element and Pb-isotope compositions. Volcaniclastic turbidites in the cores preserve these chemically heterogeneous rock suites; however, the subaerial chemostratigraphy is reversed within the submarine sediment cores. Sedimentological analysis suggests that the edifice failures produced high-concentration turbidites and that the collapses occurred in multiple stages, with an interval of at least 2 ka between the first and second failure. Detailed field and petrographical observations, coupled with SEM image analysis, show that the SSH volcanic products preserve a complex record of magmatic activity. This activity consisted of episodic explosive eruptions of andesitic pumice, probably triggered by mafic magmatic pulses and followed by eruptions of poorly vesiculated basaltic scoria and basaltic lava flows.

Abstract:

This study proposes that technology adoption be considered a multi-stage process comprising several distinct stages. Using the Theory of Planned Behaviour (TPB) and Ettlie's adoption stages, and employing data gathered from 162 owners of Small and Medium-sized Enterprises (SMEs), our findings show that the determinants of the intention to adopt packaged software fluctuate significantly across adoption stages.

Abstract:

This paper proposes a new multi-resource, multi-stage mine production timetabling problem for optimising open-pit drilling, blasting and excavating operations under equipment capacity constraints. The flow process is analysed based on real-life data from an Australian iron ore mine site. The objective of the model is to maximise throughput and minimise the total idle time of equipment at each stage. The following comprehensive mining attributes and constraints are considered: types of equipment; operating capacities of equipment; ready times of equipment; speeds of equipment; block-sequence-dependent movement times; equipment-assignment-dependent operational times; etc. The model also provides the availability and usage of equipment units at multiple operational stages, such as the drilling, blasting and excavating stages. The problem is formulated as a mixed integer program and solved with the ILOG CPLEX optimiser. The proposed model is validated with extensive computational experiments to improve mine production efficiency at the operational level.
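The full model is a MIP solved with CPLEX; as a much-reduced sketch of the stage structure it encodes, the toy below sequences blocks through drill → blast → excavate on single-unit equipment and brute-forces the order with the smallest makespan (all data invented):

```python
from itertools import permutations

# Processing hours per block at each stage: (drill, blast, excavate).
blocks = {"B1": (3, 1, 4), "B2": (2, 2, 3), "B3": (4, 1, 2)}

def makespan(order):
    """Flow-shop completion time: each stage of a block starts only
    when both the equipment unit and the block's previous stage are
    free, mirroring the timetabling model's precedence constraints."""
    ready = [0, 0, 0]  # time at which each equipment unit frees up
    for b in order:
        done = 0
        for stage, p in enumerate(blocks[b]):
            start = max(done, ready[stage])
            done = start + p
            ready[stage] = done
    return ready[-1]

# Enumerate all block sequences; feasible only for toy instances,
# which is why the paper resorts to MIP formulations instead.
best = min(permutations(blocks), key=makespan)
```

The real problem adds multiple equipment units per stage, ready times, and movement times, which is exactly what pushes it beyond enumeration into mixed integer programming.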

Abstract:

This paper proposes a new multi-stage mine production timetabling (MMPT) model to optimise open-pit mine production operations, including drilling, blasting and excavating, under real-time mining constraints. The MMPT problem is formulated as a mixed integer programming model and can be solved optimally for small MMPT instances by IBM ILOG CPLEX. Due to NP-hardness, an improved shifting-bottleneck-procedure algorithm based on an extended disjunctive graph is developed to solve large MMPT instances effectively and efficiently. Extensive computational experiments validate the proposed algorithm, which efficiently obtains near-optimal operational timetables for mining equipment units. The advantages are indicated by sensitivity analysis under various real-life scenarios. The proposed MMPT methodology is a promising tool for the mining industry because it is straightforwardly modelled as a standard scheduling model, efficiently solved by the heuristic algorithm, and flexibly extended by adopting additional industrial constraints.

Abstract:

Existing models for dmax predict that, in the limit μd → ∞, dmax increases with the 3/4 power of μd. Further, at low values of interfacial tension, dmax becomes independent of σ even at moderate values of μd. However, experiments contradict both predictions, showing that the dependence of dmax on μd is much weaker and that, even at very low values of σ, dmax does not become independent of it. A model is proposed to explain these results. The model assumes that a drop circulates in a stirred vessel along with the bulk fluid and repeatedly passes through a deformation zone followed by a relaxation zone. In the deformation zone, the turbulent inertial stress tends to deform the drop, while the viscous stress generated in the drop and the interfacial stress resist deformation. The relaxation zone is characterized by the absence of turbulent stress, so the drop tends to relax back to an undeformed state. It is shown that a circulating drop, starting with some initial deformation, either reaches a steady state or breaks in one or several cycles. dmax is defined as the maximum size of a drop which, starting undeformed in the first cycle, passes through the deformation zone an infinite number of times without breaking. The model predictions reduce to those of Lagisetty et al. (1986) for moderate values of μd and σ. The model successfully predicts the reduced dependence of dmax on μd at high values of μd, as well as the dependence of dmax on σ at low values of σ. The dmax data available in the literature are predicted with greater accuracy by this model than by existing models and correlations.
