993 results for Optimal Component Proportions


Relevance:

100.00%

Publisher:

Abstract:

Mixture experiments are typical of the chemical, food, metallurgical and other industries. The aim of these experiments is to find the optimal component proportions that yield desired values of certain product performance characteristics.
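As a toy illustration of this kind of problem (not from the abstract itself), the sketch below maximizes a hypothetical second-order Scheffé response model over the component simplex; all coefficients are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical second-order Scheffe model for one response over three
# component proportions x1, x2, x3. Coefficients are invented for
# illustration, not taken from any real experiment.
b1, b2, b3 = 78.0, 65.0, 50.0        # pure-component responses
b12, b13, b23 = 20.0, 10.0, -5.0     # binary blending terms

def response(x):
    x1, x2, x3 = x
    return b1*x1 + b2*x2 + b3*x3 + b12*x1*x2 + b13*x1*x3 + b23*x2*x3

# Maximise the response over the simplex: x_i >= 0, x1 + x2 + x3 = 1.
res = minimize(lambda x: -response(x),
               x0=np.array([1/3, 1/3, 1/3]),
               bounds=[(0.0, 1.0)] * 3,
               constraints={'type': 'eq', 'fun': lambda x: x.sum() - 1.0})
print(res.x, -res.fun)   # optimal proportions and predicted response
```

Because of the positive blending term b12, the optimum here is an interior blend of components 1 and 2 rather than a pure component.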

Relevance:

90.00%

Publisher:

Abstract:

The ultimate goal of any research in the mechanism/kinematic/design area may be called predictive design, i.e. the optimisation of mechanism proportions in the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through development and refinement of numerical (computational) technology in order to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As a part of the systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions in the form of structural parameters that accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions are obtained by adopting closed-form classical or modern algebraic solution methods, or by using numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are: (ia) limitations on the number of design specifications and (iia) failure in handling design constraints, especially inequality constraints. The main drawbacks of approximate synthesis formulations are: (ib) it is difficult to choose a proper initial linkage and (iib) it is hard to find more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints.
Based on practical design needs, mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. Through the literature research it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables, but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles from the mathematical research area of algebraic geometry for solving parametric (in the mathematical sense that all parameter values are considered, including the degenerate cases, for which the system is solvable) algebraic systems of n equations in at least n+1 variables. By adopting the developed solution method to solve the dyadic equations in direct polynomial form with two to three precision points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be solved. The positive-dimensional solution sets associated with the poles might contain physically meaningful solutions in the form of optimal defect-free mechanisms. Traditionally, the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This results in optimal component design rather than optimal system-level design.
Modern mechanism optimisation at the system level demands integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated with mechanical system simulation techniques. The developed kinematic design method is based on the combination of the two-precision-point formulation and the optimisation (with mathematical programming techniques, or adopting optimisation methods based on probability and statistics) of substructures using criteria calculated from the system-level response of multi-degree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, the drawbacks (ia)-(iib) have been eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the design method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when the developed kinematic design method is integrated with mechanical system simulation techniques.
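The mixed exact-approximate idea can be sketched with an invented toy problem: find a ground pivot (cx, cy) and link length r such that prescribed moving-pivot positions lie on a circle, exactly for the first two positions and as closely as possible for the rest. All coordinates below are illustrative assumptions, not data from the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Toy mixed exact-approximate position synthesis: the first two prescribed
# positions must lie exactly on the circle traced by the link, the remaining
# positions only approximately (least squares). Points are invented.
pts = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.1], [0.4, 0.9], [1.7, 0.8]])

def residuals(x):
    # Signed distance of each prescribed point from the circle (cx, cy, r).
    cx, cy, r = x
    return np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r

res = minimize(lambda x: np.sum(residuals(x)[2:] ** 2),      # approximate pts
               x0=np.array([1.0, -0.5, 1.5]),
               constraints={'type': 'eq',
                            'fun': lambda x: residuals(x)[:2]})  # exact pts
print(res.x)   # fitted ground pivot (cx, cy) and link length r
```

The equality constraint plays the role of the two exact positions, while the objective handles the unlimited approximate positions, mirroring the split described in the abstract.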

Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Publisher:

Abstract:

High-performance materials are needed for the reconstruction of such a singular building as a cathedral, since in addition to special mechanical properties, high self-compactability, high durability and high surface quality are specified. Because of the project's specifications, the use of polypropylene fiber-reinforced, self-compacting concrete was selected by the engineering office. The low quality of local materials and the lack of experience in applying macro polypropylene fiber for structural reinforcement with these component materials required the development of a pretesting program. To optimize the mix design, performance was evaluated following technical, economic and constructability criteria. Since the addition of fibers reduces concrete self-compactability, many trials were run to determine the optimal mix proportions. The variables introduced were paste volume; the aggregate skeleton of two or three fractions plus limestone filler; and fiber type and dosage. Two mix designs were selected from the preliminary results. The first one was used as a reference for self-compactability and mechanical properties. The second one was an optimized mix with a reduction in cement content of 20 kg/m³ and a fiber dosage of 1 kg/m³. For these mix designs, extended testing was carried out to measure the compressive and flexural strength, modulus of elasticity, toughness, and water permeability resistance.

Relevance:

80.00%

Publisher:

Abstract:

Carbon Capture and Storage (CCS) technologies provide a means to significantly reduce carbon emissions from the existing fleet of fossil-fired plants, and hence can facilitate a gradual transition from conventional to more sustainable sources of electric power. This is especially relevant for coal plants, which have a CO2 emission rate roughly two times higher than that of natural gas plants. Of the different kinds of CCS technology available, post-combustion amine-based CCS is the best developed and hence most suitable for retrofitting an existing coal plant. The high costs of operating CCS could be reduced by enabling flexible operation through amine storage or by allowing partial capture of CO2 during high electricity prices. This flexibility is also found to improve the power plant's ramp capability, enabling it to offset the intermittency of renewable power sources. This thesis proposes a solution to problems associated with two promising technologies for decarbonizing the electric power system: the high cost of the energy penalty of CCS, and the intermittency and non-dispatchability of wind power. It explores the economic and technical feasibility of a hybrid system consisting of a coal plant retrofitted with a post-combustion amine-based CCS system equipped with the option to perform partial capture or amine storage, and a co-located wind farm. A techno-economic assessment of the performance of the hybrid system is carried out both from the perspective of the stakeholders (utility owners, investors, etc.) and from that of the power system operator.

In order to perform the assessment from the perspective of the facility owners (e.g., electric power utilities, independent power producers), an optimal design and operating strategy of the hybrid system is determined for both the amine storage and partial capture configurations. A linear optimization model is developed to determine the optimal component sizes for the hybrid system and capture rates while meeting constraints on annual average emission targets of CO2, and variability of the combined power output. Results indicate that there are economic benefits of flexible operation relative to conventional CCS, and demonstrate that the hybrid system could operate as an energy storage system: providing an effective pathway for wind power integration as well as a mechanism to mute the variability of intermittent wind power.
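A minimal sketch of such a linear sizing model, with purely illustrative numbers (not from the thesis), can be written with scipy.optimize.linprog: choose a wind capacity w and an average capture rate r that minimize cost subject to an average-emissions cap.

```python
from scipy.optimize import linprog

# Toy linear sizing model in the spirit of the abstract. All numbers are
# illustrative assumptions, not results from the thesis.
# Decision variables: x = [w, r]
#   w : co-located wind capacity (MW), 0 <= w <= 500
#   r : average CO2 capture rate,      0 <= r <= 0.9
# Minimise annualised cost 50*w + 2000*r subject to an emissions cap:
#   800 - 800*r - 0.5*w <= 300   (baseline minus capture minus wind offset)
c = [50.0, 2000.0]
A_ub = [[-0.5, -800.0]]   # cap rearranged as: -0.5*w - 800*r <= -500
b_ub = [-500.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 500), (0, 0.9)])
print(res.x)              # cheapest way to meet the cap in this toy setup
```

With these invented coefficients, capture is the cheaper abatement lever, so the optimum raises r rather than w; in the thesis the analogous trade-off is resolved for the real cost and emissions data.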

In order to assess the performance of the hybrid system from the perspective of the system operator, a modified Unit Commitment/Economic Dispatch model is built to represent the techno-economic aspects of operating the hybrid system within a power grid. The hybrid system is found to be effective in helping the power system meet an average CO2 emissions limit equivalent to the CO2 emission rate of a state-of-the-art natural gas plant, and in reducing power system operating costs as well as the number and magnitude of instances of energy and reserve scarcity.

Relevance:

30.00%

Publisher:

Abstract:

Real structures can be thought of as an assembly of components such as plates, shells and beams. The latter type of component is very commonly found in structures such as frames, which can involve a significant degree of complexity, or as a reinforcement element of plates or shells. To obtain the desired mechanical behavior of these components, or to improve their operating conditions when rehabilitating structures, one of the parameters that may be considered for that purpose, when possible, is the location of the supports. In the present work, a beam-type structure is considered, and for a set of cases concerning different numbers and types of supports, as well as different load cases, the authors optimize the location of the supports in order to obtain minimum values of the maximum transverse deflection. The optimization processes are carried out using genetic algorithms. The results obtained clearly show the good performance of the proposed approach. © 2014 IEEE.
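A minimal genetic-algorithm sketch of this kind of support-placement problem is shown below; the deflection objective is a hypothetical placeholder (a real study would evaluate a structural model of the beam), so only the GA mechanics are illustrated.

```python
import random

# Minimal genetic-algorithm sketch for locating one interior support on a
# unit-length beam so as to minimise the maximum transverse deflection.
# The objective below is a hypothetical placeholder; in the paper the
# deflection would come from a structural model of the beam.

random.seed(0)  # fixed seed for reproducibility

def max_deflection(a):
    # Placeholder: penalise distance from an assumed optimum at mid-span.
    return (a - 0.5) ** 2 + 0.01

def ga(pop_size=30, generations=50, sigma=0.05):
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=max_deflection)              # rank by fitness
        elite = pop[: pop_size // 2]              # selection
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            child = 0.5 * (p1 + p2)               # blend crossover
            child += random.gauss(0.0, sigma)     # Gaussian mutation
            children.append(min(max(child, 0.0), 1.0))
        pop = elite + children
    return min(pop, key=max_deflection)

best = ga()
print(best)   # converges near 0.5 for this placeholder objective
```

For several supports the chromosome would simply become a vector of positions, with the same selection/crossover/mutation loop.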

Relevance:

30.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance.
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from a computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM) type algorithm.
This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
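A minimal sketch of linear unmixing under the abundance constraints discussed above, using nonnegative least squares with a heavily weighted sum-to-one row (a common approximation to full additivity); the endmember matrix and abundances below are synthetic, not data from the chapter.

```python
import numpy as np
from scipy.optimize import nnls

# Linear mixing model y = M a + n with a >= 0 (nonnegativity). The
# full-additivity constraint (sum a = 1) is approximated by appending a
# heavily weighted row of ones to M and the weight itself to y.
rng = np.random.default_rng(0)
L, p = 50, 3                        # spectral bands, endmembers
M = rng.uniform(0, 1, (L, p))       # synthetic endmember signatures
a_true = np.array([0.6, 0.3, 0.1])  # true abundances (sum to 1)
y = M @ a_true + 0.001 * rng.standard_normal(L)   # noisy observed pixel

delta = 100.0                       # weight enforcing sum-to-one
M_aug = np.vstack([M, delta * np.ones((1, p))])
y_aug = np.append(y, delta)
a_hat, _ = nnls(M_aug, y_aug)       # fully constrained estimate
print(a_hat)                        # close to a_true
```

This is the supervised (known-endmember) setting; the blind methods discussed in the chapter must additionally estimate M.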

Relevance:

30.00%

Publisher:

Abstract:

The paper discusses the utilization of new techniques to select processes for protein recovery, separation and purification. It describes a rational approach that uses fundamental databases of protein molecules to simplify the complex problem of choosing high-resolution separation methods for multicomponent mixtures. It examines the role of modern computer techniques in helping to solve these questions.

Relevance:

30.00%

Publisher:

Abstract:

A non-controlled longitudinal study was conducted to evaluate the immunogenicity of the combined measles, mumps and rubella (MMR) vaccine in 150 children vaccinated in the routine of three health units in the city of Rio de Janeiro, Brazil, in 2008-2009, with no other vaccines administered during the period from 30 days before to 30 days after vaccination. A previous study conducted in Brazil in 2007, in 1,769 children aged 12-15 months vaccinated against yellow fever and MMR simultaneously or at intervals of 30 days or more between doses, had shown low seroconversion for mumps regardless of the interval between administration of the two vaccines. The current study showed an 89.5% (95% confidence interval: 83.3; 94.0) seroconversion rate for mumps. All children seroconverted for measles and rubella. After revaccination, high antibody titres and seroconversion rates were achieved against mumps. The results of this study and others suggest that two MMR doses confer optimal immune responses to all three antigens; the possible need for additional doses should be studied taking into account not only serological but also epidemiological data, as there is no serological correlate of protection for mumps.

Relevance:

30.00%

Publisher:

Abstract:

Correct positioning of the tibial component in total knee arthroplasty (TKA) must take into account both optimal bone coverage (defined by maximal cortical bearing with posteromedial and anterolateral support) and satisfactory patellofemoral tracking. Consequently, a compromise position must be found by the surgeon during the operation to simultaneously meet these two requirements. Moreover, tibial tray positioning depends upon the tibial torsion, which has been shown to act mainly in the proximal quarter of the tibia. Therefore, the correct application of the tibial tray is also theoretically related to the level of bone resection. In this study, we first quantified the torsional profile given by optimal bone coverage for a symmetrical tibial tray design and for an asymmetrical one. Then, for the two types of tibial trays, we measured the angle difference between optimal bone coverage and an alignment on the middle of the tibial tubercle. Results showed that the values of the torsional profile given by the symmetrical tray were more scattered than those from the asymmetrical one. However, determination of the mean differential angle between the position providing optimal bone coverage and the one providing the best patellofemoral tracking indicated that the symmetrical prosthetic tray offered the best compromise between these two requirements. Although the tibiofemoral joint is known to be asymmetric in both shape and dimension, the asymmetrical tray chosen in this study was found to fulfill this compromise with more difficulty.

Relevance:

30.00%

Publisher:

Abstract:

In the context of fading channels it is well established that, with a constrained transmit power, the bit rates achievable by signals that are not peaky vanish as the bandwidth grows without bound. Stepping back from the limit, we characterize the highest bit rate achievable by such non-peaky signals and the approximate bandwidth where that apex occurs. As it turns out, the gap between the highest rate achievable without peakedness and the infinite-bandwidth capacity (with unconstrained peakedness) is small for virtually all settings of interest to wireless communications. Thus, although strictly achieving capacity in wideband fading channels does require signal peakedness, bit rates not far from capacity can be achieved with conventional signaling formats that do not exhibit the serious practical drawbacks associated with peakedness. In addition, we show that the asymptotic decay of bit rate in the absence of peakedness usually takes hold at bandwidths so large that wideband fading models are called into question. Rather, ultrawideband models ought to be used.
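For background (a standard information-theoretic result, not a claim from the paper): with transmit power P and noise spectral density N_0, the infinite-bandwidth benchmark against which the non-peaky rates are compared is finite,

```latex
\[
  C_\infty \;=\; \lim_{B \to \infty} B \log_2\!\left(1 + \frac{P}{N_0 B}\right)
  \;=\; \frac{P}{N_0 \ln 2},
\]
```

which peaky signaling can approach on wideband fading channels, while the rate of non-peaky signals peaks at a finite bandwidth and then decays, as the abstract describes.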

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: When performing total knee replacement, accurate alignment and neutral rotation of the femoral component are widely believed to be crucial for ultimate success. Contrary to absolute bone-referenced alignment, using a ligament-balancing technique does not automatically rotate the femoral component parallel to the transepicondylar axis. In this context we established the hypothesis that rotational alignment of the femoral component parallel to the transepicondylar axis (0° ± 3°) results in a better outcome than alignment outside of this range. METHODS: We analysed 204 primary cemented mobile-bearing total knee replacements five years postoperatively. Femoral component rotation was measured on axial radiographs using the condylar twist angle (CTA). The Knee Society score, range of motion and subjective ratings documented outcome. RESULTS: In 96 knees the femoral component rotation was within the range of 0 ± 3° (neutral rotation group), and in 108 knees the five-year postoperative rotational alignment of the femoral component was outside of this range (outlier group). The postoperative CTA showed a mean of 2.8° (±3.4°) internal rotation (IR), with a range between 6° external rotation (ER) and 15° IR (95% CI). No difference with regard to subjective and objective outcome could be detected. CONCLUSION: The present work shows that there is a large natural variability in optimal rotational orientation, in this study between 6° ER and 15° IR, with numerous co-factors determining correct positioning of the femoral component. Further studies substantiating pre- and postoperative determinants are required to complete the understanding of the resulting biomechanics in primary TKA.

Relevance:

30.00%

Publisher:

Abstract:

In the context of autonomous sensors powered by small-size photovoltaic (PV) panels, this work analyses how the efficiency of DC/DC-converter-based power processing circuits can be improved by an appropriate selection of the inductor current that transfers the energy from the PV panel to a storage unit. Each component of the power losses (fixed, conduction and switching losses) involved in the DC/DC converter depends in a specific way on the average inductor current, so that there is an optimal value of this current that causes minimal losses and, hence, maximum efficiency. This idea has been tested experimentally using two commercial DC/DC converters whose average inductor current is adjustable. Experimental results show that the efficiency can be improved by up to 12% by selecting an optimal value of that current, which is around 300-350 mA for such DC/DC converters.
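The loss decomposition described above can be sketched numerically; the coefficients below are illustrative assumptions, not the measured values of the commercial converters in the paper.

```python
import numpy as np

# Toy loss model for the energy transfer at average inductor current I.
#   fixed losses:      constant bias/control consumption
#   conduction losses: grow as R * I**2
#   switching losses:  modelled as k / I (at fixed transferred power, a
#                      larger I allows a lower switching frequency)
P_fixed, R, k = 5e-3, 0.5, 0.035     # W, ohm, W*A (illustrative)

def losses(I):
    return P_fixed + R * I**2 + k / I

I = np.linspace(0.05, 1.0, 2000)     # candidate average currents (A)
I_opt = I[np.argmin(losses(I))]
print(I_opt)   # near the analytic optimum (k / (2*R))**(1/3)
```

Setting the derivative of the I-dependent terms to zero gives the closed-form optimum I = (k / (2R))^(1/3); with these invented coefficients it lands near 0.33 A, consistent in order of magnitude with the 300-350 mA reported in the abstract.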

Relevance:

30.00%

Publisher:

Abstract:

Oxygen therapy is essential for the treatment of some neonatal critical care conditions, but its extrapulmonary effects have not been adequately investigated. We therefore studied the effects of various oxygen concentrations on intestinal epithelial cell function. In order to assess the effects of hyperoxia on the intestinal immunological barrier, we studied two physiological changes in neonatal rats exposed to hyperoxia: the change in intestinal IgA secretory component (SC, an important component of SIgA) and changes in intestinal epithelial cells. Immunohistochemistry and Western blot were used to detect changes in the intestinal tissue SC of neonatal rats. To assess intestinal epithelial cell growth, cells were counted, and 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and Giemsa staining were used to assess cell survival. Immunohistochemistry was used to determine SC expression. The expression of intestinal SC in neonatal rats under hyperoxic conditions was notably increased compared with rats inhaling room air (P < 0.01). In vitro, 40% O2 was beneficial for cell growth, whereas 60% O2 and 90% O2 induced rapid cell death. Also, 40% O2 induced expression of SC by intestinal epithelial cells, whereas 60% O2 did not; moreover, 90% O2 limited the ability of intestinal epithelial cells to express SC. In vivo and in vitro, moderate hyperoxia brought about increases in intestinal SC, which would be expected to produce an increase in intestinal SIgA. High levels of SC and SIgA would serve to benefit hyperoxia-exposed individuals by helping to maintain optimal conditions in the intestinal tract.