907 results for Unconstrained minimization
Abstract:
The demand for power generation from non-renewable resources, and the associated costs, are increasing at an alarming rate. Solar energy is one of the renewable resources with the potential to mitigate this increase. Utilization of solar energy has so far been concentrated mainly on heating applications. The use of solar energy in building cooling systems would contribute greatly to the goal of minimizing non-renewable energy use. The approaches of solar energy heating system research conducted by institutions such as the University of Wisconsin at Madison, and the building heat flow modelling research conducted by Oklahoma State University, can be used to develop and optimize a solar cooling building system. This research uses two approaches to develop Graphical User Interface (GUI) software for an integrated solar absorption cooling building model, capable of simulating and optimizing an absorption cooling system that uses solar energy as the main energy source to drive the cycle. The software was then put through a series of verification tests to confirm its integrity. These tests were conducted on building cooling system data sets of similar applications from around the world. The output of the developed software was consistent with the established experimental results from the data sets used. Software developed by other research groups caters to advanced users; the software developed in this research is not only reliable in its code integrity but, through its integrated approach, also caters to new users. Hence, this dissertation aims to correctly model a complete building with an absorption cooling system in an appropriate climate as a cost-effective alternative to a conventional vapor compression system.
Abstract:
With proper application of Best Management Practices (BMPs), the impact of sediment on water bodies can be minimized. However, finding the optimal allocation of BMPs can be difficult, since there are numerous possible options. Economics also plays an important role in BMP affordability and, therefore, in the number of BMPs that can be placed in a given budget year. In this study, two methodologies are presented to determine the optimal cost-effective BMP allocation by coupling a watershed-level model, the Soil and Water Assessment Tool (SWAT), with two different methods: targeting and a multi-objective genetic algorithm (Non-dominated Sorting Genetic Algorithm II, NSGA-II). For demonstration, these two methodologies were applied to an agriculture-dominated watershed in Lower Michigan to find the optimal allocation of filter strips and grassed waterways. For targeting, three different criteria were investigated for sediment yield minimization; in the process it was found that grassed waterways near the watershed outlet reduced the watershed outlet sediment yield the most under the study conditions. Cost minimization was also included as a second objective during the cost-effective BMP allocation selection. NSGA-II was used to find the optimal BMP allocation for both sediment yield reduction and cost minimization. By comparing the results and computational time of both methodologies, targeting was determined to be the better method for finding the optimal cost-effective BMP allocation under the study conditions, since it provided more than 13 times as many solutions with better fitness for the objective functions while using less than one eighth of the SWAT computational time required by NSGA-II with 150 generations.
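As an illustration of the second methodology, the sketch below sets up a two-objective (sediment versus cost) NSGA-II search with the pymoo library. The per-field reduction and cost numbers, the linear response model and the class name BMPAllocation are placeholders standing in for the SWAT simulations used in the study.

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

class BMPAllocation(ElementwiseProblem):
    """Toy stand-in for the SWAT-coupled problem: x[i] in [0, 1] is the fraction of
    field i treated with a BMP (e.g., a filter strip)."""
    def __init__(self, n_fields=30):
        rng = np.random.default_rng(0)
        # synthetic per-field numbers standing in for SWAT output
        self.max_reduction = rng.uniform(0.5, 5.0, n_fields)   # t sediment/yr if fully treated
        self.unit_cost = rng.uniform(1_000, 10_000, n_fields)  # $ if fully treated
        super().__init__(n_var=n_fields, n_obj=2, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        remaining_sediment = self.max_reduction.sum() - np.dot(self.max_reduction, x)
        total_cost = np.dot(self.unit_cost, x)
        out["F"] = [remaining_sediment, total_cost]   # minimize both objectives

res = minimize(BMPAllocation(), NSGA2(pop_size=100), ("n_gen", 150), seed=1, verbose=False)
print(res.F[:5])  # a few points on the sediment-vs-cost Pareto front
```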
Abstract:
This thesis studies the minimization of the fuel consumption of a Hybrid Electric Vehicle (HEV) using Model Predictive Control (MPC). The presented MPC-based controller calculates an optimal sequence of control inputs to a hybrid vehicle using the measured plant outputs, the current dynamic states, a system model, system constraints, and an optimization cost function. The MPC controller is developed using the MATLAB MPC control toolbox. To evaluate the performance of the presented controller, a power-split hybrid vehicle, the 2004 Toyota Prius, is selected. The vehicle uses a planetary gear set to combine three power components, an engine, a motor, and a generator, and to transfer energy from these components to the vehicle wheels. The planetary gear model is developed based on Willis's formula. The dynamic models of the engine, the motor, and the generator are derived based on their dynamics at the planetary gear. The MPC controller for HEV energy management is validated in the MATLAB/Simulink environment. Both the step response performance (a 0-60 mph step input) and the driving cycle tracking performance are evaluated. Two standard driving cycles, the Urban Dynamometer Driving Schedule (UDDS) and the Highway Fuel Economy Driving Schedule (HWFET), are used in the evaluation tests. For the UDDS and HWFET driving cycles, the simulation results (fuel consumption and battery state of charge) obtained with the MPC controller are compared with the simulation results of the original vehicle model in Autonomie. The MPC approach demonstrates the feasibility of improving vehicle performance and minimizing fuel consumption.
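The following minimal sketch illustrates the receding-horizon idea behind such a controller with a single-state linear speed model and an unconstrained quadratic cost solved by least squares. The model parameters a and b, the horizon N and the weights q and r are arbitrary placeholders; the controller in the thesis is built with the MATLAB MPC toolbox on a full power-split model with constraints.

```python
import numpy as np

# Minimal receding-horizon tracking sketch: a 1-state linear "vehicle speed" model
# v[k+1] = a*v[k] + b*u[k], with a quadratic cost over a horizon N and no constraints.
a, b = 0.98, 0.05          # placeholder discrete-time model parameters
N = 20                     # prediction horizon
q, r = 1.0, 0.1            # tracking and control-effort weights

def mpc_step(v0, v_ref):
    """Return the first control move of the horizon-optimal unconstrained solution."""
    # Condensed prediction: v_pred = F*v0 + G*u over the horizon
    F = np.array([a ** (k + 1) for k in range(N)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = a ** (i - j) * b
    # Least squares: min_u  q*||F*v0 + G*u - v_ref||^2 + r*||u||^2
    A = np.vstack([np.sqrt(q) * G, np.sqrt(r) * np.eye(N)])
    y = np.concatenate([np.sqrt(q) * (v_ref - F * v0), np.zeros(N)])
    u = np.linalg.lstsq(A, y, rcond=None)[0]
    return u[0]

# Closed-loop simulation of a 0-60 mph style step in the reference speed
v, log = 0.0, []
for k in range(200):
    u = mpc_step(v, v_ref=60.0)
    v = a * v + b * u
    log.append(v)
print(f"speed after 200 steps: {log[-1]:.1f}")
```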
Abstract:
PMR-15 polyimide is a polymer used as a matrix in composites. Composites with PMR-15 matrices are called advanced polymer matrix composites and are widely used in the aerospace and electronics industries because of their high temperature resistance. Apart from high temperature capability, PMR-15 composites also display good thermal-oxidative stability, mechanical properties, processability and low cost, which makes them suitable for manufacturing aircraft structures. PMR-15 crosslinks via the reverse Diels-Alder (RDA) reaction, which provides the groundwork for its distinctive thermal stability and a use temperature range of 280-300 degrees Centigrade. Regardless of such desirable properties, this material has a number of limitations that compromise its large-scale application. PMR-15 composites are known to be very vulnerable to inter- and intra-laminar micro-cracking. But the major factor that hinders its demand is PMR-15's carcinogenic constituent, methylene dianiline (MDA), which is also a liver toxin. The necessity of providing a safe working environment during its production adds to the cost of this material. In this study, Molecular Dynamics and Energy Minimization techniques are utilized to simulate a structure of PMR-15 at a given density of 1.324 g/cc, in an attempt to recreate the polyimide so as to reduce the amount of experimental testing and hence mitigate the health hazards as well as the costs involved in its production. Even though this study does not validate any mechanical properties of the model, it could be used in the future for the validation of such properties and for further testing of properties like aging, microcracking and creep.
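As a generic illustration of the energy-minimization step (not the force field or structure used in the study), the sketch below relaxes a small Lennard-Jones cluster by unconstrained minimization of its potential energy with SciPy; the cluster size and the epsilon/sigma parameters are arbitrary, and a real PMR-15 model would use a bonded polymer force field with many more atoms.

```python
import numpy as np
from scipy.optimize import minimize

EPS, SIG = 1.0, 1.0   # placeholder Lennard-Jones parameters

def lj_energy(flat_coords):
    """Total pairwise Lennard-Jones energy of a set of 3-D coordinates."""
    x = flat_coords.reshape(-1, 3)
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    r = d[np.triu_indices(len(x), k=1)]           # unique pair distances
    return np.sum(4 * EPS * ((SIG / r) ** 12 - (SIG / r) ** 6))

rng = np.random.default_rng(42)
grid = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)], dtype=float)
x0 = (1.2 * grid + rng.normal(0.0, 0.05, size=grid.shape)).ravel()   # jittered cube start
res = minimize(lj_energy, x0, method="L-BFGS-B")  # gradient approximated numerically
print(f"initial energy {lj_energy(x0):.3f} -> minimized energy {res.fun:.3f}")
```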
Abstract:
When we actively explore the visual environment, our gaze preferentially selects regions characterized by high contrast and high density of edges, suggesting that the guidance of eye movements during visual exploration is driven to a significant degree by perceptual characteristics of a scene. Converging findings suggest that the selection of the visual target for the upcoming saccade critically depends on a covert shift of spatial attention. However, it is unclear whether attention selects the location of the next fixation uniquely on the basis of global scene structure or additionally on local perceptual information. To investigate the role of spatial attention in scene processing, we examined eye fixation patterns of patients with spatial neglect during unconstrained exploration of natural images and compared these to those of healthy and brain-injured control participants. We computed luminance, colour, contrast, and edge information contained in image patches surrounding each fixation and evaluated whether they differed from randomly selected image patches. At the global level, neglect patients showed the characteristic ipsilesional shift of the distribution of their fixations. At the local level, patients with neglect and control participants fixated image regions in ipsilesional space that were closely similar in their local feature content. In contrast, when directing their gaze to contralesional (impaired) space, neglect patients fixated regions of significantly higher local luminance and lower edge content than controls. These results suggest that intact spatial attention is necessary for the active sampling of local feature content during scene perception.
Abstract:
Background: Deterministic evolution, phylogenetic contingency and evolutionary chance each can influence patterns of morphological diversification during adaptive radiation. In comparative studies of replicate radiations, convergence in a common morphospace implicates determinism, whereas non-convergence suggests the importance of contingency or chance. Methodology/Principal Findings: The endemic cichlid fish assemblages of the three African great lakes have evolved similar sets of ecomorphs but show evidence of non-convergence when compared in a common morphospace, suggesting the importance of contingency and/or chance. We then analyzed the morphological diversity of each assemblage independently and compared their axes of diversification in the unconstrained global morphospace. We find that despite differences in phylogenetic composition, invasion history, and ecological setting, the three assemblages are diversifying along parallel axes through morphospace and have nearly identical variance-covariance structures among morphological elements. Conclusions/Significance: By demonstrating that replicate adaptive radiations are diverging along parallel axes, we have shown that non-convergence in the common morphospace is associated with convergence in the global morphospace. Applying these complementary analyses to future comparative studies will improve our understanding of the relationship between morphological convergence and non-convergence, and the roles of contingency, chance and determinism in driving morphological diversification.
Abstract:
Pedicle hooks which are used as an anchorage for posterior spinal instrumentation may be subjected to considerable three-dimensional forces. In order to achieve stronger attachment to the implantation site, hooks using screws for additional fixation have been developed. The failure loads and mechanisms of three such devices have been experimentally determined on human thoracic vertebrae: the Universal Spine System (USS) pedicle hook with one screw, a prototype pedicle hook with two screws and the Cotrel-Dubousset (CD) pedicle hook with screw. The USS hooks use 3.2-mm self-tapping fixation screws which pass into the pedicle, whereas the CD hook is stabilised with a 3-mm set screw pressing against the superior part of the facet joint. A clinically established 5-mm pedicle screw was tested for comparison. A matched pair experimental design was implemented to evaluate these implants in constrained (series I) and rotationally unconstrained (series II) posterior pull-out tests. In the constrained tests the pedicle screw was the strongest implant, with an average pull-out force of 1650 N (SD 623 N). The prototype hook was comparable, with an average failure load of 1530 N (SD 414 N). The average pull-out force of the USS hook with one screw was 910 N (SD 243 N), not significantly different from the CD hook's average failure load of 740 N (SD 189 N). The results of the unconstrained tests were similar, with the prototype hook being the strongest device (average 1617 N, SD 652 N). However, in this series the difference in failure load between the USS hook with one screw and the CD hook was significant. Average failure loads of 792 N (SD 184 N) for the USS hook and 464 N (SD 279 N) for the CD hook were measured. A pedicular fracture in the plane of the fixation screw was the most common failure mode for USS hooks. (ABSTRACT TRUNCATED AT 250 WORDS)
Abstract:
We present an algorithm for estimating dense image correspondences. Our versatile approach lends itself to various tasks typical for video post-processing, including image morphing, optical flow estimation, stereo rectification, disparity/depth reconstruction, and baseline adjustment. We incorporate recent advances in feature matching, energy minimization, stereo vision, and data clustering into our approach. At the core of our correspondence estimation we use Efficient Belief Propagation for energy minimization. While state-of-the-art algorithms only work on thumbnail-sized images, our novel feature downsampling scheme, in combination with a simple yet efficient data term compression, can cope with high-resolution data. The incorporation of SIFT (Scale-Invariant Feature Transform) features into data term computation further resolves matching ambiguities, making long-range correspondence estimation possible. We detect occluded areas by evaluating the correspondence symmetry and further apply Geodesic matting to automatically determine plausible values in these regions.
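To make the energy-minimization core concrete, the sketch below runs min-sum belief propagation on a single scanline (a chain), where a forward and a backward pass of messages yield exact min-marginals. The synthetic data cost, the truncated-linear smoothness term and all weights are placeholders; the actual method runs Efficient Belief Propagation on 2-D grids with SIFT-augmented data terms.

```python
import numpy as np

rng = np.random.default_rng(0)
W, L = 64, 16                                   # scanline width, number of labels
true_label = (np.arange(W) // 8) % L            # piecewise-constant ground truth
data_cost = rng.uniform(0.0, 1.0, size=(W, L))  # noisy unary costs...
data_cost[np.arange(W), true_label] -= 0.6      # ...biased toward the true label

d = np.arange(L)
V = 0.3 * np.minimum(np.abs(d[:, None] - d[None, :]), 3)  # truncated-linear pairwise cost

fwd = np.zeros((W, L))
bwd = np.zeros((W, L))
for i in range(1, W):            # forward messages: minimize over the previous pixel's labels
    fwd[i] = np.min(data_cost[i - 1][:, None] + fwd[i - 1][:, None] + V, axis=0)
for i in range(W - 2, -1, -1):   # backward messages (V is symmetric)
    bwd[i] = np.min(data_cost[i + 1][:, None] + bwd[i + 1][:, None] + V, axis=0)

belief = data_cost + fwd + bwd                  # exact min-marginals on a chain
labels = belief.argmin(axis=1)                  # per-pixel label with minimum total cost
print("label errors:", int(np.sum(labels != true_label)))
```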
Abstract:
Inventory management is being assigned steadily increasing importance in companies. The ability to reduce costs through efficient inventory management is important to many companies with regard to long-term business success. Inventory management often focuses on fast-moving materials, which are characterized by short coverage periods and high inventory turnover. The potential of systematically managing slow-moving materials has not yet been investigated. This paper takes up this topic and contributes to inventory management for slow-moving materials.
Abstract:
In this article, we perform an extensive study of flavor observables in a two-Higgs-doublet model with generic Yukawa structure (of type III). This model is interesting not only because it is the decoupling limit of the minimal supersymmetric standard model but also because of its rich flavor phenomenology, which allows for sizable effects not only in flavor-changing neutral-current (FCNC) processes but also in tauonic B decays. We examine the possible effects in flavor physics and constrain the model both from tree-level processes and from loop observables. The free parameters of the model are the heavy Higgs mass, tanβ (the ratio of vacuum expectation values) and the "nonholomorphic" Yukawa couplings ϵ^f_ij (f = u, d, ℓ). In our analysis we constrain the elements ϵ^f_ij in various ways: In a first step we give order-of-magnitude constraints on ϵ^f_ij from 't Hooft's naturalness criterion, finding that all ϵ^f_ij must be rather small unless the third generation is involved. In a second step, we constrain the Yukawa structure of the type-III two-Higgs-doublet model from tree-level FCNC processes (Bs,d→μ+μ−, KL→μ+μ−, D̄0→μ+μ−, ΔF=2 processes, τ−→μ−μ+μ−, τ−→e−μ+μ− and μ−→e−e+e−) and observe that all flavor off-diagonal elements of these couplings, except ϵ^u_32,31 and ϵ^u_23,13, must be very small in order to satisfy the current experimental bounds. In a third step, we consider Higgs-mediated loop contributions to FCNC processes [b→s(d)γ, Bs,d mixing, K–K̄ mixing and μ→eγ], finding that also ϵ^u_13 and ϵ^u_23 must be very small, while the bounds on ϵ^u_31 and ϵ^u_32 are especially weak. Furthermore, considering the constraints from electric dipole moments we obtain constraints on some of the parameters ϵ^{u,ℓ}_ij. Taking into account the constraints from FCNC processes, we study the size of possible effects in the tauonic B decays (B→τν, B→Dτν and B→D∗τν) as well as in D(s)→τν, D(s)→μν, K(π)→eν, K(π)→μν and τ→K(π)ν, which are all sensitive to tree-level charged Higgs exchange. Interestingly, the unconstrained ϵ^u_32,31 are just the elements which directly enter the branching ratios for B→τν, B→Dτν and B→D∗τν. We show that they can explain the deviations from the SM predictions in these processes without fine-tuning. Furthermore, B→τν, B→Dτν and B→D∗τν can even be explained simultaneously. Finally, we give upper limits on the branching ratios of the lepton-flavor-violating neutral B meson decays (Bs,d→μe, Bs,d→τe and Bs,d→τμ) and correlate the radiative lepton decays (τ→μγ, τ→eγ and μ→eγ) to the corresponding neutral-current lepton decays (τ−→μ−μ+μ−, τ−→e−μ+μ− and μ−→e−e+e−). A detailed Appendix contains all relevant information for the considered processes for general scalar-fermion-fermion couplings.
Abstract:
Nonlinear computational analysis of materials showing elasto-plasticity or damage relies on knowledge of their yield behavior and strengths under complex stress states. In this work, a generalized anisotropic quadric yield criterion is proposed that is homogeneous of degree one and takes a convex quadric shape with a smooth transition from ellipsoidal to cylindrical or conical surfaces. If in the case of material identification, the shape of the yield function is not known a priori, a minimization using the quadric criterion will result in the optimal shape among the convex quadrics. The convexity limits of the criterion and the transition points between the different shapes are identified. Several special cases of the criterion for distinct material symmetries such as isotropy, cubic symmetry, fabric-based orthotropy and general orthotropy are presented and discussed. The generality of the formulation is demonstrated by showing its degeneration to several classical yield surfaces like the von Mises, Drucker–Prager, Tsai–Wu, Liu, generalized Hill and classical Hill criteria under appropriate conditions. Applicability of the formulation for micromechanical analyses was shown by transformation of a criterion for porous cohesive-frictional materials by Maghous et al. In order to demonstrate the advantages of the generalized formulation, bone is chosen as an example material, since it features yield envelopes with different shapes depending on the considered length scale. A fabric- and density-based quadric criterion for the description of homogenized material behavior of trabecular bone is identified from uniaxial, multiaxial and torsional experimental data. Also, a fabric- and density-based Tsai–Wu yield criterion for homogenized trabecular bone from in silico data is converted to an equivalent quadric criterion by introduction of a transformation of the interaction parameters. Finally, a quadric yield criterion for lamellar bone at the microscale is identified from a nanoindentation study reported in the literature, thus demonstrating the applicability of the generalized formulation to the description of the yield envelope of bone at multiple length scales.
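For orientation, a generic convex quadric in stress space and one common way to obtain a criterion from it that is homogeneous of degree one can be written as follows; the notation (fourth-order tensor A, second-order tensor B) is illustrative and not necessarily the exact parametrization proposed in this work.

```latex
% Illustrative quadric yield surface and an associated degree-one homogeneous criterion.
% \mathbb{A} is a positive semi-definite fourth-order tensor, \mathbf{B} a second-order tensor.
\begin{align}
  \text{quadric surface:}\quad
    & \boldsymbol{\sigma} : \mathbb{A} : \boldsymbol{\sigma}
      + \mathbf{B} : \boldsymbol{\sigma} - 1 = 0, \\
  \text{degree-one criterion:}\quad
    & Y(\boldsymbol{\sigma})
      = \sqrt{\boldsymbol{\sigma} : \mathbb{A} : \boldsymbol{\sigma}}
      + \mathbf{B} : \boldsymbol{\sigma} = 1,
    \qquad Y(\lambda\boldsymbol{\sigma}) = \lambda\, Y(\boldsymbol{\sigma})
    \;\;\text{for}\;\lambda \ge 0.
\end{align}
```

In this illustrative form, B = 0 with a deviatoric A gives a von Mises-type ellipsoidal cylinder, while adding a pressure-proportional linear term yields Drucker-Prager-type cones, in line with the degenerate cases listed above.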
Abstract:
We consider the problem of fitting a union of subspaces to a collection of data points drawn from one or more subspaces and corrupted by noise and/or gross errors. We pose this problem as a non-convex optimization problem, where the goal is to decompose the corrupted data matrix as the sum of a clean and self-expressive dictionary plus a matrix of noise and/or gross errors. By self-expressive we mean a dictionary whose atoms can be expressed as linear combinations of themselves with low-rank coefficients. In the case of noisy data, our key contribution is to show that this non-convex matrix decomposition problem can be solved in closed form from the SVD of the noisy data matrix. The solution involves a novel polynomial thresholding operator on the singular values of the data matrix, which requires minimal shrinkage. For one subspace, a particular case of our framework leads to classical PCA, which requires no shrinkage. For multiple subspaces, the low-rank coefficients obtained by our framework can be used to construct a data affinity matrix from which the clustering of the data according to the subspaces can be obtained by spectral clustering. In the case of data corrupted by gross errors, we solve the problem using an alternating minimization approach, which combines our polynomial thresholding operator with the more traditional shrinkage-thresholding operator. Experiments on motion segmentation and face clustering show that our framework performs on par with state-of-the-art techniques at a reduced computational cost.
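The pipeline can be sketched as follows under simplifying assumptions: the SVD of the noisy data is thresholded (a plain hard threshold stands in for the paper's polynomial thresholding operator), the low-rank self-expressive coefficients are taken as the classical shape-interaction matrix C = V_r V_r^T, and the affinity |C| + |C|^T is clustered spectrally. The synthetic subspaces, noise level and threshold tau are placeholders.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)

def sample_subspace(dim, n_points, ambient=30):
    """Draw points from a random low-dimensional subspace of the ambient space."""
    basis = np.linalg.qr(rng.normal(size=(ambient, dim)))[0]
    return basis @ rng.normal(size=(dim, n_points))

X = np.hstack([sample_subspace(3, 40), sample_subspace(3, 40)])  # two 3-D subspaces
X += 0.02 * rng.normal(size=X.shape)                             # small additive noise

U, s, Vt = np.linalg.svd(X, full_matrices=False)
tau = 1.0                                       # singular-value threshold (placeholder)
r = int(np.sum(s > tau))                        # estimated rank after thresholding
C = Vt[:r].T @ Vt[:r]                           # low-rank self-expressive coefficients
A = np.abs(C) + np.abs(C.T)                     # symmetric data affinity matrix

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
print(np.bincount(labels[:40]), np.bincount(labels[40:]))  # cluster counts per subspace
```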
Abstract:
A patient classification system was developed integrating a patient acuity instrument with a computerized nursing distribution method based on a linear programming model. The system was designed for real-time measurement of patient acuity (workload) and allocation of nursing personnel to optimize the utilization of resources. The acuity instrument was a prototype tool with eight categories of patients defined by patient severity and nursing intensity parameters. From this tool, the demand for nursing care was defined in patient points, with one point equal to one hour of RN time. Validity and reliability of the instrument were determined as follows: (1) content validity by a panel of expert nurses; (2) predictive validity through a paired t-test analysis of preshift and postshift categorization of patients; (3) initial reliability by a one-month pilot of the instrument in a practice setting; and (4) interrater reliability by the Kappa statistic. The nursing distribution system was a linear programming model using a branch and bound technique for obtaining integer solutions. The objective function was to minimize the total number of nursing personnel used by optimally assigning the staff to meet the acuity needs of the units. A penalty weight was used as a coefficient of the objective function variables to define priorities for allocation of staff. The demand constraints were requirements to meet the total acuity points needed for each unit and to have a minimum number of RNs on each unit. Supply constraints were: (1) the total availability of each type of staff and the value of that staff member, where value was determined relative to that type of staff's ability to perform the job functions of an RN (e.g., the value of eight hours of an RN = 8 points, of an LVN = 6 points); and (2) the number of personnel available for floating between units. The capability of the model to assign staff quantitatively and qualitatively equal to the manual method was established by a thirty-day comparison. Sensitivity testing demonstrated appropriate adjustment of the optimal solution to changes in penalty coefficients in the objective function and to acuity totals in the demand constraints. Further investigation of the model documented correct adjustment of assignments in response to staff value changes, and cost minimization by the addition of a dollar coefficient to the objective function.
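A toy version of such an allocation model, with made-up point values, demands, availabilities and penalty weights (i.e., not the study's data), can be posed as a small integer linear program and solved with SciPy's MILP interface:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Two units and three staff types (RN, LVN, aide). Variables x[u, t] = number of staff
# of type t assigned to unit u. All numbers below are illustrative placeholders.
n_units, types = 2, ["RN", "LVN", "AIDE"]
value = np.array([8, 6, 4])              # acuity points one person covers per 8-hour shift
demand = np.array([30, 22])              # acuity points required on each unit
available = np.array([4, 3, 3])          # staff available of each type
penalty = np.array([1.0, 1.1, 1.2])      # allocation priority weights per type

n = n_units * len(types)                 # x is flattened as [unit0 RN, LVN, AIDE, unit1 ...]
c = np.tile(penalty, n_units)            # minimize penalty-weighted headcount

A_demand = np.zeros((n_units, n))        # acuity coverage per unit (>= demand)
A_minrn = np.zeros((n_units, n))         # at least one RN per unit
for u in range(n_units):
    A_demand[u, u * 3:(u + 1) * 3] = value
    A_minrn[u, u * 3] = 1
A_supply = np.zeros((len(types), n))     # total use of each staff type (<= available)
for t in range(len(types)):
    A_supply[t, t::3] = 1

constraints = [
    LinearConstraint(A_demand, lb=demand, ub=np.inf),
    LinearConstraint(A_minrn, lb=1, ub=np.inf),
    LinearConstraint(A_supply, lb=0, ub=available),
]
res = milp(c, constraints=constraints, integrality=np.ones(n),
           bounds=Bounds(lb=0, ub=np.inf))
print(res.x.reshape(n_units, len(types)))   # staff of each type assigned to each unit
```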
Abstract:
Interior ice elevations of the West Antarctic Ice Sheet (WAIS) during the last glaciation, which can serve as benchmarks for ice-sheet models, are largely unconstrained. Here we report past ice elevation data from the Ohio Range, located near the WAIS divide and the onset region of the Mercer Ice Stream. Cosmogenic exposure ages of glacial erratics that record a WAIS highstand ~125 m above the present surface date to ~11.5 ka. The deglacial chronology prohibits an interior WAIS contribution to meltwater pulse 1A. Our observational data of ice elevation changes compare well with predictions of a thermomechanical ice-sheet model that incorporates very low basal shear stress downstream of the present day grounding line. We conclude that ice streams in the Ross Sea Embayment had thin, low-slope profiles during the last glaciation and interior WAIS ice elevations during this period were several hundred meters lower than previous reconstructions.
Abstract:
In this work, we propose a distributed rate allocation algorithm that minimizes the average decoding delay for multimedia clients in inter-session network coding systems. We consider a scenario where the users are organized in a mesh network and each user requests the content of one of the available sources. We propose a novel distributed algorithm where network users determine the coding operations and the packet rates to be requested from the parent nodes, such that the decoding delay is minimized for all clients. A rate allocation problem is solved by every user, which seeks the rates that minimize the average decoding delay for its children and for itself. Since this optimization problem is a priori non-convex, we introduce the concept of equivalent packet flows, which permits estimating the expected number of packets that every user needs to collect for decoding. We then decompose our original rate allocation problem into a set of convex subproblems, which are eventually combined to obtain an effective approximate solution to the delay minimization problem. The results demonstrate that the proposed scheme eliminates bottlenecks and reduces the decoding delay experienced by users with limited bandwidth resources. We validate the performance of our distributed rate allocation algorithm in different video streaming scenarios using the NS-3 network simulator. We show that our system is able to take advantage of inter-session network coding for simultaneous delivery of video sessions in networks with path diversity.
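As a simplified illustration of one such convex subproblem (hypothetical, and not the paper's exact formulation), suppose a node approximates the decoding delay of child i as N_i/r_i, the number of packets still needed divided by the allocated rate, and minimizes the summed delay subject to a total rate budget C. The Lagrangian conditions then give the closed-form allocation r_i = C·sqrt(N_i)/Σ_j sqrt(N_j), which the sketch below checks numerically; the delay model, the packet counts N_i and the budget C are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize

N = np.array([20.0, 50.0, 80.0])   # packets each child still needs (made-up numbers)
C = 10.0                           # total rate budget at the parent (made-up)

# Closed form: delay sum(N_i / r_i) under sum(r_i) = C is minimized by r_i ~ sqrt(N_i)
r_closed = C * np.sqrt(N) / np.sqrt(N).sum()

def total_delay(r_free):
    """Summed delay with the budget enforced by substitution of the last rate."""
    r = np.append(r_free, C - r_free.sum())
    return np.inf if np.any(r <= 0) else np.sum(N / r)

res = minimize(total_delay, x0=np.array([C / 3, C / 3]), method="Nelder-Mead")
r_numeric = np.append(res.x, C - res.x.sum())
print("closed form :", np.round(r_closed, 3))
print("numerical   :", np.round(r_numeric, 3))
```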