989 results for Programming, Linear, utilization


Relevance:

30.00%

Publisher:

Abstract:

We present new methodologies to generate rational function approximations of broadband electromagnetic responses of linear and passive networks of high-speed interconnects, and to construct SPICE-compatible, equivalent circuit representations of the generated rational functions. These new methodologies are driven by the desire to improve the computational efficiency of the rational function fitting process, and to ensure enhanced accuracy of the generated rational function interpolation and its equivalent circuit representation. Toward this goal, we propose two new methodologies for rational function approximation of high-speed interconnect network responses. The first relies on the use of both time-domain and frequency-domain data, obtained either through measurement or numerical simulation, to generate a rational function representation that extrapolates the input, early-time transient response data to the late-time response while at the same time providing a means to both interpolate and extrapolate the used frequency-domain data. This hybrid methodology can be considered a generalization of frequency-domain rational function fitting, which utilizes frequency-domain response data only, and of time-domain rational function fitting, which utilizes transient response data only. In this context, a guideline is proposed for estimating the order of the rational function approximation from transient data. The availability of such an estimate expedites the time-domain rational function fitting process. The second approach relies on the extraction of the delay associated with causal electromagnetic responses of interconnect systems to provide a more stable rational function fitting process utilizing a lower-order rational function interpolation. A distinctive feature of the proposed methodology is its utilization of scattering parameters. For both methodologies, the approach of fitting the electromagnetic network matrix one element at a time is applied. It is shown that, with regard to the computational cost of the rational function fitting process, such element-by-element rational function fitting is more advantageous than full matrix fitting for systems with a large number of ports. Despite the disadvantage that different sets of poles are used in the rational functions of different elements of the network matrix, such an approach provides improved accuracy in the fitting of network matrices of systems characterized by both strongly coupled and weakly coupled ports. Finally, in order to provide a means for enforcing passivity in the adopted element-by-element rational function fitting approach, the methodology for passivity enforcement via quadratic programming is modified appropriately for this purpose and demonstrated in the context of element-by-element rational function fitting of the admittance matrix of an electromagnetic multiport.
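For orientation only, and not as the thesis methodology itself: once a set of trial poles is fixed, the residues of a pole-residue rational model can be identified from sampled frequency-domain data by complex linear least squares, and in an element-by-element scheme this step is simply repeated for each entry of the network matrix. A minimal Python sketch of that residue-identification step (function name and inputs are assumptions):

import numpy as np

# Fit residues r_k and constant term d of H(s) ~ sum_k r_k/(s - p_k) + d,
# with s = j*2*pi*f, to sampled frequency-domain data by complex least squares.
def fit_residues(freqs_hz, H_samples, poles):
    s = 2j * np.pi * np.asarray(freqs_hz, dtype=float)
    A = np.column_stack([1.0 / (s[:, None] - np.asarray(poles)), np.ones_like(s)])
    x, *_ = np.linalg.lstsq(A, np.asarray(H_samples), rcond=None)
    return x[:-1], x[-1]   # residues r_k, constant term d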

Relevance:

30.00%

Publisher:

Abstract:

Game theory models strategies among agents (players), who receive payoffs at the end of the game according to their actions. The best pair of strategies for the players constitutes an equilibrium solution. However, the problem data cannot always be estimated precisely. For that reason, the uncertain parameters present in game models are formalized through fuzzy theory. Fuzzy theory thus supports game theory, giving rise to fuzzy games, in which parameters such as the payoffs become fuzzy numbers. Moreover, when there is uncertainty in the representation of these fuzzy numbers, interval fuzzy numbers are used. In this work, interval fuzzy game models are analyzed and computational methods are developed to solve these games. Finally, linear programming simulations are carried out to better observe the application of the theories studied and to evaluate the proposal.
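As background for the linear programming simulations mentioned above, and only for the crisp (non-fuzzy) case: the value and an optimal mixed strategy of a zero-sum matrix game can be obtained with a single linear program. A minimal sketch using scipy (the interval-fuzzy models of the work would require additional machinery on top of this):

import numpy as np
from scipy.optimize import linprog

# Zero-sum matrix game: maximize the game value v subject to
# sum_i x_i * A[i, j] >= v for every column j, with x a probability vector.
def solve_matrix_game(A):
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    c = np.zeros(m + 1); c[-1] = -1.0                      # minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])              # v - (A^T x)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]                            # strategy, game value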

Relevance:

30.00%

Publisher:

Abstract:

Municipal management in any country requires planning and a balanced allocation of resources. In Brazil, the Law of Budgetary Guidelines (LDO) guides municipal managers toward that balance. This research develops a model that seeks a balanced allocation of public resources in Brazilian municipalities, taking the LDO as a parameter. As a first step, statistical techniques and multicriteria analysis are used to define allocation strategies based on technical criteria provided by the municipal manager. In a second step, a linear programming-based optimization is presented in which the objective function is derived from the preferences of the manager and his staff. The statistical analysis supports the multicriteria step in the definition of replacement rates through time series. The multicriteria analysis was structured by defining the criteria and alternatives and by applying the UTASTAR method to calculate the replacement rates. After these initial settings, a linear programming application was developed to find the optimal allocation of the resources for executing the municipal budget. Budget data from a municipality in southwestern Paraná were used in the application of the model and in the analysis of the results.
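As a rough illustration of the optimization step described above (all numbers and weights below are invented placeholders, not the UTASTAR-derived rates or the Paraná municipality's data), a budget-allocation linear program can be written with scipy as follows:

from scipy.optimize import linprog

weights = [0.4, 0.35, 0.25]          # assumed preference weights per spending area
total_budget = 1_000_000.0
lower = [0.15 * total_budget] * 3    # assumed minimum allocation per area
upper = [0.60 * total_budget] * 3    # assumed maximum allocation per area

c = [-w for w in weights]            # linprog minimizes, so negate to maximize
A_eq = [[1.0, 1.0, 1.0]]             # allocations must add up to the budget
b_eq = [total_budget]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=list(zip(lower, upper)))
print(res.x)                         # optimal allocation per area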

Relevance:

30.00%

Publisher:

Abstract:

Declarative techniques such as Constraint Programming can be very effective in modeling and assisting management decisions. We present a method for managing university classrooms which extends the previous design of a Constraint-Informed Information System to generate the timetables while dealing with spatial resource optimization issues. We seek to maximize space utilization along two dimensions: classroom use and occupancy rates. While we want to maximize the room use rate, we still need to satisfy the soft constraints which model students' and lecturers' preferences. We present a constraint logic programming-based local search method which relies on an evaluation function that combines room utilization and timetable soft preferences. Based on this, we developed a tool which we applied to the improvement of classroom allocation at a university. Comparing the results to the current timetables, obtained without optimizing space utilization, the initial versions of our tool manage to reach a 30% improvement in space utilization while preserving the quality of the timetable, both for students and lecturers.
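To make the shape of such an evaluation function concrete (the names, weights, and aggregation below are assumptions, not the actual tool's function), a sketch could look like this:

# Score a candidate timetable: reward room use and occupancy, penalize
# violated soft preferences of students and lecturers.
def evaluate(assignments, rooms, w_space=0.6, w_soft=0.4):
    # assignments: list of (class_size, room_id, soft_penalty); rooms: room_id -> capacity
    used = {room_id for _, room_id, _ in assignments}
    room_use_rate = len(used) / len(rooms)
    occupancy_rate = sum(size / rooms[rid] for size, rid, _ in assignments) / len(assignments)
    soft_penalty = sum(p for _, _, p in assignments)
    return w_space * (room_use_rate + occupancy_rate) / 2 - w_soft * soft_penalty

rooms = {"R1": 40, "R2": 100}
print(evaluate([(35, "R1", 0.0), (60, "R2", 1.0)], rooms))

A local search would then propose room or slot swaps and keep the ones that raise this score without violating any hard constraint.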

Relevance:

30.00%

Publisher:

Abstract:

The growing concentration of CO2 in the atmosphere and its harmful consequences have led the scientific community to direct its efforts towards sustainable processes. Among the possible approaches, the use of CO2 and of alternative solvents are two strategies gaining widespread diffusion. In this work, CO2 is reused both as a reaction reagent and as a trigger to change the physical properties of a catalyst, thereby facilitating its recovery. Regarding the use of CO2 as a reagent, two catalytic systems have been developed for the conversion of CO2 and epoxides into cyclic carbonates, which are used in the synthesis of polymers and as aprotic solvents. Homogeneous catalysts based on choline-derived eutectic mixtures and heterogeneous catalysts made from biopolymers and waste pyrolysis were synthesized and tested on this reaction. The carbonate interchange reaction (CIR) of a diol with a linear carbonate (such as dimethyl carbonate) is an interesting alternative for the synthesis of cyclic carbonates. As the second application, CO2 was used as a polarity trigger for catalyst recovery: DBU, used here as a catalyst, belongs to the so-called "switchable solvents", which pass from a less polar to a more polar form (and from being soluble to insoluble in the reaction mixture) when reacting with CO2 in the presence of water or alcohols. Also in this case, heterogeneous catalysts made from biopolymers and waste pyrolysis were synthesized and tested on the CIR. As for the use of alternative solvents, this work focuses on Deep Eutectic Solvents (DESs), a new generation of solvents composed of a mixture of two or more substances, liquid at room temperature and non-volatile. New and biobased DESs were used here: i) as reaction media for chemoenzymatic epoxidation; ii) in the extraction of astaxanthin from a microalgae culture.

Relevance:

30.00%

Publisher:

Abstract:

The study of ancient, undeciphered scripts presents unique challenges that depend both on the nature of the problem and on the peculiarities of each writing system. In this thesis, I present two computational approaches that are tailored to two different tasks and writing systems. The first of these methods is aimed at the decipherment of the Linear A fraction signs, in order to discover their numerical values. This is achieved with a combination of constraint programming, ad hoc metrics and paleographic considerations. The second main contribution of this thesis regards the creation of an unsupervised deep learning model which uses drawings of signs from an ancient writing system to learn to distinguish different graphemes in the vector space. This system, which is based on techniques used in the field of computer vision, is adapted to the study of ancient writing systems by incorporating information about sequences into the model, mirroring what is often done in natural language processing. In order to develop this model, the Cypriot Greek Syllabary is used as a target, since it is a deciphered writing system. Finally, this unsupervised model is adapted to the undeciphered Cypro-Minoan and used to answer open questions about this script. In particular, by reconstructing multiple allographs that are not agreed upon by paleographers, it supports the idea that Cypro-Minoan is a single script and not a collection of three scripts, as has been proposed in the literature. These results on two different tasks show that computational methods can be applied to undeciphered scripts, despite the relatively low amount of available data, paving the way for further advances in paleography using these methods.
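To illustrate the flavor of a constraint-based search for fraction-sign values (the signs, candidate values, and attested combinations below are invented toy data, not the Linear A corpus or the thesis constraints), a brute-force constraint check in plain Python:

from fractions import Fraction
from itertools import permutations

signs = ["A", "B", "C"]                                    # hypothetical fraction signs
candidates = [Fraction(1, d) for d in (2, 3, 4, 5, 6, 8)]  # candidate numerical values
attested = [["A", "B"], ["B", "C", "C"]]                   # invented sign combinations

# Keep only assignments under which every attested combination sums to at most one unit.
def consistent(assignment):
    return all(sum(assignment[s] for s in combo) <= 1 for combo in attested)

solutions = [dict(zip(signs, values))
             for values in permutations(candidates, len(signs))
             if consistent(dict(zip(signs, values)))]
print(len(solutions), "assignments survive the constraints")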

Relevance:

30.00%

Publisher:

Abstract:

Modern High-Performance Computing (HPC) systems are gradually increasing in size and complexity due to the corresponding demand for larger simulations requiring more complicated tasks and higher accuracy. However, as a side effect of Dennard scaling approaching its ultimate power limit, the efficiency of software also plays an important role in increasing the overall performance of a computation. Tools to measure application performance in these increasingly complex environments provide insights into the intricate ways in which software and hardware interact. Monitoring the power consumption in order to save energy is possible through processor interfaces such as Intel Running Average Power Limit (RAPL). Given the low level of these interfaces, they are often paired with an application-level tool like the Performance Application Programming Interface (PAPI). Since several problems in many heterogeneous fields can be represented as a complex linear system, an optimized and scalable linear system solver algorithm can significantly decrease the time spent computing its solution. One of the most widely used algorithms for the resolution of large simulations is Gaussian Elimination, whose most popular implementation for HPC systems is found in the Scalable Linear Algebra PACKage (ScaLAPACK) library. However, another relevant algorithm, which is gaining popularity in the academic field, is the Inhibition Method. This thesis compares the energy consumption of the Inhibition Method and of Gaussian Elimination from ScaLAPACK by profiling their execution during the resolution of linear systems on the HPC architecture offered by CINECA. Moreover, it also collates the energy and power values for different rank, node, and socket configurations. The monitoring tools employed to track the energy consumption of these algorithms are PAPI and RAPL, which are integrated with the parallel execution of the algorithms managed with the Message Passing Interface (MPI).
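For context on how such energy readings can be taken (a generic sketch, not the instrumentation used in the thesis): on Linux, package-level RAPL counters are exposed through the powercap interface, and an energy delta can be read around the region of interest. The dummy solve below is a stand-in; reading the counter requires the intel_rapl driver and suitable permissions, and the value wraps around at max_energy_range_uj.

import numpy as np

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"   # package 0 energy counter, microjoules

def read_energy_uj():
    with open(RAPL) as f:
        return int(f.read())

A = np.random.rand(2000, 2000)
b = np.random.rand(2000)
start = read_energy_uj()
x = np.linalg.solve(A, b)        # stand-in for the ScaLAPACK / Inhibition Method solve
end = read_energy_uj()
print("energy consumed: %.3f J" % ((end - start) / 1e6))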

Relevance:

20.00%

Publisher:

Abstract:

This study investigated the effect of simulated microwave disinfection (SMD) on the linear dimensional changes, hardness and impact strength of acrylic resins under different polymerization cycles. Metal dies with referential points were embedded in flasks with dental stone. Samples of Classico and Vipi acrylic resins were made following the manufacturers' recommendations. The assessed polymerization cycles were: A) water bath at 74 ºC for 9 h; B) water bath at 74 ºC for 8 h and temperature increased to 100 ºC for 1 h; C) water bath at 74 ºC for 2 h and temperature increased to 100 ºC for 1 h; and D) water bath at 120 ºC and pressure of 60 pounds. Linear dimensional distances in length and width were measured after SMD and after water storage at 37 ºC for 7 and 30 days using an optical microscope. SMD was carried out with the samples immersed in 150 mL of water in an oven (650 W for 3 min). A load of 25 gf for 10 s was used in the hardness test. The Charpy impact test was performed with 40 kpcm. Data were submitted to ANOVA and Tukey's test (5%). The Classico resin was dimensionally stable in length in the A and D cycles for all periods, while the Vipi resin was stable in the A, B and C cycles for all periods. The Classico resin was dimensionally stable in width in the C and D cycles for all periods, and the Vipi resin was stable in all cycles and periods. The hardness values for the Classico resin were stable in all cycles and periods, while those for the Vipi resin were stable only in the C cycle for all periods. Impact strength values for the Classico resin were stable in the A, C and D cycles for all periods, while those for the Vipi resin were stable in all cycles and periods. SMD promoted different effects on the linear dimensional changes, hardness and impact strength of the acrylic resins submitted to different polymerization cycles when the periods after SMD and after water storage were considered.

Relevance:

20.00%

Publisher:

Abstract:

Waterlogging of soils is common in nature. The low availability of oxygen under these conditions leads to hypoxia of the root system, impairing the development and productivity of the plant. The presence of nitrate under flooding conditions is regarded as being beneficial towards tolerance to this stress. However, it is not known how nodulated soybean plants, cultivated in the absence of nitrate and therefore not metabolically adapted to this compound, would respond to nitrate under root hypoxia in comparison with non-nodulated plants grown on nitrate. A study was conducted with ¹⁵N-labelled nitrate supplied on waterlogging for a period of 48 h, using both nodulated and non-nodulated plants of different physiological ages. Enrichment of N was found in roots and leaves, with incorporation of the isotope into amino acids, although to a much smaller degree under hypoxia than under normoxia. This demonstrates that nitrate is taken up under hypoxic conditions and assimilated into amino acids, although to a much lesser extent than under normoxia. The similar response obtained with nodulated and non-nodulated plants indicates the rapid metabolic adaptation of nodulated plants to the presence of nitrate under hypoxia. Enrichment of N in nodules was much weaker, with a distinct enrichment pattern of amino acids (especially asparagine), suggesting that the labelling arose from a tissue source external to the nodule rather than from assimilation in the nodule itself.

Relevance:

20.00%

Publisher:

Abstract:

In acquired immunodeficiency syndrome (AIDS) studies it is quite common to observe viral load measurements collected irregularly over time. Moreover, these measurements can be subject to upper and/or lower detection limits, depending on the quantification assay. A further complication arises when these continuous repeated measures have a heavy-tailed behavior. For such data structures, we propose a robust censored linear model based on the multivariate Student's t-distribution. To account for the autocorrelation among irregularly observed measures, a damped exponential correlation structure is employed. An efficient expectation-maximization-type algorithm is developed for computing the maximum likelihood estimates, obtaining as by-products the standard errors of the fixed effects and the log-likelihood function. The proposed algorithm uses closed-form expressions at the E-step that rely on formulas for the mean and variance of a truncated multivariate Student's t-distribution. The methodology is illustrated through an application to a Human Immunodeficiency Virus-AIDS (HIV-AIDS) study and several simulation studies.
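For reference, a common parameterization of the damped exponential correlation structure for irregularly spaced measurement times is corr(t_i, t_j) = phi ** (|t_i - t_j| ** theta) with 0 < phi < 1; theta = 1 recovers a continuous-time AR(1)-type decay. A minimal sketch (function name and example times are assumptions, not tied to the paper's implementation):

import numpy as np

# Damped exponential correlation matrix for a vector of measurement times.
def dec_correlation(times, phi, theta):
    t = np.asarray(times, dtype=float)
    lags = np.abs(t[:, None] - t[None, :])
    return phi ** (lags ** theta)

R = dec_correlation([0.0, 0.5, 1.5, 4.0], phi=0.8, theta=1.0)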

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we address the problem of picking a subset of bids in a general combinatorial auction so as to maximize the overall profit using the first-price model. This winner determination problem assumes that a single bidding round is held to determine both the winners and the prices to be paid. We introduce six variants of biased random-key genetic algorithms for this problem. Three of them use a novel initialization technique that employs solutions of intermediate linear programming relaxations of an exact mixed integer linear programming model as initial chromosomes of the population. An experimental evaluation compares the effectiveness of the proposed algorithms with the standard mixed integer linear programming formulation, a specialized exact algorithm, and the best-performing heuristics proposed for this problem. The proposed algorithms are competitive and offer strong results, mainly for large-scale auctions.
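To give a feel for how a random-key decoder can drive winner determination (an illustrative sketch only, not one of the six variants in the paper), bids can be ordered by their random keys and accepted greedily whenever they share no item with previously accepted bids:

# keys: one random value in [0, 1) per bid; bids: list of (item_set, price)
def decode(keys, bids):
    order = sorted(range(len(bids)), key=lambda i: keys[i])
    taken, winners, profit = set(), [], 0.0
    for i in order:
        items, price = bids[i]
        if taken.isdisjoint(items):          # bid conflicts with no accepted bid
            taken |= set(items)
            winners.append(i)
            profit += price
    return winners, profit

bids = [({1, 2}, 10.0), ({2, 3}, 8.0), ({3}, 5.0)]      # invented example
print(decode([0.2, 0.9, 0.4], bids))                    # -> ([0, 2], 15.0)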

Relevance:

20.00%

Publisher:

Abstract:

This study examined the influence of three polymerization cycles (1: heat cure, long cycle; 2: heat cure, short cycle; and 3: microwave activation) on the linear dimensions of three denture base resins, immediately after deflasking and 30 days after storage in distilled water at 37 ± 2 ºC. The acrylic resins used were Clássico, Lucitone 550 and Acron MC. The first two resins were submitted to all three polymerization cycles, and the Acron MC resin was cured by microwave activation only. The samples had three marks and dimensions of 65 mm in length, 10 mm in width and 3 mm in thickness. Twenty-one test specimens were fabricated for each combination of resin and cure cycle, and they were submitted to three linear dimensional evaluations for two positions (A and B). The changes were evaluated using a microscope. The results indicated that all acrylic resins, regardless of the cure cycle, showed increased linear dimensions after 30 days of storage in water. The composition of the acrylic resin affected the results more than the cure cycles did, and the conventional acrylic resins (Lucitone 550 and Clássico) cured by microwave activation presented results similar to those of the resin specific for microwave activation.

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Changes in heart rate during the rest-exercise transition can be characterized by mathematical calculations, such as deltas over 0-10 and 0-30 seconds, to make inferences about the parasympathetic nervous system, and by linear regression and a delta applied to the data in the 60-240 second range, to make inferences about the sympathetic nervous system. The objective of this study was to test the hypothesis that young and middle-aged subjects have different heart rate responses to moderate and intense exercise, as assessed by these different mathematical calculations. METHODS: Seven middle-aged men and ten young men, apparently healthy, were subjected to constant-load tests (intense and moderate) on a cycle ergometer. The heart rate data were submitted to delta analysis (0-10, 0-30 and 60-240 seconds) and simple linear regression (60-240 seconds). The parameters obtained from the simple linear regression analysis were the intercept and the slope. We used the Shapiro-Wilk test to check the distribution of the data and the unpaired t test for comparisons between groups. The level of statistical significance was 5%. RESULTS: The values of the intercept and of the 0-10 second delta were lower in the middle-aged group for the two loads tested, and the slope was lower in the middle-aged group for moderate exercise. CONCLUSION: The young subjects show a greater magnitude of vagal withdrawal in the initial stage of the HR response during constant-load exercise and a faster adjustment of the sympathetic response in moderate exercise.
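As an illustration of the calculations named above (the heart rate trace below is synthetic, not study data), the deltas and the 60-240 s regression can be computed as follows:

import numpy as np

t = np.arange(0, 300, 5.0)                     # time in seconds, 5-s sampling
hr = 70 + 30 * (1 - np.exp(-t / 40.0))         # synthetic heart rate response, bpm

delta_0_10 = hr[t == 10][0] - hr[t == 0][0]    # vagal withdrawal indices
delta_0_30 = hr[t == 30][0] - hr[t == 0][0]

window = (t >= 60) & (t <= 240)                # sympathetic adjustment window
slope, intercept = np.polyfit(t[window], hr[window], 1)
delta_60_240 = hr[t == 240][0] - hr[t == 60][0]
print(delta_0_10, delta_0_30, intercept, slope, delta_60_240)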

Relevance:

20.00%

Publisher:

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física